id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2309.11506 | Matching Table Metadata with Business Glossaries Using Large Language
Models | Enterprises often own large collections of structured data in the form of
large databases or an enterprise data lake. Such data collections come with
limited metadata and strict access policies that could limit access to the data
contents and, therefore, limit the application of classic retrieval and
analysis solutions. As a result, there is a need for solutions that can
effectively utilize the available metadata. In this paper, we study the problem
of matching table metadata to a business glossary containing data labels and
descriptions. The resulting matching enables the use of an available or curated
business glossary for retrieval and analysis without or before requesting
access to the data contents. One solution to this problem is to use
manually-defined rules or similarity measures on column names and glossary
descriptions (or their vector embeddings) to find the closest match. However,
such approaches need to be tuned through manual labeling and cannot handle many
business glossaries that contain a combination of simple as well as complex and
long descriptions. In this work, we leverage the power of large language models
(LLMs) to design generic matching methods that do not require manual tuning and
can identify complex relations between column names and glossaries. We propose
methods that utilize LLMs in two ways: a) by generating additional context for column names that can aid with matching, and b) by using LLMs to directly infer if there is a relation between column names and glossary descriptions. Our
preliminary experimental results show the effectiveness of our proposed
methods. | Elita Lobo, Oktie Hassanzadeh, Nhan Pham, Nandana Mihindukulasooriya, Dharmashankar Subramanian, Horst Samulowitz | 2023-09-08T02:23:59Z | http://arxiv.org/abs/2309.11506v1 | # Matching Table Metadata with Business Glossaries Using Large Language Models
###### Abstract
Enterprises often own large collections of structured data in the form of large databases or an enterprise data lake. Such data collections come with limited metadata and strict access policies that could limit access to the data contents and, therefore, limit the application of classic retrieval and analysis solutions. As a result, there is a need for solutions that can effectively utilize the available metadata. In this paper, we study the problem of matching table metadata to a business glossary containing data labels and descriptions. The resulting matching enables the use of an available or curated business glossary for retrieval and analysis without or before requesting access to the data contents. One solution to this problem is to use manually-defined rules or similarity measures on column names and glossary descriptions (or their vector embeddings) to find the closest match. However, such approaches need to be tuned through manual labeling and cannot handle many business glossaries that contain a combination of simple as well as complex and long descriptions. In this work, we leverage the power of large language models (LLMs) to design generic matching methods that do not require manual tuning and can identify complex relations between column names and glossaries. We propose methods that utilize LLMs in two ways: a) by generating additional context for column names that can aid with matching, and b) by using LLMs to directly infer if there is a relation between column names and glossary descriptions. Our preliminary experimental results show the effectiveness of our proposed methods.
1IBM Research, Yorktown Heights, NY, United States
2University of Massachusetts Amherst, MA, United States
## 1 Introduction
Large collections of structured tabular data that businesses possess can be invaluable resources for various analytic tasks. Traditionally, such data collections are gathered in large databases or data warehouses, along with mechanisms of collecting and maintaining metadata with well-curated schemas, data catalogs, or master data as a part of a master data management solution. In practice, the overhead of maintaining accurate metadata may be prohibitively difficult and expensive. More recently, enterprises are moving toward collecting all their data in data lakes without any requirements or strict enforcement of metadata availability or quality.
As a result, there is a need for solutions that can effectively use limited metadata, such as column headers, and automatically generate useful metadata. Most organizations maintain some business glossary [1] with a set of concepts that are relevant to the business processes. If table columns can be annotated with business glossary terms, it helps downstream tasks such as data discovery, data integration, or performing advanced analytics.
The task of mapping table columns to a business glossary is similar to the task of annotating a table column with an ontology or knowledge graph concept, which is referred to as the Column Type Annotation (CTA) task [2]. However, to our knowledge, prior work has not considered further restricting the task to using only the table metadata (table name and column headers) and business glossary containing labels and descriptions only. The problem we study in this paper is inspired by our ongoing work on implementing an automated semantic layer for enterprise data lakes [3], and has the following characteristics: 1) we do not have access to data contents due to access restrictions common in enterprise data lakes; 2) we have tabular data with no metadata other than column headers, which is a result of large data imports from highly heterogeneous sources or automated table extraction pipelines; and 3) there is no or very little training data, as the process of manually labeling table columns with business glossary terms is laborious and requires domain expertise. Figure 1 shows a few example column headers along with their context (other column headers in the same table) and their associated business glossary terms.
In the absence of rich metadata and an ontology, the matching process can only rely on the header labels and glossary labels and descriptions. Essentially, the problem becomes a string or text similarity matching problem. Prior work has studied various flavors of such matching methods for record matching in databases [4, 5] as well as ontology alignment [6]. Such methods rely on either _syntactic_ matching methods, which rely on common tokens and substrings between the terms that should be matched, or _semantic_ matching methods, which rely on the availability of a dictionary of terms along with lists of related terms such as synonyms, hypernyms, and hyponyms. In our setup, we often need to match terms with very little syntactic similarity, and we do not have access to a dictionary that could enable semantic matching. Column headers in tabular data are often cryptic terms, and business glossaries use terminology very specific to a particular enterprise. More recent work has proposed the use of vector representations of terms in the form of embeddings [7]; however, such methods require domain-specific training data and tuning.
In this paper, we propose a novel matching solution that relies on the power of Large Language Models (LLMs) to enable the matching of table columns with glossaries when column headers are not very descriptive, glossary terms do not have a close syntactic similarity to the column headers, and little or no training data is available. In what follows, we first discuss related work. We then present the problem description, followed by the details of our solution. In section 5, we present the results of our experiments using real-world enterprise data and business glossaries. We end the paper by outlining a number of lessons learned and avenues for future work.

Figure 1: Example column headers and their associated business glossary terms
## 2 Related Work
A core problem in semantic table understanding [8] is column type annotation, i.e., annotating table columns with a type from an ontology, which enables many business intelligence tasks such as semantic retrieval, data exploration, and knowledge discovery. The SemTab challenge [2], which aims at benchmarking systems that tackle the tabular-data-to-KG matching problem, provides several datasets on which the column type annotation task can be evaluated. In the SemTab challenge, this task is formulated as an unsupervised task where participating systems are not given training data.
**Column type annotation using table data.** MTab [9], JenTab [10], and DAGOBAH [11] are examples of systems that participated in the SemTab challenge, which comprises three KG matching tasks, namely, cell to KG entity (CEA task), column to KG class (CTA task), and column pair to KG property (CPA task). As these systems typically solve the three tasks jointly, they follow a pipeline architecture. The first step links cell mentions to entities within the target ontology. The second step predicts the most likely type for the query column based on the linking results. MTab and DAGOBAH also use additional information from the graph, such as entity relations, to improve cell linking accuracy. These systems require cell values that can be mapped to KG entities, which might not be the case for most industry tables.
**Column type annotation using only metadata.** The problem setup studied in this paper differs from the traditional column type annotation task. In our setup, the system performing the matching between table columns and glossary concepts only has access to the table metadata (i.e., table name and column headers) but not the actual table data (i.e., cell values). This problem setup has similarities with ontology matching methods that rely purely on string similarity measures. String similarity measures have been studied extensively for various matching tasks, including ontology alignment [6]. Syntactic similarity measures gauge how close two strings are based on the overlap between their tokens or substrings, or on the number of character edit operations (e.g., removal or replacement) that transform one string into another. Examples of such methods are edit similarity and Jaro-Winkler [12]. While such approaches have shown very promising performance in various matching tasks, they are inherently incapable of differentiating between strings that are syntactically very similar but semantically dissimilar. Classic semantic measures rely on resources such as WordNet [13] containing related terms. The application of those methods is limited to when such resources are available. More recently, methods that rely on vector representations for semantic similarity have shown superior performance in various tasks. Initial approaches relied on word2vec [14], which can handle semantic matching
between words and short phrases. More recently, word embeddings [15, 16, 17, 18, 19] and sentence embeddings [7] have shown promising performance in semantic textual similarity tasks. As we will show in our experiments, however, business glossaries often have very similar labels and descriptions that such sentence-transformer-based approaches alone cannot effectively differentiate between.
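For illustration, the following minimal sketch (with hypothetical header and glossary strings) shows how an edit-based syntactic measure scores a cryptic column header against candidate descriptions; the low character overlap between an abbreviation and its expansion is exactly the failure mode discussed above.

```python
# Hypothetical strings: a cryptic column header scored against two glossary
# descriptions using an edit-based (purely syntactic) similarity measure.
from difflib import SequenceMatcher

header = "cust_acct_bal"
candidates = ["customer account balance", "customs declaration date"]

for desc in candidates:
    # Ratio of matching characters; meaning is ignored entirely.
    score = SequenceMatcher(None, header, desc).ratio()
    print(f"{desc!r}: {score:.2f}")
```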
## 3 Preliminaries
### Problem Setup
We assume a setting in which we have only superficial tabular metadata corresponding to any chosen table in the form of a list of \(n\) superficial metadata fields, say \(M=\{M_{i}\}_{i=1}^{n}\). In practice, this list could contain a main column name of interest that must be matched with the correct business concept in a glossary, along with the other column names in the table under question. We also assume access to a business glossary that consists of \(m\) glossary items \(\mathcal{G}=\{(l_{j},d_{j})\}_{j=1}^{m}\). Here, \(l_{j}\) and \(d_{j}\), \(\forall j\in[1,m]\), represent respectively the _label_ and _description_ of the \(j^{th}\) glossary item. For example, the glossary could be a list of tuples containing labels and descriptions of various business concepts. Given such superficial metadata \(M\), the task is to find its closest glossary item match.
In this paper, we consider a relaxed version of the glossary matching problem, where the task is to select \(k\) glossary items for any given metadata \(M\) such that it maximizes the probability of **Hit@k** for any given \(M\). A **Hit@k** for a given metadata represents a Boolean variable that takes the value _one_ if the selected \(k\) glossary items contain the closest match of the metadata and _zero_ otherwise. Finally, we also assume that we have a human feedback bank available in the form of \(l\) tuples \(\mathcal{H}=\{(M_{k},G_{k})\}_{k=1}^{l}\), where \(M_{k}\) represents some metadata and \(G_{k}\in\mathcal{G}\) represents the correct glossary match. We will use the human feedback bank \(\mathcal{H}\) to construct task demonstrations for the In-Context Learning approach described in the next section.
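As a concrete reading of the metric, a minimal sketch of **Hit@k** and its empirical mean over a test set follows; the function names and data layout are illustrative.

```python
def hit_at_k(retrieved, correct, k=5):
    """1 if the correct glossary item is among the k selected items, else 0."""
    return int(correct in retrieved[:k])

def mean_hit_at_k(retrieved_lists, correct_items, k=5):
    """Empirical mean of Hit@k over a set of metadata instances."""
    hits = [hit_at_k(r, c, k) for r, c in zip(retrieved_lists, correct_items)]
    return sum(hits) / len(hits)

# Example: the correct item appears in the first top-k list but not the second.
print(mean_hit_at_k([["g3", "g7", "g1"], ["g2", "g4"]], ["g7", "g9"]))  # 0.5
```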
### Large Language Models
Recent work [20, 21, 22, 23, 24] has demonstrated that Large Language Models (LLMs) perform extraordinarily well on instruction-based tasks as long as these tasks can be represented in natural language. LLMs are transformer models with billions of parameters, trained on large data corpora and fine-tuned on instruction-based tasks, including classification, generation, and question-answering tasks. An LLM takes as input a prompt containing the description of the task along with additional context represented in natural language, and outputs the results of the task in natural language. In this work, we leverage LLMs, specifically Flan-T5 models [25], to obtain more accurate metadata for business glossary matching. Since LLMs have been trained on large data corpora, they can identify complex relations and patterns between different objects in natural language and, thereby, can be used to obtain more accurate matching.
### In-Context Learning
Fine-tuning LLMs for new tasks or datasets is often computationally expensive and requires large amounts of data, which is often not feasible. A common approach to circumvent this problem is to append one (one-shot) or multiple (multi-shot) demonstrations of the task in natural language to the prompt. This is commonly known as One-shot or Multi-shot In-Context Learning (ICL), or In-Context Prompting [26]. Figure 2 shows an example of multi-shot in-context learning on a classification task. In-context learning has been shown to work well on several new problems in prior literature. This work uses the human feedback bank to generate relevant demonstrations for in-context learning. We conjecture that ICL, with good demonstrations, can improve the performance of the glossary matching task without additional fine-tuning.
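As an illustration, demonstrations drawn from the human feedback bank \(\mathcal{H}\) could be assembled into a multi-shot prompt as sketched below; the template wording is hypothetical (the paper's actual templates appear in its figures).

```python
def build_icl_prompt(demonstrations, metadata, table_name):
    """Assemble a multi-shot ICL prompt from (metadata, description) pairs.

    demonstrations: list of (demo_metadata, demo_description) tuples drawn
    from the human feedback bank, most relevant first.
    """
    parts = []
    for demo_meta, demo_desc in demonstrations:
        parts.append(f"Column: {demo_meta}\nDescription: {demo_desc}")
    # The query follows the same format as the demonstrations.
    parts.append(f"Column: {metadata} (table: {table_name})\nDescription:")
    return "\n\n".join(parts)
```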
## 4 Methodology
We propose two different classes of methods: a) Metadata Description to Glossary Matching (MDGM) and b) Direct Metadata to Glossary Matching (DMGM), which retrieve a set of \(k\) glossary items for any given metadata such that the set contains the correct glossary match with high probability. In MDGM methods, we use LLMs to obtain a metadata description and use the description to retrieve the \(k\) glossary items that are most similar to it in some latent space. In DMGM methods, on the other hand, we use LLMs to directly match metadata to business glossaries. More specifically, we treat the metadata to glossary matching problem as either a Boolean classification or a multi-class classification problem. We design special prompts that instruct LLMs to _directly_ infer which glossary items are potential descriptions of the metadata and choose the top-\(k\) glossary items most likely to be the description of the given metadata. Although MDGM methods seem like an indirect approach to glossary matching, they can be useful when the glossary changes frequently at test time and direct inference over large glossaries is expensive. We show in section 5.2 that MDGM methods tend to outperform DMGM methods.
### Column Description to Glossary Matching
We now describe various techniques we propose for generating descriptions of metadata for the metadata to business glossary matching problem.
Figure 2: Example prompt for tuned and pre-tuned Flan-t5 models used in MDG-MICL (2-shots) method.
#### 4.1.1 Metadata Description Generation via Multi-Shot In-Context Learning (MDG-MICL)
Since LLMs are trained on large data corpora, they can generate good descriptions of any concept they may have seen during training. In this method, we leverage this knowledge of the LLM to generate descriptions of the given metadata. We design a special prompt instructing the LLM to generate a metadata description. Further, we use ICL to improve the quality and control the format of the description generated by the LLM. Figure 2 shows an example of the ICL prompt used for this task with the tuned Flan-t5-xl and Flan-t5-xxl models. To construct demonstrations for ICL, we proceed as follows. Using Sentence-BERT (SBERT) sentence embeddings [7], we first generate sentence embeddings of the metadata and of all the descriptions corresponding to the glossary items in the human feedback bank \(\mathcal{H}\). We use the cosine similarity metric to find the \(e\) glossary item descriptions from \(\mathcal{H}\) closest to the metadata in the SBERT sentence embedding space. We construct demonstrations from these \(e\) glossary items in \(\mathcal{H}\) and append them to the prompt, in response to which the LLM generates a description. We obtain the final description by appending the table name and metadata to the LLM-generated description. We embed this description using SBERT and obtain the top-\(k\) glossary items by computing the cosine similarity between this embedding and the sentence embeddings of all the glossary item descriptions. The procedure of computing the top-\(k\) glossary items from the glossary set \(\mathcal{G}\) for a given metadata description using SBERT sentence embeddings and the cosine similarity metric is used in several subsequent methods; for the sake of brevity, we will henceforth refer to this procedure as the SBERT \(k-\)nearest neighbors method.
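A minimal sketch of the SBERT \(k-\)nearest neighbors procedure, assuming the sentence-transformers library and the all-mpnet-base-v2 model reported in section 5:

```python
import torch
from sentence_transformers import SentenceTransformer, util

sbert = SentenceTransformer("all-mpnet-base-v2")

def sbert_knn(description, glossary_descriptions, k=5):
    """Indices of the k glossary descriptions with highest cosine similarity."""
    query = sbert.encode(description, convert_to_tensor=True)
    corpus = sbert.encode(glossary_descriptions, convert_to_tensor=True)
    sims = util.cos_sim(query, corpus).squeeze(0)  # one similarity per item
    k = min(k, len(glossary_descriptions))
    return torch.topk(sims, k=k).indices.tolist()
```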
#### 4.1.2 Metadata Description Generation via Classification (MDG-CI)
As discussed in section 3, LLMs can also identify complex relations between various concepts in natural language. Therefore, in this method, we leverage LLMs to directly select the best description from the set of glossary descriptions in \(\mathcal{G}\) using a classification-based technique.
Specifically, for a given metadata and glossary set \(\mathcal{G}\), we construct a binary classification prompt against each glossary item in the set \(\mathcal{G}\) that queries the LLM on whether the given glossary item is a potential description of the metadata. Figure 3 shows an example of the classification prompt used for this task. Note that when the glossary set \(\mathcal{G}\) is too large, it may result in high inference costs. We can mitigate these costs by first shortlisting the top \(k_{1}\) \((k_{1}>k)\) glossary items from the glossary set \(\mathcal{G}\) using the SBERT \(k-\)nearest neighbors method before proceeding with the classification prompts.
The final description of the metadata is the glossary description whose classification response was positive with the highest log-probability score, appended to the table name and the metadata itself. If no such glossary item exists, we use the metadata itself as the description. Once the metadata description is generated, we select the top-\(k\) glossary items from the glossary set \(\mathcal{G}\) using the SBERT \(k-\)nearest neighbors method described in MDG-MICL.

Figure 3: Example prompt for tuned Flan-t5-xl and pre-tuned Flan-t5-xxl models used in MDG-CI method.
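One way to realize this log-probability scoring with a Flan-T5 model is sketched below; the paper does not spell out its scoring implementation, so scoring the likelihood of a literal "Yes" continuation is an assumption.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("google/flan-t5-xl")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-xl")

def yes_score(prompt):
    """Mean log-likelihood of the model answering "Yes" to the prompt."""
    enc = tok(prompt, return_tensors="pt")
    labels = tok("Yes", return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(**enc, labels=labels)
    return -out.loss.item()  # negative cross-entropy = mean token log-prob

# Illustrative usage, given some hypothetical classification_prompt() template:
# best = max(shortlist, key=lambda item: yes_score(classification_prompt(item)))
```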
#### 4.1.3 Metadata Description Generation via Multiple Choice Question Answering (MDG-MCQA)
An alternative way of generating descriptions using LLMs is via a Multiple Choice Question Answer (MCQA) prompt, as shown in Figure 4, that instructs the LLM to choose the best description of the metadata amongst the descriptions of selected glossary items. Although this may seem counterintuitive, we observe in our experiments that using the description of the selected glossary item to find the top-\(k\) glossary items, instead of simply returning the glossary item corresponding to the selected description, results in a higher **Hit@5** rate. Similar to previous methods, we shortlist the top \(k_{1}\) \((k_{1}>k)\) glossary items that are closest to the metadata in sentence-embedding space and use them as choices in our MCQA prompt. We also add "None of the above" to the list of choices. Finally, we append the metadata and the table name to the description of the LLM-selected glossary item and use it to find the top-\(k\) glossary items via the SBERT \(k-\)nearest neighbors method. Note that when the LLM selects the "None of the above" option, we use the metadata itself as the description.
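A sketch of how the MCQA prompt could be assembled from the shortlisted descriptions, with "None of the above" appended as described; the exact wording is illustrative.

```python
def build_mcqa_prompt(metadata, table_name, shortlisted_descriptions):
    """MCQA prompt over shortlisted glossary descriptions plus a fallback."""
    choices = list(shortlisted_descriptions) + ["None of the above"]
    lettered = [f"{chr(ord('A') + i)}) {c}" for i, c in enumerate(choices)]
    question = (f"Which of the following best describes the column "
                f"'{metadata}' in table '{table_name}'?")
    return question + "\n" + "\n".join(lettered) + "\nAnswer:"
```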
Figure 4: Prompt for Flan-t5-xl and pre-tuned Flan-t5-xxl models used in MDG-MCQA method.

### Direct Metadata to Glossary Matching

#### 4.2.1 Direct Inference via Classification (DI-CI)

This method is a variant of the MDG-CI method that uses the LLM to directly select the top-\(k\) glossary items for any given metadata, without needing to generate a description of the metadata. Similar to previous methods, we shortlist the top \(k_{1}\) \((k_{1}>k)\) glossary items closest to the metadata in the sentence-embedding space. For each of these glossary items, we construct binary classification prompts that query the LLM on whether the description of the glossary item matches the metadata. An example prompt is shown in Figure 5. Among all glossary items with positive responses, the glossary items with the top-\(k\) highest log-probability scores are selected. It is important to note that this method may return fewer than \(k\) glossary items. Such missing items are counted as incorrect matches when computing the **Hit@k** metric.
#### 4.2.2 Direct Inference via Multiple Choice Question Answering (DI-MCQA)
We consider another variation of the MDG-MCQA method that computes a single best match for the given metadata without generating a description of the metadata. In this method, we follow the same procedure as MDG-MCQA to construct Multiple Choice Question Answer prompts, as shown in Figure 6, and return the glossary item corresponding to the description selected by the LLM. Since this method always returns a single glossary item, we assume that the \(k-1\) missing glossary items are incorrect matches when computing the **Hit@k** metric.
## 5 Experiments
In this section, we empirically evaluate and compare the performance of all the MDGM and DMGM methods proposed in section 4. Specifically, we investigate the following two questions: a) which of the proposed methods (MDG-MICL (0-shot, 1-shot, 2-shots), MDG-CI, MDG-MCQA, DI-CI, and DI-MCQA) best leverages LLMs in solving the metadata to glossary matching task, and b) is it possible to obtain more accurate matching, i.e., higher **Hit@5** and **Hit@1**, using LLMs than with basic similarity-score-based matching methods? Our preliminary results show that LLMs are indeed effective in improving glossary matching accuracy.
Figure 5: Example prompt for all pre-tuned and tuned Flan-t5 models used in DI-CI method.
Figure 6: Example prompt for Flan-t5-xl and pre-tuned Flan-t5-xxl models used in DI-MCQA method.
### Experimental setup
In all our experiments, the metadata consists of the column name of interest and the other column names in the table. The glossary is a list of tuples where each tuple consists of a label and a description of the label. Multiple column names may be matched to the same label. Furthermore, the label of the matched glossary item for each column name may not coincide with the column name. We evaluate our methods on the Flan-T5-XL, Flan-T5-XXL [27, 25], and a Flan-T5-XL model that we fine-tune on the training dataset using the supervised fine-tuning method for LLMs [28]. For each method and LLM, we experiment with 4-6 different prompt templates. Due to lack of space, we only provide examples of the prompt templates that achieved the highest **Hit@5** rate (Figures 2-6).
We use the all-mpnet-base-v2 SBERT model from the sentence-transformers library [7] in all experiments. We evaluate each method based on its **Hit@5** and **Hit@1** rates on the test dataset, i.e., the empirical means of **Hit@5** and **Hit@1** computed on the test dataset. These measures were chosen to reflect our goal of having the correct glossary item as the top item or within the top 5 glossary items returned to the user in a GUI [3]. We compare these scores against those produced by a baseline, which computes the top-\(k\) glossary items based on the cosine similarity between the sentence embedding of the column name and those of the descriptions in the glossary.
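For concreteness, the baseline can be expressed with the `sbert_knn` helper sketched in section 4.1.1, querying with the raw column name rather than a generated description; the data layout is illustrative.

```python
def baseline_top_k(column_name, glossary_descriptions, k=5):
    # The baseline skips description generation and queries with the raw name.
    return sbert_knn(column_name, glossary_descriptions, k=k)

def evaluate_baseline(columns, correct_indices, glossary_descriptions, k=5):
    """Empirical mean Hit@k of the cosine-similarity baseline."""
    hits = [int(truth in baseline_top_k(col, glossary_descriptions, k))
            for col, truth in zip(columns, correct_indices)]
    return sum(hits) / len(hits)
```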
**MDE Dataset.** The MDE Dataset is an IBM-internal benchmark developed by annotating the column names of the "Customer Insight" example database of "IBM InfoSphere Warehouse Pack" [29] with glossary terms from the IBM Knowledge Accelerator for Financial Services (IBM KAFS) [30] glossary. The Customer Insight database consists of 26 tables with 688 columns. The column names contain cryptic codes and abbreviations to reflect realistic tables commonly seen in client engagements. The IBM KAFS glossary contains 9,137 business terms with their labels and descriptions. Out of 688 columns, 488 have suitable matching terms in the glossary; the rest are annotated as null mappings and ignored in the evaluation. We split the mappings into train, test, and demonstration splits with 208, 212, and 68 columns, respectively. These splits contain tuples of the form \((column\_name, other\_column\_names, glossary\_item)\). The training, test, and demonstration splits are used for fine-tuning the LLM, evaluating the methods, and as a proxy for human feedback, respectively.
### Experimental Results
Table 1 shows the **Hit@5** and **Hit@1** rates achieved by the different methods on the MDE dataset with different LLMs. As expected, the **Hit@5** rate increases with the number of demonstrations in the MDG-MICL method. This result suggests that it may be beneficial to use in-context learning whenever demonstrations are readily available. However, we do not observe a similar trend for the **Hit@1** rate. We believe this is due to the difficulty of matching metadata to a single glossary item when multiple glossary items have similar descriptions. Overall, MDG-MICL achieves the highest **Hit@5** and **Hit@1** scores and significantly outperforms the baseline method when two demonstrations are provided in the prompt. Meanwhile, we observe that the DI-CI and DI-MCQA methods achieve the worst **Hit@5** rates. We conjecture that this may be due to the underlying biases of LLMs towards certain class labels in classification and question-answering tasks, as observed in prior works [31, 32]. These biases can be corrected using various calibration techniques [31, 32], which we leave for future work. It is also important to note that the DI-MCQA method selects a single best glossary match and thus is more likely to fail to select the correct item when multiple glossary items have similar descriptions. These preliminary results indicate that LLMs alone may not improve the matching accuracy. Finally, although MDG-CI and MDG-MCQA use the same classification and Multiple Choice Question Answer prompts as DI-CI and DI-MCQA, they achieve higher **Hit@5** and **Hit@1** than the latter methods. We believe this is because these methods select glossary items whose descriptions are similar to the closest glossary match and thus tend to perform better.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c}{Flan-t5-xl} & \multicolumn{2}{c}{Flan-t5-xxl} & \multicolumn{2}{c}{Tuned Flan-t5-xl} \\ & **Hit@1** & **Hit@5** & **Hit@1** & **Hit@5** & **Hit@1** & **Hit@5** \\ \hline Baseline Method & 0.4575 & 0.7406 & 0.4575 & 0.7406 & 0.4575 & 0.7406 \\ MDG-MICL (0-shot) & **0.5** & **0.7877** & **0.4858** & **0.7689** & 0.4434 & **0.7594** \\ MDG-MICL (1-shot) & **0.4764** & **0.7972** & **0.4953** & **0.8113** & 0.467 & **0.75** \\ MDG-MICL (2-shots) & **0.5047** & **0.8133** & **0.4858** & **0.816** & **0.5** & **0.7877** \\ MDG-CI & 0.434 & **0.7453** & **0.5** & **0.7642** & 0.4575 & **0.7547** \\ MDG-MCQA & **0.5708** & **0.7547** & **0.5755** & **0.7453** & **0.5753** & **0.7642** \\ DI-CI & 0.3585 & 0.4717 & **0.4811** & 0.6934 & 0.3632 & 0.4953 \\ DI-MCQA & **0.5519** & 0.5519 & **0.5708** & 0.5708 & **0.5519** & 0.5519 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Hit@5** and **Hit@1** scores achieved by the Baseline, MDG-MICL (0,1,2)-shots, MDG-CI, MDG-MCQA, DI-CI, and DI-MCQA methods on the Flan-T5-XL, Flan-T5-XXL, and Tuned Flan-T5-XL models. Highlighted numbers indicate that the corresponding method outperformed the baseline in terms of **Hit@5**/**Hit@1**. MDG-MICL consistently achieves the highest **Hit@5** and **Hit@1**, whereas DI-CI and DI-MCQA have the worst **Hit@5**.

## 6 Discussion and Future work

This paper proposes two different classes of methods, MDGM and DMGM, that leverage LLMs for solving the metadata to glossary matching problem. MDGM methods use LLMs to generate good metadata descriptions, which we couple with similarity-based metrics for more refined matches. This class of methods is useful when the glossary is likely to change frequently at test time and repeated inference is expensive. The second class of methods (DMGM) utilizes LLMs to directly infer which glossary items are potential matches. These methods are helpful when the metadata is complex and there is a significant difference between the description of the metadata and that of its closest glossary match. Although we have shown that many of these methods can obtain more accurate glossary matching, we can further improve them in several ways. Our experiments show that DMGM methods perform poorly compared to MDGM methods. This may be due to undesirable biases towards specific class labels that LLMs learned during training. One approach to mitigating these biases is using various calibration techniques [31, 32, 33]. Providing several positive and negative demonstrations in the classification and multiple-choice question-answer prompts may also help mitigate LLMs' default biases [26]. We can further improve the MDGM methods that generate descriptions of metadata by constraining LLMs to sample words mainly from the glossary, or by providing LLMs with the top-\(k\) glossary items and prompting them to generate descriptions similar to those of the glossary items. This can be achieved by using the Constrained Beam Search algorithm [34] or simply by manipulating the output distribution of LLMs before sampling so that it assigns higher weights to words from the glossary. It may also be helpful to use various prompt-tuning and prompt-editing methods [35] to further improve the efficiency of the prompts used with LLMs. Although these directions remain intriguing, they warrant more in-depth empirical study, which we leave for future work.
|
2309.07364 | Hodge-Aware Contrastive Learning | Simplicial complexes prove effective in modeling data with multiway
dependencies, such as data defined along the edges of networks or within other
higher-order structures. Their spectrum can be decomposed into three
interpretable subspaces via the Hodge decomposition, a result that is foundational in numerous applications. We leverage this decomposition to develop a contrastive self-supervised learning approach for processing simplicial data and generating embeddings that encapsulate specific spectral information. Specifically, we
encode the pertinent data invariances through simplicial neural networks and
devise augmentations that yield positive contrastive examples with suitable
spectral properties for downstream tasks. Additionally, we reweight the
significance of negative examples in the contrastive loss, considering the
similarity of their Hodge components to the anchor. By encouraging a stronger
separation among less similar instances, we obtain an embedding space that
reflects the spectral properties of the data. The numerical results on two
standard edge flow classification tasks show a superior performance even when
compared to supervised learning techniques. Our findings underscore the
importance of adopting a spectral perspective for contrastive learning with
higher-order data. | Alexander Möllers, Alexander Immer, Vincent Fortuin, Elvin Isufi | 2023-09-14T00:40:07Z | http://arxiv.org/abs/2309.07364v1 | # Hodge-Aware Contrastive Learning
###### Abstract
Simplicial complexes prove effective in modeling data with multiway dependencies, such as data defined along the edges of networks or within other higher-order structures. Their spectrum can be decomposed into three interpretable subspaces via the Hodge decomposition, a result that is foundational in numerous applications. We leverage this decomposition to develop a contrastive self-supervised learning approach for processing simplicial data and generating embeddings that encapsulate specific spectral information. Specifically, we encode the pertinent data invariances through simplicial neural networks and devise augmentations that yield positive contrastive examples with suitable spectral properties for downstream tasks. Additionally, we reweight the significance of negative examples in the contrastive loss, considering the similarity of their Hodge components to the anchor. By encouraging a stronger separation among less similar instances, we obtain an embedding space that reflects the spectral properties of the data. The numerical results on two standard edge flow classification tasks show a superior performance even when compared to supervised learning techniques. Our findings underscore the importance of adopting a spectral perspective for contrastive learning with higher-order data.
Alexander Möllers, Alexander Immer, Vincent Fortuin, Elvin Isufi
\[\begin{split}\mathbf{L}_{0}&=\mathbf{B}_{1}\mathbf{B}_{1}^{\top},\\ \mathbf{L}_{1}&=\mathbf{L}_{1,\ell}+\mathbf{L}_{1,u}:=\mathbf{B}_{1}^{\top}\mathbf{B}_{1}+\mathbf{B}_{2}\mathbf{B}_{2}^{\top},\\ \mathbf{L}_{2}&=\mathbf{B}_{2}^{\top}\mathbf{B}_{2}\end{split}\]
where \(\mathbf{L}_{0}\), \(\mathbf{L}_{1}\), \(\mathbf{L}_{2}\) represent the neighborhood relationships between nodes, edges, and triangles, respectively. Moreover, \(\mathbf{L}_{0}\) coincides with the standard graph Laplacian [3]. The lower-Laplacian \(\mathbf{L}_{1,\ell}=\mathbf{B}_{1}^{\top}\mathbf{B}_{1}\) and upper-Laplacian \(\mathbf{L}_{1,u}=\mathbf{B}_{2}\mathbf{B}_{2}^{\top}\) split the edge adjacencies into relations that are due to common vertices and common triangles, respectively.
A \(k-\)simplicial signal \(x^{k}\) is a mapping from a \(k-\)simplex to the set of real numbers that formalizes the simplicial data. We collect all \(k-\)simplicial signals in the vector \(\mathbf{x}^{k}=[x_{1}^{k},\ldots,x_{N_{k}}^{k}]^{\top}\), where \(x_{i}^{k}\) is the signal on the \(i\)th simplex and \(N_{k}\) is the total number of \(k-\)simplices. For instance, we denote an edge flow as \(\mathbf{x}^{1}=[x_{1}^{1},\ldots,x_{N_{1}}^{1}]^{\top}\) with \(x_{e}^{1}\) being the flow on the edge \(e=(m,n)\) in \(\mathcal{S}^{1}\). In the sequel, we focus on edge flows to ease exposition and because of their wider applicability; thus, we drop the superscript and denote them as \(\mathbf{x}\).
### The Hodge Decomposition
SCs allow for a spectral processing of simplicial signals via the Hodge decomposition, which decomposes the space \(\mathbb{R}^{N_{k}}\) as:
\[\mathbb{R}^{N_{k}}=\text{im}(\mathbf{B}_{k}^{\top})\oplus\text{im}(\mathbf{B} _{k+1})\oplus\text{ker}(\mathbf{L}_{k}) \tag{1}\]
where \(\text{im}(\cdot)\) and \(\text{ker}(\cdot)\) are the image and kernel spaces of a matrix and \(\oplus\) is the direct sum of vector spaces [4, 7]. Accordingly, we can decompose any edge flow \(\mathbf{x}\) into three parts \(\mathbf{x}=\mathbf{x}_{\text{G}}+\mathbf{x}_{\text{C}}+\mathbf{x}_{\text{H}}\), each living in an orthogonal subspace known as the gradient space \(\mathbf{x}_{\text{G}}\in\text{im}(\mathbf{B}_{1}^{\top})\), the curl space \(\mathbf{x}_{\text{C}}\in\text{im}(\mathbf{B}_{2})\), and the harmonic space \(\mathbf{x}_{\text{H}}\in\text{ker}(\mathbf{L}_{1})\). In turn, the 1-Hodge Laplacian can be eigen-decomposed as \(\mathbf{L}_{1}=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{\top}\) with eigenvectors \(\mathbf{U}\) and eigenvalues \(\mathbf{\Lambda}\). By grouping the eigenvectors as \(\mathbf{U}=[\mathbf{U}_{\text{G}},\mathbf{U}_{\text{C}},\mathbf{U}_{\text{H}}]\) and projecting the edge flow onto them, we obtain the embeddings \(\tilde{\mathbf{x}}_{\text{G}}=\mathbf{U}_{\text{G}}^{\top}\mathbf{x}\), \(\tilde{\mathbf{x}}_{\text{C}}=\mathbf{U}_{\text{C}}^{\top}\mathbf{x}\), \(\tilde{\mathbf{x}}_{\text{H}}=\mathbf{U}_{\text{H}}^{\top}\mathbf{x}\). These embeddings encode different interpretable properties of the data, which we will exploit in Sec. 4 to design augmentations. Refer to [4] for more detail on the role of the Hodge decomposition in processing simplicial data.
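A minimal numpy sketch of this decomposition, computing the three components via least-squares projections onto \(\text{im}(\mathbf{B}_{1}^{\top})\) and \(\text{im}(\mathbf{B}_{2})\) (equivalent to projecting onto the grouped eigenvectors):

```python
import numpy as np

def hodge_decompose(B1, B2, x):
    """Split an edge flow x into gradient, curl, and harmonic components.

    B1: node-to-edge incidence matrix (N0 x N1)
    B2: edge-to-triangle incidence matrix (N1 x N2)
    """
    # Gradient component: projection of x onto im(B1^T).
    phi, *_ = np.linalg.lstsq(B1.T, x, rcond=None)
    x_grad = B1.T @ phi
    # Curl component: projection of x onto im(B2).
    psi, *_ = np.linalg.lstsq(B2, x, rcond=None)
    x_curl = B2 @ psi
    # Harmonic component: the remainder, which lies in ker(L1).
    x_harm = x - x_grad - x_curl
    return x_grad, x_curl, x_harm
```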
### Simplicial Convolutional Neural Networks
Simplicial convolutional filters are linear parametric mappings that can process simplicial signals [7]. For an input edge flow \(\mathbf{x}\), the filtered signal is \(\mathbf{y}=\mathbf{H}(\mathbf{L}_{1})\mathbf{x}\) with simplicial filtering matrix:
\[\mathbf{H}(\mathbf{L}_{1}):=\left(\epsilon\mathbf{I}+\sum_{l_{1}=0}^{L_{1}}\alpha_{l_{1}}\mathbf{L}_{1,\ell}^{l_{1}}+\sum_{l_{2}=0}^{L_{2}}\beta_{l_{2}}\mathbf{L}_{1,u}^{l_{2}}\right). \tag{2}\]
Here, \(\{\epsilon,\alpha_{l_{1}},\beta_{l_{2}}\}\) are filter coefficients and \(\mathbf{L}_{1,\ell}\), \(\mathbf{L}_{1,u}\) propagate the edge signal via their respective neighbourhood relations. The filter is local in the sense that it moves information by at most \(L=\max\{L_{1},L_{2}\}\) hops across the simplicial structure. Moreover, by exploiting the recursions \(\mathbf{L}_{1,\ell}^{l_{1}}\mathbf{x}=\mathbf{L}_{1,\ell}(\mathbf{L}_{1,\ell}^{l_{1}-1}\mathbf{x})\), \(\mathbf{L}_{1,u}^{l_{2}}\mathbf{x}=\mathbf{L}_{1,u}(\mathbf{L}_{1,u}^{l_{2}-1}\mathbf{x})\) and since the Laplacian matrices are sparse, we can obtain the output with a cost of order \(\mathcal{O}(LN_{1})\). The filter part related to the lower Laplacian acts on the signal's gradient embedding, whereas that related to the upper Laplacian acts on the curl embedding. All the parameters contribute to processing the signal's harmonic embedding. Refer to [7, Sec. IV] for the specific relation between the filter (2) and the decomposition (1).
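A sketch of the filter applied through these recursions follows; here the zeroth-power (identity) terms are folded into \(\epsilon\), so `alphas` and `betas` hold the coefficients for powers \(1,\ldots,L\).

```python
import numpy as np

def simplicial_filter(L_low, L_up, x, eps, alphas, betas):
    """y = H(L1) x as in (2), computed with the power recursions.

    With sparse Laplacians each matrix-vector product costs O(N1),
    giving O(L * N1) overall.
    """
    y = eps * x
    z = x
    for a in alphas:
        z = L_low @ z          # z tracks L_{1,l}^{l} x
        y = y + a * z
    z = x
    for b in betas:
        z = L_up @ z           # z tracks L_{1,u}^{l} x
        y = y + b * z
    return y
```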
The constant number of parameters and the linear computational complexity in the number of edges make the filter (2) an appealing solution for learning representations from edge flows. To also learn nonlinear mappings, simplicial convolutional neural networks (SCNNs) have been developed as layered structures interleaving filters with pointwise nonlinearities [13]. Specifically, given the SCNN input \(\mathbf{x}_{0}:=\mathbf{x}\), the propagation rule at each layer \(t\) is:
\[\mathbf{x}_{t}=\sigma\big{(}\mathbf{H}_{t}(\mathbf{L}_{1})\mathbf{x}_{t-1} \big{)} \tag{3}\]
where \(\sigma(\cdot)\) is a pointwise nonlinearity (e.g., ReLU). The final layer constitutes the SCNN output, which provides its embedding. Compactly, we will denote the SCNN input-output relation as \(\mathbf{h}:=f_{\mathcal{H}}(\mathbf{x})\), where \(\mathbf{h}\) is referred to as the SCNN embedding and the set \(\mathcal{H}:=\{\epsilon_{t},\{\alpha_{l_{1},t}\},\{\beta_{l_{2},t}\}\}_{t}\) collects the parameters of all layers. Furthermore, depending on the setting, a readout function is applied to \(\mathbf{h}\) to transform it for the task at hand (e.g., binary classification).
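Building on the `simplicial_filter` sketch above, the layered rule (3) amounts to the following (a ReLU is assumed for \(\sigma\)):

```python
import numpy as np

def scnn_forward(L_low, L_up, x, layers):
    """x_t = sigma(H_t(L1) x_{t-1}) as in (3), one (eps, alphas, betas) per layer."""
    for eps, alphas, betas in layers:
        x = np.maximum(simplicial_filter(L_low, L_up, x, eps, alphas, betas), 0.0)
    return x  # the SCNN embedding h
```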
## 3 Problem Statement
Learning representations via the SCNN is typically done in a supervised manner, but we often have only a few labeled examples or none at all. To tackle this challenge, we resort to contrastive learning and propose a self-supervised learning approach for simplicial complexes. Our problem statement reads as:
_Given a set of unlabeled edge flows, we want to train a simplicial convolutional neural network in a self-supervised manner to generate embeddings that reflect the Hodge-properties of the data and that can be used in a downstream task._
We approach this problem by training the network with the contrastive InfoNCE loss and by designing augmentations that preserve the desired spectral properties. We also reweight the negative samples in the loss to push apart spectrally-different embeddings.
## 4 Simplicial Contrastive Learning
Figure 1: InfoNCE learning with augmentations. The data point (anchor) \(\mathbf{x}\) undergoes two transformations \(\mathcal{T}_{1\backslash 2}(\cdot)\) to generate positive augmented examples. The latter are first passed through the SCNN \(f_{\mathcal{H}}(\cdot)\) to generate the simplicial embeddings \(\mathbf{h}\) (cf. (3)) and then through a parametric map \(g_{\mathcal{H}}(\cdot)\) to obtain the final representation \(\mathbf{z}\). These representations are contrasted in loss (4) to train both \(f_{\mathcal{H}}(\cdot)\) and \(g_{\mathcal{H}}(\cdot)\).

To train an SCNN in a self-supervised manner, we resort to the contrastive learning framework [15, 16]. In the simplicial setting, this principle suggests that for each edge flow datum \(\mathbf{x}\) we create both positive and negative examples and train the SCNN (a.k.a. the encoder) to map the positive embeddings close to each other and the negative ones farther apart. Specifically, we consider an edge flow datum \(\mathbf{x}\) with its final representation \(\mathbf{z}=g_{\mathcal{H}}(\mathbf{h})=g_{\mathcal{H}}(f_{\mathcal{H}}(\mathbf{x}))\), where \(f_{\mathcal{H}}(\cdot)\) is the SCNN [cf. (3)] and \(g_{\mathcal{H}}(\cdot)\) is a parametric map (typically a fully connected layer). Then, we create a positive pair for \(\mathbf{x}\) via two topological augmentations \(\mathbf{x}^{\prime}_{i}=\mathcal{T}_{1}(\mathbf{x})\) and \(\mathbf{x}^{\prime}_{j}=\mathcal{T}_{2}(\mathbf{x})\) with respective representations \(\mathbf{z}^{\prime}_{i}=g_{\mathcal{H}}(f_{\mathcal{H}}(\mathbf{x}^{\prime}_{i}))\) and \(\mathbf{z}^{\prime}_{j}=g_{\mathcal{H}}(f_{\mathcal{H}}(\mathbf{x}^{\prime}_{j}))\). The negative examples w.r.t. \(\mathbf{x}\) consist of other edge flows from the dataset \(\mathbf{x}_{m}\neq\mathbf{x}\) and their augmentations. With reference to Fig. 1, the overall network is trained to minimize the so-called temperature-scaled InfoNCE objective:
\[\mathcal{L}_{\text{InfoNCE}}\,=-\sum_{\left(\mathbf{x}^{\prime}_{i},\mathbf{x }^{\prime}_{j}\right)\in\mathcal{P}}\log\left(\frac{e^{\text{sim}\left( \mathbf{z}^{\prime}_{i},\mathbf{z}^{\prime}_{j}\right)/\tau}}{\sum_{m=1}^{M}e ^{\text{sim}\left(\mathbf{z}^{\prime}_{i},\mathbf{z}_{m}\right)/\tau}}\right) \tag{4}\]
where \(\mathcal{P}\) is the set of all positive pairs in the data, \(\text{sim}(\boldsymbol{u},\boldsymbol{v})=\boldsymbol{u}^{\top}\boldsymbol{v}/\|\boldsymbol{u}\|_{2}\|\boldsymbol{v}\|_{2}\) is the cosine similarity, \(\tau\) is a temperature parameter, and \(M\) is the number of negative examples \(\mathbf{x}_{m}\) with representations \(\mathbf{z}_{m}\). The numerator encourages the network to map the positive embeddings close to each other, while the denominator repulses the negative embedding \(\mathbf{z}_{m}\) from the positive one \(\mathbf{z}^{\prime}_{i}\).
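For concreteness, a sketch of evaluating one term of (4) is given below; the tensor names and shapes are ours.

```python
import torch
import torch.nn.functional as F

def info_nce_single(z_i, z_j, Z_neg, tau=0.5):
    """Temperature-scaled InfoNCE term for one positive pair (cf. (4)).
    z_i, z_j: (d,) representations of the two augmentations of the anchor;
    Z_neg: (M, d) representations of the M negative examples."""
    z_i = F.normalize(z_i, dim=0)      # cosine similarity becomes a dot
    z_j = F.normalize(z_j, dim=0)      # product of l2-normalized vectors
    Z_neg = F.normalize(Z_neg, dim=1)
    pos = torch.exp(torch.dot(z_i, z_j) / tau)   # attracts the positive pair
    neg = torch.exp(Z_neg @ z_i / tau).sum()     # repulses the negatives
    return -torch.log(pos / neg)
```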
The InfoNCE optimizes a lower bound on the mutual information between the representations of the positive pairs [14]. Hence, augmentations should only preserve the information necessary to perform well on a downstream task [17]. Consequently, the irrelevant information is destroyed and the mutual information encoded in the representations is optimized for the task at hand. Common augmentations in contrastive learning are stochastic and include masking part of the data (pixels, vertices, edges, node features, etc.), which expresses the belief that the most important parts of the data are preserved when a few connections or feature values are removed [16, 18]. While this preserves, in probability, crucial information and works well empirically, note that the most relevant parts of a data point may not always remain intact. To mitigate this, the dropout probabilities are often chosen such that the most important properties of the data are conserved more often (e.g., in graphs nodes with a high centrality) instead of randomly removing information [19, 20, 21, 22]. With this in place, we propose the following augmentation method.
**Edge flow masking.** This method masks edge flows with probabilities \(\mathbf{p}\) to generate a positive example. That is, \(\mathbf{x}^{\prime}=\mathcal{T}(\mathbf{x}):=\mathbf{x}\circ\mathbf{e}\) where \(\mathbf{e}\) is a random Bernoulli vector with entry \(\mathbf{e}_{i}\sim Ber(\mathbf{p}_{i})\) and \(\circ\) is the elementwise product. The standard approach is to pick the same masking probability for all edges.
Such an augmentation is more effective in settings with binary flow types \(\{-1,1\}\) or when a zero value is not indicative. This is because the masked flow attains a zero value and the augmentation effectively drops parts of the flow. Next, we show how to optimize the masking probabilities w.r.t. flow Hodge-related information.
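A minimal sketch of this augmentation, following the Bernoulli model \(\mathbf{e}_{i}\sim Ber(\mathbf{p}_{i})\) literally, reads:

```python
import numpy as np

def mask_edge_flow(x, p, rng=None):
    """Edge flow masking T(x) = x ∘ e with independent e_i ~ Ber(p_i).
    x: (N1,) edge flow; p: (N1,) per-edge Bernoulli parameters
    (uniform in the standard variant, optimized in Sec. 4.1)."""
    rng = rng or np.random.default_rng()
    e = rng.binomial(1, p, size=x.shape)   # zero entries mask the flows
    return x * e
```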
### Hodge-Aware Spectral Augmentations
Simplicial data often have particular properties in one of the three Hodge embeddings [3, 23], which may be wrongly affected by augmentations if ignored. Hence, following the InfoMin principle, simplicial augmentations should destroy information on irrelevant Hodge embeddings and preserve it on the others. To account for the latter, we cast the dropout probabilities as a stochastic optimization problem, considering the expected value of the difference of the generated Hodge embeddings to the embeddings of the anchor. Specifically, consider the embeddings of the example \(\tilde{\mathbf{x}}^{\prime}_{\text{G}}=\mathbf{U}^{\top}_{\text{G}}\mathbf{x}^{\prime}\), \(\tilde{\mathbf{x}}^{\prime}_{\text{C}}=\mathbf{U}^{\top}_{\text{C}}\mathbf{x}^{\prime}\), \(\tilde{\mathbf{x}}^{\prime}_{\text{H}}=\mathbf{U}^{\top}_{\text{H}}\mathbf{x}^{\prime}\). Then the expressions \(\mathcal{L}_{\text{G}}(\mathbf{p})=\mathbb{E}[\|\tilde{\mathbf{x}}_{\text{G}}-\tilde{\mathbf{x}}^{\prime}_{\text{G}}\|^{2}_{2}]\), \(\mathcal{L}_{\text{C}}(\mathbf{p})=\mathbb{E}[\|\tilde{\mathbf{x}}_{\text{C}}-\tilde{\mathbf{x}}^{\prime}_{\text{C}}\|^{2}_{2}]\), \(\mathcal{L}_{\text{H}}(\mathbf{p})=\mathbb{E}[\|\tilde{\mathbf{x}}_{\text{H}}-\tilde{\mathbf{x}}^{\prime}_{\text{H}}\|^{2}_{2}]\) quantify the expected quadratic differences between the original and the augmented Hodge embeddings. 1 We use these distances to optimize the probabilities based on prior knowledge such that one or more of the augmented embeddings are similar while the remaining ones differ. For example, when the curl and harmonic embeddings are important, such as in the trajectory prediction task that we consider in the experiments (Sec. 5), we could design \(\mathbf{p}\) by solving:
Footnote 1: To see that these indeed depend on \(\mathbf{p}\), use the equality \(\text{Tr}\left(\mathbf{X}\mathbf{X}^{\top}\right)=\|\mathbf{X}\|_{2}^{2}\) and recall \(\mathbb{E}\left[\mathbf{x}\circ\mathbf{e}\right]=\mathbf{x}\circ\mathbf{p}\); expanding \(\mathbb{E}[\|\tilde{\mathbf{x}}_{\text{G}}-\tilde{\mathbf{x}}^{\prime}_{\text{G}}\|^{2}_{2}]\) then yields terms in \(\mathbf{x}\circ\mathbf{p}\).
\[\min_{\mathbf{p}} -\mathcal{L}_{\text{G}}(\mathbf{p})+\mathcal{L}_{\text{C}}( \mathbf{p})+\mathcal{L}_{\text{H}}(\mathbf{p})\] (5a) subject to \[\mathbf{p}\in\mathcal{G}_{\mathbf{p}}:=\left\{\mathbf{p}\mid\mathbf{p}\in \left[0,1\right]^{N_{1}},\|\mathbf{p}\|_{1}\leq\epsilon_{\mathbf{p}}\right\}, \tag{5b}\]
where set \(\mathcal{G}_{\mathbf{p}}\) puts a maximum budget \(\epsilon_{\mathbf{p}}\) on the allocated drop probabilities in a sparse manner (i.e., the \(\ell_{1}\)-norm \(\|\cdot\|_{1}\) concentrates the probabilities on a few flows). The budget \(\epsilon_{\mathbf{p}}\) is tuned as a hyperparameter. 2 By solving (5), we find the dropout probabilities \(\mathbf{p}\) that generate examples with similar curl and harmonic components to the original data point but with a different gradient component. We solve this optimization problem with projected gradient descent, projecting \(\mathbf{p}\) onto the constraint set \(\mathcal{G}_{\mathbf{p}}\) after every step.
Footnote 2: In (5a), we could also weight the contributions of the different components (e.g., \(\alpha_{\text{C}}\mathcal{L}_{\text{C}}(\mathbf{p})+\alpha_{\text{H}}\mathcal{L}_{\text{H}}(\mathbf{p})\) with scalars \(\alpha_{\text{C}},\alpha_{\text{H}}>0\)) when the signals have some contribution in each of them.
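As an illustration, problem (5) can be tackled as sketched below; the closed-form expectations follow from the independence of the mask entries, the matrices `U_G`, `U_C`, `U_H` are assumed to hold the gradient, curl, and harmonic eigenvectors as columns, and the final rescaling is a simple stand-in for an exact \(\ell_{1}\)-ball projection.

```python
import torch

def expected_gap(x, p, U):
    """Closed form of E[||U^T x - U^T(x∘e)||^2] for independent e_i ~ Ber(p_i):
    squared bias of the mean plus a per-edge Bernoulli variance term."""
    bias = U.T @ (x * (1 - p))
    var = ((U ** 2).sum(dim=1) * x ** 2 * p * (1 - p)).sum()
    return (bias ** 2).sum() + var

def optimize_mask_probs(x, U_G, U_C, U_H, eps_p=0.5, lr=0.05, steps=200):
    """Projected gradient descent on (5a)-(5b); x is a float tensor."""
    p = torch.full_like(x, 0.5, requires_grad=True)
    opt = torch.optim.SGD([p], lr=lr)
    for _ in range(steps):
        loss = (-expected_gap(x, p, U_G)    # push the gradient embedding apart
                + expected_gap(x, p, U_C)   # keep the curl embedding similar
                + expected_gap(x, p, U_H))  # keep the harmonic embedding similar
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():               # project onto G_p (cf. (5b))
            p.clamp_(0.0, 1.0)
            if p.sum() > eps_p:
                p.mul_(eps_p / p.sum())     # rescaling in lieu of exact l1 projection
    return p.detach()
```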
### Hodge-Aware Debiasing
Problem (5) influences the embedding space by acting on the augmentation functions \(\mathcal{T}_{1\setminus 2}(\cdot)\) to generate better positive examples. To further improve the organization of the embedding space, we shall also act on the negative samples. This is known as a _debiasing_ technique and consists of reweighting the negative samples in the InfoNCE loss [24, 25, 26]. That is, we optimize w.r.t. the loss:
\[\mathcal{L}_{\text{weighted}}\,=-\sum_{\left(\mathbf{x}^{\prime}_{i},\mathbf{x}^{\prime}_{j}\right)\in\mathcal{P}}\log\left(\frac{e^{\text{sim}\left(\mathbf{z}^{\prime}_{i},\mathbf{z}^{\prime}_{j}\right)/\tau}}{\sum_{m=1}^{M}w(\mathbf{x}_{i},\mathbf{x}_{m})\,\,e^{\text{sim}\left(\mathbf{z}^{\prime}_{i},\mathbf{z}_{m}\right)/\tau}}\right) \tag{6}\]
where \(w(\mathbf{x}_{i},\mathbf{x}_{m})\) is the weighting term between the anchor \(\mathbf{x}_{i}\) and the negative example \(\mathbf{x}_{m}\). For Hodge-aware learning, this weight should reflect the spectral properties of the data; thus, we would like to push spectrally different samples further away from the anchor.
We first consider a weighted embedding similarity between two data points:
\[\mathcal{S}(\tilde{\mathbf{x}}_{i},\tilde{\mathbf{x}}_{m})=\gamma_{ \text{H}}\,\,\mathrm{CD}(\tilde{\mathbf{x}}_{\text{H},i},\tilde{\mathbf{x}}_{ \text{H},m})+\gamma_{\text{G}}\,\,\mathrm{CD}(\tilde{\mathbf{x}}_{\text{G},i}, \tilde{\mathbf{x}}_{\text{G},m}) \tag{7}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\gamma_{ \text{C}}\,\,\mathrm{CD}(\tilde{\mathbf{x}}_{\text{C},i},\tilde{\mathbf{x}}_{ \text{C},m})\]
where \(\mathrm{CD}(\tilde{\mathbf{x}}_{i},\tilde{\mathbf{x}}_{m})=1-\frac{\tilde{\mathbf{x}}_{i}^{\top}\tilde{\mathbf{x}}_{m}}{\|\tilde{\mathbf{x}}_{i}\|_{2}\|\tilde{\mathbf{x}}_{m}\|_{2}}\) is the cosine distance and the weights \(\gamma_{\text{H}},\gamma_{\text{G}},\gamma_{\text{C}}\geq 0\) are picked based on prior knowledge about the task or tuned as hyperparameters. Choosing the cosine distance leads to higher weights for spectrally more dissimilar negative examples. Then, we compute the weight as the normalized similarity over the \(M\) negative samples:
\[w(\mathbf{x}_{i},\mathbf{x}_{m})=\frac{\mathcal{S}(\tilde{\mathbf{x}}_{i},\tilde{\mathbf{x}}_{m})}{\frac{1}{M}\sum_{m^{\prime}=1}^{M}\mathcal{S}(\tilde{\mathbf{x}}_{i},\tilde{\mathbf{x}}_{m^{\prime}})}, \tag{8}\]
so that spectrally dissimilar negative samples are pushed further away from the anchor based on this dissimilarity score. This encourages an embedding space that is organized with respect to the Hodge decomposition.
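A sketch of computing these weights for one anchor follows; the unit-mean normalization over the \(M\) negatives is one plausible reading of (8), and all names are ours.

```python
import torch
import torch.nn.functional as F

def hodge_weights(x_i, X_neg, U_G, U_C, U_H, gamma=(1.0, 1.0, 1.0)):
    """Hodge-aware negative weights w(x_i, x_m) (cf. (7)-(8)).
    x_i: (N1,) anchor flow; X_neg: (M, N1) negative flows;
    U_*: (N1, k_*) gradient/curl/harmonic eigenvector bases."""
    def cos_dist(a, B):                    # CD(a, each row of B)
        return 1.0 - F.cosine_similarity(a.unsqueeze(0), B, dim=1)
    g_H, g_G, g_C = gamma
    S = (g_H * cos_dist(U_H.T @ x_i, X_neg @ U_H)
         + g_G * cos_dist(U_G.T @ x_i, X_neg @ U_G)
         + g_C * cos_dist(U_C.T @ x_i, X_neg @ U_C))   # similarity score (7)
    return S / S.mean()                    # normalize over the M negatives (8)
```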
## 5 Numerical Results
We corroborate the proposed approach and compare it with supervised alternatives on two edge flow classification tasks: i) a synthetic task considering trajectories on a map; and ii) a real-data case that contains ocean drifters moving around the Madagascar island. Because of the holes in the SC representations, the harmonic embedding captures important information for solving the task. Due to the limited space, we refer the reader to [27, Sec. 5] for more details.
**Setup.** For the trajectory dataset, we generate 200 training, 100 validation, and 100 test data points, while there are 160 training and 40 test data points for the ocean drifters. We train the unsupervised simplicial contrastive learner (SCL) on all available unlabeled data points and fit a linear support vector machine (SVM) on the obtained embeddings. For the ocean drifters, we use a 10-fold cross-validation on the training set to estimate the penalty parameter for the SVM. We report the average accuracy over 16 data splits. We optimize the network with stochastic gradient descent and grid search the learning rate and weight decay in the interval \([10^{-5},1]\) in decimal steps. Furthermore, we select the edge flow drop probabilities \(\mathbf{p}\) and perturbation budget \(\epsilon_{\mathbf{p}}\) from \([0.1,0.7]\). All models are trained for 200 epochs with a batch size of 100. For the encoder we follow the setting in [12] (which is supervised therein) and use a Tanh-activation and a hidden layer of size 64. We tune the number of layers and the convolutional orders \(L_{1}=L_{2}\) in \([1,2,3]\). We compare the proposed approach with a fully-supervised SCNN and conduct an extensive analysis to understand the role of the different components.
**Results.** Table 1 depicts the overall performance on the downstream tasks. The spectral simplicial contrastive learner (SSCL\({}_{\text{spec}}\)) trained with reweighted negative samples and spectrally optimized probabilities achieves the best downstream accuracy on both datasets. This shows the ability of the proposed approach to effectively encode more relevant Hodge-related information into the embeddings, facilitating the subsequent linear learner. Fig. 2 further reinforces this aspect by showing the embedding distance between the anchor and two different augmentation techniques (random edge drop and proposed). The proposed approach generates more similar harmonic embeddings, which is key to the obtained results for the task. In Fig. 3, we show the proposed approach consistently achieves a superior performance independent of the augmentation quality.
Notably, even for models trained without a reweighted loss, the incorporation of spectrally optimized augmentations (SCL\({}_{\text{spec}}\)) improves the accuracy over uniform probabilities (SCL). This substantiates the importance of the spectral augmentations as a standalone feature. Furthermore, to evaluate the impact of the encoder, we tested a learner that uses only lower Laplacian encoding (SCL\({}_{\text{low}}\)), omitting triangle relationships. Compared to its simplicial counterpart SCL under identical conditions, SCL\({}_{\text{low}}\) manifests a noticeable decrease in performance. This demonstrates that the structural advantages of simplicial networks for processing flow data transfer to the contrastive learning setting.
## 6 Conclusion
We show that a contrastive learning framework, when coupled with a simplicial neural network, is effective for generating representations for edge flow data that contain Hodge-related information. Related to this, we demonstrated that positive examples with specific spectral properties can be generated by casting the task as an optimization problem on the underlying probabilities of a dropout augmentation. Once these probabilities are optimized, they can be used to generate examples with desired spectral characteristics. We also introduce Hodge-related information into the problem by reweighting the negative examples in the loss based on their spectral difference to the anchor. This pushes spectrally very different examples further apart and results in an embedding space that takes the relevant Hodge information into account. Empirical results demonstrate that these optimized embeddings can be used to significantly outperform a fully-supervised model on two edge flow classification tasks. For future work, exploring other types of data augmentation methods and conducting experiments with simplicial complexes of varying dimensions remain key areas.
Table 1: Test accuracies for the Trajectory and Ocean Drifter datasets. SCL denotes models trained with the standard InfoNCE loss (cf. (4)), while SSCL models are trained with spectrally reweighted negatives (cf. (6)). The subscript spec denotes that the augmentation probabilities are spectrally optimized.

| Model | Trajectory Task | Ocean Drifters |
| :-- | :--: | :--: |
| SSCL\({}_{\text{spec}}\) (ours) | \(97.9\pm 0.3\) | \(90.3\pm 1.4\) |
| SCNN (supervised) | \(95.2\pm 0.5\) | \(78.5\pm 1.1\) |
| SSCL | \(96.8\pm 0.4\) | \(89.1\pm 1.0\) |
| SCL\({}_{\text{spec}}\) | \(98.2\pm 0.4\) | \(83.1\pm 1.1\) |
| SCL | \(96.1\pm 0.6\) | \(81.6\pm 1.6\) |
| SCL\({}_{\text{low}}\) | \(91.0\pm 0.2\) | \(77.1\pm 1.2\) |
Figure 3: Performance comparison between a model trained with the standard loss (SCL) (cf. (4)) vs. a model trained with a spectrally reweighted loss (SSCL) (cf. (6)) for an edge drop augmentation. The SSCL outperforms the SCL irrespective of the augmentation quality.
Figure 2: Embedding distance for augmentations sampled with uniform probabilities (blue) and with the proposed spectrally optimized ones (orange). In the harmonic embedding, more probability mass lies over smaller differences for the distribution associated with the spectral edge drop, so we are more likely to generate samples with similar harmonic components than with the standard method. |
2309.17163 | Calibrating the effective magnitudes of type Ia supernovae with a
model-independent method | This research explores the correlation between the absolute magnitude and the
redshift of Type Ia supernovae (SNe Ia) with a model-independent approach. The
Pantheon sample of SNe Ia and strong gravitational lensing systems (SGLS) are
used. With the cosmic distance-duality relation (CDDR), the evolution parameter
of the magnitude, the light curve parameters of SNe Ia, and the parameters of
the SGLS geometric model are constrained simultaneously. Considering the
consistency of the redshifts, we selected a subsample of SNe Ia in which the
redshift of each SNe Ia is close to the corresponding redshift of the SGLS
sample. Two parametric models are used to describe this evolution, which can be
written as $\delta_M=\varepsilon z$ and $\delta_M=\varepsilon\log(1+z)$,
respectively. Our analysis reveals that $\varepsilon=-0.036^{+0.357}_{-0.339}$
in the first parametric model and $\varepsilon=-0.014^{+0.588}_{-0.630}$ in the
second model, indicating that no significant evolution ($\varepsilon=0$) is
supported at the 1$\sigma$ confidence level in this study. These results
represent a significant advancement in our understanding of the intrinsic
properties of SNe Ia and provide important constraints for future SNe Ia study. | Jian Hu, Jian-Ping Hu, Zhongmu Li, Wenchang Zhao, Jing Chen | 2023-09-29T11:57:35Z | http://arxiv.org/abs/2309.17163v1 | # Calibrating the effective magnitudes of type Ia supernovae with a model-independent method
###### Abstract
This research explores the correlation between the absolute magnitude and the redshift of Type Ia supernovae (SNe Ia) with a model-independent approach. The Pantheon sample of SNe Ia and strong gravitational lensing systems (SGLS) are used. With the cosmic distance-duality relation (CDDR), the evolution parameter of the magnitude, the light curve parameters of SNe Ia, and the parameters of the SGLS geometric model are constrained simultaneously. Considering the consistency of the redshifts, we selected a subsample of SNe Ia in which the redshift of each SNe Ia is close to the corresponding redshift of the SGLS sample. Two parametric models are used to describe this evolution, which can be written as \(\delta_{M}=\varepsilon z\) and \(\delta_{M}=\varepsilon\log(1+z)\), respectively. Our analysis reveals that \(\varepsilon=-0.036^{+0.357}_{-0.339}\) in the first parametric model and \(\varepsilon=-0.014^{+0.588}_{-0.630}\) in the second model, indicating that no significant evolution (\(\varepsilon=0\)) is supported at the 1\(\sigma\) confidence level in this study. These results represent a significant advancement in our understanding of the intrinsic properties of SNe Ia and provide important constraints for future SNe Ia study.
## I Introduction
In 1998, the accelerating expansion of the universe was discovered by measuring the relation between the redshift and the distance of SNe Ia. As a powerful tool in cosmology, SNe Ia have a common origin and approximately equal luminosity, making them a potential standard candle. To use SNe Ia as standard candles, standardization is required through empirical procedures based on their light-curve shape and color [1; 2; 3; 4; 5]. Because the light curves of SNe Ia differ, they must be normalized (e.g., via the Phillips relation [6; 7]) before these SNe Ia can provide reliable luminosity distances for studying cosmology, for example, the equation of state of dark energy [8; 9; 10; 11], the cosmic curvature [12; 13; 14], the cosmic opacity [15; 16; 17; 18], the CDDR [19; 20; 21; 22; 23; 24; 25], and so on. However, these studies might neglect the evolution of the absolute magnitude of SNe Ia. Although most SNe Ia come from the same mechanism, the metallicity, mass, and other features of the SNe Ia progenitors may differ, which would affect the SNe Ia magnitudes. If we neglect the effect of progenitor properties on SNe Ia magnitudes in cosmology tests, it might cause an additional systematic error.
More than a decade ago, some researchers began to study this problem. Wright [27] found that an SNe Ia magnitude evolving as an exponential function of cosmic time in the Einstein-de Sitter cosmology model may mimic dark energy. [28] used a simple linear form (\(\delta_{M}=kz\), where \(k\) is the slope) to estimate the bias of the luminosity distances with the Gold sample of SNe Ia presented in Riess et al. [29], and found that this effect may cause significant bias only if the slope \(k\) is \(\sim|0.1|\).
More recently, several more researchers have examined this question. Kim et al. [30] used their constructed SNe sample from YONSEI (Yonsei Nearby Supernova Evolution Investigation) to explore the relation between the magnitude of SNe Ia and redshift. They found that SNe Ia at low redshift are about 0.07\(\sim\)0.08 magnitude fainter than those at high redshift after eliminating the effect of the galactic environment. Kang et al. [31] found a significant correlation between SNe standardization luminosity and stellar population age at a 99.5% confidence level.
In terms of testing methods, Linden [32] explored the effect of SNe Ia magnitude evolution on fitting cosmological parameters. They assumed that the peak magnitude of SNe Ia evolves with cosmic age as a linear function and found that SNe Ia at high redshift appear to be a few percent brighter than would be expected in a standard cosmology model. In similar work, Tutusaus et al. [33] found that the conclusion of cosmic accelerated expansion is not clear if the independence between the SNe Ia intrinsic luminosity and its redshift is not imposed. Therefore, a parametric method that is independent of any cosmological model is crucial for testing the evolution of the magnitude of SNe Ia. Evslin [34] developed a model-independent method that uses the ratio of two angular baryon acoustic oscillation (BAO) scales at redshifts 0.32 and 2.34 from BOSS BAO to calibrate the SNe Ia magnitude. The result shows a statistically insignificant downward shift of \(M_{B}(2.34)-M_{B}(0.32)=-0.08\pm 0.15\) for JLA SNe Ia at low \(z\) and Hubble Space Telescope SNe Ia at \(z>1.7\), and a shift of \(-0.24\pm 0.13\) for BOSS data with the best-fit Planck \(\Lambda\)CDM BAO expectations. Their results would have been more reliable if they had simultaneously fitted the light-curve parameters and the magnitude evolution, instead of relying on previously published light-curve fits that did not consider the evolution of the absolute magnitude. Zhao & Santos [35] used the electromagnetic (EM) counterparts of the gravitational-wave source GW170817 to calibrate the absolute magnitude of SNe Ia. Their method is cosmological-model independent, but only a few data sets are available for this purpose. Wen & Liao [36] used the \(D_{L}\) from gravitational wave signals and the ratio of \(D_{A}\) from the SGLS geometric model with a cosmology-model-independent method proposed by Liao et al. [37]. Although their method for constraining the absolute magnitude of SNe Ia is also model-independent, it does not parameterize \(M_{B}\) and only precisely limits its value.
In this article, we explore whether this effect is significant using an approach that is independent of cosmological models, which may avoid the above problems. The method joins SNe Ia with strong gravitational lensing systems and uses the strong lens model in place of a cosmological model to fit the SNe Ia light curve parameters. A strong lensing sample and a selected SNe Ia sample are taken. To avoid introducing additional parameters, the redshift of each SNe Ia is required to be close enough to that of the corresponding lensing component, so that the two can be regarded as having the same luminosity distance.
This paper is organized as follows. In the next section, we show the data and the method of selecting the data. In section 3, the fitting method and the numerical results are shown. Section 4 gives the conclusions.
## II Data
In this section, we will introduce the data used in this work. The data include the latest SNe Ia sample and the SGLS samples. We also present the technique we employ for matching data to acquire two different distance measures at identical redshift.
### The sample of SNe Ia
In this study, we employ data from the Pantheon sample and the fitting method from Scolnic et al. [37]. The Pan-STARRS1 (PS1) Medium Deep Survey, the Sloan Digital Sky Survey (SDSS), SNLS, and several low-z and Hubble Space Telescope samples compose the sample. This collection contains 1048 SNe Ia with redshifts between 0.01 and 2.3. In their work, they adopt a common distance modulus form \(\mu=5\log_{10}(D_{L}/\mathrm{Mpc})+25\), which may be expressed as follows [38]:
\[\mu=m_{B}^{\star}-M_{B}+\alpha\times x_{1}-\beta\times c+\Delta_{M}, \tag{1}\]
where \(m_{B}^{\star}\) refers to the measured peak magnitude in the B band rest frame, \(x_{1}\) denotes the light curve stretch parameter, and \(c\) denotes the light curve color parameter, which are usually different for different SNe Ia. \(M_{B}\) is the B-band fitting absolute magnitude of a fiducial SNe Ia with \(x_{1}=0\) and \(c=0\). \(\alpha\) and \(\beta\) are nuisance distance estimate parameters. \(\Delta_{M}\) is a distance correction depending on the mass of the SNe's host galaxy. In the research of Scolnic et al. [37], it was parameterized by
\[\Delta_{M}=\gamma\times\Big{[}1+e^{-(m-m_{\mathrm{step}})/\tau}\Big{]}^{-1}, \tag{2}\]
\(m\) is the observed value, indicating the \(\log_{10}\) mass of the host galaxy of the SNe Ia; \(\gamma\), \(m_{\mathrm{step}}\), and \(\tau\) are fitting parameters. To explore the evolution of the SNe Ia absolute magnitude, a small function \(\delta_{M}\) of redshift is introduced:
\[M_{B}=M_{B0}+\delta_{M}. \tag{3}\]
Here \(M_{B}\) is divided into two parts, where \(M_{B0}\) is the absolute magnitude of SNe Ia at \(z=0\), and \(\delta_{M}\) is part of the absolute magnitude that evolves with redshift. If \(\delta_{M}=0\), it means that \(M_{B}\) does not evolve with the redshift, so \(M_{B0}\) is \(M_{B}\).
In this work, we parameterize \(\delta_{M}(z)\) with a single parameter, such that its value stays zero when \(M_{B}\) does not evolve. Any significant evolution of the absolute magnitude might indicate the emergence of a new SNe mechanism. Since many successful cosmological tests assume no evolution of \(M_{B}\), we expect \(\delta_{M}(z)\) to stay close to zero. To capture any deviation from zero, we parameterize \(\delta_{M}(z)\) using two one-parameter representations that describe the possible redshift dependence of the evolution of the absolute magnitude:
\[\delta_{M}=\varepsilon z, \tag{4}\]
and
\[\delta_{M}=\varepsilon\log(1+z). \tag{5}\]
The two parameterizations behave differently for low- and high-redshift samples: the \(\varepsilon\) parameter in equation (4) is more sensitive to changes in redshift, whereas equation (5) is more suitable for fitting samples that contain very high redshifts. Both parameterizations are independent of the cosmological model. If \(\varepsilon=0\), the absolute magnitude of SNe Ia does not evolve with redshift; if \(\varepsilon<0\), SNe Ia at higher redshifts are brighter; and if \(\varepsilon>0\), SNe Ia at higher redshifts are fainter. We would expect the result to be \(\varepsilon<0\) or close to zero.
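For concreteness, a small sketch of the distance modulus of equation (1) with the evolving absolute magnitude of equations (3)-(5) is given below; the base of the logarithm in equation (5) is taken as 10 here, and all names are ours.

```python
import numpy as np

def delta_M(z, eps, model="linear"):
    """Magnitude evolution term: eps*z (eq. 4) or eps*log10(1+z) (eq. 5)."""
    return eps * z if model == "linear" else eps * np.log10(1.0 + z)

def distance_modulus(mB_star, x1, c, z, MB0, eps, alpha, beta, dM_host, model="linear"):
    """mu = m*_B - M_B + alpha*x1 - beta*c + Delta_M with M_B = M_B0 + delta_M(z)
    (cf. eqs. 1 and 3); dM_host is the host-mass correction of eq. 2."""
    return mB_star - (MB0 + delta_M(z, eps, model)) + alpha * x1 - beta * c + dM_host
```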
### Gravitational Lensing
Our SGLS sample is made up of 205 SGLS from Amante et al. [39], which includes the survey projects SLACS, CASTLES, BELLS, and LSD, among others. The theory of general relativity predicts several astronomical phenomena, and gravitational lensing is one of its most successful predictions. Gravitational lensing is the deflection of light emitted by an astronomical body when it passes close to a massive celestial body as a result of gravity. When the foreground lens is a galaxy or galaxy cluster, this effect is powerful enough to generate multiple images, arcs, and even Einstein rings. The SGLS effect is gaining importance as a potent tool for investigating both the background cosmology and the nature of the galaxies acting as lenses. Simultaneously measuring the Einstein ring radius and the stellar core velocity dispersion \(\sigma_{0}\) can yield the angular diameter distance ratio (\(R^{A}\)):
\[R^{A}(z_{l},z_{s})=\frac{D_{ls}^{A}}{D_{s}^{A}}, \tag{6}\]
\(z_{l}\) and \(z_{s}\) in equation (6) denote the redshifts of the lens and the source, respectively. Equation (6) defines the angular diameter distance ratio, where \(D_{ls}^{A}\) and \(D_{s}^{A}\) are, respectively, the angular diameter distances between the lens and the source and between the observer and the source. Considering a Singular Isothermal Sphere (SIS) lens model, the distance ratio \(R^{A}\) is associated with observable values as follows [40]:
\[R^{A}_{SGLS}(z_{l},z_{s})=\frac{c^{2}\theta_{E}}{4\pi\sigma_{SIS}^{2}}. \tag{7}\]
In the above equation, \(c\) represents the speed of light, \(\theta_{E}\) represents the Einstein radius, \(\sigma_{SIS}\) represents the velocity dispersion generated by the lens's mass distribution, and the subscript "SGLS" indicates that the value of the expression is provided by the data of SGLS. In general, we must remark that \(\sigma_{SIS}\) is not precisely equal to the measured star velocity dispersion \(\sigma_{0}\)[41]. To account for this disparity, the phenomenological free parameter \(f_{e}\) is created and defined as \(\sigma_{SIS}=f_{e}\sigma_{0}\), where \(0.8^{0.5}<f_{e}<1.2^{0.5}\)[42].
For the more general case of deviation from the standard SIS model, where the slope of the density profile of a single galaxy deviates from \(2\) (i.e., from the standard SIS model), one can assume that its density distribution is a spherically symmetric power law of the type \(\rho\sim r^{-\gamma}\); the Einstein radius can then be written as [43]
\[\theta_{E}=4\pi\frac{D_{A_{ls}}}{D_{A_{s}}}\frac{\sigma_{ap}^{2}}{c^{2}}\left( \frac{\theta_{E}}{\theta_{ap}}\right)^{2-\gamma}f(\gamma), \tag{8}\]
where \(\sigma_{ap}\) denotes the stellar velocity dispersion within an aperture of angular size \(\theta_{ap}\), and \(f(\gamma)\) is written as
\[f(\gamma)=-\frac{(5-2\gamma)(1-\gamma)}{\sqrt{\pi}(3-\gamma)}\frac{\Gamma( \gamma-1)}{\Gamma(\gamma-3/2)}\left[\frac{\Gamma(\gamma/2-1/2)}{\Gamma(\gamma /2)}\right]^{2}. \tag{9}\]
According to previous work on the \(\gamma\) index of the SGLS, the deviation of this index from \(2\) is usually small. For example, [44] used the (Union2.1) SNe Ia sample and the then-latest 167 GRBs as luminosity distance indicators to calibrate the SGLS \(\gamma\) index, written in a form containing two parameters \(\gamma_{0}\) and \(\gamma_{1}\); they divided the samples in various ways, and all subsamples as well as the total sample support the point (\(\gamma_{0}\), \(\gamma_{1}\)) = (1, 0) within the \(1\sigma\) confidence region. [45] used the SNe Ia sample of JLA, the CMB sample, the BAO sample, and the H(z) sample to constrain the \(\gamma\) index of SGLS and obtained similar conclusions. So we simply set \(\gamma=2\) here. The star velocity dispersion \(\sigma_{0}\) can be expressed as \(\sigma_{ap}(\theta_{eff}/(2\theta_{ap}))^{-0.04}\)[46; 47], which has already been dealt with in the work of Amante et al. [39]. Therefore equation (8) becomes equation (7).
In a model of flat cosmology, the comoving distance(\((D^{c}(z)=(1+z)D^{A})\)) between the lens and the source is expressed as \(D_{ls}^{c}=D_{s}^{c}-D_{l}^{c}\). The ratio of the comoving angular diameter distance may thus be rewritten as follows:
\[R^{A}(z_{l},z_{s})=1-\frac{(1+z_{l})D_{l}^{A}}{(1+z_{s})D_{s}^{A}}=1-\frac{D_{ l}^{c}}{D_{s}^{c}}=1-\frac{d_{l}^{c}}{d_{s}^{c}}, \tag{10}\]
where \(d_{l}^{c}\) and \(d_{s}^{c}\) are the dimensionless distances \(d=\frac{H_{0}}{c}D^{c}\).
### Pairing the data of SNe Ia and SGLS

To fit these parameters, the redshift of an SNe Ia must be near that of the source or the lens of an SGLS. In other words, each SGLS requires two SNe Ia, one paired with the source and one with the lens. To make full use of the data sample, one must couple the subsamples well. We couple subsamples using a distance-deviation consistency technique, which outperforms the usual method that sets a fixed \(\Delta z=\)constant, for example, \(\Delta z=0.005\)[20; 25; 48; 49], \(\Delta z=0.006\)[50], and \(\Delta z=0.003\)[51].
In this study, the redshift difference between paired subsamples is not fixed; it decreases with redshift to ensure that the distance deviation of the sources remains constant. Following the approach of [52], the relationship between \(z\) and the dimensionless distance for a cosmological model may be expressed as follows:
\[\Delta d_{c}^{model}(z)=d_{c}^{model}(z+\Delta z^{model})-d_{c}^{model}(z), \tag{11}\]
The exact expression of equation (11) varies with the model; here the flat \(\Lambda\)CDM model with \(\Omega_{m}=0.31\)[53] is used. Setting \(\Delta d_{c}/d_{c}\) equal to 5% and combining with equation (11) yields \(\Delta z(z)\):
\[\begin{split}\int_{0}^{z+\Delta z}\frac{dz^{\prime}}{\sqrt{ \Omega_{m}(1+z^{\prime})^{3}+1-\Omega_{m}}}=\\ (1+\Delta d_{c}/d_{c})\int_{0}^{z}\frac{dz^{\prime}}{\sqrt{\Omega_ {m}(1+z^{\prime})^{3}+1-\Omega_{m}}},\end{split} \tag{12}\]
which may be used to rule out excessively high selection uncertainties. By differentiating equation (12), we can obtain the ordinary differential equation for \(\Delta z\) with respect to \(z\), which can be written as:
\[\frac{d\Delta z(z)}{dz}=1.05\frac{\sqrt{0.31(1+z+\Delta z(z))^{3}+0.69}}{\sqrt {0.31(1+z)^{3}+0.69}}-1, \tag{13}\]
with an initial value \(\Delta z(0)=0\). The numerical solution of equation (13) is plotted as the solid green line in Figure (1). For comparison, we present the \(\Delta z=0.005\) line, represented by the solid orange line in Figure (1). As the selection method that meets the requirement is not unique, we compute the sum of the squared redshift deviations of the paired samples to choose the best pairing scheme, which must possess a minimum value of \(\sum_{i=1}^{2n}\left(\Delta z_{i}^{k}\right)^{2}\), where \(n\) denotes the number of pairs and \(k\) indexes the alternative pairing schemes. Please note that the pairing of data utilized here relies on a specific cosmological model. However, this model only sets a maximum fixed deviation of \(d\); it is not taken into consideration in the subsequent calculations and is exclusively implemented to eliminate pairings with a large \(\Delta d\).
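As an illustration, equation (13) can be integrated numerically as sketched below; the function names are ours.

```python
import numpy as np
from scipy.integrate import solve_ivp

def pairing_tolerance(z_max=2.5, om=0.31, frac=0.05):
    """Solve eq. (13) for Delta z(z) with Delta z(0) = 0, i.e. the redshift
    tolerance keeping the comoving-distance deviation at frac = Delta d_c/d_c."""
    E = lambda z: np.sqrt(om * (1 + z) ** 3 + (1 - om))   # flat LCDM H(z)/H0
    rhs = lambda z, dz: [(1 + frac) * E(z + dz[0]) / E(z) - 1]
    sol = solve_ivp(rhs, (0.0, z_max), [0.0], dense_output=True, rtol=1e-8)
    return sol.sol   # callable giving Delta z at any z in [0, z_max]

# e.g., pairing_tolerance()(1.0) gives the tolerance near z = 1
```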
By using this route, not only can more observations at high redshifts be kept, but more conforming data sets can be matched as well, for example, the data in the upper right corner of Figure (1). Using our technique with a fixed \(\Delta d/d=5\%\), the number of matched data sets increases to 68, an increase of 13.3% in the utilization of the original data compared to the work of Liao et al. [36], which utilized a fixed \(\Delta z=0.005\) and matched 60 data sets. We compiled a total of 120 data points, the maximum redshift of which is \(z=2.16\), with fixed \(\Delta d/d=5\%\).
Setting \(\Delta d/d=5\%\) reflects the following considerations. First, the lowest redshift in our selected SNe Ia is roughly \(z=0.1\), and a selection using a method similar to Liao et al. [36] also starts at \(z=0.1\), where the \(\Delta d/d\) calculated using the \(\Lambda\)CDM model is about 5%. For distance-deviation consistency, we therefore set the upper limit of the selection uncertainty to 5% at all redshifts in the sample; otherwise, at all redshifts, our selection uncertainty would necessarily be larger than the selection uncertainty caused by the previous method. If we fail to match an SGLS within an uncertainty of less than or equal to 5%, we discard that SGLS datum. Allowing the selection error to grow up to 5% lets us match as much data as possible; for this reason, we choose a \(\Delta d/d\) deviation of 5% as a compromise.
We made the following considerations regarding accuracy. The previous selection method at high redshifts, such as at \(z=1\), resulted in \(\Delta d/d\ll 1\%\), indicating a small selection uncertainty at high redshifts but a relatively large selection uncertainty at low redshifts. The uncertainty induced by our technique is constant and corresponds to the uncertainty of the prior method at low redshifts. Our selection uncertainty is significantly higher than that of previous research at high redshifts, but this does not necessarily mean that our results are less precise. Their approach failed to account for this component of the error in a model-independent way, since the selection is performed in \(z\) while the implicated intermediate variable is \(d\), and linking \(d\) to \(z\) requires a specific cosmological model. Therefore, their approach implicitly disregards this selection uncertainty, which is contentious. Their method makes it simpler to match more data at low redshifts, despite this component having a rather substantial selection uncertainty, while the high-redshift data are difficult to match despite the minimal selection uncertainty there. Their method neglects the large deviation and discards the data at high redshifts, while our method resolves this problem.
## III Simulations and Results
### Simulations
In this study, we employed a model-independent technique to investigate the evolution parameter of the SNe Ia magnitude. In this technique, the geometric model of strong lensing replaces the cosmological model in fitting the SNe Ia light-curve parameters of equation (1). The light-curve parameters are then connected with the strong lensing model's parameters. To investigate the potential evolution of the magnitude of SNe Ia, we use the two parameterized forms in equation (4) and equation (5). Because the redshifts of some SGLS and SNe Ia are quite large, a parameterization that is nonlinear in redshift is also adopted.
The two distances in equation (10) should be computed using a model-independent approach. To obtain these distances, the CDDR and the luminosity distance of SNe Ia must be taken into account. The CDDR, also known as the Etherington relation, is a crucial concept in observational cosmology. The following equation connects the luminosity distance (\(D_{L}\)) to the angular diameter distance (\(D_{A}\))
\[D_{L}=D_{A}(1+z)^{2}. \tag{14}\]
It is valid for all Riemannian geometry-based cosmological theories. The theoretical foundations of this relationship are the conservation of the photon number and the photons' motion along null geodesics in Riemannian space-time [54]. This relationship plays a crucial role in modern cosmology, particularly in galaxy observations [55; 56], gravitational lensing [54], and cosmic microwave background (CMB) radiation observations [57].
To obtain the values of the parameters, we must construct a \(\chi^{2}\) analysis for the parameters of the assumed parameterizations. When equation (10) and equation (14) are combined, the angular diameter distance ratio may alternatively be expressed as
\[R^{A}_{SNe}(z_{l},z_{s})=1-\frac{(1+z_{s})D_{Ll}}{(1+z_{l})D_{Ls}}, \tag{15}\]
where \(D_{Ll}\) and \(D_{Ls}\) are the luminosity distances of the lens and the source of the SGLS, respectively, and the subscript "SNe" indicates that the value of the expression is provided by the data of SNe Ia. They can be expressed in a unified form:
\[D_{L}=10^{0.2(m_{B}^{\star}-M_{B}+\alpha\times X_{1}-\beta\times C+\Delta_{M})-5}. \tag{16}\]
Combining equation (1) and equation (15) with \(\mu=5\log_{10}(D_{L}/\mathrm{Mpc})+25\) yields the form of \(\chi^{2}\):
\[\chi^{2}=\sum\frac{(R^{A}_{SNe}(z_{l},z_{s})-R^{A}_{SGLS}(z_{l},z_{s}))^{2}}{ \sigma^{2}_{R^{A}_{SNe}}+\sigma^{2}_{R^{A}_{SNe,sel}}+\sigma^{2}_{R^{A}_{SGLS}}}, \tag{17}\]
\(\sigma_{R^{A}_{SNe}}\) is the uncertainty of \(R^{A}_{SNe}(z_{l},z_{s})\) and may be computed as follows:
\[\sigma_{R^{A}_{SNe}}=(1-R^{A}_{SNe}(z_{l},z_{s}))\sqrt{\sigma^{2}_{D_{Ll}}/D^{ 2}_{Ll}+\sigma^{2}_{D_{Ls}}/D^{2}_{Ls}}, \tag{18}\]
\(\sigma_{D_{Ll}}\) and \(\sigma_{D_{Ls}}\) are the respective uncertainties of \(D_{Ll}\) and \(D_{Ls}\). According to equation (16), they can also be expressed in a unified form:
\[\sigma_{D_{L}}=\frac{\ln(10)}{5}D_{L}\Delta\mu. \tag{19}\]
\(\sigma_{R^{A}_{SNe,sel}}\) is the selection uncertainty of \(R^{A}_{SNe}(z_{l},z_{s})\). According to equation (10), it can be calculated by
\[\sigma_{R^{A}_{SNe,sel}}=(1-R^{A}_{SNe}(z_{l},z_{s}))\sqrt{(\Delta d_{l}/d_{l} )^{2}+(\Delta d_{s}/d_{s})^{2}}, \tag{20}\]
where both \(\Delta d_{l}/d_{l}\) and \(\Delta d_{s}/d_{s}\) equal \(5\%\). \(\sigma_{R^{A}_{SGLS}}\) represents the uncertainty of \(R^{A}_{SGLS}(z_{l},z_{s})\) and may be computed as follows:
\[\sigma_{R^{A}_{SGLS}}=R^{A}_{SGLS}\sqrt{4(\delta\sigma_{0})^{2}+(\delta\theta_{E})^{2}}, \tag{21}\]
where \(\delta\theta_{E}\) and \(\delta\sigma_{0}\) represent the fractional uncertainties of \(\theta_{E}\) and \(\sigma_{0}\) in the sample. Following the SLACS methodology, we fixed the fractional uncertainty of the Einstein radius for all SGLS at 5%.
The posterior probability density function (PDF) of the parameters is computed using Bayesian statistical methods and the Markov chain Monte Carlo (MCMC) approach to determine the parameters that best fit the \(\chi^{2}\) and the confidence regions of 1\(\sigma\) and 2\(\sigma\). The likelihood function is \(L\sim\exp(-\chi^{2}/2)\). The parameters are optimized using the Python package emcee [58]. The priors of these parameters are adopted as uniform distributions: \(P(\alpha)=U[-0.2,0.2]\), \(P(\beta)=U[2,6]\), \(P(f_{e})=U[0.5,1.5]\), \(P(\varepsilon)=U[-1,1]\), \(P(\gamma)=U[0,0.3]\), \(P(M_{step})=U[5,15]\), \(P(\tau)=U[0.001,0.3]\). These prior intervals cover the 1\(\sigma\) ranges of the parameters constrained by [37] with a flat \(\Lambda\)CDM model.
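A sketch of this sampling setup is given below; `chi2` is a placeholder for the full expression in equation (17) built from the matched pairs, and the walker settings are illustrative.

```python
import numpy as np
import emcee

# Flat priors from the text: alpha, beta, f_e, eps, gamma, m_step, tau
BOUNDS = [(-0.2, 0.2), (2, 6), (0.5, 1.5), (-1, 1), (0, 0.3), (5, 15), (0.001, 0.3)]

def chi2(theta, data):
    # Placeholder for eq. (17): residuals between R^A_SNe (eqs. 15-16) and
    # R^A_SGLS (eq. 7) over all matched pairs, divided by the total variance.
    r_sne, r_sgls, var = data
    return np.sum((r_sne(theta) - r_sgls) ** 2 / var)

def log_prob(theta, data):
    """log posterior ~ -chi^2/2 under the uniform priors."""
    if any(not lo <= t <= hi for t, (lo, hi) in zip(theta, BOUNDS)):
        return -np.inf
    return -0.5 * chi2(theta, data)

# Hypothetical usage:
# sampler = emcee.EnsembleSampler(nwalkers=32, ndim=7, log_prob_fn=log_prob, args=(data,))
# sampler.run_mcmc(initial_state, 5000)
```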
### Results
For the data used in this study, we utilize the two parameterizations to build the triangle plots of the parameter constraints from the \(\chi^{2}\), with 1\(\sigma\) and 2\(\sigma\) confidence regions, in figure (2). Table (1) displays the parameters together with their respective 1\(\sigma\) confidence regions. We conclude that the optimum values of the parameter \(\varepsilon\), together with their 1\(\sigma\) uncertainties, are as follows:
\[\varepsilon=-0.036^{+0.357}_{-0.339}, \tag{22}\]
for the parametric model \(\delta_{M}=\varepsilon z\), and
\[\varepsilon=-0.014^{+0.588}_{-0.630}, \tag{23}\]
for the parametric model \(\delta_{M}=\varepsilon\log(1+z)\).
In the results of these two fits, we find that the best-fitted values and the 1\(\sigma\) confidence regions of the nuisance parameters \(\alpha\), \(\beta\), and \(\gamma\) differ significantly from previous results [37; 11], mainly because, on the one hand, a model with SGLS was used instead of the standard cosmological model and, on the other hand, the sample of SNe Ia we used is a subsample of the previously published sample, containing only 240 SNe Ia. However, the best-fitted values and 1\(\sigma\) confidence regions of the nuisance parameters \(m_{step}\) and \(\tau\) are very close to previous work [37]. The best-fitted value and 1\(\sigma\) confidence region of the phenomenological free parameter \(f_{e}\) are also very close to previous work [52]. Thus, the parameters \(m_{step}\), \(\tau\), and \(f_{e}\) are in line with expectations.
We have found no statistically significant evidence for the evolution of the absolute magnitudes of SNe Ia (\(\varepsilon=0\) lies within the 1\(\sigma\) confidence regions). The best value of \(\varepsilon\) is negative, but far from a slope of \(\sim|0.1|\). This suggests that the hypothesis that the absolute magnitudes of SNe Ia do not evolve with redshift is still valid [27]. The two parameterized models for \(\varepsilon\), equations (4) and (5), show very different 1\(\sigma\) confidence regions, see figure (2): the simple linear model gives more centralized 1\(\sigma\) confidence regions and tighter constraints on \(\varepsilon\); but, as the analysis in II.1 suggests, the second parameterized model is better suited to data containing high redshifts. We can compare the advantages and disadvantages of the two parameterized models by using Akaike weights and BIC selection weights [60; 59], which can be uniformly written in the following form [23]
\[P(\alpha)=\frac{\exp{(-XIC_{\alpha}/2)}}{\exp{(-XIC_{other}/2)}+\exp{(-XIC_{ \alpha}/2)}}, \tag{24}\]
where XIC represents the information criterion. Here, we use AIC and BIC [61], they are defined as
\[AIC=\chi^{2}+2k, \tag{25}\]
and
\[BIC=\chi^{2}+k\ln(n). \tag{26}\]
\(k\) is the number of parameters for the two models, \(n\) is the number of data points in section (II.3), and \(\chi^{2}\) is the value in equation (17) with the best-fitted parameters. In this work, \(k\) and \(n\) are the same in both cases, so we only need to consider \(\chi^{2}\). Substituting the numerical values, the relative probabilities for equations (4) and (5) are 48.4% versus 51.6%, which indicates that the second parameterized model is slightly better than the first one.
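As a quick check of this arithmetic, the sketch below evaluates equation (24); with equal \(k\) and \(n\), the information criteria reduce to the best-fit \(\chi^{2}\) values.

```python
import numpy as np

def relative_probability(xic_a, xic_b):
    """P(a) from eq. (24); symmetric in the two models."""
    return np.exp(-xic_a / 2) / (np.exp(-xic_a / 2) + np.exp(-xic_b / 2))

# A chi^2 difference of 2*ln(0.516/0.484) ≈ 0.13 in favor of the second
# parameterization reproduces the reported 48.4% vs 51.6% split.
```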
## IV Conclusions
In this paper, we use the Pantheon SNe Ia sample and the latest SGLS sample to explore the possible evolution of the magnitude with a model-independent method. In this method, the cosmological model is replaced by the SGLS geometric model. To acquire the pure evolution law of the absolute magnitude, the evolution parameter, the model of SGLS, and the SNe Ia light curve are fitted simultaneously with the paired SGLS and SNe Ia samples. The fitting results show that no evidence of evolution is found at the 1\(\sigma\) confidence level with these samples. This means that SNe Ia remain a powerful and credible tool for studying cosmology with past research methodology. However, the best value of the evolution parameter \(\varepsilon\) is a small negative value, which might be related to the sample we took. Exploring this issue in the future may require the use of more and larger samples. What is more, we find that different evolution-parameter models also affect the fitting results, with the parameterization whose evolution with redshift is less pronounced outperforming the simple linear model; this may mean that even if fits to larger future data samples find that \(M_{B}\) evolves with redshift, this evolution will be extremely insignificant.
###### Acknowledgements.
We thank the anonymous referee for constructive comments. This work is supported by Yunnan Youth Basic Research Projects 202001AU070013, Jiangsu Funding Program for Excellent Postdoctoral Talent (20220ZB59) and Project funded by China Postdoctoral Science Foundation
(2022M721561), National Natural Science Foundation of China (No. 11863002), Yunnan Academician Workstation of Wang Jingxiu (No. 202005AF150025), and Sino-German Cooperation Project (No. GZ 1284).
|
2309.08043 | On Prediction Feature Assignment in the Heckman Selection Model | Under missing-not-at-random (MNAR) sample selection bias, the performance of
a prediction model is often degraded. This paper focuses on one classic
instance of MNAR sample selection bias where a subset of samples have
non-randomly missing outcomes. The Heckman selection model and its variants
have commonly been used to handle this type of sample selection bias. The
Heckman model uses two separate equations to model the prediction and selection
of samples, where the selection features include all prediction features. When
using the Heckman model, the prediction features must be properly chosen from
the set of selection features. However, choosing the proper prediction features
is a challenging task for the Heckman model. This is especially the case when
the number of selection features is large. Existing approaches that use the
Heckman model often provide a manually chosen set of prediction features. In
this paper, we propose Heckman-FA as a novel data-driven framework for
obtaining prediction features for the Heckman model. Heckman-FA first trains an
assignment function that determines whether or not a selection feature is
assigned as a prediction feature. Using the parameters of the trained function,
the framework extracts a suitable set of prediction features based on the
goodness-of-fit of the prediction model given the chosen prediction features
and the correlation between noise terms of the prediction and selection
equations. Experimental results on real-world datasets show that Heckman-FA
produces a robust regression model under MNAR sample selection bias. | Huy Mai, Xintao Wu | 2023-09-14T22:10:09Z | http://arxiv.org/abs/2309.08043v2 | # On Prediction Feature Assignment in the Heckman Selection Model
###### Abstract
Under missing-not-at-random (MNAR) sample selection bias, the performance of a prediction model is often degraded. This paper focuses on one classic instance of MNAR sample selection bias where a subset of samples have non-randomly missing outcomes. The Heckman selection model and its variants have commonly been used to handle this type of sample selection bias. The Heckman model uses two separate equations to model the prediction and selection of samples, where the selection features include all prediction features. When using the Heckman model, the prediction features must be properly chosen from the set of selection features. However, choosing the proper prediction features is a challenging task for the Heckman model. This is especially the case when the number of selection features is large. Existing approaches that use the Heckman model often provide a manually chosen set of prediction features. In this paper, we propose Heckman-FA as a novel data-driven framework for obtaining prediction features for the Heckman model. Heckman-FA first trains an assignment function that determines whether or not a selection feature is assigned as a prediction feature. Using the parameters of the trained function, the framework extracts a suitable set of prediction features based on the goodness-of-fit of the prediction model given the chosen prediction features and the correlation between noise terms of the prediction and selection equations. Experimental results on real-world datasets show that Heckman-FA produces a robust regression model under MNAR sample selection bias.
sample selection bias, missing-not-at-random, Heckman selection model, robust linear regression
## I Introduction
Regression is sensitive to dataset shift [16], where the training and testing sets come from different distributions. Dataset shift can arise due to sample selection bias, where a sample is non-uniformly chosen from a population for training a model. This type of bias can cause a subset of training samples to be partially observed, where any of the covariates or the outcome of a sample is missing, or completely unobserved. Consequently, the performance of a model trained on this biased set will be degraded when the model is deployed. Most approaches, such as [3], [13], and [19], handle the missing-at-random (MAR) setting, where the selection of training samples is assumed to be independent of the outcome given the observed variables in the training set. However, these approaches cannot properly account for the missing-not-at-random (MNAR) setting, where the selection of training samples is assumed to not be independent of the outcome given the observed variables in the training set.
In this work, we focus on the problem of MNAR sample selection bias on the outcome. We provide an example of this bias scenario in Figure 1 by considering the relationship between SAT score (feature) and the amount of scholarship offered by a certain university (outcome), where some students have missing values of scholarship. There could be some hidden mechanism behind the missing outcomes. For instance, amounts of scholarship offered to students who have not declared their majors are not collected. When the undeclared students are omitted from training, a biased model is produced that could be very different from the ground truth model that would have been trained had the scholarship amounts of all students been collected. However, the feature information of these students is available during training. In order for the trained model to be close to the ground truth model, we leverage the observed feature information of those records with missing outcomes in the training process.
Fig. 1: Illustration of the effect of MNAR sample selection bias on the predictions of a linear model. Solid (dashed) line represents the regression line fitted on the biased (unbiased and fully observed) set.

The Heckman selection model [7] is a Nobel-Prize winning approach that addresses MNAR sample selection bias on the outcome. The method involves two equations: the prediction equation and the selection equation. The prediction equation describes the relationship between the covariates and the outcome of interest. The selection equation specifies the probability that a sample is selected in the training data. Estimation of the selection equation requires an exclusion restriction, where the selection equation includes one or more variables that are not included in the outcome equation. To handle MNAR sample selection bias on the outcome, the Heckman model considers the respective noise terms of the prediction and selection equations, which follow a bivariate normal distribution.
Although the presence of an exclusion restriction avoids multicollinearity for the prediction model [14], the process of identifying a valid exclusion restriction is often difficult in practice. This is first due to the lack of clear theoretical knowledge on which selection features should be included in the prediction model [6]. Moreover, using the Heckman selection model with an invalid exclusion restriction can lead to non-robust regression on the biased training set [23]. One way to address these challenges is to search through all combinations of the features that affect selection to find a suitable set of prediction features for the Heckman selection model. However, this search process becomes computationally expensive when we deal with a large number of selection features in real-world data.
### _Problem Formulation_
We provide a list of important symbols used throughout the paper in Table I. In our work, we generally use unbolded letters to denote scalars, bold lower-case letters to denote vectors, and bold upper-case letters to denote matrices. For accented letters, we use a hat to denote estimations, tilde to denote approximations, and underline to denote augmented vectors or matrices. As exceptions to these conventions, we use \(\mathbf{X}_{\cdot k}\) to denote the \(k\)th column of any matrix and \(\mathbf{\pi}\) to denote the probability matrix.
We let \(\mathcal{X}\) be the feature space and \(\mathcal{Y}\) be the continuous target attribute. We also denote \(\mathcal{D}_{tr}=\{\mathbf{t}_{i}\}_{i=1}^{n}\) as the training set of \(n\) samples that are originally sampled from the population to be modeled yet biased under MNAR sample selection bias. We define each sample \(\mathbf{t}_{i}\) as
\[\mathbf{t}_{i}=\begin{cases}(\mathbf{x}_{i},y_{i},s_{i}=1)&1\leq i\leq m\\ (\mathbf{x}_{i},s_{i}=0)&m+1\leq i\leq n\end{cases} \tag{1}\]
where the binary variable \(s_{i}\) indicates whether or not \(y_{i}\) is observed. We define \(\mathcal{D}_{s}\) as the set containing the first \(m\) training samples where each sample is fully observed and \(\mathcal{D}_{u}\) as the set that contains the remaining \(n-m\) training samples with unobserved outcome.
**Definition 1** (MNAR Sample Selection): _Missing-not-at-random (MNAR) sample selection occurs for a sample \(\mathbf{t}_{i}\) if \(s_{i}\) is not independent of \(y_{i}\) given \(\mathbf{x}_{i}\), i.e. \(P(s_{i}|\mathbf{x}_{i},y_{i})\neq P(s_{i}|\mathbf{x}_{i})\)._
MNAR means that both the probability of data missingness and the value of missing observations are affected by unobserved variables. For the Heckman model, the selection mechanism models the missingness of \(y_{i}\) in terms of a set of selection features. Thus the following assumptions are additionally made in this work:
1. For all \(\mathbf{t}_{i}\), \(\mathbf{x}_{i}^{(s)}\) consists of every prediction feature and additional features that do not affect the outcome. The selection features can be specified by domain users or simply learned via goodness-of-fit, as all feature information is available in the training data.
2. Given selection features \(\mathbf{x}_{i}^{(s)}\subseteq\mathbf{x}_{i}\) of the \(i\)th training sample, \(P(s_{i}|\mathbf{x}_{i},y_{i})\) is approximated by computing \(P(s_{i}|\mathbf{x}_{i}^{(s)})\).
**Problem Statement.** To perform regression, a linear model \(h(\mathbf{x}_{i}^{(p)};\mathbf{\beta})\) with prediction features \(\mathbf{x}_{i}^{(p)}\) and parameters \(\mathbf{\beta}\) is fitted by learning to minimize a loss function over \(\mathcal{D}_{tr}\). Given that \(\mathcal{D}_{tr}\) is biased due to MNAR sample selection, we seek to choose prediction features from a set of selection features using an assignment function \(\psi\). Based on the extracted assignment of prediction features, a model \(h(\underline{\mathbf{x}}_{i}^{(p)};\underline{\mathbf{\beta}})\) is fitted after running the Heckman model, where \(\underline{\mathbf{x}}_{i}^{(p)}\) denotes the extracted prediction features augmented with the inverse Mills ratio (IMR) and \(\underline{\mathbf{\beta}}\) denotes the Heckman coefficients.
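For context, the following is a minimal sketch of the classical two-step Heckman estimator that the framework builds on (not of Heckman-FA's training loop itself); the variable names are ours.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def heckman_two_step(X_sel, s, X_pred_sel, y_sel):
    """Classical two-step Heckman correction.
    X_sel: (n, q) selection features of all n samples; s: (n,) selection indicator;
    X_pred_sel: (m, p) chosen prediction features of the m selected samples;
    y_sel: (m,) observed outcomes."""
    probit = sm.Probit(s, sm.add_constant(X_sel)).fit(disp=0)   # selection equation
    idx = sm.add_constant(X_sel[s == 1]) @ probit.params        # linear index of selected
    imr = norm.pdf(idx) / norm.cdf(idx)                          # inverse Mills ratio (IMR)
    X_aug = np.column_stack([sm.add_constant(X_pred_sel), imr])  # IMR-augmented features
    return sm.OLS(y_sel, X_aug).fit().params                     # Heckman coefficients
```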
### _Contributions_
In this work, we present the Heckman selection model with Feature Assignment (Heckman-FA) as a framework that finds a suitable assignment of prediction features for the Heckman model to robustly handle MNAR sample selection bias on the outcome. The core contributions of our work are summarized as follows. First, Heckman-FA trains an assignment function that determines whether a selection feature is assigned as a prediction feature or not. The assignment function is defined in terms of samples from the Gumbel-Softmax distribution [9]. Second, using the parameters of the trained assignment function, Heckman-FA extracts the set of prediction features for the Heckman model based on goodness-of-fit and the correlation between the noise terms of the prediction and selection equations. Third, we apply our method to real-world datasets and compare the performance of Heckman-FA against other regression baselines. We empirically show that Heckman-FA produces robust prediction models against MNAR sample selection bias and outperforms these regression baselines.
## II Related Work
### _Incorporating the Heckman Selection Model_
The Heckman selection model has been widely utilized to handle MNAR sample selection bias in different applications. For example, [2] evaluated the utility of the Heckman selection model in criminology by testing the method on sentencing data. In epidemiology, [1] proposed a Heckman-type selection model to account for HIV survey nonparticipation and HIV status. Variants of the Heckman selection model have also been proposed (see a comprehensive survey [21]). In the area of fair machine learning, [4] applied the Heckman model to correct MNAR sample selection bias while achieving fairness for linear regression models. Very recently, [10] extended the Heckman model to model multiple domain-specific sample selection mechanisms and proposed to jointly learn the prediction model and the domain selection model to achieve generalization on the true population. For approaches that incorporate the Heckman selection model and its variants, the prediction features are manually chosen beforehand. Our work makes a data-driven choice for the prediction features based on a trained feature assignment function.
Empirical analysis has often been used to examine the effect of exclusion restrictions on the performance of the Heckman selection model. [14] conducted Monte Carlo experiments with and without exclusion restrictions and found that the Heckman model is susceptible to collinearity issues in the absence of exclusion restrictions. Recently, [23] conducted a simulation showing that naive ordinary least squares (OLS) on the biased training set outperforms the Heckman model based approaches that do not have a valid exclusion restriction.
For the Heckman selection model, the correlation between the two noise terms carries information about the sample selection process. Because the noise terms are unobserved, the true value of the correlation is unknown. [6] used a range of values for the correlation to derive uncertainty intervals for the parameters of the prediction model. In our work, we also consider a range of values for the correlation. However, rather than defining the range of correlation values for a fixed set of prediction features, we specify the range of correlation values as we dynamically assign prediction features for the Heckman selection model.
The idea of variable assignment based on reparametrized random samples has also been explored in previous work. For instance, [15] proposed to train an assignment function that maps each latent variable to some causal factor as part of learning a causal representation. Similar to our work, the assignment function is trained by sampling an assignment from the Gumbel-Softmax distribution. In our work, however, we do not map variables to causal factors. Instead, we map each variable to a value that indicates whether or not the variable is assigned as a feature for the predictive model.
### _Learning under Biased Training Data_
Machine learning on missing training data has been well-studied. There are a number of techniques that handle MAR sample selection bias in the training set. Importance weighting [20] is commonly used to handle the MAR setting to reweigh the training samples. However, it can result in inaccurate estimates due to the influence of data instances with large importance weights on the reweighted loss. To address this drawback, recent MAR approaches have been constructed based on distributionally robust optimization of the reweighted loss. [3] proposed a robust bias-aware regression approach, which considers a worst-case shift between the training and testing sets to adversarially minimize the reweighted loss. [19] introduced Rockafellar-Uryasev (RU) regression to produce a model that is robust against bounded MAR sample selection bias, where the level of distribution shift between the training and testing sets is restricted. Based on the assumption for MNAR sample selection bias, methods that account for MAR bias would not properly model the MNAR data mechanism. Thus we expect methods that handle MAR bias to not be robust against MNAR sample selection bias. On the other hand, our approach uses the Heckman selection model to model the MNAR data mechanism, where the selection of samples is expressed as a linear equation. Moreover, unlike the above MAR approaches, we consider observed features of training samples with missing outcomes when modeling the missing data mechanism.
In particular, there are recent approaches that address the problem of MNAR labels in training data. In recommender learning, [22] proposed the joint learning of an imputation model and a prediction model to estimate the performance of rating prediction given MNAR ratings. [12] adopted two propensity scoring methods into its loss function to handle the bias of MNAR feedback, where user feedback for unclicked data can be negative or positive-yet-unobserved. While the approaches in [12] and [22] also use separate propensity estimation models to predict the observation of a label, they consider matrix factorization as the prediction model, which is not designed for linear regression on tabular data.
In semi-supervised learning, [8] employed class-aware propensity score and imputation strategies on the biased training set toward developing a semi-supervised learning model that is doubly robust against MNAR data. We emphasize that our problem setting is different than semi-supervised learning. In semi-supervised learning, unlabeled samples are separated into clusters based on similarities. However, in our problem setting, we do not perform clustering on the samples with missing labels.
## III Heckman Selection Model Revisited
Formally, the Heckman selection model [7] models the selection and prediction equations as follows. For any \((\mathbf{x}_{i},y_{i})\in\mathcal{X}\times\mathcal{Y}\), the selection equation of the \(i\)th sample is
\[d_{i}=\mathbf{x}_{i}^{(s)}\mathbf{\gamma}+u_{i}^{(s)} \tag{2}\]
where \(\mathbf{\gamma}\) is the set of regression coefficients for selection, \(\mathbf{x}_{i}^{(s)}\) denotes the features for sample selection, and \(u_{i}^{(s)}\sim\mathcal{N}(0,1)\)
is the noise term for the selection equation. The selection value of the \(i\)th sample \(s_{i}\) is defined as:
\[s_{i}=\begin{cases}1&d_{i}>0\\ 0&d_{i}\leq 0\end{cases} \tag{3}\]
The model learns the selection based on Eq. (3) and the prediction of the \(i\)th sample based on linear regression, with
\[y_{i}=\mathbf{x}_{i}^{(p)}\mathbf{\beta}+u_{i}^{(p)} \tag{4}\]
Assuming \(u_{i}^{(p)}\sim\mathcal{N}(0,\sigma^{2})\), we define \(u_{i}^{(p)}=\sigma\epsilon_{i}\) where \(\epsilon_{i}\sim\mathcal{N}(0,1)\). Moreover, the noise terms \(u_{i}^{(p)}\) and \(u_{i}^{(s)}\) are correlated with a correlation coefficient of \(\rho\). If \(\rho\) differs from zero, there is an indication that the missing observations are MNAR.
To correct the bias in \(\mathcal{D}_{tr}\), the conditional expectation of the predicted outcome
\[\begin{split}\mathbb{E}[y_{i}|s_{i}=1]&=\mathbb{E}[y_{i}|d_{i}>0]\\ &=\mathbb{E}[\mathbf{x}_{i}^{(p)}\mathbf{\beta}+u_{i}^{(p)}|\mathbf{x}_{i}^{(s)}\mathbf{\gamma}+u_{i}^{(s)}>0]\\ &=\mathbf{x}_{i}^{(p)}\mathbf{\beta}+\mathbb{E}[u_{i}^{(p)}|\mathbf{x}_{i}^{(s)}\mathbf{\gamma}+u_{i}^{(s)}>0]\\ &=\mathbf{x}_{i}^{(p)}\mathbf{\beta}+\mathbb{E}[u_{i}^{(p)}|u_{i}^{(s)}>-\mathbf{x}_{i}^{(s)}\mathbf{\gamma}]\end{split} \tag{5}\]
is computed over all samples in \(\mathcal{D}_{s}\). Because \(u_{i}^{(p)}\sim\mathcal{N}(0,\sigma^{2})\) and \(u_{i}^{(s)}\sim\mathcal{N}(0,1)\) are correlated,
\[\mathbb{E}[u_{i}^{(p)}|u_{i}^{(s)}>-\mathbf{x}_{i}^{(s)}\mathbf{\gamma}]=\lambda_{i}\rho\sigma \tag{6}\]
where \(\lambda_{i}=\dfrac{\phi(-\mathbf{x}_{i}^{(s)}\mathbf{\gamma})}{1-\Phi(-\mathbf{x}_{i}^{(s )}\mathbf{\gamma})}=\dfrac{\phi(-\mathbf{x}_{i}^{(s)}\mathbf{\gamma})}{\Phi(\mathbf{x}_{i}^{( s)}\mathbf{\gamma})}\) is the inverse Mills ratio (IMR). We denote \(\phi(\cdot)\) as the standard normal density function and \(\Phi(\cdot)\) as the standard normal cumulative distribution function. Thus Eq. (5) is rewritten as
\[\mathbb{E}[y_{i}|s_{i}=1]=\mathbf{x}_{i}^{(p)}\mathbf{\beta}+\lambda_{i}\rho\sigma= \mathbf{\underline{x}}_{i}^{(p)}\mathbf{\underline{\beta}} \tag{7}\]
where \(\mathbf{\underline{x}}_{i}^{(p)}=[\mathbf{x}_{i}^{(p)},\lambda_{i}]\), \(\mathbf{\underline{\beta}}=[\mathbf{\beta},\beta_{H}]\), and \(\beta_{H}=\rho\sigma\).
We provide pseudocode of executing the Heckman model in Algorithm 1. The Heckman model follows two steps:
**Step 1.** Since there is no prior knowledge of the true value of \(\lambda_{i}\) for each sample in \(\mathcal{D}_{s}\), \(\lambda_{i}\) is estimated as \(\hat{\lambda}_{i}\) by first computing \(\hat{\mathbf{\gamma}}\) using probit regression over \(\mathcal{D}_{tr}\). As indicated in line 2 of Algorithm 1, we estimate \(\hat{\mathbf{\gamma}}\) after maximizing
\[\begin{split}\mathcal{L}(\mathbf{\gamma})&=\prod_{i=1}^ {n}P(s_{i}=1)^{s_{i}}\cdot P(s_{i}=0)^{1-s_{i}}\\ &=\prod_{i=1}^{n}\Phi(\mathbf{x}_{i}^{(s)}\mathbf{\gamma})^{s_{i}}(1- \Phi(\mathbf{x}_{i}^{(s)}\mathbf{\gamma}))^{1-s_{i}}\end{split} \tag{8}\]
over \(\mathcal{D}_{tr}\), where \(\mathcal{L}(\cdot)\) is the likelihood. As shown in line 4, \(\hat{\mathbf{\gamma}}\) is then used to compute \(\hat{\lambda}_{i}\) for each \(\mathbf{t}_{i}\) in \(\mathcal{D}_{s}\).
**Step 2.** Using \(\hat{\lambda}_{i}\), the prediction model is
\[\hat{y}_{i}=\mathbf{x}_{i}^{(p)}\mathbf{\beta}+\hat{\lambda}_{i}\beta_{H} \tag{9}\]
The estimated set of coefficients \(\hat{\underline{\beta}}\) is computed by minimizing \(\sum_{i=1}^{m}(y_{i}-\hat{y}_{i})^{2}\) over \(\mathcal{D}_{s}\). As a result, \(\hat{\underline{\beta}}\) is computed using the closed-form solution
\[\hat{\underline{\beta}}=(\underline{\mathbf{X}}^{(p)T}\underline{\mathbf{X}}^{(p)})^{ -1}\underline{\mathbf{X}}^{(p)T}\mathbf{y} \tag{10}\]
as indicated in line 7.
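The two-step procedure lends itself to a compact implementation. Below is a minimal NumPy/SciPy sketch of it; the function and variable names (`heckman_two_step`, `X_s`, `X_p`, `s`, `y_obs`) are illustrative assumptions rather than the authors' code.

```python
# A minimal sketch of the two-step Heckman estimator (Algorithm 1), assuming
# dense NumPy inputs. Step 1 fits the probit selection model of Eq. (8) by
# MLE; Step 2 runs OLS on the IMR-augmented prediction features, Eq. (10).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def heckman_two_step(X_s, X_p, s, y_obs):
    """X_s: (n, K) selection features; X_p: (m, J) prediction features of the
    m observed samples; s: (n,) selection indicators; y_obs: (m,) outcomes."""
    n, K = X_s.shape

    def neg_log_lik(gamma):                       # negative of Eq. (8)
        p = norm.cdf(X_s @ gamma).clip(1e-10, 1 - 1e-10)
        return -(s * np.log(p) + (1 - s) * np.log(1 - p)).sum()

    gamma_hat = minimize(neg_log_lik, np.zeros(K)).x

    xb = X_s[s == 1] @ gamma_hat                  # probit index over D_s
    lam = norm.pdf(-xb) / norm.cdf(xb)            # inverse Mills ratio

    X_aug = np.column_stack([X_p, lam])           # [x^(p), lambda]
    beta_aug, *_ = np.linalg.lstsq(X_aug, y_obs, rcond=None)
    return gamma_hat, beta_aug, lam               # beta_aug[-1] estimates rho * sigma
```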
**Exclusion Restriction.** The Heckman selection model assumes that the selection features consist of every prediction feature and additional features that do not affect the outcome. The selection and prediction features are generally not identical as it can introduce multicollinearity to the prediction model. Specifically, if the selection and prediction features are the same, the IMR \(\lambda_{i}\) would be expressed as
\[\lambda_{i}=\dfrac{\phi(-\mathbf{x}_{i}^{(s)}\mathbf{\gamma})}{\Phi(\mathbf{x}_{i}^{(s)} \mathbf{\gamma})}=\dfrac{\phi(-\mathbf{x}_{i}^{(p)}\mathbf{\gamma})}{\Phi(\mathbf{x}_{i}^{(p)} \mathbf{\gamma})} \tag{11}\]
Eq. (7) would then be identified only through the nonlinearity of \(\lambda_{i}\). However, \(\lambda_{i}\) is roughly linear over a wide range of values for \(\mathbf{x}_{i}^{(s)}\mathbf{\gamma}\)[18]. Hence, Step 2 of the Heckman model would yield non-robust estimates for \(\hat{\underline{\beta}}\) due to the multicollinearity between \(\mathbf{x}_{i}^{(p)}\) and \(\lambda_{i}\).
## IV Methodology
When choosing prediction features from a set of \(K\) selection features, we encounter the following challenges. First, there are \(2^{K}-1\) possible choices to make for the set of prediction features. For any dataset that has a large number of selection features, searching for a suitable set of prediction features becomes computationally expensive. Second, the Heckman selection model does not perform well for exclusion restrictions that are not valid. In other words, some choices for the set of prediction features are not helpful when using the Heckman model to handle MNAR sample selection bias on the outcome.
We introduce Heckman-FA, a framework for using the Heckman selection model via a learned feature assignment. Heckman-FA first learns the weights of an assignment function \(\psi\) that draws samples from the Gumbel-Softmax distribution
[9]. This function outputs an assignment of prediction features given a matrix \(\mathbf{\pi}\) of probabilities of including a selection feature in the prediction model. The framework then uses this assignment of prediction features to run the Heckman model and compute the mean absolute error (MAE) of predictions on \(\mathcal{D}_{s}\) when using the Heckman selection model. To optimize \(\psi\), we minimize the MAE with respect to \(\mathbf{\pi}\). This results in an estimated probability matrix \(\hat{\mathbf{\pi}}\). To extract the prediction features, Heckman-FA looks through different prediction feature matrices by sampling from the Gumbel-Softmax distribution using \(\hat{\mathbf{\pi}}\). When determining the extracted prediction feature matrix, we first consider whether or not the estimated correlation between the noise terms is within a range that is user-defined based on prior domain knowledge. We further consider goodness-of-fit to ensure that the prediction model is of quality. Using the selection and extracted prediction features, we run the Heckman model to fit a robust prediction model under MNAR sample selection bias on the outcome.
### _Assignment Function_
We formally introduce an assignment function \(\psi(k)\), which is defined as
\[\psi(k)=\begin{cases}1&k\text{th selection feature is assigned}\\ 0&k\text{th selection feature is not assigned}\end{cases} \tag{12}\]
In general, an assignment function determines which of the \(K\) selection features are also features in the prediction equation.
**Assignment Computation.** In Algorithm 2, we provide the pseudocode for computing a matrix \(\mathbf{X}^{(p)}\) of assigned prediction features from the selection feature matrix \(\mathbf{X}^{(s)}\). We assume that both \(\mathbf{X}^{(p)}\) and \(\mathbf{X}^{(s)}\) have \(K\) columns. However, we define the \(J\) features assigned for prediction as the columns in \(\mathbf{X}^{(p)}\) that are not equal to the zero vector \(\mathbf{0}\).
To obtain \(\mathbf{X}^{(p)}\) based on selection features, we compute the assignment \(\psi(k)\) for the \(k\)th selection feature by drawing samples from a categorical distribution. Let \(\mathbf{Z}\in\mathbb{R}^{2\times K}\) be a categorical sample matrix such that each element is either 0 or 1. The Gumbel-Max trick is used to efficiently draw categorical samples. Following the steps of [9], a sample \(\mathbf{Z}_{\cdot k}\), the \(k\)th column of \(\mathbf{Z}\), is drawn from a categorical distribution with class probabilities \(\pi_{1k}\) and \(\pi_{2k}\), where \(\pi_{2k}\) (\(\pi_{1k}\)) is the probability that the \(k\)th selection feature is (not) assigned to the prediction model. Formally, \(\mathbf{Z}_{\cdot k}\) is expressed as
\[\mathbf{Z}_{\cdot k}=\text{one\_hot}(\underset{q}{\text{argmax}}[g_{ qk}+\log\pi_{qk}]) \tag{13}\]
where \(g_{qk}\sim\text{ Gumbel}(0,1)\) and \(q\in\{1,2\}\). Thus, in terms of \(\mathbf{Z}_{\cdot k}\), we express the assignment \(\psi\) for the \(k\)th selection feature as
\[\psi(k;\pi)=\begin{bmatrix}0\\ 1\end{bmatrix}^{T}\mathbf{Z}_{\cdot k} \tag{14}\]
and compute
\[x_{ik}^{(p)}=x_{ik}^{(s)}\odot\psi(k;\mathbf{\pi}) \tag{15}\]
for each selection feature as indicated in lines 5 and 6 of Algorithm 2, using \(\odot\) to denote elementwise multiplication.
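For illustration, a short PyTorch sketch of this assignment computation follows; the shapes and names are assumptions for the sake of the example.

```python
# A sketch of Algorithm 2: draw one Gumbel-Max categorical sample per
# selection feature (Eq. (13)) and zero out the unassigned columns (Eq. (15)).
import torch

def feature_assign(X_s: torch.Tensor, pi: torch.Tensor) -> torch.Tensor:
    """X_s: (n, K) selection feature matrix; pi: (2, K) class probabilities,
    row 0 = 'not assigned', row 1 = 'assigned'."""
    g = -torch.log(-torch.log(torch.rand_like(pi)))        # Gumbel(0, 1) noise
    Z = torch.zeros_like(pi)                               # one-hot columns
    Z[(g + pi.log()).argmax(dim=0), torch.arange(pi.shape[1])] = 1.0
    psi = Z[1]                                             # Eq. (14)
    return X_s * psi                                       # Eq. (15)
```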
**Backpropagation.** To train the assignment function \(\psi\), we learn \(\hat{\pi}_{1k}\) and \(\hat{\pi}_{2k}\) for each selection feature. However, \(\mathbf{Z}_{\cdot k}\) is expressed in terms of argmax, which is non-differentiable. Thus we cannot derive \(\nabla_{\mathbf{\pi}}\mathbf{Z}\) in order to learn \(\hat{\mathbf{\pi}}\). On the other hand, we can use the Gumbel-Softmax distribution [9] to approximate the categorical sample \(\mathbf{Z}_{\cdot k}\). By the Straight-Through Gumbel Estimator, we compute \(\tilde{\mathbf{Z}}\in\mathbb{R}^{2\times K}\) as a continuous, differentiable approximation of argmax, where
\[\tilde{z}_{qk}=\frac{\text{exp}((\log\pi_{qk}+g_{qk})/\tau)}{ \sum_{r=1}^{2}\text{exp}((\log\pi_{rk}+g_{rk})/\tau)} \tag{16}\]
with \(\tau\) as the softmax temperature. Notice that as \(\tau\to 0\), Eq. (16) will approximate the argmax function in Eq. (13), and the Gumbel-Softmax sample vector will approach a one-hot vector. Hence \(\nabla_{\mathbf{\pi}}\mathbf{Z}\approx\nabla_{\mathbf{\pi}}\tilde{\mathbf{Z}}\), where
\[\frac{\partial\tilde{z}_{qk}}{\partial\pi_{qk}}=\frac{\prod_{r= 1}^{2}\text{exp}((\log\pi_{rk}+g_{rk})/\tau)}{(\sum_{r=1}^{2}\text{exp}((\log \pi_{rk}+g_{rk})/\tau))^{2}}\cdot\frac{1}{\tau\pi_{qk}} \tag{17}\]
Therefore, although we use Eq. (14) to obtain assignments, we use \(\tilde{\mathbf{Z}}\) instead of \(\mathbf{Z}\) when performing backpropagation to update \(\mathbf{\pi}\).
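A minimal PyTorch sketch of this Straight-Through estimator is shown below; PyTorch also ships the same behavior as `torch.nn.functional.gumbel_softmax(..., hard=True)`.

```python
# Straight-Through Gumbel-Softmax: the forward pass emits the hard one-hot
# sample Z, while the backward pass differentiates through the soft sample
# Z_tilde of Eq. (16).
import torch

def st_gumbel_softmax(log_pi: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    g = -torch.log(-torch.log(torch.rand_like(log_pi)))
    z_soft = torch.softmax((log_pi + g) / tau, dim=0)          # Eq. (16)
    idx = z_soft.argmax(dim=0, keepdim=True)
    z_hard = torch.zeros_like(z_soft).scatter_(0, idx, 1.0)    # Eq. (13)
    return z_hard + z_soft - z_soft.detach()   # hard forward, soft backward
```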
Based on Eq. (17), \(\nabla_{\mathbf{\pi}}\tilde{\mathbf{Z}}\) is well-defined, so we are able to learn the assignment function and estimate parameters \(\hat{\mathbf{\pi}}\). Formally, we compute a probability matrix \(\hat{\mathbf{\pi}}\) such that
\[\hat{\mathbf{\pi}} =\underset{\mathbf{\pi}}{\text{argmin}}\,\mathcal{L}_{MAE} \tag{18}\] \[=\underset{\mathbf{\pi}}{\text{argmin}}\left(\frac{1}{m}\sum_{i=1}^{m} \left|y_{i}-(\mathbf{x}_{i}^{(p)}\mathbf{\beta}+\hat{\lambda}_{i}\beta_{H})\right|\right)\]
where \(x_{ik}^{(p)}=x_{ik}^{(s)}\odot\psi(k;\mathbf{\pi})\) for the \(k\)th selection feature. To ensure that the Heckman model is a quality prediction model, we consider the predictive performance of the Heckman model on \(\mathcal{D}_{s}\) to learn the assignment function. In this work, we
choose to minimize the MAE over \(\mathcal{D}_{s}\) in order to obtain \(\hat{\mathbf{\pi}}\). For \(\nabla_{\mathbf{\pi}}\mathcal{L}_{MAE}\), using Eq. (17), we see that
\[\begin{split}\frac{\partial\mathcal{L}_{MAE}}{\partial\pi_{qk}}&=\frac{\partial\mathcal{L}_{MAE}}{\partial\hat{y}_{i}}\cdot\frac{\partial\hat{y}_{i}}{\partial x_{ik}^{(p)}}\cdot\frac{\partial x_{ik}^{(p)}}{\partial\psi(k)}\cdot\frac{\partial\psi(k)}{\partial\tilde{z}_{qk}}\cdot\frac{\partial\tilde{z}_{qk}}{\partial\pi_{qk}}\\ &=-\frac{1}{m}\sum_{i=1}^{m}\frac{y_{i}-\hat{y}_{i}}{|y_{i}-\hat{y}_{i}|}\cdot\beta_{k}\cdot x_{ik}^{(s)}\cdot\frac{\partial\psi(k)}{\partial\tilde{z}_{qk}}\\ &\quad\cdot\frac{\prod_{r=1}^{2}\text{exp}((\log\pi_{rk}+g_{rk})/\tau)}{(\sum_{r=1}^{2}\text{exp}((\log\pi_{rk}+g_{rk})/\tau))^{2}}\cdot\frac{1}{\tau\pi_{qk}}\end{split} \tag{19}\]

where \(\frac{\partial\psi(k)}{\partial\tilde{z}_{2k}}=1\). Thus \(\nabla_{\mathbf{\pi}}\mathcal{L}_{MAE}\) is well-defined. Moreover, \(\nabla_{\mathbf{\pi}}\mathcal{L}_{MAE}\) is not a zero matrix, meaning that \(\mathbf{\pi}\) is updated during backpropagation.
**Discussion.** Although other metrics such as mean squared error (MSE) and root mean squared error (RMSE) are typically used to evaluate the performance of linear regression, we cannot minimize these metrics in order to obtain \(\hat{\mathbf{\pi}}\). We show that if our objective is to minimize the MSE or RMSE, then \(\mathbf{\pi}\) does not change at all during backpropagation. As an example, consider \(\mathcal{L}_{MSE}\), which denotes the MSE loss function. For \(\nabla_{\mathbf{\pi}}\mathcal{L}_{MSE}\), we see that
\[\begin{split}\frac{\partial\mathcal{L}_{MSE}}{\partial\pi_{qk}}&=\frac{\partial\mathcal{L}_{MSE}}{\partial\hat{y}_{i}}\cdot\frac{\partial\hat{y}_{i}}{\partial x_{ik}^{(p)}}\cdot\frac{\partial x_{ik}^{(p)}}{\partial\psi(k)}\cdot\frac{\partial\psi(k)}{\partial\tilde{z}_{qk}}\cdot\frac{\partial\tilde{z}_{qk}}{\partial\pi_{qk}}\\ &=\frac{\partial\mathcal{L}_{MSE}}{\partial\psi(k)}\cdot\frac{\partial\psi(k)}{\partial\tilde{z}_{qk}}\cdot\frac{\partial\tilde{z}_{qk}}{\partial\pi_{qk}}\end{split} \tag{20}\]
where
\[\begin{split}\frac{\partial\mathcal{L}_{MSE}}{\partial\psi(k)}& =-\frac{2}{m}\sum_{i=1}^{m}(y_{i}-\hat{y}_{i})\cdot\beta_{k}\cdot x _{ik}^{(s)}\\ &=-\frac{2}{m}\beta_{k}\sum_{i=1}^{m}(y_{i}-\hat{y}_{i})\cdot x_{ ik}^{(s)}\\ &=-\frac{2}{m}\beta_{k}\sum_{i=1}^{m}v_{i}\cdot x_{ik}^{(s)}\end{split} \tag{21}\]
where \(v_{i}\) is the error of predicting the outcome of \(\mathbf{t}_{i}\in\mathcal{D}_{s}\) using the Heckman selection model. Given that \(\mathbf{\beta}\) has length \(K\), consider two cases on the \(k\)th selection feature. First, for any \(k\)th selection feature not assigned for prediction, \(\mathbf{X}_{\cdot k}^{(p)}=\mathbf{0}\), where \(\mathbf{X}_{\cdot k}^{(p)}\) is the \(k\)th column of \(\mathbf{X}^{(p)}\). Thus \(\beta_{k}=0\) after running the Heckman selection model and hence \(\frac{\partial\mathcal{L}_{MSE}}{\partial\psi(k)}=0\). Second, for any \(k\)th selection feature assigned for prediction, \(x_{ik}^{(s)}=x_{ik}^{(p)}\) for all \(\mathbf{t}_{i}\in\mathcal{D}_{s}\). Thus
\[\begin{split}\frac{\partial\mathcal{L}_{MSE}}{\partial\psi(k)}=- \frac{2}{m}\beta_{k}\sum_{i=1}^{m}v_{i}\cdot x_{ik}^{(s)}&=- \frac{2}{m}\beta_{k}\sum_{i=1}^{m}v_{i}\cdot x_{ik}^{(p)}\\ &=-\frac{2}{m}\beta_{k}\left(\mathbf{v}^{T}\mathbf{X}_{\cdot k}^{(p)} \right)\end{split} \tag{22}\]
where \(\mathbf{v}\) is the error vector. Now \(\mathbf{X}_{\cdot k}^{(p)}\) is considered part of the input space that is used to compute \(\hat{\mathbf{y}}\) and in turn \(\mathbf{v}\). Because the values in \(\mathbf{\beta}\) are estimated by minimizing \(\mathbf{v}^{T}\mathbf{v}\), then \(\mathbf{v}\) is orthogonal to the columns of the input space used to compute \(\hat{\mathbf{y}}\), which includes \(\mathbf{X}_{\cdot k}^{(p)}\). Thus \(\mathbf{v}^{T}\mathbf{X}_{\cdot k}^{(p)}=0\), and \(\frac{\partial\mathcal{L}_{MSE}}{\partial\psi(k)}=0\) for all selection features assigned for prediction. Based on these two cases, \(\frac{\partial\mathcal{L}_{MSE}}{\partial\pi_{qk}}=0\) for all \(K\) selection features. Hence \(\nabla_{\mathbf{\pi}}\mathcal{L}_{MSE}\) is a zero matrix.
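This orthogonality argument is easy to verify numerically. The snippet below uses purely illustrative random data and is not tied to any dataset in the paper.

```python
# OLS residuals are orthogonal to every included column, so the inner product
# v^T X_k in Eq. (22) vanishes up to floating-point error.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # stand-in for the assigned columns
y = rng.normal(size=200)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
v = y - X @ beta                        # error vector
print(np.abs(v @ X).max())              # ~1e-13, i.e., numerically zero
```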
### _Extraction of Suitable Assignment_
After we train the assignment function \(\psi\), we utilize the estimated parameters \(\hat{\mathbf{\pi}}\) to extract a suitable set of features for the prediction model. We propose a sampling-based strategy to extract \(\mathbf{X}^{(p)}\) using \(\hat{\mathbf{\pi}}\). We base this strategy on the estimated correlation \(\hat{\rho}\) between the noise terms \(u_{i}^{(s)}\) and \(u_{i}^{(p)}\) and the adjusted \(R^{2}\) value, denoted as \(R_{a}^{2}\). Specifically, a suitable assignment of prediction features is extracted based on a user-defined range of values for the correlation. The range of values is provided based on prior domain knowledge on a given dataset. Moreover, we observe that multiple prediction feature assignments can correspond to a correlation that is within the user-defined range. To further decide on which prediction feature matrix to extract, we also consider goodness-of-fit to ensure that the prediction model is of quality.
In general, \(\rho\) carries information about the nature of the sample selection process. However, because \(u_{i}^{(s)}\) and \(u_{i}^{(p)}\) are unobserved for all \(\mathbf{t}_{i}\), the true value of \(\rho\) is unknown for a dataset. We must instead consider the estimated correlation \(\hat{\rho}\). To compute \(\hat{\rho}\), we note that a consistent estimator of \(\sigma^{2}\) can be derived based on the Heckman selection model. First, define \(v_{i}=y_{i}-\hat{y}_{i}\) as the error of predicting the outcome of the \(i\)th sample using the Heckman selection model. The true conditional variance of \(v_{i}\) is
\[\begin{split}\mathbb{E}[v_{i}^{2}|s_{i}=1]&=\sigma^{2}(1+\rho^{2}(\lambda_{i}(-\mathbf{x}_{i}^{(s)}\mathbf{\gamma})-\lambda_{i}^{2}))\\ &=\sigma^{2}+(\rho\sigma)^{2}(\lambda_{i}(-\mathbf{x}_{i}^{(s)}\mathbf{\gamma})-\lambda_{i}^{2})\end{split} \tag{23}\]

Consider the average conditional variance over all samples in \(\mathcal{D}_{s}\), where

\[\begin{split}\text{plim }\frac{1}{m}\sum_{i=1}^{m}\mathbb{E}[v_{i}^{2}|s_{i}=1]&=\sigma^{2}\left(1+\frac{\rho^{2}}{m}\sum_{i=1}^{m}\left(\lambda_{i}(-\mathbf{x}_{i}^{(s)}\mathbf{\gamma})-\lambda_{i}^{2}\right)\right)\\ &=\sigma^{2}+\frac{(\rho\sigma)^{2}}{m}\sum_{i=1}^{m}\left(\lambda_{i}(-\mathbf{x}_{i}^{(s)}\mathbf{\gamma})-\lambda_{i}^{2}\right)\end{split} \tag{24}\]
with plim denoting convergence in probability. Now the average conditional variance over all samples in \(\mathcal{D}_{s}\) is estimated using the mean squared error
\[\frac{1}{m}\sum_{i=1}^{m}v_{i}^{2}=\frac{1}{m}\sum_{i=1}^{m}(y_{i}-\hat{y}_{i})^ {2} \tag{25}\]
of predicting the outcome using the Heckman model. Thus, using Eq. (24) and (25), \(\sigma^{2}\) can be estimated by computing
\[\hat{\sigma}^{2}=\frac{1}{m}\sum_{i=1}^{m}(y_{i}-\hat{y}_{i})^{2}-\frac{\hat{\beta}_{H}^{2}}{m}\sum_{i=1}^{m}\left(\hat{\lambda}_{i}(-\mathbf{x}_{i}^{(s)}\hat{\mathbf{\gamma}})-\hat{\lambda}_{i}^{2}\right) \tag{26}\]
where \(\hat{\beta}_{H}\) is an estimate of \(\rho\sigma\). Therefore, using Eq. (26), we obtain \(\hat{\rho}=\hat{\beta}_{H}/\hat{\sigma}\).
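A direct transcription of this estimator into code might look as follows; the argument names are illustrative assumptions.

```python
# Estimate sigma^2 via Eq. (26) and rho_hat = beta_H / sigma_hat, given the
# Heckman residuals, the IMR values, and the probit index over D_s.
import numpy as np

def estimate_rho(y, y_hat, lam, xb_gamma, beta_H):
    """y, y_hat, lam, xb_gamma: (m,) arrays over the observed samples;
    beta_H: estimated IMR coefficient (rho * sigma)."""
    mse = np.mean((y - y_hat) ** 2)                  # Eq. (25)
    delta = np.mean(lam * (-xb_gamma) - lam ** 2)
    sigma2_hat = mse - beta_H ** 2 * delta           # Eq. (26)
    return beta_H / np.sqrt(sigma2_hat)
```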
We consider a set of prediction features to be suitable for the Heckman selection model if \(\hat{\rho}\) is within the range \([\rho_{min},\rho_{max}]\), where the values of \(\rho_{min}\) and \(\rho_{max}\) are user-specified. We work with this user-specified range because the
true value of \(\rho\) is unknown. We expect that the values of \(\rho_{min}\) and \(\rho_{max}\) are appropriately chosen to indicate that the Heckman model properly handles MNAR sample selection bias. On one hand, the range \([\rho_{min},\rho_{max}]\) should not contain 0 since \(\rho=0\) indicates MAR sample selection bias. On the other hand, \(\rho_{min}\) and \(\rho_{max}\) should not be too negative or too positive since strong correlation renders the Heckman model unstable [17].
Because there may be multiple values of \(\hat{\rho}\) that are in \([\rho_{min},\rho_{max}]\), we also obtain \(R_{a}^{2}\) by computing
\[R_{a}^{2}=1-\frac{(1-R^{2})(m-1)}{m-J-1} \tag{27}\]
where \(R^{2}\) is the coefficient of determination and \(J\) is the number of prediction features. Having the largest number of prediction features such that \(\hat{\rho}\) is in \([\rho_{min},\rho_{max}]\) does not imply that the linear model is the best fitted. Thus \(R_{a}^{2}\) is helpful since it also factors in the number of prediction features when measuring the goodness-of-fit for a linear model.
As our strategy to extract \(\mathbf{X}^{(p)}\) for the Heckman selection model, we propose to collect \(B\) Gumbel-Softmax samples based on \(\hat{\mathbf{\pi}}\). Out of the \(B\) samples, we choose the prediction feature matrix such that the prediction model has the highest value of \(R_{a}^{2}\) given that \(\hat{\rho}\) is in \([\rho_{min},\rho_{max}]\). The pseudocode for this process is listed in Algorithm 3. In lines 2-12, we iteratively look for \(\mathbf{X}^{(p)}\). In line 3, we sample \(\mathbf{X}^{(p)}_{temp}\) by executing Algorithm 2 based on \(\hat{\mathbf{\pi}}\). In line 4, we execute the Heckman model based on \(\underline{\mathbf{X}}^{(p)}_{temp}\) and get \(\underline{\mathbf{\beta}}\) and \(\{\hat{\lambda}_{i}\}_{i=1}^{m}\). In line 5, we compute \(\hat{\sigma}^{2}\) using Eq. (26). In line 6, we calculate \(\hat{\rho}\) by dividing \(\hat{\beta}_{H}\) by \(\hat{\sigma}\). In line 7, we compute \(R_{a}^{2}\). Throughout each iteration, we check whether or not \(\hat{\rho}\) is in \([\rho_{min},\rho_{max}]\) and \(R_{a}^{2}\) is larger than the current largest \(R_{a}^{2*}\) value. If this condition is satisfied, then we update \(\mathbf{X}^{(p)}\), as indicated in line 9. In line 13, \(\mathbf{X}^{(p)}\) is returned.
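A condensed sketch of this extraction loop is given below; `feature_assign` is the sketch above, while `eval_fit` is an assumed callback that runs the Heckman model on a candidate matrix and returns \((\hat{\rho},R_{a}^{2})\) via Eqs. (26) and (27).

```python
# Algorithm 3 in miniature: draw B Gumbel-Softmax assignments from pi_hat and
# keep the one with the highest adjusted R^2 whose rho_hat lies in the
# user-defined range.
def extract_assignment(X_s, pi_hat, eval_fit, rho_min, rho_max, B=1000):
    best_R2a, X_best = float("-inf"), None
    for _ in range(B):
        X_p = feature_assign(X_s, pi_hat)      # candidate assignment
        rho_hat, R2a = eval_fit(X_p)           # run Heckman, score the fit
        if rho_min <= rho_hat <= rho_max and R2a > best_R2a:
            best_R2a, X_best = R2a, X_p
    return X_best
```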
```
Require: Training set \(\mathcal{D}_{tr}=\{(\mathbf{x}_{i},y_{i},s_{i}=1)\}_{i=1}^{m}\cup\{(\mathbf{x}_{i},s_{i}=0)\}_{i=m+1}^{n}\), number of selection features \(K\), initial fixed value \(c\), number of training epochs \(T\), learning rate \(\alpha\), lower correlation threshold \(\rho_{min}\), upper correlation threshold \(\rho_{max}\), number of Gumbel-Softmax samples \(B\)
Ensure: Augmented prediction feature matrix \(\underline{\mathbf{X}}^{(p)}\), estimated Heckman coefficients \(\hat{\underline{\mathbf{\beta}}}\)
1: for \(K\) selection features do
2:   Initialize \(\pi_{2k}=c\) and \(\pi_{1k}=1-c\)
3: end for
4: for \(T\) epochs do
5:   \(\mathbf{X}^{(p)}\leftarrow\) FeatureAssign(\(\mathcal{D}_{tr}\), \(K\), \(\mathbf{\pi}\))
6:   \(\hat{\underline{\mathbf{\beta}}},\{\hat{\lambda}_{i}\}_{i=1}^{m}\leftarrow\) Heckman(\(\mathcal{D}_{tr}\), \(\mathbf{X}^{(s)}\), \(\mathbf{X}^{(p)}\))
7:   Compute \(\mathcal{L}_{MAE}\) using \(\hat{\underline{\mathbf{\beta}}}\) and \(\{\hat{\lambda}_{i}\}_{i=1}^{m}\)
8:   \(\mathbf{\pi}\leftarrow\mathbf{\pi}-\alpha\nabla_{\mathbf{\pi}}\mathcal{L}_{MAE}\)
9:   \(\hat{\mathbf{\pi}}\leftarrow\mathbf{\pi}\)
10: end for
11: \(\mathbf{X}^{(p)}\leftarrow\) Extraction(\(\mathcal{D}_{tr}\), \(K\), \(\hat{\mathbf{\pi}}\), \(\rho_{min}\), \(\rho_{max}\), \(B\))
12: \(\hat{\underline{\mathbf{\beta}}},\{\hat{\lambda}_{i}\}_{i=1}^{m}\leftarrow\) Heckman(\(\mathcal{D}_{tr}\), \(\mathbf{X}^{(s)}\), \(\mathbf{X}^{(p)}\))
13: \(\underline{\mathbf{X}}^{(p)}\leftarrow[\mathbf{X}^{(p)},\{\hat{\lambda}_{i}\}_{i=1}^{m}]\)
14: return \(\underline{\mathbf{X}}^{(p)}\) and \(\hat{\underline{\mathbf{\beta}}}\)
```
**Algorithm 4** Heckman-FA
### _Heckman Selection Model with Feature Assignment_
Algorithm 4 gives the pseudocode of Heckman-FA. In lines 1-3, we initialize \(\mathbf{\pi}\). In this work, we use a fixed value \(c\in(0,1)\) to initialize \(\pi_{2k}\) and \(\pi_{1k}=1-\pi_{2k}\) for the \(k\)th selection feature. This means that each selection feature has an equal probability of being assigned as a prediction feature when we start training \(\psi\). For \(c\), we give users some flexibility to choose which value to use. However, we suggest for users to not use values that are extremely close to 0 or 1 for \(c\). This ensures that \(\psi\) is trained on a variety of sets of prediction features based on random Gumbel-Softmax samples. In lines 5-9, \(\psi\) is trained over \(T\) epochs. In line 5, we obtain the prediction feature matrix \(\mathbf{X}^{(p)}\) by executing Algorithm 2. In line 6, we
execute the steps of the Heckman model to get \(\hat{\underline{\boldsymbol{\beta}}}\) and \(\{\hat{\lambda}_{i}\}_{i=1}^{m}\). In line 7, we use \(\hat{\underline{\boldsymbol{\beta}}}\) and \(\{\hat{\lambda}_{i}\}_{i=1}^{m}\) to compute \(\mathcal{L}_{MAE}\). In line 8, we update \(\boldsymbol{\pi}\) by computing \(\nabla_{\pi}\mathcal{L}_{MAE}\) using Eq. (19). In line 11, using \(\hat{\boldsymbol{\pi}}\), we extract the assigned prediction feature matrix \(\boldsymbol{X}^{(p)}\) by calling Algorithm 3. In line 12, we run the Heckman model using the extracted prediction features and obtain \(\hat{\underline{\boldsymbol{\beta}}}\) and \(\{\hat{\lambda}_{i}\}_{i=1}^{m}\). After having \(\underline{\boldsymbol{X}}^{(p)}\) as the concatenation of \(\boldsymbol{X}^{(p)}\) and \(\{\hat{\lambda}_{i}\}_{i=1}^{m}\) in line 13, Heckman-FA returns \(\underline{\boldsymbol{X}}^{(p)}\) and \(\hat{\underline{\boldsymbol{\beta}}}\).
**Heckman-FA*.** We also present Heckman-FA* as an alternative option to Heckman-FA, where users extract \(\boldsymbol{X}^{(p)}\) by simply ranking the selection features based on the largest value of \(\hat{\pi}_{2k}\) instead of using Algorithm 3. In other words, we rank the likeliest selection features to be assigned as prediction features based on the objective in Eq. (18) that computes \(\hat{\boldsymbol{\pi}}\). We then examine the first \(J\) selection features in the ranking for all \(J\in\{1,\ldots,K-1\}\). Letting the first \(J\) selection features in the ranking be prediction features, we run the Heckman selection model on the training set and obtain \(\hat{\underline{\boldsymbol{\beta}}}\). We then compute \(\hat{\sigma}^{2}\), \(\hat{\rho}\), and \(R_{a}^{2}\) using the same steps indicated in lines 5-7 in Algorithm 3. Finally, out of all \(J\in\{1,\ldots,K-1\}\), we choose the set of top \(J\) selection features in the ranking as prediction features such that \(\hat{\rho}\in[\rho_{min},\rho_{max}]\) and \(R_{a}^{2}\) is at a maximum given \(\hat{\rho}\). As a result, the columns of \(\boldsymbol{X}^{(p)}\) corresponding to these features do not equal \(\mathbf{0}\). A sketch of this ranking-based extraction is given below.
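As before, `eval_fit` is an assumed helper returning \((\hat{\rho},R_{a}^{2})\) for a candidate prediction feature matrix; the rest of the sketch mirrors the description above.

```python
# Heckman-FA* in miniature: rank features by the learned assignment
# probability pi_hat_2k and scan prefixes of the ranking for the best fit.
import numpy as np

def heckman_fa_star(X_s, pi_hat_2, eval_fit, rho_min, rho_max):
    order = np.argsort(-pi_hat_2)              # likeliest features first
    best_R2a, best_cols = float("-inf"), None
    for J in range(1, X_s.shape[1]):           # J = 1, ..., K - 1
        mask = np.isin(np.arange(X_s.shape[1]), order[:J])
        rho_hat, R2a = eval_fit(np.where(mask, X_s, 0.0))
        if rho_min <= rho_hat <= rho_max and R2a > best_R2a:
            best_R2a, best_cols = R2a, order[:J]
    return best_cols
```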
**Computational Complexity.** To derive the computational complexity of Algorithm 4 (Heckman-FA), we first have to consider the complexity of Algorithm 2. The assignment computation takes \(O(nK)\) time. During the training of \(\psi\) in lines 4-10, the assignment computation is repeated for \(T\) epochs. Thus the complexity of lines 4-10 is \(O(nKT)\). Moreover, the complexity of Algorithm 3 is \(O(nKB)\) as the assignment computation is repeated for \(B\) Gumbel-Softmax samples. Therefore, Heckman-FA has a computational complexity of \(O(nK(T+B))\).
We also consider the complexity of Heckman-FA*. Similar to Heckman-FA, we first see that \(\psi\) is trained in \(O(nKT)\) time when running Heckman-FA*. However, the complexity of extraction is different for Heckman-FA* than for Heckman-FA. Because the Heckman selection model is called for \(K-1\) sets of selection features, the extraction process has a complexity of \(O(m(K-1))\). Thus the complexity of Heckman-FA* is \(O(nKT+m(K-1))\).
## V Experiments
### _Setup_
We evaluate the performance of our framework on CRIME [5] and COMPAS [11] datasets. The CRIME dataset contains socio-economic, law enforcement, and crime information for \(1,994\) communities in the United States. For each community, we predict the total number of violent crimes committed (per 100,000 population). The COMPAS dataset consists of \(5,278\) records collected from defendants in Florida between 2013 and 2014. Given attributes such as race, age, and priors count, we predict the defendant's decile score.
For each dataset, we split the dataset to include 70% of samples in \(\mathcal{D}_{tr}\). We then construct \(\mathcal{D}_{s}\) based on \(\mathcal{D}_{tr}\). For the CRIME dataset, we create sample selection bias in \(\mathcal{D}_{tr}\) by selecting communities such that the proportion of people under poverty is less than 0.05. As a result, 976 out of 1,395 communities in \(\mathcal{D}_{tr}\) are in \(\mathcal{D}_{s}\). For the COMPAS dataset, we select defendants in \(\mathcal{D}_{tr}\) who have a violent decile score of less than 10. As a result, there are 2,585 samples in \(\mathcal{D}_{s}\). We provide the set of selection features used for each dataset in Table II, where \(K=26\) for the CRIME dataset and \(K=10\) for the COMPAS dataset.
**Baselines and Hyperparameters.** We compare our approach to the following baselines: (1) naive linear regression (Naive) on \(\mathcal{D}_{s}\) and (2) Rockafellar-Uryasev (RU) regression [19], which involves fitting two neural networks with the RU loss to train a robust model under bounded MAR sample selection bias. In our experiments, we examine the RU regression baseline where the distribution shift between training and testing sets is restricted given \(\Gamma=3\). Unlike Heckman-FA, the baselines do not have access to features that model the selection of training samples. As a result, we should expect the baselines to not be as effective as Heckman-FA in handling regression under MNAR sample selection bias.
As we compute \(\hat{\boldsymbol{\pi}}\) using Heckman-FA, we work with the following hyperparameters. When initializing \(\boldsymbol{\pi}\), we set \(c=0.75\). We then train \(\psi\) for \(T=4,000\) epochs with the softmax temperature \(\tau=1\) for both datasets. We set the learning rate \(\alpha\) equal to 0.75 for the CRIME dataset and 0.05 for the COMPAS dataset. For the extraction of prediction features, we draw \(B=1,000\) Gumbel-Softmax samples. In all experiments, the range \([\rho_{min},\rho_{max}]\) is set to be \([0.01,0.1]\) for the CRIME dataset and \([0.1,0.3]\) for the COMPAS dataset. All models are implemented using PyTorch and executed on
the Dell XPS 8950 9020 with an Nvidia GeForce RTX 3080 Ti.
### _Results on Heckman-FA_
In Table III, we report the training and testing MSE to compare Heckman-FA to the other baselines when using the extracted prediction features. When evaluating the methods using the extracted prediction features, we observe that the training MSE is equal for Heckman-FA and Naive. However, we see that the testing MSE is lower for Heckman-FA than for Naive. For instance, the testing MSE for Heckman-FA on the CRIME and COMPAS datasets is 0.0203 and 0.2506, respectively, which is lower than the testing MSE for Naive. For both datasets, we see that Heckman-FA produces a model that outperforms Naive given the extracted prediction features. When comparing Heckman-FA and RU, the training MSE of RU is 0.0050 and 0.0027 lower than Heckman-FA on the CRIME and COMPAS datasets, respectively. This is expected as RU fits a non-linear model for regression. On the other hand, we see that the testing MSE for Heckman-FA is 0.0009 and 0.0083 lower than RU for the CRIME and COMPAS datasets, respectively.
We also run a paired \(t\)-test on 10 different assignments of prediction features to analyze the significance of the comparison between Heckman-FA and the other baselines. Table IV shows results of the test. We see that the p-value is very small after running the hypothesis test on both datasets. For instance, on the CRIME dataset, the p-value is 0.0001 and 0.0053 when comparing Heckman-FA with Naive and RU, respectively. Given that Heckman-FA significantly outperforms Naive and RU, the results in Tables III and IV show that Heckman-FA outputs a robust regression model under MNAR sample selection bias.
**Sensitivity Analysis.** We perform sensitivity analysis on Heckman-FA by testing the approach over different values for the number of epochs \(T\), fixed initial value \(c\), and number of Gumbel-Softmax samples \(B\) drawn during assignment extraction. Table V gives the testing MSE of Heckman-FA across different values of \(T\) and \(c\) while fixing \(B=1,000\). For each combination of \(T\) and \(c\) listed in Table V, the testing MSE of Heckman-FA is almost equal to the testing MSE provided in Table III for both datasets. We have a similar observation for each combination of \(T\) and \(B\) as shown in the right three columns of Table VI, which give the testing MSE of Heckman-FA across different values of \(T\) and \(B\) while fixing \(c=0.75\). This shows that the performance of Heckman-FA is not sensitive to changes in how \(\boldsymbol{\pi}\) is initialized and the number of Gumbel-Softmax samples examined during extraction.
We also look at \(\hat{\boldsymbol{\pi}}\), which is used to draw Gumbel-Softmax samples for assignment computation, to consider the condition \(\hat{\rho}\in[\rho_{min},\rho_{max}]\). We compare using the learned \(\hat{\boldsymbol{\pi}}\) to fixing all elements of \(\hat{\boldsymbol{\pi}}\) as 0.5 when drawing Gumbel-Softmax samples during extraction. To fairly compare the two settings, we obtain \(\hat{\boldsymbol{\pi}}\) after having \(c=0.5\). For each setting for \(\hat{\boldsymbol{\pi}}\), we compute the proportion of Gumbel-Softmax samples that correspond to \(\hat{\rho}\in[\rho_{min},\rho_{max}]\) out of 1,000 total samples. We repeat the experiment 40 times using the COMPAS dataset. We find that the average proportion of having \(\hat{\rho}\in[\rho_{min},\rho_{max}]\) is 0.0601 after using a fixed value of 0.5 for all elements of \(\hat{\boldsymbol{\pi}}\). However, when using the learned \(\hat{\boldsymbol{\pi}}\) from Heckman-FA, the average proportion of having \(\hat{\rho}\in[\rho_{min},\rho_{max}]\) increases by 0.0211. This result indicates that compared to letting each element of \(\hat{\boldsymbol{\pi}}\) be 0.5, Heckman-FA is more likely to extract assignments of prediction features such that \(\hat{\rho}\in[\rho_{min},\rho_{max}]\) when the Gumbel-Softmax samples are drawn using the learned \(\hat{\boldsymbol{\pi}}\).
**Execution Time.** We report the execution time after running Heckman-FA across different values of \(T\) and the number of Gumbel-Softmax samples \(B\) drawn during the assignment extraction in the left three columns of Table VI. We observe that for both datasets, Heckman-FA runs fast for each combination of \(T\) and \(B\). For instance, when \(T=100\) and \(B=100\), Heckman-FA is completed after 5.32 and 7.70 seconds for the CRIME and COMPAS datasets, respectively. While the execution time increases as \(T\) and \(B\) increase, the testing MSE of Heckman-FA remains close to the testing MSE reported in
Table III. This result shows that Heckman-FA is fast while maintaining quality performance on the testing set.
### _Results on Heckman-FA*_
In Table VII, we compare the performance of baselines to Heckman-FA*. When using the extracted prediction features, we see that while the training MSEs of Heckman-FA* and Naive are equal for both datasets, the testing MSE for Heckman-FA* is 0.0020 and 0.0048 lower than Naive on the CRIME and COMPAS datasets, respectively. This shows that by simply choosing \(J\) prediction features after ranking selection features based on \(\hat{\pi}_{2k}\), Heckman-FA* is robust against MNAR sample selection bias. Moreover, in terms of the testing MSE, Heckman-FA* outperforms RU by 0.0016 and 0.0090 on the CRIME and COMPAS datasets, respectively.
**Comparison with Correlation-Based Ranking.** To further show the effectiveness of Heckman-FA*, which requires training \(\psi\), we also compare the ranking of selection features based on \(\hat{\pi}_{2k}\) to the ranking of selection features based on their strength of correlation with the outcome. We use the name Heckman-C to describe the correlation-based ranking of selection features. Unlike the ranking based on \(\hat{\pi}_{2k}\), Heckman-C does not rely on training the assignment function beforehand. For Heckman-FA* and Heckman-C, we run each approach on the CRIME dataset using the top \(J\) selection features across different values of \(J\). Table VIII provides the MSE of the models on the testing set. We consider the values of \(J=6\) through \(J=12\). The underlined testing MSE corresponds to the size of the final set of prediction features. For Heckman-FA*, the final set of prediction features consists of \(8\) prediction features. For Heckman-C, there is no testing MSE underlined as there is no set of top \(J\) selection features extracted for the final set of prediction features. In other words, when ranking the selection features based on their correlation with the outcome using the CRIME dataset, there is no set of \(J\) features from the ranking that ensures the robustness of Heckman-C under MNAR sample selection bias. In our experiment, we also find that the condition \(\hat{\rho}\in[\rho_{min},\rho_{max}]\) is satisfied for Heckman-FA* for the range \(J=8\) through \(J=12\). However, for Heckman-C, no set of \(J\) prediction features satisfies this condition for any value of \(J\). This indicates that when assigning prediction features for the Heckman model, ranking selection features based on \(\hat{\pi}_{2k}\) after training \(\psi\) is more effective than ranking based on the correlation of features with the outcome.
## VI Conclusion
In this paper, we introduced Heckman-FA, a novel data-driven approach that obtains an assignment of prediction features for the Heckman selection model to robustly handle MNAR sample selection bias. Given a set of features that are used to fit the selection of samples, our approach first trains an assignment function by minimizing the MAE on the set of fully observed training samples. Heckman-FA finds a set of prediction features for the Heckman model by drawing a number of Gumbel-Softmax samples using the learned probability of assignment for each selection feature. This set is extracted based on the prediction model's goodness-of-fit and the estimated correlation between noise terms. We observed that Heckman-FA produces a robust regression model under MNAR sample selection bias on the outcome after training the model on real-world datasets. In the future, we plan to extend our approach to the problem of learning MNAR outcomes/labels in non-tabular data.
**Reproducibility.** The source code can be downloaded using the link [https://tinyurl.com/25s786z6](https://tinyurl.com/25s786z6).
## Acknowledgements
This work was supported in part by NSF 1946391 and 2137335.
|
2301.13527 | Real-Time Outlier Detection with Dynamic Process Limits | Anomaly detection methods are part of the systems where rare events may
endanger an operation's profitability, safety, and environmental aspects.
Although many state-of-the-art anomaly detection methods were developed to
date, their deployment is limited to the operation conditions present during
the model training. Online anomaly detection brings the capability to adapt to
data drifts and change points that may not be represented during model
development resulting in prolonged service life. This paper proposes an online
anomaly detection algorithm for existing real-time infrastructures where
low-latency detection is required and novel patterns in data occur
unpredictably. The online inverse cumulative distribution-based approach is
introduced to eliminate common problems of offline anomaly detectors, meanwhile
providing dynamic process limits to normal operation. The benefit of the
proposed method is the ease of use, fast computation, and deployability as
shown in two case studies of real microgrid operation data. | Marek Wadinger, Michal Kvasnica | 2023-01-31T10:23:02Z | http://arxiv.org/abs/2301.13527v1 | # Real-Time Outlier Detection with Dynamic Process Limits
###### Abstract
Anomaly detection methods are part of the systems where rare events may endanger an operation's profitability, safety, and environmental aspects. Although many state-of-the-art anomaly detection methods were developed to date, their deployment is limited to the operation conditions present during the model training. Online anomaly detection brings the capability to adapt to data drifts and change points that may not be represented during model development resulting in prolonged service life. This paper proposes an online anomaly detection algorithm for existing real-time infrastructures where low-latency detection is required and novel patterns in data occur unpredictably. The online inverse cumulative distribution-based approach is introduced to eliminate common problems of offline anomaly detectors, meanwhile providing dynamic process limits to normal operation. The benefit of the proposed method is the ease of use, fast computation, and deployability as shown in two case studies of real microgrid operation data.
anomaly detection, interpretable machine learning, online machine learning, real-time systems, streaming analytics
## I Introduction
The era of Industry 4.0 is ruled by data. Effective data-based decision-making is driven by the quantity of collected data. Internet of Things (IoT) devices made data acquisition seamless and positively influenced a wide range of industries. It is estimated that the annual economic impact of IoT will further grow and reach up to $6.2 trillion by 2025 [1].
Various data collection mechanisms are used to buffer and store the data for future processing. However, the tremendous increase in data availability and the desire to extract valuable insight led to problems with the unbounded buffering and storage capacity. Real-time evaluation of data streams became synonymous with smart data processing.
Streaming data analytics introduced mechanisms for online extraction and transformation while loading into storage only a fraction of the former data volume, which allowed the vital information carried by the data to be stored more comprehensively. However, the unstable quality of the data proved to be even more critical than its quantity.
Anomaly detection, well studied in the last decades, was reborn to the world of new challenges. Former studies were mainly concerned with a domain-specific detection of various anomalies while trained offline [2]. However, anomalies of diverse sources, from fraudulent web activity and suspicious financial transactions to sensor failure, malfunctioning of the hardware, and performance drops, mutate over time, and the model had to be updated.
Companies expanded their research activities on the creation and integration of generic frameworks combining prediction, detection, and alert mechanisms. One of the first projects, open-sourced for the public, are EGADS by Yahoo [3] and AnomalyDetection by Twitter [4]. The frameworks' modularity allowed the automation of the anomaly detection of time-series data and created space for discussion.
Moving from domain-specific to generic methods posed new problems connected to type I errors, i.e., a false-positive classification of normal behavior as anomalous. Accurate selection of forecaster, detector, and alerting mechanism allowed to tackle the problem, nevertheless, introduced considerable dependence on expert domain knowledge and fine-tuning.
Further work proved improvement in performance while relieving the tight requirements on domain knowledge [5]. However, strict demands on detection systems ranging from lasting up times to continuous monitoring with stable performance pointed to the challenge of data stationarity. Change points and concept drifts troubled unsupervised models, which led to service downtime due to the model retraining.
The era of adaptive machine learning introduced incremental learning schemes as a solution. Multiple studies for learning modes, adaptation methods, and model management swept through the machine learning community. Pannu et al. proposed an adaptive anomaly detection system [6]. However, the method represented a supervised operator-in-the-loop solution. Zhang et al. introduced an adaptive kernel density-based algorithm that uses an adaptive kernel width [7]. Nonetheless, training the models on big data had limitations resulting from the storage and unbounded buffering of data. Online learning models relaxed the need for data availability during model training [8]. On the contrary, it processed the data from a bounded buffer sequentially as in [9] and [10].
Anomaly detection in microgrids, however, called for low latency detection which implied real-time training and prediction processes [11]. Such adaptation of streamed modeling took into consideration strict boundaries on computational time. For work in this area see [12] and [13].
Alerting mechanisms in process automation detect situations where signal value deviates from constraints. An alert watchdog is triggered on threshold violation by individual signals.
The constraints, or process limits, are usually predefined and fixed. Nevertheless, factors such as aging and environmental changes call for dynamic process limits. Setting up a procedure for an ever-growing number of signal measurements is time-consuming. Besides, it is impossible for signals where no prior information about a correct process range is known. Those are subject to external factors that are unknown at setup time.
In this article, we suggest using the existing process automation infrastructure based on alerting (PLC, SCADA, among others) and applying machine learning to provide dynamic process ranges that reflect changing conditions. We propose an unsupervised anomaly detection algorithm capable of online adaptation to change points and concept drifts, which adds to a recently developed body of research. The approach is evaluated on two case studies of microgrid sensors. To the authors' knowledge, there are no studies to date concerned with providing adaptive operation constraints.
The main benefits of the proposed solution are that it:
* Keeps existing IT infrastructure, saving costs, and does not require operator retraining
* Automates alerting thresholds setup for a high number of signals
* Automates alerting for signals with no a priori knowledge of process limits
* Assesses changing environmental conditions and device aging
* Uses a self-learning approach on streamed data
## II Preliminaries
This section introduces the main concepts which are building pillars of the developed approach. Subsection II-A will discuss a one-pass algorithm that allows for online adaptation. The following Subsection II-B proposes the ability to invert the solution in a two-pass implementation. The mathematical background of distribution modeling in Subsection II-C provides a basis for the Gaussian anomaly detection model conceptualized in the last Subsection II-D of Preliminaries.
### _Welford's Method_
Streaming data analytics restricts the unbounded buffering or storage of the data, i.e., limits the uncontrolled growth of memory usage with the increasing amount of input data. In such cases, it is desired to keep the data only for the period of time required to perform computations. One-pass algorithms serve this purpose. This category of methods allows processing on-the-fly without the need to store the entire data stream.
**Definition II.1** (One-pass algorithm): _An algorithm with a single access to the data items in the order of their occurrence, i.e., \(x_{1},x_{2},x_{3},...\), is called a one-pass algorithm [14]_
Welford's method represents a numerically stable one-pass solution for the online computation of mean and variance [15]. Given \(x_{i}\), where \(i=1,...,n\) is the sample index in a population of size \(n\), the corrected sum of squares is defined as
\[S_{n}=\sum_{i=1}^{n}(x_{i}-\bar{x}_{n})^{2}, \tag{1}\]
where the running mean \(\bar{x}_{n}\) is
\[\bar{x}_{n}=\frac{n-1}{n}\bar{x}_{n-1}+\frac{1}{n}x_{n}=\bar{x}_{n-1}+\frac{x_ {n}-\bar{x}_{n-1}}{n}. \tag{2}\]
The following identities to update the corrected sum of squares hold true
\[S_{n}=S_{n-1}+(x_{n}-\bar{x}_{n-1})(x_{n}-\bar{x}_{n}), \tag{3}\]
and the corresponding variance is
\[s_{n}^{2}=\frac{S_{n}}{n-1}. \tag{4}\]
As we can see in (3), we access only the current data sample \(x_{n}\) and the previous value of \(\bar{x}_{n-1}\), which is updated in (2) using the same data sample and the size of the seen population \(n\).
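For reference, a minimal Python sketch of this update follows; the class name is illustrative.

```python
# One-pass Welford update of the running mean and variance, Eqs. (2)-(4).
class Welford:
    def __init__(self):
        self.n, self.mean, self.S = 0, 0.0, 0.0

    def update(self, x: float):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n            # Eq. (2)
        self.S += delta * (x - self.mean)      # Eq. (3)

    @property
    def var(self) -> float:
        return self.S / (self.n - 1) if self.n > 1 else 0.0   # Eq. (4)
```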
### _Inverse Welford's Method_
Let the incoming stream of data be subject to the concept drift. Such alternation in statistical properties has a negative influence on prediction accuracy. Therefore, an adaptation of any machine learning model is crucial for successful long-term operation.
**Definition II.2** (Concept drift): _Concept drift is a change in the statistical properties that occur in a sub-region of the feature space._
The previous Subsection II-A defined the main concept of online statistical computation that allows reacting to such changes. However, the later in the stream a shift occurs, the slower the running mean adjusts, owing to the inverse relationship in (2) between the population size \(n\) and the influence of the latest sample \(x_{n}\) on the updated value of \(\bar{x}_{n}\). For this reason, we define the expiration period \(t_{e}\), over which the running statistics are computed. After the expiration period, the data items are forgotten. Such reversal requires storing all the data in the window in order to revert their effect. Given \(t_{e}=n-1\) we can revert the influence of the first data sample on the running mean as
\[\bar{x}_{n-1}=\frac{n}{n-1}\bar{x}_{n}-\frac{1}{n-1}x_{n-t_{e}}=\bar{x}_{n}- \frac{x_{n-t_{e}}-\bar{x}_{n}}{n-1}, \tag{5}\]
then reverting the sum of squares follows as
\[S_{n-1}=S_{n}-(x_{n-t_{e}}-\bar{x}_{n-1})(x_{n-t_{e}}-\bar{x}_{n}), \tag{6}\]
which allows the computation of variance
\[s_{n-1}^{2}=\frac{S_{n-1}}{n-2}. \tag{7}\]
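Extending the Welford sketch above with this inverse update gives a rolling variant; note that the expired sample itself must still be available from the bounded window.

```python
# Inverse Welford update, Eqs. (5)-(7): forget the expired sample x_old.
class RollingWelford(Welford):
    def revert(self, x_old: float):
        mean_new = self.mean - (x_old - self.mean) / (self.n - 1)  # Eq. (5)
        self.S -= (x_old - mean_new) * (x_old - self.mean)         # Eq. (6)
        self.mean, self.n = mean_new, self.n - 1   # var property now yields Eq. (7)
```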
### _Modeling Distribution_
A statistical distribution can be used to create a generalized model of normal system behavior based on observed measurements. Specifically, if no change point is expected in a given subset of samples, the Gaussian normal distribution can be fitted. Parameters of the normal distribution are used to compute the standard score (8) for each new observation.
**Definition II.3** (Standard Score): _Standard score or \(Z\)-score is a number that specifies the number of sample standard deviations \(s_{n}\) by which observation \(x_{n}\) deviates from the sample mean \(\bar{x}_{n}\) of the normal distribution_

\[z_{n}=\frac{x_{n}-\bar{x}_{n}}{s_{n}}. \tag{8}\]
In order to define the general probability of \(z\)-score belonging to anomaly we use probability computed using Cumulative Distribution Function (CDF). However, the \(z\)-score must be bounded using an error function into the interval from 0 to 1.
**Definition II.4** (Approximate Error Function): _The approximate error function represents the approximate probability that the random variable \(X\) lies in the range \([-z_{n},z_{n}]\), denoted as_

\[E_{A}(z_{n})=\frac{z_{n}e^{-z_{n}^{2}}}{\sqrt{\pi}}\left(2+\frac{4}{3}z_{n}^{2}+\frac{8}{15}z_{n}^{4}+\dots\right). \tag{9}\]
**Definition II.5** (Cumulative Distribution Function (CDF)): _CDF represents the probability that the random variable \(X\) takes a value less than or equal to \(x_{i}\). \(F_{X}\colon\mathbb{R}\to[0,1]\). For generic normal distribution with sample mean \(\bar{x}_{n}\) and sample deviation \(s_{n}\) the cumulative distribution function \(F_{X}(x)\) equals to_
\[F_{X}(x_{i})_{n}=\frac{1}{2}(\,1+E_{A}(\frac{z_{n}}{\sqrt{2}})\,). \tag{10}\]
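A short Python sketch of this scoring pipeline; Python's exact `math.erf` is used in place of the truncated series (9), and the function names are our own:

```python
import math

def z_score(x, mean, var):
    """Standard score of observation x, eq. (8)."""
    return (x - mean) / math.sqrt(var)

def normal_cdf(x, mean, var):
    """Gaussian CDF of eq. (10); math.erf replaces the series E_A of (9)."""
    return 0.5 * (1.0 + math.erf(z_score(x, mean, var) / math.sqrt(2.0)))
```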
Given a probability, we can also derive the value of \(x\) to which it corresponds using a percent-point function that computes the inverse CDF (ICDF), also denoted as \(F_{X}(x_{i})_{n}^{-1}\).
**Definition II.6** (Percent-Point Function (PPF)): _PPF returns the threshold value of the random variable \(X\) below which \(X\) falls with probability equal to the selected quantile \(q\), i.e., the value at which \(F_{X}(x)\) reaches \(q\); \(Q_{X}\colon[0,1]\to\mathbb{R}\). An algorithm that calculates the value of the PPF is reported below as Algorithm 1._
```
Require: quantile \(q\), sample mean \(\bar{x}_{n}\) (2), sample variance \(s_{n}^{2}\) (4)
Ensure: threshold value \(x_{n,q}\)
Initialisation:
1:\(f\gets 10\); \(l\leftarrow-f\); \(r\gets f\);
LOOP Process:
2:while\(F_{X}(l)-q>0\)do
3:\(r\gets l\);
4:\(l\gets lf\);
5:endwhile
6:while\(F_{X}(r)-q<0\)do
7:\(l\gets r\);
8:\(r\gets rf\);
9:endwhile
10:\(z_{q}=\arg\min_{z}\|F_{X}(z)-q\|\) s.t. \(l\leq z\leq r\)
11:return\(x_{n,q}=z_{q}\sqrt{s_{n}^{2}}+\bar{x}_{n}\)
```
**Algorithm 1** Percent-Point Function for Normal Distribution
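Since the case study in Section IV solves the root search with Brent's method, the bracketing loop of Algorithm 1 can be sketched in Python with `scipy.optimize.brentq`. The helper below works on the standard normal and rescales the result in the return step; all names are illustrative:

```python
import math
from scipy.optimize import brentq

def ppf(q, mean, var, f=10.0):
    """Percent-point function for N(mean, var), following Algorithm 1."""
    cdf = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    l, r = -f, f
    while cdf(l) - q > 0:          # widen the bracket to the left
        l, r = l * f, l
    while cdf(r) - q < 0:          # widen the bracket to the right
        l, r = r, r * f
    z_q = brentq(lambda z: cdf(z) - q, l, r)  # root of F_X(z) - q in [l, r]
    return z_q * math.sqrt(var) + mean        # rescale to the fitted distribution
```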
### _Gaussian Anomaly Detection_
Anomalies come in various kinds and flavors. Commonly distinguished types are point (spatial), contextual, and collective (temporal) anomalies [2]. Spatial anomalies take on a value that markedly deviates from the sample mean \(\bar{x}_{n}\). From a statistical viewpoint, spatial anomalies can be considered values \(x\) that significantly differ from the data distribution.
In empirical fields such as machine learning, the three-sigma rule defines a region of the distribution where normal values are expected to occur with near certainty. Under this assumption, approximately 0.27% of the values in the given distribution are considered anomalous.
**Definition II.7** (Three-Sigma Rule of Thumb (3\(\sigma\) rule)): _3\(\sigma\) rule represents a probability, that any value \(x_{i}\) of random variable \(X\) will lie within a region of values of normal distribution at the distance from the sample mean \(\mu_{n}\) of at most 3 sample standard deviations \(\sigma_{n}\)._
\[P\{|x_{i}-\mu_{n}|<3\sigma_{n}\}=0.99730 \tag{11}\]
Anomalous values occur on both tails of the distribution. In order to discriminate the anomalies using the three-sigma rule on both tails of the distribution, we define the anomaly score as follows
\[y_{i}=2\left|F_{X}(x_{i})_{n}-\frac{1}{2}\right|, \tag{12}\]
where
\[y_{i}\in[0,P\{|x_{i}-\mu_{n}|<3\sigma_{n}\}), \tag{13a}\]
applies for normal observations and
\[y_{i}\in[P\{|x_{i}-\mu_{n}|<3\sigma_{n}\},1], \tag{13b}\]
for anomalies.
Using pure statistics to model normal behavior lets us ask for the threshold value \(x\) which corresponds to the area under the curve of the CDF equal to a given probability. Such a query can be answered by inverting (12). However, the inverse of (12) would fail the horizontal line test. Therefore, we restrict the applicability of the inverse only to \(F_{X}(x_{i})_{n}\in[0.5,1]\)
\[x_{i}=F_{X}\left(\frac{y_{i}}{2}+\frac{1}{2}\right)_{n}^{-1}. \tag{14}\]
In order to derive a lower threshold, the Gaussian distribution is fitted to the negative value of the streamed data and evaluated accordingly using the previously defined equations.
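Putting (11)–(14) together, a brief sketch using SciPy's closed-form normal distribution in place of the approximations above (names are our own):

```python
from scipy.stats import norm

Q = 0.9973  # three-sigma probability, eq. (11)

def anomaly_score(x, mean, std):
    """Two-sided score of eq. (12): near 0 at the mean, approaching 1 in the tails."""
    return 2.0 * abs(norm.cdf(x, loc=mean, scale=std) - 0.5)

def upper_limit(mean, std, q=Q):
    """Upper process limit via eq. (14); the lower limit follows by applying
    the same call to the distribution fitted on the negated signal."""
    return norm.ppf(q / 2.0 + 0.5, loc=mean, scale=std)
```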
## III ICDF-based Real-Valued Threshold System
We suggest a novel approach to provide dynamic process limits using an online outlier detection algorithm capable of handling concept drift in real time. Our main contribution is based on using an inverse cumulative distribution function (ICDF) to supply a real-valued threshold for anomaly detection, i.e., to find the values of the signal which correspond to the alert-triggering process limits. Therefore, in the context of machine learning, we are tackling an inverse problem, i.e., calculating the input that produced the observation. To utilize an adaptive ICDF-based threshold system, the univariate Gaussian distribution has to be fitted to the data in online training and the ICDF evaluated on the fly. The method is divided into four parts, described in the following subsections. For a simplified representation of the method, see Algorithm 2.
### _Model Initialization_
The initial conditions of the model parameters are \(\mu_{0}=x_{0}\) for the mean and \(s_{0}^{2}=1\) for the variance. The score threshold is constant and set to \(q=0.9973\). Moreover, there are two user-defined parameters: the expiration period \(t_{e}\) and the time constant of the system \(t_{c}\). The expiration period, which defines the period over which the time-rolling computations are performed, can be altered to change the proportion of expected anomalies; it allows relaxation (longer expiration period) or tightening (shorter expiration period) of the thresholds. The time constant of the system determines the speed of change-point adaptation, as it selects the anomalous points that will be used to update the model: the model is updated over a window of values \(Y=\{y_{i-t_{c}},...,y_{i}\}\) if the following condition holds true
\[\frac{\sum_{y\in Y}y}{n(Y)}>q. \tag{15}\]
The existence of two tunable and easy-to-interpret hyper-parameters makes it very easy to adapt the solution to any univariate anomaly detection problem.
### _Online training_
Training of the model takes place in an online fashion, i.e., the model learns from one sample at a time at the moment of its arrival. Learning updates the mean and variance of the underlying Gaussian distribution. The computation of the moving mean (2) and variance (4) is handled by Welford's method. Each sample is forgotten after the expiration period and its effect is reverted in a second pass. First, the new mean is computed using (5), which accesses the first value in the bounded buffer. That value is dropped in the same pass. Second, the sample variance is reverted based on (7), using the new mean and the current mean, which is overwritten afterward. For details, see Subsection II-B.
### _Online prediction_
In the prediction phase, the \(z\)-score (8) is computed and passed through \(E_{A}\) (9) in order to evaluate \(F_{X}(x_{i})\) from (10). The algorithm marks incoming data points whose corresponding anomaly score (12) is out of the range defined by the threshold \(q\). In other words, it marks any signal value \(x_{i}\) that is greater than or equal to the threshold that bounds the three-sigma region.
### _Dynamic Process Limits_
Normal process operation is constrained online using the ICDF. The constant value of \(q\) and the parameters of the fitted distribution are passed through Algorithm 1 to obtain the value of \(x\) that would trigger an upper-bound outlier alarm at the given time instance. To obtain a lower bound on the operating conditions, the same procedure is applied to the distribution fitted on the negated values of the input.
```
Require: expiration period \(t_{e}\), time constant \(t_{c}\)
Ensure: score \(y_{i}\), threshold \(x_{i,q}\)
Initialisation:
1:\(i\gets 1;\ n\gets 1;\ q\gets 0.9973;\ \bar{x}\gets x_{0};\ s^{2} \gets 1\);
2: compute \(F_{X}(x_{0})\) using (8)–(10);
LOOP Process:
3:loop
4:\(x_{i}\leftarrow\) RECEIVE();
5:\(y_{i}\leftarrow\) PREDICT(\(x_{i}\)) using (12);
6:\(x_{i,q}\leftarrow\) GET(\(q,\bar{x},s^{2}\)) using Algorithm 1;
7:if (13a) or (15) then
8:\(\bar{x}\), \(s^{2}\leftarrow\) UPDATE(\(x_{i},\bar{x},s^{2},n\)) using (2), (4);
9:\(n\gets n+1\);
10:for\(x_{i-t_{e}}\)do
11:\(\bar{x}\), \(s^{2}\leftarrow\) REVERT(\(x_{i-t_{e}},\bar{x},s^{2},n\)) using (5), (7);
12:\(n\gets n-1\);
13:endfor
14:endif
15:\(i\gets i+1\);
16:endloop
```
**Algorithm 2** Online Anomaly Detection Workflow
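A condensed Python sketch of the workflow of Algorithm 2; it uses SciPy's closed-form normal distribution, omits the change-point re-adaptation condition (15) for brevity, and all names are illustrative:

```python
from collections import deque
from scipy.stats import norm

class OnlineDetector:
    """Online Gaussian anomaly detection with an expiring window (Algorithm 2)."""

    def __init__(self, t_e, q=0.9973):
        self.q = q
        self.buffer = deque(maxlen=t_e)  # samples inside the expiration period
        self.mean, self.S, self.n = 0.0, 0.0, 0

    def _std(self):
        return (self.S / (self.n - 1)) ** 0.5 if self.n > 1 else 1.0

    def step(self, x):
        # PREDICT: anomaly score (12) and current upper limit (Algorithm 1)
        score = 2.0 * abs(norm.cdf(x, self.mean, self._std()) - 0.5)
        limit = norm.ppf(self.q / 2.0 + 0.5, self.mean, self._std())
        if score < self.q or self.n == 0:  # UPDATE only with normal samples, cf. (13a)
            if len(self.buffer) == self.buffer.maxlen:
                # REVERT the sample that expires, eqs. (5)-(7)
                x_old = self.buffer[0]
                new_mean = self.mean - (x_old - self.mean) / (self.n - 1)
                self.S -= (x_old - new_mean) * (x_old - self.mean)
                self.mean, self.n = new_mean, self.n - 1
            self.buffer.append(x)  # the deque drops the expired sample itself
            self.n += 1
            delta = x - self.mean  # Welford update, eqs. (2)-(4)
            self.mean += delta / self.n
            self.S += delta * (x - self.mean)
        return score, limit
```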
## IV Case Study
In this section, we demonstrate the applicability of the proposed ICDF-based approach in two case studies of microgrid operation. The properties and performance were investigated using streamed signals from IoT devices. The successful deployment demonstrates that this approach is suitable for the existing alerting mechanisms of process automation infrastructure.
The case studies were realized using Python 3.10.1 on a Mac with an M1 CPU and 8 GB of RAM. The percent-point function was solved using an iterative root-finding algorithm, Brent's method.
### _Battery Energy Storage System (BESS)_
First, we verify our proposed method on a BESS. The BESS reports measurements of the State of Charge (SoC), supply/draw energy events, inner temperature, outer temperature, and the heating, ventilation, and air conditioning (HVAC) state. Tight control of the battery cell temperature is needed for optimal performance and maximum lifespan of the battery. Identifying anomalous events and removing corrupted data might yield significant improvements at the process control level.
The sampling rate of the signal measurement is 1 minute. However, network communication is prone to packet dropout, which results in non-uniform sampling. To protect the sensitive business value of the data, we normalize all signals to the range \([0,1]\). The goal was to mark anomalous events in the data and provide adaptive process limits from the online self-learning model.
Fig. 1 renders the measurement of the average battery cell temperature from 21st February until 26th March. Over this span, we can observe multiple anomalies of various sources, for instance, packet dropout, suspicious events, intermittent sensor failure, and a change point in the data distribution. The dates of the listed events are provided later in the paper.
The initial conditions of the model states are set based on Subsection III-A. The user-defined parameters were set to 7 days for the expiration period and 5 hours for the time constant. Anomalies found during the first day of service are ignored due to the initialization of the detector. In this case study, the anomaly detection problem was approached by online model fitting, as described in Subsection III-B.
Using the online prediction described in Subsection III-C, we tag each sample as an anomalous or normal data point. Fig. 2 renders vertical rectangles over the regions from the start until the end of each predicted anomalous event.
The results on the Average Cell Temperature in Fig. 2 show that the model could capture anomalous patterns of various sources. Despite self-learning without supervision, the model-classified anomalies were also confirmed by the data provider after inspection: for instance, a rare event of manipulation with the BESS on 3rd March, followed by a peak on 4th March. The BESS relocation on 7th March led to a change point, which was alerted, and the system adapted completely over the course of one day. Test events resulted in peak values from 10th to 15th March, and faulty measurements on 12th March followed by a packet loss on 21st March were alerted too. The system also tagged the next two tests of temperature-control switch-offs.
These findings, which support the model's ability to discriminate anomalous behavior, are important for a meaningful realization of dynamic process thresholding. The real-valued threshold mechanism defined in Subsection III-D provided up-to-date upper and lower bounds for the signal. As for the validity of the dynamic process limits, each breakout of the signal value from within this range was also marked by the anomaly detection system. Fig. 3 points to the capability to adapt to the change point on 7th March and to mitigate the influence of intermittent anomalies on the distribution. The speed of the change-point adaptation, as well as the mitigation of the effect of anomalies on the tightness of the limits, are governed by the user-defined expiration period and time constant of the system.
Fig. 3: Time Series of Average Battery Cell Temperature measurement (green line) and predicted anomalous events (red vertical rectangles). The reddish fill bounded by the red line represents an area of anomalous behavior as given by the anomaly detector.
Fig. 2: Time Series of Average Battery Cell Temperature measurement (green line) and predicted anomalous events (red vertical rectangles).
### _Power Inverter_
A second case study demonstrates the proposed method's applicability to the temperature of a power inverter. During high-load periods, inverters can heat up swiftly. The technical documentation of every inverter provides details on the continuous output rating as a function of temperature, which implies static process limits. Normally, for high temperatures, the rating drops rapidly. Nevertheless, the impact of aging and ambient conditions may render conservative limits impractical. Thus, an alerting mechanism for the detection of abnormal heating shall be developed. Providing a real-valued anomaly threshold tightens the theoretical operating conditions and gives the ability to track performance and deviations.
Fig. 4 depicts one month of operation of the inverter, from 16th March to 17th April 2022. After the packet loss before 21st March, rare temperature events occurred. Both events fell outside the normal operating conditions given by the dynamic process limit. Four faulty sensor readings follow, on 22nd, 23rd, and 29th March and 4th April. The first two were tagged as anomalies, though almost missed due to the prolonged data loss: since the time from initialization was shorter than \(t_{e}\), the edge between the drop and the rise had a relaxing effect on the limits. This finding suggests the need for a grace-period modification, which would suspend self-learning until the buffer given by \(t_{e}\) is fully filled. The third faulty reading was tagged without influencing the distribution and operational boundaries, due to the effect of \(t_{c}\). The oscillations that kept the boundaries relaxed vanished after 29th March, which further tightened the process limit range. After the fourth caught fault, which was not used to update the model, the detector deliberately adapted the range of normal operation during the next day. Outliers during the sensor rescaling period from 7th April were all tagged. However, the relaxed operational conditions would probably lead to smaller anomalous oscillations being ignored in this period.
## V Conclusion
This paper proposes a novel approach to real-time anomaly detection that provides a physical threshold bounding normal process operation. Such an approach has wide applicability in all process automation fields where low-latency evaluation and online adaptation are crucial. Moreover, adaptive operation constraints provide less conservative process limits and give important insight into the system's behavior. The plug-and-play nature of the model makes it easily deployable, as shown in the two case studies.
The first case study, performed on a BESS, examined the average battery cell temperature and demonstrated the ability to capture anomalies as well as the capacity to restrict the operational area by inverting the cumulative distribution function. Following our investigation of state-of-the-art online anomaly detection described in Section I, we conclude that although the robustness and performance of complex methods may exceed those of the proposed method, the ability to invert the prediction to depict real-time operational restrictions, while eschewing non-comprehensible parameters, makes it superior for a wide range of use cases. However, performance might be greatly affected when the time constants of the observed system are not known. This restriction is much weaker than the need for data scientists skilled in the hyper-parameter tuning of unsupervised models. Moreover, hyper-parameter tuning calls for ground-truth information about anomalies, which requires exhaustive collection and is not possible in real time.
Future work on the method will follow three practical challenges. Firstly, multivariate online anomaly detection based on the developed method will be researched. A multivariate implementation would allow the detection of temporal anomalies and the use of features that render spatio-temporal characteristics of the modeled system. This is a common property of most online anomaly detection methods, which, however, do not offer real-valued thresholds on operational conditions. Multivariate clusters can reveal regions of normal operation that would otherwise be detected incorrectly.
Secondly, the challenge of varying positive and negative process limit thresholds will be examined. As depicted in Fig. 4, the positive and negative outliers in many cases result from different underlying mechanisms. The current approach draws a range of normal operational conditions centered around the moving mean value.
Thirdly, automated system identification using normal operation data would further simplify usage by removing the requirement for knowledge of the system dynamics. The use of the normal distribution means that the three-sigma rule constrains the number of anomalies only theoretically. This allows the number of anomalies in a given time window to vary greatly, and thus the performance is not very sensitive to the selection of the threshold. On the contrary, the time window does impact the model's performance.
Fig. 4: Time Series of Inverter Temperature measurement (green line) and predicted anomalous events (red vertical rectangles). The reddish fill bounded by the red line represents an area of anomalous behavior. |
2309.13305 | Multilevel User Credibility Assessment in Social Networks | Online social networks are one of the largest platforms for disseminating
both real and fake news. Many users on these networks, intentionally or
unintentionally, spread harmful content, fake news, and rumors in fields such
as politics and business. As a result, numerous studies have been conducted in
recent years to assess the credibility of users. A shortcoming of most of
existing methods is that they assess users by placing them in one of two
categories, real or fake. However, in real-world applications it is usually
more desirable to consider several levels of user credibility. Another
shortcoming is that existing approaches only use a portion of important
features, which downgrades their performance. In this paper, due to the lack of
an appropriate dataset for multilevel user credibility assessment, first we
design a method to collect data suitable to assess credibility at multiple
levels. Then, we develop the MultiCred model that places users at one of
several levels of credibility, based on a rich and diverse set of features
extracted from users' profile, tweets and comments. MultiCred exploits deep
language models to analyze textual data and deep neural models to process
non-textual features. Our extensive experiments reveal that MultiCred
considerably outperforms existing approaches, in terms of several accuracy
measures. | Mohammad Moradi, Mostafa Haghir Chehreghani | 2023-09-23T08:40:34Z | http://arxiv.org/abs/2309.13305v1 | # Multilevel User Credibility Assessment in Social Networks
###### Abstract
Online social networks are one of the largest platforms for disseminating both real and fake news. Many users on these networks, intentionally or unintentionally, spread harmful content, fake news, and rumors in fields such as politics and business. As a result, numerous studies have been conducted in recent years to assess the credibility of users. A shortcoming of most of existing methods is that they assess users by placing them in one of two categories, real or fake. However, in real-world applications it is usually more desirable to consider several levels of user credibility. Another shortcoming is that existing approaches only use a portion of important features, which downgrades their performance. In this paper, due to the lack of an appropriate dataset for multilevel user credibility assessment, first we design a method to collect data suitable to assess credibility at multiple levels. Then, we develop the MultiCred model that places users at one of several levels of credibility, based on a rich and diverse set of features extracted from users' profile, tweets and comments. MultiCred exploits deep language models to analyze textual data and deep neural models to process non-textual features. Our extensive experiments reveal that MultiCred considerably outperforms existing approaches, in terms of several accuracy measures.
**Keywords** Online social networks, credibility assessment, multilevel user credibility, deep neural networks
## 1 Introduction
Today, due to easy accessibility and low cost, social networks have become very popular. The comprehensiveness and high speed of dissemination over social networks have turned these networks into suitable platforms for news distribution. The spread of inaccurate news and rumors, along with destructive behaviors of certain user accounts, are among the issues that have endangered the functionality and healthiness of these networks. The dissemination of false news or the sharing of unverified information by individuals, both intentionally and unintentionally, can have extensive destructive effects in various dimensions. Therefore, presenting a method for assessing the credibility of users is essential.
Existing approaches in this area utilize a set of features to detect user credibility. Some use features from the text, some use non-textual features, and some use a mix of both. Successful algorithms in this domain often employ machine learning and deep learning techniques for feature analysis and user credibility determination. Existing approaches typically use only a portion of the influential features and parameters for evaluating user credibility. Furthermore, they evaluate users' credibility by categorizing them into either fake or genuine accounts. However, in many cases, these users are real individuals who, knowingly or unknowingly, engage in spreading fake news and other inappropriate behaviors such as sharing malicious links and threatening other users. Evaluating users across multiple levels, rather than just considering the fake and genuine categories, provides a more clear and more realistic picture of user credibility and activities
on social networks. In addition to these challenges, there is currently no available dataset in which users are classified into multiple levels of credibility.
In this paper, due to the lack of an appropriate dataset, we first design a method to collect data, in a multiclass manner, that enables credibility assessment at multiple levels. The collected dataset is from the Twitter social network. In the second step, a model is developed to conduct the credibility assessment. Our proposed method, called MultiCred, categorizes each user into one of several credibility levels, instead of the binary fake/genuine classification, using a comprehensive set of features related to users' profiles, published content, and other users' opinions. Because of the diversity of the exploited features, it uses a separate method for the analysis and processing of each feature category. We evaluate the performance of MultiCred over the collected real-world dataset. Our empirical results demonstrate that MultiCred considerably outperforms state-of-the-art methods for assessing user credibility at multiple levels.
The rest of this paper is organized as follows. Section 2 provides a review of previous research conducted in this area. Section 3 covers fundamental concepts and defines the studied problem. In Section 4, our data collection procedure and the description of the collected data are discussed. Moving forward, Section 5 elaborates on our proposed algorithm, dissecting its different components. Section 6 presents our empirical results. Finally, the paper is concluded in Section 7.
## 2 Related work
Over the past decade, social networks have captured significant attention from users worldwide. Consequently, prominent websites such as Facebook, Twitter, LinkedIn and Instagram witnessed an unexpected surge in user registrations. However, researchers argue that not all registered accounts are genuine; many are fake and created for specific purposes. In recent years, researchers have leveraged numerous advanced technologies for identifying fake accounts. In general, existing research in the realm of detecting fake accounts can be categorized into three main groups:
* methods that utilize non-textual features (profile-based features).
* methods that utilize textual features.
* methods that combine and utilize both textual and non-textual features.
In the rest of this section, well-known methods from each category are reviewed.
### Methods based on user profile features
Singh et al. [1] utilized supervised machine learning models for detecting fake profiles on social networks. To differentiate between fake and genuine profiles, they used the average number of followers of users. They found that if a user profile has more than 30 followers, it is not fake. They also discovered that the average age of owners of fake profiles is between 18 and 19, and their profile images are sourced from the internet. Agarwal et al. [2] developed a model for detecting fake accounts by analyzing user sentiments on Facebook using a supervised model. They considered emotion-based features such as anger, sadness, fear, joy, trust, positive frequency, and negative frequency. Their analysis revealed that users of fake profiles mainly employ emotions such as hatred, killing, and ugliness.
Zarei et al. [3] presented a model for detecting fake political accounts on social media. They collected data from three politicians' Instagram profiles. Using this data, their model was able to identify a significant number of fake individuals and political bots. The authors reported that this was the first paper to perform such an analysis on Instagram data. They used the TF-IDF technique to identify accounts with similar profile information and employed convolutional neural networks to compare profile images. Wanda and Jie [4] proposed a deep neural model, called DeepProfile, for detecting fake accounts on online social networks. They modified the pooling layer in convolutional neural networks to improve accuracy. Kumari et al. [5] designed a system with the aim of identifying fake users on Twitter. Since we use their method as one of our baselines, we will discuss it in detail in Section 6.
### Methods based on textual features
Swe and Myo [6] introduced a blacklist approach for detecting fake accounts on online social networks. The blacklist is generated using topic modeling and keyword extraction methods. Their model doesn't require profile-based or network-based features, which in turn reduces the additional time and cost needed for feature extraction. The authors evaluated their method on the 1KS-10KN and the Honeypot datasets. Clark et al. [7] employed natural language processing for automated bot detection on Twitter. Their model utilized human-generated natural language text to establish a criterion for identifying accounts with automated messages. Two datasets were collected: firstly, geolocated tweets from 1000 active users, known as the Geo-Tweet dataset for classifying humans and bots; secondly, a dataset of honeypot contents. They found that the model's accuracy increases with the tweet size on the Geo-Tweet dataset.
Khan et al. [8] distinguished spammers and bloggers from genuine experts on Twitter. They collected around 0.4 million tweets from about 3200 users actively sharing health-related information on Twitter. For categorizing spammers and bloggers, they utilized a link-based topic search approach (HITS) and differentiated them from experts in a specific domain. Their model for distinguishing bloggers (fake users) from genuine experts doesn't require a significant amount of pre-labeled data. Phad and Chavan [9] proposed a model for identifying compromised profiles on social networks. They collected Twitter account information using the Twitter archive. The dataset encompassed 26,363 tweets from 48 prominent accounts. Out of this number, 25,363 were legitimate tweets, while 1,000 were malicious ones. Their model creates a history for a user profile and determines whether the account is at risk based on this history.
### Methods based on both textual and non-textual features
Al-Zoubi et al. [10] identified spam profiles on Twitter using general features. Their dataset included 82 user profiles posting in English and Arabic. Features such as suspicious words, default profile picture, text-to-link ratio, comment ratio, tweet time, and others were extracted from these profiles. Machine learning classifiers, namely decision tree, C4.5, \(K\) nearest neighbor, naive Bayes, and multi-layer perceptron, were employed to classify data into spam and non-spam profiles. Alom et al. [11] proposed a model for detecting spam accounts on Twitter. They utilized a combination of graphic and content-based features. Several machine learning classifiers, including \(K\) nearest neighbor, decision tree, naive Bayes, random forest, logistic regression, SVM, and XGBoost, were employed on the selected features to distinguish between spam accounts and legitimate accounts.
Aswani et al. [12] proposed another model for identifying spammers on Twitter. They collected 1,844,701 tweets from 14,235 Twitter profiles along with 13 relevant statistical features. These features were extracted from social media analytics. They utilized a bio-inspired algorithm, called Firefly, for identifying spammers and regular users. Adewole et al. [13] proposed a model that identifies both spam messages and spam accounts in online social networks. For spam message detection, they utilized datasets collected from three sources: SMS collection V.1, SMS corpus V.0.1 Big, and Twitter spam corpus, with a total of 5574, 1324, and 18,000 data samples, respectively. They extracted eighteen features and used different machine learning algorithms to classify messages and accounts. Among them, random forest achieved the best performance. Verma et al. [14] presented a method that evaluates the credibility of users on Twitter using machine learning and deep learning approaches. Since we use their method as one of our baselines, we will discuss it in detail in Section 6.
## 3 Preliminaries
In general, for each user profile in any social network, a set of features is defined, among which we can mention the username, user photo, description, etc. For each user profile, we can consider a set \(F=\{f_{1},f_{2},...,f_{n}\}\), where \(f_{i}\) represents the \(i\)-th feature. Not all of these features are of the same type. For example, the username is textual, the account creation time is numerical, and the user photo is of image type. Therefore, the integration of these features in the initial stage is important. To achieve this, a function \(z\) is needed to map the set \(F\) to a vector in such a way that the relevant information of the user account is preserved in the vector. Given the diverse types of features, the mapping function \(z\) varies based on the feature type. For numeric features,
an identity function (the numerical value of the feature itself) can be used for mapping. For image and text types, there exist various methods such as BERT [15] and CNNs (Convolutional Neural Networks) [16].
After mapping the set of features to the vector space, it is now time to assess the credibility of users. Existing approaches perform the credibility assessment by categorizing each user into one of two classes: fake and genuine. However, in this type of evaluation, a lot of information is lost. Many genuine users nowadays engage in activities on social networks, either unknowingly or knowingly, that are harmful. As a result, the credibility of these users should be affected by such behaviors. Therefore, classifying users into just two categories of fake and genuine cannot accurately evaluate their credibility. Defining multiple levels of credibility would lead to a more precise credibility assessment of each user.
Depending on the type of collected data, the number of credibility levels can be determined. It is natural that a higher number of levels leads to a more comprehensive understanding of credibility, subsequently influencing user activities. After determining the number of credibility levels, a function \(g\) should be defined that maps the vectors to the credibility levels. The objective is to determine the functions \(z\) and \(g\) in such a way that the credibility levels of users are accurately estimated. For this purpose, various error (loss) functions can be utilized, one of the most common being the cross-entropy function. It measures the performance of a classifier whose output is a probability distribution between 0 and 1. The value of this error function increases when, for a data point, the predicted probability deviates from its actual value. Mathematically, for binary classification, the cross-entropy error function for a single data point is defined as follows:
\[CE=-(y_{i}\log(p_{i})+(1-y_{i})\log(1-p_{i})), \tag{1}\]
where \(y_{i}\) is the true class of the \(i\)th data point, taking values of either 1 or 0, and \(p_{i}\) represents the predicted probability between 0 and 1 for that data point.
When dealing with more than two classes, a new function called categorical cross-entropy is used. This function calculates the sum of separate errors for each class in each data point. It is formally defined as follows:
\[CEE=-\sum_{c=1}^{M}y_{i,c}\log(p_{i,c}), \tag{2}\]
where \(M\) represents the number of classes, and \(y_{i,c}\) takes a value of either 1 or 0, indicating whether the \(i\)th data point belongs to class \(c\) or not. It essentially determines the true label of the data point. The value \(p_{i,c}\) is generated by the classifier and represents the probability that the \(i\)th data point belongs to class \(c\). The total cross-entropy of the model is defined as the sum of the cross-entropy values of the training data points.
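As a concrete illustration, a NumPy sketch of eq. (2) for a batch of one-hot labels (variable names are ours):

```python
import numpy as np

def categorical_cross_entropy(y_true, p_pred, eps=1e-12):
    """Eq. (2): y_true is one-hot with shape (N, M); p_pred holds the
    predicted class probabilities of the same shape. Returns per-sample losses."""
    p = np.clip(p_pred, eps, 1.0)  # avoid log(0)
    return -np.sum(y_true * np.log(p), axis=1)

# the total model loss is the sum of the per-sample losses:
# categorical_cross_entropy(Y, P).sum()
```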
## 4 Dataset
In this section, first we briefly present known existing datasets for user credibility assessment. Then, we discuss their limitations. Finally, we describe our data collection method and the characteristics of the collected data.
### Available dataset
In general, most of the datasets available for the tasks of fake news detection and fake user detection consider two labels for each training data point: fake and real. In some cases, 3 or 5 labels are used instead of 2. The common aspect of all these datasets is that they treat the problem as a classification task. Below are a few examples of these datasets.
The 'instafake' dataset [17] is created for identifying fake user accounts and automatically generated user accounts (bots). Separate sets of features are collected for fake and automated user accounts. Some of these features include the number of posts, the number of followers and user profile biography. These features are often related to the user profile itself, and texts from posts and comments are not collected in this dataset.
The next dataset, named 'fakeuserprofile' [18] is collected from Twitter. This dataset includes information from 6827 user accounts, with 3475 real accounts and 3352 fake accounts. The labels used for these accounts are real and fake. The collected set of features in this dataset is directly gathered using the Twitter API, and that's why some of these features are not seen in other datasets. For example, features such as
profile_text_color and profile_sidebar_border_color, as well as graphics-related features, are typically found in Twitter datasets.
There also exist many datasets for the fake news detection task, such as "FakeNewsNet" [19] and "LIARPLUS" [20]. A review of fake news detection datasets can be found in [21].
### Shortcomings of existing datasets
The first shortcoming is that each of these datasets considers a specific set of features for detecting fake users and collects data based on that. As a result, many other features that could potentially improve the performance of the detection task are absent from these datasets. The second shortcoming is that they are mostly suited to binary classification, and there is no dataset that considers multiple levels of credibility for users. These reasons lead us to collect our own data for this research, instead of using the datasets that are already available.
### Our data collection method
We use Twitter to collect the data, and the NewsGuard website [22] to label the collected user accounts. The NewsGuard organization operates in the field of online journalism. Individuals in this organization review news websites and assign scores to them based on various criteria. The evaluation process is carried out by experienced individuals and journalists, with no artificial intelligence involvement. The assigned scores range from 0 to 100. The evaluation process is based on multiple criteria, each contributing a portion of the score. The frequency of spreading false news is considered the most important criterion. Table 1 displays all the criteria along with their associated scores. The more valuable criteria are associated with the credibility of a news collection, while the less valuable criteria are mostly concerned with transparency in management. Hence, it can be argued that the top 5 criteria are effective in evaluating credibility, and the next 4 criteria are influential in determining the transparency of a news collection.
Each user account on Twitter or other social networks, in general, shares texts and posts on various topics. If we intend to carry out a thematic categorization for evaluating each user account based on its content, the material users share about their daily lives and personal matters holds relatively little significance, and such content cannot substantially impact the assessment. On the other hand, content that includes news or facts about events or content that can have news labels attached to them are important and have a high value in determining the score and credibility of a user. In addition, the goal of creating a credibility assessment system is to allow users to understand the accuracy level of someone's activity on social media by looking at their score. This way, when a news text is shared, an informed decision can be made regarding trusting that news. This is why the data collection process is set up as follows:
* In the first stage, a collection of news websites that NewsGuard has reviewed and assigned scores to is gathered. These are primarily English-language news sites located in the United States and Europe.
\begin{table}
\begin{tabular}{l r} \hline Criteria & Score \\ \hline Does not repeatedly publish false content & 22 \\ Gathers and presents information responsibly & 18 \\ Regularly corrects or clarifies errors & 12.5 \\ Handles the difference between news and opinion responsibly & 12.5 \\ Avoids deceptive headlines & 10 \\ Website discloses ownership and financing & 7.5 \\ Clearly labels advertising & 7.5 \\ Reveals who’s in charge, including possible conflicts of interest & 5 \\ Provides the names of content creators, along with either contact or biographical information & 5 \\ \hline \end{tabular}
\end{table}
Table 1: Evaluation criteria of NewsGuard.
* In the second stage, all these websites are examined for having Twitter accounts, and if they have accounts, their usernames are collected. After the first two stages, the user accounts for which information needs to be collected from Twitter are fully identified.
* In the third stage, using the tweepy library in Python and the Twitter API, user account information is collected in several phases. The first phase involves gathering profile features, the second phase involves collecting user tweets, and the third phase focuses on collecting user comments.
The primary focus of interest in this research pertains to the user profile information. Consequently, during the initial phase, using the user IDs collected in the second stage, all profile information is retrieved from the API and stored in the form of a JSON file. Table 2 displays all attributes related to a user profile, which are made available through the API. Due to the utilization of the Twitter API, this section of the data resembles the Fake User Profile dataset.
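A minimal sketch of this first phase using the tweepy library; the credentials are placeholders and the function name is ours:

```python
import json
import tweepy

# placeholder credentials for the Twitter API
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

def save_profile(screen_name):
    """Fetch a user's profile attributes and store them as a JSON file."""
    user = api.get_user(screen_name=screen_name)
    with open(f"{screen_name}.json", "w") as f:
        json.dump(user._json, f)
```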
Similar to other social networks, Twitter grants a blue verification badge, commonly known as a "blue tick" to certain user accounts upon review, indicating their authenticity. Typically, this badge is awarded to prominent individuals, athletes, politicians, and those with significant influence. The "verified" feature signifies profile authentication.
Metrics such as the number of posts, followers, friends, account creation date, and privacy settings are respectively conveyed by the attributes "statuses_count", "followers_count", "friends_count", "created_at" and "protected". Another crucial attribute within a user's profile is "description". Social media users typically provide a brief description on their profile page to introduce themselves or outline their activities. Analyzing the text within this "description" attribute can be highly informative and reveal additional insights into the user's personality. Twitter allocates several features, such as "profile_link_color" and "default_profile_image" to customize the graphical appearance of user profiles. Analyzing the employed colors and profile images can also contribute to determining the authenticity of the user.
\begin{table}
\begin{tabular}{l l} \hline \hline & Features \\ \hline name & is\_translation\_enabled \\ screen\_name & profile\_background\_color \\ location & profile\_background\_image\_url \\ profile\_location & profile\_background\_image\_url\_https \\ description & profile\_background\_tile \\ url & profile\_image\_url \\ entities & profile\_image\_url\_https \\ protected & profile\_banner\_url \\ followers\_count & profile\_link\_color \\ friends\_count & profile\_sidebar\_border\_color \\ listed\_count & profile\_sidebar\_fill\_color \\ created\_at & profile\_text\_color \\ favourites\_count & profile\_use\_background\_image \\ utc\_offset & has\_extended\_profile \\ time\_zone & default\_profile \\ geo\_enabled & default\_profile\_image \\ verified & following \\ statuses\_count & follow\_request\_sent \\ lang & notifications \\ status & translator\_type \\ contributors\_enabled & withheld\_in\_countries \\ \hline \hline \end{tabular}
\end{table}
Table 2: The extracted features for each user profile.
During the second phase, a selection of recent tweets for each user account is gathered via the API. Considering Twitter's limitations on collecting a user's tweets and the requirements of this research, the 3,200 most recent tweets of each user are stored. Naturally, this count is lower for accounts with fewer tweets. All collected tweets for each user are saved in JSON files. When retrieving a user's tweets, along with each tweet, some additional information is also returned. Table 3 encompasses the attributes stored alongside the tweet text.
Among the returned feature set, alongside the tweet text, some other features hold notable significance. For example, the "entities" feature encompasses hashtags, links, mentions, and emojis within a news text. Twitter imposes a character limit on each tweet, which is set to 280 characters. Users exceeding this limit divide their text into multiple tweets or tweet threads. The "truncated" attribute indicates whether a tweet is part of a thread. Another noteworthy attribute is "possibly_sensitive", indicating whether the tweet contains sensitive content like inappropriate language or explicit material. This feature has recently been introduced by Twitter and is only enabled experimentally for specific tweets, making its applicability uncertain.
In the third phase, reactions from other users to each account's posts are gathered. To accomplish this, comments posted by users on a user's posts are collected. Due to API limitations and the research's focus on user opinions about the accounts rather than reactions to specific posts, comments are not collected individually for each post. Instead, a total of 800 recent comments made by users on the account's posts are stored in a JSON file. Only the comment text is stored. However, if needed, additional attributes of these comments can easily be retrieved using their IDs.
After following these three phases and collecting the feature data of the user accounts, the remaining task is labeling (scoring) them, which is done using the NewsGuard scores. These scores are assigned to the accounts based on the various criteria mentioned in Table 1. Note that during the scoring process, it is always possible to selectively consider certain important criteria while ignoring the others, for example the transparency-related criteria.
Given the manual data collection and the absence of any ready-made resources for sourcing news websites, the data has an inherent potential for gradual enrichment, and this dataset will evolve over time. At this stage, only English-language news websites and Twitter accounts are collected. However, in the future, the incorporation of other languages is also feasible. All data collection code is scripted in Python. As mentioned earlier, apart from the scores file, which has a text format, all other data is collected in JSON format. Alongside the labeled user accounts, information about a number of unlabeled accounts is also collected in the same format, enabling their use in semi-supervised learning, if desired.1
Footnote 1: We make the collected data publicly available at drive.google.com/file/d/iLPxDLzkztyFqIuF774BxvrVEXxhs94E/view?usp=sharing.
The number of collected labeled user accounts is 649, while the number of unlabeled user accounts is 556. The cumulative size of the compiled dataset amounts to 19.03 GB, comprising 9.13 GB for the unlabeled data and 9.09 GB for the labeled data.
\begin{table}
\begin{tabular}{l l} \hline \multicolumn{2}{c}{Features} \\ \hline created\_at & geo \\ text & coordinates \\ truncated & place \\ entities & contributors \\ source & is\_quote\_status \\ in\_reply\_to\_status\_id & retweet\_count \\ in\_reply\_to\_status\_id\_str & favorite\_count \\ in\_reply\_to\_user\_id & favorited \\ in\_reply\_to\_user\_id\_str & retweeted \\ in\_reply\_to\_screen\_name & possibly\_sensitive \\ user & lang \\ \hline \end{tabular}
\end{table}
Table 3: The extracted features for each tweet.
## 5 Our proposed method
In this section, we describe in detail the different steps of our proposed model for multilevel user credibility assessment.
### Data analysis and feature selection
As already mentioned, our data in this work is collected from Twitter. Among all the collected features, the features presented in Table 4 are used as the input for our model. The sources of these features are user profiles, user tweets, and comments. The processing method for each feature differs based on its type. The subsequent sections describe the preprocessing procedure for both textual and non-textual features.
\begin{table}
\begin{tabular}{p{113.8pt} p{56.9pt} p{56.9pt} p{56.9pt}} \hline Feature & Type & Source & Description \\ \hline Location & Boolean & User Profile & Does the user profile have a location or not. \\ Description & Boolean & User Profile & Does the user profile have descriptions or not. \\ Url & Boolean & User Profile & Does the user profile have a url link or not. \\ Protected & Boolean & User Profile & Is the user profile private or not. \\ Followers Count & Numeric & User Profile & Number of followers. \\ Friends Count & Numeric & User Profile & Number of followings. \\ Listed Count & Numeric & User Profile & Number of user profile list. \\ Year & Numeric & User Profile & Year of creating the user profile. \\ Month & Numeric & User Profile & Month of creating the user profile. \\ Day & Numeric & User Profile & Day of creating the user profile. \\ Hour & Numeric & User Profile & Hour of creating the user profile. \\ Minute & Numeric & User Profile & Minute of creating the user profile. \\ Second & Numeric & User Profile & Second of creating the user profile. \\ Favorite Count & Numeric & User Profile & The number of times the tweet has been favorited \\ Geo Enabled & Boolean & User Profile & The possibility of accessing the geographical location of the profile exists or not. \\ Verified & Boolean & User Profile & Is the profile verified or not. \\ Status Count & Numeric & User Profile & Number of posts. \\ Profile use background image & Boolean & User Profile & Does the user profile use a background image or not. \\ Year & Numeric & Tweet & Year of the tweet post. \\ Month & Numeric & Tweet & Month of the tweet post. \\ Day & Numeric & Tweet & Day of the tweet post. \\ Hour & Numeric & Tweet & Hour of the tweet post. \\ Minute & Numeric & Tweet & Minute of the tweet post. \\ Second & Numeric & Tweet & Second of the tweet post. \\ Truncated & Boolean & Tweet & Is the tweet part of a thread of tweets or not. \\ Retweet Count & Numeric & Tweet & Number of retweets. \\ Favorite Count & Numeric & Tweet & Number of likes. \\ Favorited & Boolean & Tweet & Has tweet been favorited or not. \\ Retweeted & Boolean & Tweet & Has tweet been retweeted or not. \\ Is Quote Status & Boolean & Tweet & Is the tweet a quotation or not. \\ Number of Hashtags & Numeric & Tweet & Number of hashtags in tweet. \\ Number of User Mentions & Numeric & Tweet & Number of mentions in tweet. \\ Number of URLs & Numeric & Tweet & Number of url links in tweet. \\ Number of Symbols & Numeric & Tweet & Number of symbols like emojis in tweet \\ Poll & Boolean & Tweet & Does the tweet contain a poll or not. \\ Tweet & Text & Tweet & Tweet text. \\ Comment & Text & Comment & Comment text. \\ \hline \end{tabular}
\end{table}
Table 4: Description of the used features.
#### 5.1.1 Non-textual features
For the non-textual features presented in Table 4, we do not utilize any feature selection algorithm, and these features are directly fed, in their raw form, into the prediction model. The only preprocessing applied to these features is normalization. The used normalization method calculates for each data point \(x\) the normalized value \(\bar{x}\) by means of the following formula, where \(x_{minimum}\) and \(x_{maximum}\) are, respectively, the minimum and maximum values of this feature over all data points:
\[\bar{x}=\frac{x-x_{minimum}}{x_{maximum}-x_{minimum}}. \tag{3}\]
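A one-function NumPy sketch of this min-max normalization (the function name is ours):

```python
import numpy as np

def min_max_normalize(column):
    """Rescale a numeric feature column to [0, 1], following eq. (3)."""
    x = np.asarray(column, dtype=float)
    span = x.max() - x.min()
    return (x - x.min()) / span if span > 0 else np.zeros_like(x)
```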
#### 5.1.2 Textual features
The second category of features comprises textual data. To work with textual features and use them in learning models, a key step is vectorization, i.e., converting each text into a vector of numerical values. Prior to vectorization, a series of preprocessing steps are carried out on the text (a sketch of these steps is given after the list):
* all words are converted to lowercase.
* all hashtags are removed from the text.
* all links are removed from the text.
* all usernames present in the text are removed.
* stop words are removed from the text.
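A sketch of these five steps with plain regular expressions; the stop-word set shown is only an illustrative subset (in practice a full list, e.g. NLTK's, would be used):

```python
import re

STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "are"}

def preprocess_tweet(text):
    """Lowercase, then strip hashtags, links, usernames, and stop words."""
    text = text.lower()
    text = re.sub(r"#\w+", "", text)           # remove hashtags
    text = re.sub(r"https?://\S+", "", text)   # remove links
    text = re.sub(r"@\w+", "", text)           # remove usernames
    return " ".join(t for t in text.split() if t not in STOP_WORDS)
```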
For vectorizing tweet texts, the BERT model [15] is utilized. As a result of applying this model to a text, a vector of dimension 768 is generated. The challenge with using this model lies in the high dimensionality of the generated representation vectors. High-dimensional feature vectors complicate the training process of the model and may hinder its convergence. To tackle this challenge, dimensionality reduction techniques can be employed. Dimensionality reduction refers to transforming data from a high-dimensional space to a lower-dimensional space while preserving as much meaningful information from the original data as possible. There are multiple methods that can be used to reduce dimensions according to the characteristics of the problem. One of the most effective methods is the autoencoder [23].
Autoencoders are neural networks used for dimensionality reduction and data compression. These networks consist of two parts: the encoder and the decoder. The goal of the encoder is to map the data to a lower-dimensional space while retaining the essential features of the data in this space. The output of the encoder is a vector in the latent space that retains all the significant features of the data. The purpose of the decoder is to map the data from the latent space back to the original space. The training process of this network is unsupervised. During training, the encoder aims to find the latent space of the data in such a way that the reconstruction error of the decoder's output is minimized. The reconstruction error represents the difference between the original input data and the reconstructed output data generated by the network. Various functions are used to calculate this error, with the Euclidean distance being one of the most common. In this work, we employ an autoencoder neural network that receives the output vectors of the BERT model as input and extracts their latent dimensions. We use an autoencoder with a latent dimension of 10. The training of this network is done using a dataset comprising 323,500 vector representations of tweets. After completing the training phase, the encoder part is used for mapping the dataset to the latent space.
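A minimal PyTorch sketch of such an autoencoder; the 768 input and 10 latent dimensions come from the text, while the hidden width of 128 is our own assumption:

```python
import torch.nn as nn

class TweetAutoencoder(nn.Module):
    """Compress 768-d BERT tweet embeddings into a 10-d latent space."""

    def __init__(self, in_dim=768, latent_dim=10, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, in_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

# training minimizes the reconstruction error, e.g. with nn.MSELoss(); only
# the trained encoder is kept afterwards to map tweet vectors to 10 dimensions
```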
Another category of textual features includes comments made by other users about the tweets of a specific user. Here, the objective is to understand the opinions and perspectives of other users about a user, so that these opinions can be utilized as informative features for the final classification. With this goal in mind, we apply a sentiment analysis technique to the comments, after preprocessing them. There are many sentiment analysis algorithms in the literature that categorize a given text into one of several emotion categories. One of the popular pre-trained models is the Distilled BERT model [24]. It takes a text as input and produces a probability distribution indicating its association with emotion classes such as sadness, joy, love, anger, fear, and surprise. This model is particularly suited to analyzing the sentiment of user comments, so in this work we exploit it to extract the opinions of other users about a specific user.
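For instance, such a model can be queried through the Hugging Face pipeline API. The checkpoint name below is an assumption, standing in for any DistilBERT model fine-tuned on these six emotion classes; on older transformers versions, `return_all_scores=True` may be needed instead of `top_k=None`:

```python
from transformers import pipeline

# assumed checkpoint: a DistilBERT model fine-tuned for the six emotions
emotion_clf = pipeline("text-classification",
                       model="bhadresh-savani/distilbert-base-uncased-emotion",
                       top_k=None)  # return the full distribution over emotions

scores = emotion_clf("great reporting, thank you for covering this story")
```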
#### 5.1.3 Aggregation
During the data collection process, a total of 3,200 tweets along with their 800 recent comments are collected for each user profile. The ultimate goal of the data preprocessing phase is to create a vector for each user account that includes all the textual and non-textual features, enabling the use of these vectors for the classification of user profiles. After mapping the tweet and comment texts to vector spaces, the vectors of each user need to be aggregated. Various aggregation operators exist, including minimum (min), maximum (max), sum, and mean. In this work, the mean operator is utilized. Thus, for each user, the average of the vector representations of their tweets serves as the final vector representation of their tweets, and the average of the vector representations of their comments serves as the final vector representation of their comments. Then, a unique profile vector is obtained for each user profile by concatenating these vectors. Ultimately, after analyzing the textual and non-textual features discussed earlier, the embedding vectors of the feature categories are concatenated to generate the final embedding vector for each user account. Figure 1 illustrates the final embedding vector along with the contribution of each feature category to it.
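A short sketch of this aggregation step; the per-part dimensions follow Figure 1, and the names are illustrative:

```python
import numpy as np

def user_embedding(profile_vec, tweet_vecs, comment_vecs):
    """Mean-pool the per-tweet and per-comment vectors, then concatenate
    them with the profile features into one user vector (Figure 1)."""
    tweet_part = np.mean(tweet_vecs, axis=0)      # 10-d latent tweet vector
    comment_part = np.mean(comment_vecs, axis=0)  # 6-d emotion distribution
    return np.concatenate([profile_vec, tweet_part, comment_part])
```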
#### 5.1.4 Labels and class imbalance
In the data collection phase, a numerical value between 0 and 100 is assigned to each user to represent their credibility level. A value of 0 signifies the lowest credibility level, while 100 represents the highest. To model the classification problem, this numerical range is divided into several sub-intervals, with each interval representing a class. In this paper, this is carried out in several ways, meaning that several classification settings with different numbers of classes are formed.
After generating different classification systems (with different numbers of classes) on a dataset, each class will contain a different number of data points. Table 5 presents the number of data points in each class for different classification systems. This table demonstrates that the collected dataset is imbalanced across all classification systems. This situation can lead to the model's success being largely determined by the class with the majority of data points, which can skew the evaluation metrics if the model's performance in minority classes is poor. On the other hand, it has been shown that in classification tasks, the accuracy of each class is directly related to the number of data points in that class: the more data points in a class, the higher the model's accuracy in predicting that class. Therefore, the goal is to mitigate class imbalance in the dataset. There exist various methods in the literature to address the issue of class imbalance in classification problems. In this paper, we employ the Synthetic Minority Oversampling Technique (SMOTE) [25].
SMOTE performs data augmentation by generating artificial data points based on the original data points. It can be considered an improved version of oversampling, or a specialized algorithm for data augmentation.
Figure 1: The final embedding created for each user.
The advantage of SMOTE is that it does not produce exact duplicates of existing data points; instead, it generates slightly different artificial data points. The steps of the algorithm are as follows (a short code sketch follows the steps):
1. A random subset of the minority class is selected from the dataset.
2. For each data point in this subset, the \(k\) nearest neighbors are identified.
3. For each data point, one of its \(k\) nearest neighbors is chosen, and the vector between them is calculated.
4. This vector is multiplied by a random number between 0 and 1.
5. The result of the previous step is added to the original data point to generate a new data point.
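A minimal sketch of applying SMOTE with the imbalanced-learn library is shown below, using synthetic stand-in data (the class weights are illustrative, not our dataset's actual distribution).

```python
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# Toy imbalanced 4-class problem standing in for the user profile vectors
X, y = make_classification(n_samples=1200, n_classes=4, n_informative=8,
                           weights=[0.5, 0.25, 0.15, 0.1], random_state=0)
print("before:", Counter(y))

# SMOTE synthesizes minority-class points along lines to k nearest neighbors
X_res, y_res = SMOTE(k_neighbors=5, random_state=0).fit_resample(X, y)
print("after: ", Counter(y_res))  # all classes now match the majority count
```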
\begin{table}
\begin{tabular}{l c} \hline Classification system & Number of data points for each class \\ \hline
4-class system & 507 \\
6-class system & 428 \\
8-class system & 416 \\
10-class system & 346 \\ \hline \end{tabular}
\end{table}
Table 6: The number of data points per class after applying the SMOTE algorithm, in different classification systems.
Table 5: The number of data points for each class in different classification systems. [Table content garbled in extraction and omitted.]
Figure 2: Generating new data in the SMOTE algorithm.
As an example, Figure 2 illustrates how new data points are generated on the line (in 2D space) between the data point \(x_{1}\) and one of its nearest neighbors, \(x_{11}\). This operation resembles a partial movement of the data point toward its neighbor. It ensures that the generated artificial data point is not an exact copy of an existing data point, while guaranteeing that it differs only slightly from observations in the minority class. Table 6 shows the distribution of data points in the different classification systems after applying this algorithm. As can be seen in the table, after applying the SMOTE algorithm, the problem of class imbalance is mitigated.
### 5.2 The classification phase and training
Having discussed the feature extraction and embedding learning phase, we now present our classification method and its training. Various models can be employed as the classification head, ranging from machine learning models such as naive Bayes, \(k\)-nearest neighbors, support vector machines, and random forests, to deep learning models such as neural networks. In this paper, we utilize a multi-layer neural network, whose specifications are presented in Table 7.
In this neural network, dropout and batch normalization layers are used to prevent overfitting and expedite the training process, respectively. In neural networks, inappropriate initialization of weights can lead to divergence or slow convergence during training: initializing weights with extremely large values causes exploding gradients, while initializing with very small values results in vanishing gradients. To address this issue, a normal distribution is used to generate the initial weights, mitigating these undesired effects. The network has \(98,250\) parameters, of which \(97,226\) are learnable during training (the remaining \(1,024\) are the non-trainable moving statistics of the two batch normalization layers).
During the training process, the dataset is divided into training, testing, and validation sets in the proportions of \(0.7\), \(0.2\), and \(0.1\), respectively. The Adam optimizer is employed for training, and the learning rate is scheduled dynamically: it is initially set to \(0.01\) and decays exponentially over time with a decay rate of \(0.9\). During training, data is fed into the network in batches of \(16\) data points. The rectified linear unit (ReLU) activation function is used in all layers of the network. Training runs for \(2000\) epochs, but if the validation accuracy does not improve for \(200\) epochs, the training process is terminated early. Figure 3 illustrates the high-level structure of our proposed model. Since our proposed model estimates users' credibility at multiple levels, we refer to it as MultiCred.
\begin{table}
\begin{tabular}{l l l} \hline \hline Layer (type) & Output size & \#parameters \\ \hline dropout (Dropout) & 51 & 0 \\ hidden\_layer\_1 (Dense) & 256 & 13312 \\ batch\_normalization (BatchNormalization) & 256 & 1024 \\ dropout\_1 (Dropout) & 256 & 0 \\ hidden\_layer\_2 (Dense) & 256 & 65792 \\ batch\_normalization\_1 (BatchNormalization) & 256 & 1024 \\ dropout\_2 (Dropout) & 256 & 0 \\ hidden\_layer\_3 (Dense) & 64 & 16448 \\ Output (Dense) & 10 & 650 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Specifications of the neural network used for classification.
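For illustration, the architecture and training configuration can be expressed in Keras as follows; the dropout rates, the decay interval of the learning-rate schedule, and the softmax output activation are assumptions, as they are not specified above.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Layer sizes follow Table 7; dropout rates are assumptions (not reported).
model = models.Sequential([
    layers.Input(shape=(51,)),                 # 51-dim user profile vector
    layers.Dropout(0.2),
    layers.Dense(256, activation="relu", kernel_initializer="random_normal"),
    layers.BatchNormalization(),
    layers.Dropout(0.2),
    layers.Dense(256, activation="relu", kernel_initializer="random_normal"),
    layers.BatchNormalization(),
    layers.Dropout(0.2),
    layers.Dense(64, activation="relu", kernel_initializer="random_normal"),
    layers.Dense(10, activation="softmax"),    # 10-class credibility setting
])

# Learning rate 0.01 decaying exponentially with rate 0.9; the decay
# interval (decay_steps) is an assumption.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.01, decay_steps=1000, decay_rate=0.9)
model.compile(optimizer=tf.keras.optimizers.Adam(schedule),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Early stopping mirrors the 200-epoch patience described above
stop = tf.keras.callbacks.EarlyStopping(monitor="val_accuracy", patience=200)
model.summary()  # 98,250 parameters, 97,226 of them trainable
```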
## 6 Empirical results
In this section, in order to evaluate the empirical effectiveness of our proposed model, we compare it against state-of-the-art credibility assessment methods. In the following, we first discuss the evaluation metrics, then briefly describe the methods used in our comparisons, and finally present, discuss, and analyze our empirical results.
### 6.1 Evaluation metrics
For evaluating the classification results, we use accuracy, precision, recall, and F1-score. Precision measures what fraction of the data points that the model assigns to a class actually belong to that class, while recall measures what fraction of the data points that belong to a class are correctly identified by the model. These metrics are calculated from the counts of true positives, true negatives, false positives, and false negatives, and are formally defined as follows:
\[accuracy=\frac{TP+TN}{TP+FP+TN+FN} \tag{4}\]
\[precision=\frac{TP}{TP+FP} \tag{5}\]
\[recall=\frac{TP}{TP+FN} \tag{6}\]
\[F1-score=\frac{2*precision*recall}{precision+recall} \tag{7}\]
where:
* \(TP\) (true positives): data points that belong to a specific class and whose class the model correctly predicts.
* \(FP\) (false positives): data points that the model incorrectly assigns to a class to which they do not belong.
* \(TN\) (true negatives): data points that do not belong to a certain class and that the model does not classify into that class.
* \(FN\) (false negatives): data points that belong to a specific class but that the model incorrectly assigns to other classes.
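In the multi-class settings studied here, these metrics are computed per class and then averaged; a minimal sketch with scikit-learn (macro averaging is an assumption, as the averaging scheme is not stated above):

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [0, 2, 1, 3, 2, 0, 1, 3, 3, 2]   # toy credibility-level labels
y_pred = [0, 2, 1, 3, 1, 0, 2, 3, 3, 2]

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro")       # per-class metrics, then averaged
print(f"accuracy={accuracy_score(y_true, y_pred):.3f}  "
      f"precision={precision:.3f}  recall={recall:.3f}  f1={f1:.3f}")
```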
Figure 3: The high-level architecture of our proposed model.
### 6.2 Baseline methods
We use two recent state-of-the-art algorithms, those of Kumari et al. [5] and Kumar Verma et al. [14], as our baselines.
Kumari et al. [5] designed a system aimed at identifying fake users on Twitter. Their dataset consists of two parts. The first part is collected by the authors themselves using the Twitter API and manually labeled. The second part is from a study conducted in 2015, called "TheFakeProject". The final dataset comprises 6,973 user accounts, with 3,752 accounts labeled as fake and 3,221 as real. All the features they used are non-textual. They applied a combination of feature selection techniques to refine the feature set. They employed the logistic regression model for account classification, and the particle swarm optimization algorithm for optimization.
Kumar Verma et al. [14] presented the UCred method, which evaluates the credibility of users on Twitter using machine learning and deep learning techniques. UCred ultimately categorizes users into two groups: genuine and fake. The dataset they used includes 1,337 fake profiles and 1,481 genuine profiles. After preprocessing the data, they used three categories of models, selected the best model from each category, and employed a voting mechanism among the three final models to determine whether a user is genuine or fake.
### 6.3 Results
In this section, we present our empirical results. Before delving into the results, it is necessary to address a few key points. First, the two studies examined in the previous section, along with the general body of work in this field, evaluate user credibility at only two levels: genuine and fake. Our proposed algorithm (MultiCred) is a multi-class classification system, so we adapt the baselines to the multi-class setting. Second, since MultiCred uses the SMOTE algorithm to address the issue of imbalanced data, in order to have fair comparisons we also apply this technique to the baselines to improve their performance.
We run each experiment 10 times and report the average results as well as the standard deviations. The results are reported in Tables 8, 9, 10, and 11. As evident from the tables, our proposed MultiCred algorithm significantly outperforms the other algorithms in all the classification settings, in terms of all evaluation metrics.
### 6.4 Discussion
Considering a broad spectrum of features and handling them properly improves the performance of MultiCred, compared to cases wherein only a subset of features is used. The experimental results indicate that, as expected, considering both textual and non-textual features together enhances the model's performance. The incorporation of users' opinions into the final vector increases accuracy by 4.09% on average across all the classification settings. Additionally, adding tweet embeddings to the final vector results in an average improvement of 2.46% in accuracy.
Another observation in our experimental results is that as the number of classes increases, the F1-score usually decreases for all the methods. As the number of classes grows, data points of different classes become more intertwined in the feature space, making it considerably harder to distinguish between them. The results indicate that even though the performance of MultiCred degrades as the number of classes increases, it still outperforms the other methods, and its performance improvement remains considerable.
A further observation is that, as can be seen in Figure 4, in the method of Kumari et al. [5] the evaluation metrics in the 10-class setting are only slightly worse than in the 8-class setting. In UCred [14], the evaluation
\begin{table}
\begin{tabular}{c c c c c} \hline Model & Precision(\%) & Recall(\%) & F1-score(\%) & Accuracy(\%) \\ \hline Kumari et al. [5] & 27.27\(\pm\)1.89 & 29.60\(\pm\)1.42 & 27.45\(\pm\)1.37 & 29.34\(\pm\)1.30 \\ UCred [14] & 29.78\(\pm\)6.5 & 35.61\(\pm\)4.99 & 31.80\(\pm\)6.19 & 34.69\(\pm\)4.88 \\ MultiCred & **86.60\(\pm\)2.86** & **86.78\(\pm\)2.31** & **85.92\(\pm\)2.29** & **86.71\(\pm\)2.36** \\ \hline \end{tabular}
\end{table}
Table 10: Comparing the performance of MultiCred against the baseline algorithms in the 8-class classification system.
Figure 4: Accuracy comparison between MultiCred and the two baseline methods, for different numbers of classes.
\begin{table}
\begin{tabular}{c c c c c} \hline Model & Precision(\%) & Recall(\%) & F1-score(\%) & Accuracy(\%) \\ \hline Kumari et al. [5] & 26.05\(\pm\)2.43 & 28.09\(\pm\)1.54 & 25.78\(\pm\)1.90 & 28.14\(\pm\)1.60 \\ UCred [14] & 46.51\(\pm\)2.31 & 45.30\(\pm\)1.43 & 45.78\(\pm\)1.66 & 45.30\(\pm\)1.36 \\ MultiCred & **86.89\(\pm\)1.30** & **87.29\(\pm\)0.97** & **85.85\(\pm\)1.08** & **87.61\(\pm\)0.83** \\ \hline \end{tabular}
\end{table}
Table 11: Comparing the performance of MultiCred against the baseline algorithms in the 10-class classification system.
metrics improve when we move from the 8-class setting to the 10-class setting. In MultiCred too, recall, precision, and accuracy show slight improvements in the 10-class setting compared to the 8-class setting. This phenomenon suggests that the dataset used in this study responds better to the 10-class setting, and that the data distribution in the feature space is such that considering 10 classes for data classification enhances model performance. As a result, users in this dataset naturally tend to be classified into 10 levels of credibility rather than into 8. This presents an example wherein considering more credibility levels can better reflect users' credibility.
## 7 Conclusion and future work
In this paper, we studied the problem of multilevel user credibility assessment in social networks. To do so, we first collected a dataset suitable for studying user credibility assessment at multiple levels. Then, we proposed the MultiCred model, which places users at one of several levels of credibility based on a rich and diverse set of features extracted from users' profiles, tweets, and comments. MultiCred uses deep language models to analyze textual data and deep neural models to process non-textual features. We conducted experiments over the collected data to show that MultiCred outperforms existing approaches in terms of several accuracy measures.
Due to computational limitations, this research did not incorporate certain types of features, such as images and multimedia content shared by users. The social network graph is another source of data that was not considered in this work. This graph reflects important structural information about users that, when combined with other features, could provide a deeper understanding of users' activities. An interesting direction for future work could be analyzing these two types of features and combining them with our proposed model.
|
2309.04679 | Embedding structure matters: Comparing methods to adapt multilingual
vocabularies to new languages | Pre-trained multilingual language models underpin a large portion of modern
NLP tools outside of English. A strong baseline for specializing these models
for specific languages is Language-Adaptive Pre-Training (LAPT). However,
retaining a large cross-lingual vocabulary and embedding matrix comes at
considerable excess computational cost during adaptation. In this study, we
propose several simple techniques to replace a cross-lingual vocabulary with a
compact, language-specific one. Namely, we address strategies for
re-initializing the token embedding matrix after vocabulary specialization. We
then provide a systematic experimental comparison of our techniques, in
addition to the recently-proposed Focus method. We demonstrate that: 1)
Embedding-replacement techniques in the monolingual transfer literature are
inadequate for adapting multilingual models. 2) Replacing cross-lingual
vocabularies with smaller specialized ones provides an efficient method to
improve performance in low-resource languages. 3) Simple embedding
re-initialization techniques based on script-wise sub-distributions rival
techniques such as Focus, which rely on similarity scores obtained from an
auxiliary model. | C. M. Downey, Terra Blevins, Nora Goldfine, Shane Steinert-Threlkeld | 2023-09-09T04:27:18Z | http://arxiv.org/abs/2309.04679v2 | # Embedding Structure Matters: Comparing Methods to Adapt Multilingual Vocabularies to New Languages
###### Abstract
Pre-trained multilingual language models underpin a large portion of modern NLP tools outside of English. A strong baseline for specializing these models for specific languages is Language-Adaptive Pre-Training (Lapt). However, retaining a large cross-lingual vocabulary and embedding matrix comes at considerable excess computational cost during adaptation. In this study, we propose several simple techniques to replace a cross-lingual vocabulary with a compact, language-specific one. Namely, we address strategies for re-initializing the token embedding matrix after vocabulary specialization. We then provide a systematic experimental comparison of our techniques, in addition to the recently-proposed Focus method. We demonstrate that: 1) Embedding-replacement techniques in the monolingual transfer literature are inadequate for adapting multilingual models. 2) Replacing cross-lingual vocabularies with smaller specialized ones provides an efficient method to improve performance in low-resource languages. 3) Simple embedding re-initialization techniques based on script-wise sub-distributions rival techniques such as Focus, which rely on similarity scores obtained from an auxiliary model.
## 1 Introduction
For languages other than English and a handful of other very high-resource languages, pre-trained multilingual language models form the backbone of most current NLP systems. These models address the relative data scarcity in most non-English languages by pooling text data across many languages to train a single model that (in theory) covers all training languages (Devlin et al., 2019; Conneau and Lample, 2019; Conneau et al., 2020; Liu et al., 2020; Scao et al., 2023, i.a.). These models often include language-agnostic tokenization and an increased vocabulary capacity over monolingual models (Conneau et al., 2020).
However, Wu and Dredze (2020) show that these massively multilingual models still underperform on lower-resource languages. Recent efforts to cover these languages instead pre-train models that are specialized to specific languages or language families (Ogueji et al., 2021; Ogunremi et al., 2023). These approaches nonetheless require training a new model from scratch and do not leverage transferable information in existing models.
Our study builds on a line of work which instead _adapts_ a pre-trained cross-lingual model (such as XLM-R; Conneau et al., 2020) to a single language, or a smaller set of languages. Language-Adaptive Pre-Training (Lapt)--continuing the MLM or CLM pre-training task on only the target language(s)--is a simple and strong baseline in this regard (Chau et al., 2020).
However, Lapt with no change to the cross-lingual vocabulary comes with considerable excess computational cost: when adapting to a single language or small subset of languages, only a small fraction of the cross-lingual vocabulary is used. The excess vocabulary still contributes to the computational cost on both the forward and backward pass, and embedding/output matrices often constitute a large fraction of the total trainable model parameters (for XLM-R-base, 192M / 278M \(\approx\) 69% of parameters). Additionally, the information-theoretic tokenization modules for cross-lingual models are usually under-optimized for any given language, and especially for low-resource languages (Ács, 2019; Conneau and Lample, 2019, i.a.).
For this reason, we propose several simple techniques to replace the large cross-lingual vocabulary of a pre-trained model with a compact, language-specific one during model specialization. Training a new SentencePiece or BPE tokenizer poses no special difficulties. However, re-initializing the embedding matrix for a new vocabulary, which will almost certainly introduce many new tokens lacking pre-trained embeddings, poses significant
challenges. We compare several methods for such embedding re-initialization.
After reviewing related literature in Section 2, we conduct a qualitative exploration of the pre-trained embedding space for a standard multilingual model: XLM-R (Section 3.1). This exploration informs our formalization of simple techniques to align new vocabulary embeddings with the pre-trained embedding distribution of our base model (Section 3.2). We then provide a systematic experimental comparison of the embedding re-initialization techniques we propose, plus the recently proposed Focus re-initialization method (Dobler and de Melo, 2023; Section 4). Our experiments cover a wide selection of low- and mid-resource target languages (i.e. those that have the most to gain from language specialization).1
Footnote 1: The software used to run all experiments may be found at [https://github.com/cmdowney88/EmbeddingStructure](https://github.com/cmdowney88/EmbeddingStructure)
The results of our experiments (Sections 5, 6) demonstrate the following: 1) Embedding-replacement techniques proposed in the monolingual model adaptation literature are inadequate for adapting multilingual models. 2) Replacing large cross-lingual vocabularies with smaller language-specific ones provides a computationally-efficient method to improve task performance in low-resource languages. 3) The simple re-initialization techniques we propose here, based on script-wise embedding sub-distributions, rival techniques such as Focus, which rely on model-driven semantic similarity.
## 2 Related Work
**Pre-trained Model Adaptation** Extensive work has proposed re-using and modifying pre-trained models for new settings in order to retain existing model knowledge and reduce pre-training costs. Gururangan et al. (2020) show that continued training on domain-specific data effectively adapts pre-trained models to new domains in both high- and low-resource settings. This approach is also used to adapt models to new languages (i.e. Language-Adaptive Pre-Training / Lapt; Chau et al., 2020).
Other approaches involve training new, language-specific adapter layers to augment a frozen monolingual (Artetxe et al., 2020) or multilingual encoder (Pfeiffer et al., 2020; Üstün et al., 2020; Faisal and Anastasopoulos, 2022). A comparison of these cross-lingual adaptation approaches (Ebrahimi and Kann, 2021) found that continued pre-training often outperforms more complex setups, even in low-resource settings. With this in mind, our experiments evaluate the success of models tuned for target languages with Lapt, starting from variable initializations depending on a choice of embedding adaptation technique.
**Cross-lingual Vocabulary Adaptation** A major limitation in adapting pre-trained models to new languages is the subword vocabulary, which often fails to cover an unseen script (Pfeiffer et al., 2021) or tokenizes target text inefficiently (Ács, 2019). Muller et al. (2021) demonstrate that script is an extremely important factor in predicting transfer success. Specifically, the pre-trained coverage of closely-related languages improves transfer, but only if the target language is written in the same script as its pre-trained relative.
One adaptation technique is to initialize new subword embeddings that cover the target language, e.g. by expanding the existing vocabulary with new tokens as necessary, then training the new (randomly initialized) embeddings (Chau et al., 2020; Wang et al., 2020). When transferring a monolingual model to a new language, Artetxe et al. (2020) and de Vries and Nissim (2021) instead completely re-initialize the embedding matrix, corresponding to a new subword vocabulary. These embeddings are then trained into alignment with the pre-trained, frozen transformer encoder. We show that this technique is not successful when adapting a multilingual model (Section 5).
Other work reuses information in pre-trained embeddings rather than initializing new ones at random. This may include scaling up smaller embedding spaces from models trained on the target language (de Vries and Nissim, 2021; Ostendorff and Rehm, 2023) or copying embeddings from the original vocabulary where there is exact vocabulary overlap (Pfeiffer et al., 2021). When transferring to a target language written in a poorly-covered script, Muller et al. (2021) show that transliterating the target to the script of a well-covered relative can lead to significant performance gains.
Finally, recent work has proposed more complex methods for mapping source embeddings onto semantically similar ones in the target space, either through cross-lingually aligned static word embeddings (e.g. the WECHSEL method; Minixhofer et al., 2022) or with bilingual lexicons (Zeng et al., 2023). In concurrent work to ours, Dobler and de Melo (2023) extend WECHSEL with the
Focus method to specialize multilingual vocabularies to a single language. Ostendorff and Rehm (2023) use a cross-lingual progressive transfer learning approach to combine information from the source embeddings and a smaller target language model to initialize higher-dimension target embeddings. Unlike earlier initialization methods and our proposed setup, these methods all require additional information outside the source model and often require significant additional compute. We compare one method from this family (Focus) to our proposed heuristic-based initialization schemes.
## 3 Vocabulary Replacement & Embedding Re-initialization
Research transferring monolingual models from one language to another (e.g. Artetxe et al., 2020; de Vries and Nissim, 2021) has shown that random re-initialization of embeddings plus Lapt is sufficient. However, our experiments show that this technique performs poorly when transferring from a multilingual model (Section 5). For this reason, we propose several simple techniques for initializing new embeddings based on a qualitative exploration of the embedding space for XLM-R (Section 3.1), and include the more complex Focus technique, developed concurrently with our work, for comparison (Dobler and de Melo, 2023).
### XLM-R Embedding-Space Analysis
To better understand the task of initializing new embeddings for a multilingual model, we explore the token-embedding space of XLM-R through PCA projection. Our hypothesis is that multilingual models do not process all languages homogeneously. This seems to be demonstrated in Figures 1a and 1b, where word embeddings are colored by their respective Unicode script block. We see that the highest-resource scripts in XLM-R (Common, Latin, and Cyrillic) have relatively divergent distributions, while others cluster closer together. This heterogeneity may help explain the finding from Muller et al. (2021) that pre-trained models do not transfer well to even closely-related target languages if the target script does not match that of the pre-trained relative.
Secondly, each script can be further divided into two sub-distributions, roughly corresponding to a shift in the second principal component. Figure 1c shows that this division corresponds to whether a token is word-initial or word-medial. To preserve whitespace information, SentencePiece tokens include a leading underscore to indicate tokens that should be preceded by a space (word-initial tokens).2 Although the model does not have access to the internal makeup of its tokens, we hypothesize that it learns to discern which tokens can begin a word and which cannot.
Footnote 2: E.g., “_the” and “the” are word-initial and word-medial tokens of the same character sequence.
Thus, when proposing methods to initialize new embeddings for XLM-R, we hypothesize that initializing according to script- and position-wise sub-distributions will help to align new vocabulary items with the pre-trained embedding distribution.
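For reference, such a projection can be produced directly from the pre-trained checkpoint; the sketch below assumes the Hugging Face xlm-roberta-base weights.

```python
import numpy as np
from sklearn.decomposition import PCA
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base")

# Token embedding matrix: one row per vocabulary item
emb = model.get_input_embeddings().weight.detach().numpy()
coords = PCA(n_components=2).fit_transform(emb)

# SentencePiece marks word-initial tokens with a leading "▁"
tokens = tokenizer.convert_ids_to_tokens(list(range(emb.shape[0])))
word_initial = np.array([t.startswith("▁") for t in tokens])
print(coords.shape, word_initial.sum())
```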
### Embedding Re-initialization Techniques
We now formalize simple techniques for embedding re-initialization based on our exploration of XLM-R's embedding space, as well as one recently proposed technique based on an auxiliary embedding model (Focus). Figure 2 provides PCA visualizations of the re-initialized embeddings from each technique on a subword vocabulary specialized for languages of the Uralic family (we experiment with these languages in Section 4). The visualization for these languages' respective scripts (Common, Latin, Cyrillic) in the base model can be found in Figure 1b for comparison.
**Re-initialization by Identity** Reinit-ident first identifies tokens in the new vocabulary that exactly match a token in the original vocabulary, then sets the new embeddings of shared tokens to be identical to those in the original embedding table (Figure 2a). This is a common approach to preserve information from the original model, even when the other embeddings are randomly re-initialized (e.g., Pfeiffer et al., 2021). When identity re-initialization is applied in conjunction with another technique (such as Reinit-script), identity takes precedence.
**Re-initialization by Script** For Reinit-script, all base XLM-R tokens are first categorized by Unicode block, as a stand-in for identifying the script/orthography. We then calculate the mean and standard deviation for each script in the original embedding space. Finally, new token embeddings for each script are distributed according to a Normal distribution with the corresponding mean and standard deviation (Figure 2b).
**Re-initialization by Position** Reinit-posn is based on the observation that within each script, embeddings seem to cluster according to their word-initial vs. word-medial status (Figure 1c). Similarly to Reinit-script, we identify the mean and standard deviation of embeddings that belong to each category. Because positional status seems to be a sub-cluster within script clusters, we only use Reinit-posn in combination with Reinit-script. The mean and standard deviation for each (script, position) combination is calculated, and new embeddings are initialized accordingly (Figure 2c).
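A minimal sketch of Reinit-script(+posn) is given below. It approximates Unicode-block detection via character names, omits the Reinit-ident overlap handling for brevity, and assumes the pre-trained embeddings are available as a torch tensor.

```python
import unicodedata
from collections import defaultdict
import torch

def group_key(token: str):
    # (script, word-initial?) key; "▁" is SentencePiece's word-start marker
    initial = token.startswith("▁")
    for ch in token.lstrip("▁"):
        if ch.isalpha():
            try:
                return unicodedata.name(ch).split()[0], initial  # e.g. "LATIN"
            except ValueError:
                break
    return "COMMON", initial

def reinit_script_posn(old_vocab, old_emb, new_vocab, dim=768):
    # Per-(script, position) mean and std of the pre-trained embeddings
    buckets = defaultdict(list)
    for tok, i in old_vocab.items():
        buckets[group_key(tok)].append(old_emb[i])
    stats = {k: (torch.stack(v).mean(0), torch.stack(v).std(0))
             for k, v in buckets.items() if len(v) > 1}

    # Sample each new embedding from its sub-distribution's Normal;
    # tokens with no matching bucket fall back to a standard Normal.
    new_emb = torch.empty(len(new_vocab), dim)
    for tok, i in new_vocab.items():
        mu, sigma = stats.get(group_key(tok),
                              (torch.zeros(dim), torch.ones(dim)))
        new_emb[i] = torch.normal(mu, sigma)
    return new_emb
```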
**Focus Re-initialization** In addition to the heuristic-based methods introduced above, we investigate a pre-existing method for embedding transfer, termed Focus (Dobler and de Melo, 2023). Focus works by extrapolating from the embedding space of an existing model, like our heuristic methods, but further introduces an auxiliary embedding model trained on the new language(s). This auxiliary model (based on FastText; Bojanowski et al., 2017) is used to obtain similarity measures between the new vocabulary items. Embeddings corresponding to overlapping tokens in the new vocabulary keep their values from the source model (Reinit-ident). Completely new tokens are initialized as a weighted combination of the overlapping items, with weights obtained according to similarity in the auxiliary model.
**Random Re-initialization** Embeddings not initialized through the above methods are initialized according to a Standard Normal Distribution about the origin. This includes the non-overlapping tokens when Reinit-ident is applied on its own, and Reinit-random, where all embeddings are initialized this way.
**Inspection of re-initialized embeddings** Figures 2 and 3 show PCA visualizations for the re-initialization techniques described here. Figure 2a shows that while Reinit-ident captures some of the pre-trained embedding structure, a large number of embeddings also remain randomly scattered throughout the space. Reinit-script (2b) initializes all embeddings in a Normal distribution about the centroid for each script, but misses key embedding structure, such as the fact that each script has two position-wise sub-distributions.
Figure 1: PCA visualizations of the embedding space for XLM-R. Subplots: (a) Distribution of embeddings for the 12 most common Unicode scripts. (b) Plot reduced to only Common, Latin, and Cyrillic scripts for simplicity. (c) Embeddings colored by whether the token begins a word (initial) or occurs in the middle of one (medial).
Figure 3: PCA visualization of Reinit-focus embeddings.
Figure 2: PCA visualizations of embeddings re-initialized using the heuristic techniques introduced in Section 3.2.
Reinit-script+posn (2c) takes these sub-distributions into account, forming six Normal clusters instead of three.3 Finally, Reinit-script+posn+ident (2d) and Focus (3) give the closest emulation of the original XLM-R embedding structure (1b).
Footnote 3: Figure 5b in the Appendix verifies that these clusters capture the initial vs. medial token distinction
## 4 Experiments
In our experiments, we replace the large cross-lingual embedding matrix of XLM-R and re-initialize it for a new, language-specific vocabulary. We then conduct Lapt to specialize the model for the new language(s), and evaluate performance on downstream tasks. We consider both multilingual\(\rightarrow\)monolingual and multilingual\(\rightarrow\)multilingual transfer scenarios, the latter being transfer to a much smaller set of languages than the original cross-lingual training set. We compare our vocabulary-replacement techniques against the baseline performance of XLM-R off-the-shelf, as well as Lapt while retaining the original, full-sized vocabulary.
Another manipulation we consider is whether the transformer-specific parameters are frozen during Lapt. This follows from the literature on transferring monolingual models, which proposes freezing the encoder parameters and only training the new embedding matrix to mitigate catastrophic forgetting during transfer learning (Artetxe et al., 2020; de Vries and Nissim, 2021). In our tables, we denote Lapt with trainable transformer layers as Lapt-full, and training with the transformer frozen (but trainable embeddings) as Lapt-emb.
**Target Languages** We select our target languages to cover a wide selection of language families, scripts, typological characteristics, and levels of resource availability, while still having standard evaluation sets for comparison. Training data for all languages is obtained from OSCAR v.22.01 (Abadji et al., 2022). For our lowest-resource languages, supplemental data is obtained from monolingual splits of the OPUS translation corpus (Tiedemann and Nygaard, 2004) and the Johns Hopkins University Bible Corpus (McCarthy et al., 2020). More data curation details may be found in Appendix A.
Our multilingual\(\rightarrow\)monolingual transfer languages can be found in Table 1. In these experiments, the replacement vocabulary and Lapt training are constrained to a single target language. In addition, we include two multilingual\(\rightarrow\)multilingual experiments. In the first, we simply transfer to the set of languages used in our monolingual experiments. Most of these languages are unrelated and cover a variety of scripts and levels of resource-availability. In the second, we transfer to a set of languages belonging to a single language family -- Uralic. These languages come from the same ancestor language, and share broad grammatical features, but also use both Cyrillic and Latin scripts. These differing settings are designed to demonstrate whether language relatedness has an effect on the success of multilingual vocabulary-replacement techniques.
**Vocabulary Replacement / Re-initialization** When replacing the model vocabulary, we train new SentencePiece models on a subset of the training data. For targets with less than 1GB of data, we use the entire dataset. For those with more, we use a random subset of about 250MB. For multilingual models, we sample 5 million lines according to the same distribution as the training data. All new SentencePiece models have a total vocabulary size of 32,770 including special tokens. We then initialize the embedding matrix for each new vocabulary according to one or a combination of the techniques described in Section 3.4
Footnote 4: The auxiliary FastText model for Focus initialization is trained on the same set as the vocabulary
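For illustration, a replacement tokenizer of this size can be trained with the sentencepiece package as follows; the file names are placeholders, and relying on the default unigram model type is an assumption.

```python
import sentencepiece as spm

# File names are placeholders for the target-language training sample
spm.SentencePieceTrainer.train(
    input="target_language_sample.txt",   # ~250MB sample (or full set if <1GB)
    model_prefix="target_sp",
    vocab_size=32770,                     # total size including special tokens
)

sp = spm.SentencePieceProcessor(model_file="target_sp.model")
print(sp.encode("an example sentence", out_type=str))
```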
**Training** All of our experiments use XLM-R as a starting point (base size; Conneau et al., 2020). We conduct Lapt for 100k training steps, with evaluation checkpoints every 1000 steps. For Lapt-full experiments, the transformer blocks are frozen for the first 10k steps, then unfrozen for the last 90k, so that the model does not overfit to initial (possibly poor) embedding initializations. For Lapt-emb experiments, transformer blocks remain frozen throughout training. The checkpoint obtaining the best MLM loss on a development set is selected for task fine-tuning and evaluation.
For multilingual training, we sample languages according to a multinomial distribution parameterized by \(\alpha=0.2\), following Conneau and Lample (2019) and Conneau et al. (2020), i.a. Languages are sampled sentence-wise rather than batch-wise.
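Concretely, the sampling probabilities are obtained by exponentiating and renormalizing the empirical language proportions, \(q_i = p_i^{\alpha} / \sum_j p_j^{\alpha}\); a minimal sketch:

```python
import numpy as np

def lang_sampling_probs(sizes, alpha=0.2):
    # Exponentiate-and-renormalize (Conneau and Lample, 2019):
    # q_i = p_i**alpha / sum_j p_j**alpha upsamples smaller languages.
    p = np.asarray(sizes, dtype=float)
    p = p / p.sum()
    q = p ** alpha
    return q / q.sum()

# Raw shares of 80% / 15% / 5% flatten considerably at alpha = 0.2
print(lang_sampling_probs([800_000, 150_000, 50_000]))
```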
**Evaluation** We evaluate model quality with POS-tagging and NER tasks. For each task and each language, the trained model is fine-tuned on task
training data until evaluation set convergence or the maximum number of epochs is reached, across four random seeds. POS performance is evaluated on Universal Dependencies (UD) treebanks (de Marneffe et al., 2021), and NER is measured on the WikiAnn benchmark (Pan et al., 2017).
## 5 Results
The results for monolingual adaptation can be found in Tables 1-2 and general multilingual adaptation in Tables 3-4. Because the results for multilingual adaptation to the Uralic family mostly echo overall trends, we provide these results in Appendix C.5 In order to adhere to our overall computational budget, we only conduct full-vocabulary Lapt experiments for three languages in the monolingual setting.6
Footnote 5: While training on related languages may be beneficial for low-resource Uralic languages like Erzya, family-based training vs. general multilingual training does not seem to alter the relative ranking of embedding initialization techniques, which is our primary research interest
Footnote 6: We select Erzya, Telugu, and Hebrew for these full-size experiments, spanning very-low, low, and medium resource-availability levels
We first note that across re-initialization methods, Lapt-full always outperforms Lapt-emb; that is, training with trainable transformer layers outperforms training with frozen ones, despite the risk of catastrophic forgetting with the former. This trend persists across monolingual and multilingual experiments. For example, Reinit-focus+ident shows a 6.9-point average POS accuracy drop between Lapt-full and Lapt-emb (Table 1).
Second, although Focus is the best-performing re-initialization method when averaged across languages, for individual languages it does not perform significantly differently from the script-based methods. For instance, Armenian and Telugu POS tagging with script-based initialization performs on par with or better than Focus (Tables 1, 3).7 In the case of the very low-resource language Erzya, script-based methods mostly outperform Focus.8
Footnote 7: Overall performance/ranking of script+posn+ident vs. script+ident remains uncertain. For Lapt-full averaged across languages, the former performs better in 2/3 POS settings, but only 1/3 NER settings
Footnote 8: However, script-based methods show significant variation on Erzya POS after multilingual training (Table 3)
Third, for the languages with the largest amount of data in XLM-R (Estonian, Hebrew, and Russian), the off-the-shelf performance of XLM-R (top row) is slightly better than any re-initialization method. This is not unexpected, since we can expect the highest-resource languages in XLM-R to receive adequate vocabulary coverage, and their embeddings are likely the most robustly trained.
Finally, Lapt with the full, original XLM-R vocabulary results in marginally better performance than the other techniques. On the one hand, this might be surprising given the inefficiency with which cross-lingual vocabularies often tokenize low-resource languages (Ács, 2019). On the other hand, these original pre-trained embeddings are also likely robustly aligned with the transformer encoder, which might contribute to slightly better performance.
Part of the motivation for this work, however, is to investigate _efficient_ ways to specialize multilingual models. Lapt with the full XLM-R vocabulary is much more computationally costly than training new vocabulary. Figure 4 shows the trade-off between computation (in FLOPs) and performance gain in our experiments: the (often) small gains in performance we see from fine-tuning with the original vocabulary come at the cost of two to three times more FLOPs during adaptation.
Erzya POS performance provides one exception to the pattern of full-vocab Lapt providing only marginal benefits (85.1 accuracy with the full vocabulary vs. 79.0 with the reduced vocabulary). This seems surprising, given Erzya is not included in XLM-R's pre-training data, and intuitively should benefit the most from a specialized vocabulary. It could be that the reduced vocabulary size of 32k is sub-optimal for this particular target language, and/or that the new vocabulary does not overlap enough with the original (full-size) one to inherit useful Cyrillic-script embeddings. Investigating the dynamics of target vocabulary size during vocabulary specialization would be a fruitful direction for future work.
Figure 4: Evaluation scores plotted against total floating point operations of Lapt (computational cost). The left point represents the cost of Lapt with the reduced vocabulary, the right point with the full vocabulary.
## 6 Discussion
**Embedding-only training is inadequate for multilingual model transfer** Our experiments show that language transfer methods developed for monolingual models, which freeze the transformer blocks and re-train only the embedding matrix (Artetxe et al., 2020; de Vries and Nissim, 2021), yield poor results when transferring a multilingual model. This work in the monolingual literature not only keeps transformer layers frozen, but initializes new embeddings randomly. This setup (Lapt-emb, Reinit-random) performs much worse than the off-the-shelf baseline in all of our experiments.
It is worth noting that Artetxe et al. (2020) do not necessarily suggest that freezing the main model is the _optimal_ language transfer method. However, it does demonstrate that for monolingual\(\rightarrow\)monolingual adaptation, embedding-only training is competitive with an off-the-shelf multilingual model. We see no such comparability in our experiments. We believe this is partly caused by the heterogeneity of the XLM-R embeddings, where different languages (or at least scripts) are encoded in different spaces. When new embeddings are randomly and homogeneously initialized, they fail to align with the pre-trained subspaces expected by the frozen transformer.
**Vocabulary replacement efficiently specializes models** We demonstrate that for languages inadequately covered by a pre-trained multilingual model, replacing and re-training the cross-lingual model vocabulary with a language-specific one is a computationally efficient way to create a compact model specialized for the target language(s). In our monolingual adaptation experiments, vocabulary replacement performs better than off-the-shelf XLM-R in 5/8 languages for POS tagging and 5/7 languages
[Table: per-language results after monolingual Lapt (Armenian, Basque, Erzya, Estonian, Hebrew, Russian, North Sámi, Telugu, and average); numeric content garbled in extraction and omitted.]
for NER. Only the high-resource languages of Estonian, Hebrew, and Russian seem to be adequately covered in XLM-R to outperform our specialization techniques. Language-Adaptive Pre-Training with the full (cross-lingual) XLM-R vocabulary often produces marginally better results overall, but at a much greater computational cost, and without making the model more compact in size. Further training and inference after Lapt will continue to suffer from the memory and compute wasted on unused vocabulary items, which constitute a large percentage of the total model parameters.
**Script-distribution initialization rivals semantic similarity methods** We introduced several methods for embedding re-initialization in Section 3, namely using the insight that token embeddings for XLM-R cluster by script and by position within a word, then distributing new vocabulary items according to these pre-trained sub-distributions. We compare this to the Focus re-initialization method, which initializes new embeddings as a weighted combination of existing ones according to similarity scores from an auxiliary model.
Averaged across languages, Focus yields the best performance in downstream tasks by a slight margin. Within languages, it often overlaps significantly with the performance of our script-distribution methods. For very low-resource languages like Erzya, script-based methods even show a slight advantage. This seems to show that, at least in combination with Lapt, the majority of the benefit in re-initialization can be achieved by a method that takes the structure of the pre-trained embedding distribution into account, whether or not it uses advanced methods to precisely initialize the representations of new vocabulary items.
We do note that the advantage of Focus is more clear-cut when Lapt is conducted with transformer blocks frozen. This lends credence to the idea that Focus more precisely mimics the embedding distribution expected by the pre-trained transformer. However, the overall best results come when the transformer blocks are unfrozen/trainable.
**Fully random initialization performs poorly** Finally, our experiments demonstrate that fully random re-initialization of embeddings during vocabulary replacement leads to overall poor performance. Across Lapt-full experiments, random initialization
Table 4: Multilingual Lapt: entity-wise NER F1 score after fine-tuning. [Table content garbled in extraction and omitted.]
[Table: per-language scores for the off-the-shelf model, the Lapt-full baseline, and the re-initialization variants (Armenian, Basque, Erzya, Estonian, Hebrew, Russian, Telugu); truncated in the source.]
The poor performance of random initialization has been noted in other works such as Dobler and de Melo (2023), but we emphasize that even very simple methods such as Reinit-ident and Reinit-script work far better than the random baseline.
## 7 Conclusion
This work presents a systematic comparison of methods to specialize the subword vocabularies and embeddings of multilingual models for new languages. We propose simple methods for re-initializing embeddings, motivated by a qualitative exploration of the XLM-R embedding space. Our experiments show that (1) updating the encoder layers during Lapt is crucial for downstream performance, (2) vocabulary replacement provides a computationally-efficient method to improve task performance in low-resource languages, and (3) our re-initialization techniques employing script-wise sub-distributions perform on par with more involved similarity-based methods. We hope these findings can be built upon in future work on multilingual model specialization, with the goal of providing the best performance for under-resourced languages while also making language modeling more accessible through more manageable compute cost and model sizes.
## Limitations
One limitation of our work is the relatively narrow set of evaluation tasks available for our languages of interest. The model-adaptation techniques we compare here are most applicable to low- and medium-resource languages that are not optimally covered by pre-existing multilingual models. For most of these languages, the only standard evaluation datasets that exist are for relatively low-level tasks like Part of Speech tagging and Named Entity Recognition. Evaluation of embedding-reinitialization techniques could be improved in future work if datasets for higher-level tasks like Natural Language Inference, question answering, and paraphrase detection were curated for these under-resourced languages.
We also make several simplifying choices to maintain a feasible scope for our work. First, we conduct model adaptation from only a single base model: XLM-R. A valuable addition in future work would be to determine whether the trends we observe here generalize to other model types (i.e. causal and seq2seq language models) and to larger model scales. Secondly, we consider only one size for newly-initialized target vocabularies (32k). Because effective per-language vocabulary allocation has been shown to be an important factor in multilingual modeling Conneau et al. (2020, i.a.), investigating the dynamics of target vocabulary size during vocabulary re-initialization will be important for future work on this topic.
## Acknowledgements
We thank Ibrahim Sharaf, Anita Silva, and Peter Zuckerman for early investigation of data availability for low-resource languages. We are also grateful to Emily P. Ahn, Gina-Anne Levow, Sara Ng, and our anonymous MRL reviewers for useful feedback and discussion.
|
2309.03116 | Strong magnon-magnon coupling in an ultralow damping
all-magnetic-insulator heterostructure | Magnetic insulators such as yttrium iron garnets (YIGs) are of paramount
importance for spin-wave or magnonic devices as their ultralow damping enables
ultralow power dissipation that is free of Joule heating, exotic magnon quantum
state, and coherent coupling to other wave excitations. Magnetic insulator
heterostructures bestow superior structural and magnetic properties and house
immense design space thanks to the strong and engineerable exchange interaction
between individual layers. To fully unleash their potential, realizing low
damping and strong exchange coupling simultaneously is critical, which often
requires high quality interface. Here, we show that such a demand is realized
in an all-insulator thulium iron garnet (TmIG)/YIG bilayer system. The ultralow
dissipation rates in both YIG and TmIG, along with their significant spin-spin
interaction at the interface, enable strong and coherent magnon-magnon coupling
with a benchmarking cooperativity value larger than the conventional
ferromagnetic metal-based heterostructures. The coupling strength can be tuned
by varying the magnetic insulator layer thickness and magnon modes, which is
consistent with analytical calculations and micromagnetic simulations. Our
results demonstrate TmIG/YIG as a novel platform for investigating hybrid
magnonic phenomena and open opportunities in magnon devices comprising
all-insulator heterostructures. | Jiacheng Liu, Yuzan Xiong, Jingming Liang, Xuezhao Wu, Chen Liu, Shun Kong Cheung, Zheyu Ren, Ruizi Liu, Andrew Christy, Zehan Chen, Ferris Prima Nugraha, Xi-Xiang Zhang, Chi Wah Leung, Wei Zhang, Qiming Shao | 2023-09-06T15:53:49Z | http://arxiv.org/abs/2309.03116v1 | #### Strong magnon-magnon coupling in an ultralow damping all-magnetic-insulator heterostructure
###### Abstract
Magnetic insulators such as yttrium iron garnets (YIGs) are of paramount importance for spin-wave or magnonic devices as their ultralow damping enables ultralow power dissipation that is free of Joule heating, exotic magnon quantum state, and coherent coupling to other wave excitations. Magnetic insulator heterostructures bestow superior structural and magnetic properties and house immense design space thanks to the strong and engineerable exchange interaction between individual layers. To fully unleash their potential, realizing low damping and strong exchange coupling simultaneously is critical, which often requires high quality interface. Here, we show that such a demand is realized in an all-insulator thulium iron garnet (TmIG)/YIG bilayer system. The ultralow dissipation rates in both YIG and TmIG, along with their significant spin-spin interaction at the interface, enable strong and coherent magnon-magnon coupling with a benchmarking cooperativity value larger than the conventional ferromagnetic metal-based heterostructures. The coupling strength can be tuned by varying the magnetic insulator layer thickness and magnon modes, which is consistent with analytical calculations and micromagnetic simulations. Our results demonstrate TmIG/YIG as a novel platform for investigating hybrid magnonic phenomena and open opportunities in magnon devices comprising all-insulator heterostructures.
Spin-wave (or magnonic) devices utilize magnon spin degree of freedom to process information, which can occur in magnetic insulators free from any charge current, and therefore, are promising contenders for ultralow-power functional circuits [1, 2, 3, 4]. Magnetic garnets such as yttrium iron garnet (Y\({}_{3}\)Fe\({}_{5}\)O\({}_{12}\), YIG) have an ultralow damping factor, and they have enabled long magnon spin transmission [5], efficient magnon spin current generation [6], and magnon logic circuits [2, 3]. Another type of magnetic garnet, thulium iron garnet (Tm\({}_{3}\)Fe\({}_{5}\)O\({}_{12}\), TmIG), has been engineered to a binary memory with a robust perpendicular magnetic anisotropy [7, 8]. Besides, TmIG thin films can exhibit topological magnetic skyrmion phase [9, 10], promising for future magnetic insulator-based racetrack memory devices. In addition to these promising practical
applications, magnetic insulators are well-known for hosting novel quantum phases such as Bose-Einstein condensate [11], spin superfluidity [12], and topological magnonic insulators [13].
Magnetic heterostructures can provide more functionalities and richer properties because exchange interactions between different layers provide another control knob [14, 15]. While ferromagnetic metal-based heterostructures have been extensively studied and applied in commercial devices such as magneto-resistive random-access memory [15], magnetic insulator-based heterostructures are still on the horizon, yet they have already shown several promising features, including strong interfacial couplings [16, 17, 18, 19, 20], magnon valve effects [21, 22, 23], control of magnon transport in the magnetic insulator layer using another magnetic layer [24, 25], magnonic crystals [26], coherent magnon-magnon coupling [27, 28, 29, 30], and topological spin textures [10]. Magnetic insulator heterostructures are also theoretically predicted to host exotic quantum phases such as magnon flat bands [31]. However, to date, coherent magnon-magnon coupling has only been studied in hybrid systems consisting of a low-damping YIG and another ferromagnetic metal [27, 28, 29, 30]. The demonstration of low damping and strong coherent coupling in purely magnetic insulator bilayers is lacking.
In this work, we demonstrate ultralow damping and strong magnon-magnon coupling in a TmIG/YIG heterostructure. We characterize the structural and magnetic properties of our TmIG/YIG heterostructures on gadolinium gallium garnet (Gd\({}_{3}\)Ga\({}_{5}\)O\({}_{12}\), GGG) using high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM), X-ray diffraction (XRD), and vibrating sample magnetometry (VSM). Then, we investigate the magnetic dynamics in these bilayers by using a broadband ferromagnetic resonance (FMR) technique. We observe a strong coupling between the Kittel mode of YIG and the perpendicular standing spin wave (PSSW) mode of TmIG. By matching the experimental FMR spectra with analytical calculations and micromagnetic simulations, we obtain the exchange coupling strength at the interface, which is dependent on the magnetic insulator layer thickness and coupling mode. Finally, we benchmark the dissipation rates and cooperativity in our samples against those in ferromagnetic metal-based heterostructures.
We prepare our TmIG/YIG on GGG substrates using pulsed laser deposition (see Methods). Atomic images from HAADF-STEM show single crystallinity and perfect interfaces at the YIG/GGG and TmIG/YIG boundaries (Fig. 1a). Elemental mapping (Fig. 1b) proves there is no interdiffusion between different layers. Fig. 1c presents the high-resolution XRD spectra of TmIG/GGG, YIG/GGG, and TmIG/YIG/GGG bilayer films measured with the scattering vector normal to the \(<\)001\(>\) oriented cubic substrate. Along with the sharp \(<\)004\(>\) peaks from the GGG substrate, the XRD spectra show Laue oscillations, indicating a smooth surface and interface. We also measured the magnetic hysteresis loops for YIG, TmIG, and TmIG/YIG samples to quantify their saturation magnetizations (see Supplementary Note 1). In principle, the exchange coupling strength (J) between different layers can be estimated from major and minor hysteresis loops [15]. We estimate the interfacial J at the CoFeB(50 nm)/TmIG(350 nm) interface to be \(-0.031\) mJ/m\({}^{2}\), indicating an antiferromagnetic exchange coupling (see Supplementary Note 1). However, YIG and TmIG have very similar coercive fields, preventing us from obtaining the coupling strength directly from the hysteresis loop measurements.
We measure the magnetization dynamics in TmIG(200 nm)/YIG(200 nm) bilayers using a field modulated FMR technique (see Methods). We mount the sample on a coplanar waveguide and apply a microwave current that generates radiofrequency magnetic fields (Fig. 2a). The absorption coefficient exhibits a peak when the FMR conditions for YIG and TmIG are met (Fig. 2b). We experimentally extract the resonance frequency at a specific field by fitting the frequency scan at the field using Lorentz functions (Fig. 3a). In addition to regular FMR peaks, we also observe anti-crossing at specific field-frequency points, which are signatures of exchange interaction-driven coupling of Kittel mode in YIG and PSSW modes in TmIG. To
identify the underlying magnon modes responsible for the coupling, we list the formula of the generalized excited spin wave modes in the two layers (\(\omega_{i}/2\pi\) or \(f_{i}\)):
\[\frac{\omega_{i}}{2\pi}=f_{i}=\frac{\gamma_{i}}{2\pi}\sqrt{(\mu_{0}H_{\text{ext} }+\frac{2A_{\text{ex},i}}{M_{s,i}}k_{i}^{2})\left(\mu_{0}H_{\text{ext}}+\frac{2A _{\text{ex},i}}{M_{s,i}}k_{i}^{2}+\mu_{0}M_{s,i}\right)}, \tag{1}\]
where \(i\)=YIG or TmIG, \(\frac{\gamma_{i}}{2\pi}=(g_{\text{eff},i}/2)\times 28\) GHz/T is the gyromagnetic ratio, \(\mu_{0}\) is the permeability, \(H_{\text{ext}}\) is the external field, \(M_{s}\) is the effective magnetization, \(A_{\text{ex}}\) is the exchange stiffness, and \(k\) is the wavevector of the excited spin wave. Note that if there is no exchange interaction between YIG and TmIG, \(k=\frac{n\pi}{d}\), where n is an integer and \(d\) is the thickness of the magnetic insulator. By fitting the Kittel mode with n=0, we get \(g_{\text{eff},\text{YIG}}=2\) (\(\mu_{0}M_{s,YIG}=0.25\) T) and \(g_{\text{eff},\text{TmIG}}=1.56\) (\(\mu_{0}M_{s,TmIG}=0.24\) T) for YIG and TmIG, respectively, which are consistent with the previous report [34]. Then, by assuming zero exchange interaction and matching \(\omega_{\text{YIG}}=\omega_{\text{TmIG}}\) from Eq. (1), we can understand the first (second) anti-crossing shown in Fig. 2b is from the coupling between n=0 mode in YIG and n=1 (n=2) mode in TmIG. A schematic of n=0 mode in YIG and n=1 mode in TmIG is shown in Fig. 2a. In addition, we determine the exchange stiffness of the TmIG to be 2.69 pJ/m, which is consistent with the previous report [35]. When there is an exchange interaction between YIG and TmIG, we expect an anti-crossing gap, which can be described by the minimum frequency separation of 2g. However, with only Eq. (1) the relation between the exchange interaction and the g value cannot be uniquely determined.
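For illustration, the uncoupled mode frequencies of Eq. (1) can be evaluated directly from the quoted parameters. The following minimal Python sketch (not part of the original analysis) uses the fitted \(g_{\text{eff}}\) values (\(\gamma_{i}/2\pi=(g_{\text{eff},i}/2)\times 28\) GHz/T) and \(A_{\text{ex,TmIG}}=2.69\) pJ/m; the near-degeneracy of the YIG Kittel mode and the TmIG n=1 PSSW mode around 10 mT is consistent with the anti-crossing field shown in Fig. 3b. Note that the Kittel mode (k=0) requires no exchange stiffness.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def mode_frequency(mu0_H, k, gamma_2pi, mu0_Ms, A_ex=0.0):
    """Eq. (1): uncoupled spin-wave frequency (Hz) at wavevector k (1/m)."""
    ex = 2.0 * A_ex * k**2 / (mu0_Ms / MU0)       # exchange term 2*A_ex*k^2/Ms, in tesla
    return gamma_2pi * np.sqrt((mu0_H + ex) * (mu0_H + ex + mu0_Ms))

d = 200e-9                                        # TmIG thickness (m)
for mu0_H in (0.005, 0.010, 0.020):               # external field (T)
    f_yig = mode_frequency(mu0_H, 0.0, 28.0e9, 0.25)                   # YIG Kittel, n=0
    f_tm1 = mode_frequency(mu0_H, np.pi / d, 21.84e9, 0.24, 2.69e-12)  # TmIG PSSW, n=1
    print(f"{mu0_H*1e3:.0f} mT: YIG n=0 {f_yig/1e9:.2f} GHz, TmIG n=1 {f_tm1/1e9:.2f} GHz")
```

The two modes cross near 10 mT, where the anti-crossing gap of the coupled system opens.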
To fully understand the exchange coupling-driven magnon-magnon coupling, we perform a comprehensive numerical analysis and micromagnetic simulations (see Methods). We consider the boundary conditions at the interface and at the two surfaces of the TmIG/YIG bilayers and arrive at the formula (see Supplementary Note 2):
\[\frac{2A_{\text{ex},\text{YIG}}}{M_{\text{s},\text{YIG}}}\,k_{ \text{YIG}}\tan(k_{\text{YIG}}d_{\text{YIG}})\cdot\frac{2A_{\text{ex},\text{ TmIG}}}{M_{\text{s},\text{TmIG}}}\,k_{\text{TmIG}}\tan(k_{\text{TmIG}}d_{ \text{TmIG}})=\] \[\frac{2J}{\mu_{0}\left(M_{\text{s},\text{YIG}}+M_{\text{s},\text{ TmIG}}\right)}\left[\frac{2A_{\text{ex},\text{YIG}}}{M_{\text{s},\text{YIG}}}\,k_{ \text{YIG}}\tan(k_{\text{YIG}}d_{\text{YIG}})+\frac{2A_{\text{ex},\text{TmIG}} }{M_{\text{s},\text{TmIG}}}\,k_{\text{TmIG}}\tan(k_{\text{TmIG}}d_{\text{ TmIG}})\right], \tag{2}\]
where \(J\) is the interfacial exchange coupling strength. By solving \(\omega_{\text{YIG}}=\omega_{\text{TmIG}}\) from Eq. (1) and Eq. (2) together, we can get a set of (\(k_{\text{YIG}}\), \(k_{\text{TmIG}}\)) values that correspond to different modes. In the presence of exchange interaction, \(k\) will no longer be precisely equal to \(\frac{n\pi}{d}\). As a result, the degeneracy is lifted at the crossing point and we have two frequencies corresponding to two (\(k_{\text{YIG}}\), \(k_{\text{TmIG}}\)) values. By employing \(J=-0.057\) mJ/m\({}^{2}\) (see Supplementary Table 1), we have obtained high consistency between the experimental and calculated spectra of field-frequency points in the entire range (Fig. 3a). The negative sign suggests an antiferromagnetic exchange coupling between TmIG and YIG. The strength is also comparable with that of the ferromagnetic metal/YIG bilayers [27]. We have also carried out the FMR measurement on the reference TmIG(350 nm)/CoFeB(50 nm) sample (see Supplementary Note 4). We get \(J=-0.032\) mJ/m\({}^{2}\), which is close to the result from the VSM loop measurements. This consistency suggests that we can reliably extract \(J\) values of TmIG/YIG samples from the FMR measurement.
We further study the thickness and mode dependence of anti-crossing gap (2g). We extract the g value from the frequency scan, for example, g = 85 MHz for the TmIG(200 nm)/YIG(200 nm) bilayer (Fig. 3b). We find the gap reduces as the layer thickness increases (Fig. 3c). To understand this, we derive the approximate solution (see Supplementary Note 3):
\[g\approx\frac{\gamma_{\rm YIG}\gamma_{\rm TmIG}}{4\pi^{2}}\frac{J}{(M_{s,YIG}+M_{s,TmIG})}\cdot\frac{\sqrt{\big{(}2\mu_{0}H_{\rm res}+\mu_{0}M_{s,YIG}\big{)}\big{(}2\mu_{0}H_{\rm res}+\mu_{0}M_{s,TmIG}\big{)}}}{f_{\rm res}}\cdot\frac{1}{\sqrt{d_{YIG}d_{TmIG}}}, \tag{3}\]
where \(f_{\rm res}\) and \(H_{\rm res}\) are the resonance frequency and field at the gap center, respectively. The calculated results show the same trend as in the experiments (Fig. 3c). Also, Eq. (3) allows us to analyze the g value for the coupling of the YIG Kittel mode to different TmIG PSSW modes. We compare the experimental and calculated g values for the coupling of the n=0 mode in YIG and the n=2 mode in TmIG in Fig. 3c, where we confirm that the higher mode coupling results in a lower g in our case.
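As a consistency check, Eq. (3) can be evaluated numerically. The sketch below (illustrative only, not the authors' code) uses the parameter values quoted in the text; the gap-center frequency \(f_{\rm res}\) is taken from Eq. (1) at 10 mT, which is an assumption on our part.

```python
import numpy as np

MU0 = 4e-7 * np.pi
gamma_2pi = {'YIG': 28.0e9, 'TmIG': 21.84e9}      # Hz/T, i.e. gamma_i/2pi
Ms = {'YIG': 0.25 / MU0, 'TmIG': 0.24 / MU0}      # A/m, from mu0*Ms quoted above
J, mu0_H_res, f_res = 0.057e-3, 0.010, 1.44e9     # |J| (J/m^2), T, Hz (f_res assumed)
d_yig = d_tmig = 200e-9                           # m

# gamma_YIG*gamma_TmIG/(4 pi^2) equals the product of the gamma/2pi values
g = (gamma_2pi['YIG'] * gamma_2pi['TmIG']
     * J / (Ms['YIG'] + Ms['TmIG'])
     * np.sqrt((2*mu0_H_res + MU0*Ms['YIG']) * (2*mu0_H_res + MU0*Ms['TmIG']))
     / f_res / np.sqrt(d_yig * d_tmig))
print(f"g ~ {g/1e6:.0f} MHz")                     # ~82 MHz, cf. the measured 85 MHz
```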
Finally, to evaluate the coupling cooperativity in TmIG/YIG bilayers, we have determined the individual dissipation rates. We first get Gilbert damping factors for YIG and TmIG from field scans at different frequencies when they are not coupled (see Supplementary Note 4). The extracted damping factors are plotted in Fig. 3d, where we find a damping factor as low as 4.91 (\(\pm\)0.79) \(\times 10^{-4}\) in the 350 nm-thick TmIG. We also extract the dissipation rates for YIG and TmIG from frequency scans at different fields when they are not coupled (see Supplementary Note 4). As an example, \(\kappa_{YIG}=10\ MHz\) and \(\kappa_{TmIG}=29.5\ MHz\) for the TmIG(200 nm)/YIG(200 nm) bilayer. Therefore, \(g>\kappa_{YIG},\ \kappa_{TmIG}\) and \(C=\frac{g^{2}}{\kappa_{YIG}\kappa_{TmIG}}=24.5\), indicating strong coupling in the bilayer. In Fig. 4, we summarize the dissipation rates and cooperativity for TmIG- and ferromagnetic metal-based heterostructures that show magnon-magnon coupling. The TmIG has a very low dissipation rate compared to ferromagnetic metals, which is consistent with the ultralow Gilbert damping.
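The cooperativity quoted above follows directly from the measured rates, e.g.:

```python
# Strong-coupling check from the quoted rates for the TmIG(200 nm)/YIG(200 nm) bilayer.
g, kappa_yig, kappa_tmig = 85e6, 10e6, 29.5e6      # Hz
C = g**2 / (kappa_yig * kappa_tmig)
print(f"C = {C:.1f}")                              # 24.5 > 1, and g exceeds both kappas
```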
In summary, we demonstrate ultralow damping and dissipation rates in the TmIG and achieve strong magnon-magnon coupling and high cooperativity in the TmIG/YIG bilayers. The combined experimental and theoretical analyses allow us to determine the interfacial exchange coupling strengths in our all-insulator bilayers. The all-magnetic-insulator bilayers allow us to achieve ultralow damping insulating synthetic antiferromagnets, magnonic crystals, and other artificial structures to realize energy-efficient spin wave devices. Besides, the strong coupling between two distinct magnetic insulators with ultralow damping allows us to explore novel quantum phases, such as topological magnon insulators and magnon flat bands.
## Figures and Captions
Figure 2: **Schematic diagram of the spin waves in the heterostructure and the measured resonance spectra.****a,** Schematic illustration of the measurement set-up, where \(h_{rf}\) and \(H_{ext}\) stand for the microwave magnetic field and the external static magnetic field, respectively. Spin-wave spectra are obtained by placing the sample face-down on a coplanar waveguide (CPW). The inset depicts the Kittel uniform spin wave mode in the YIG and the perpendicular standing spin wave (PSSW) mode in the TmIG. **b,** Experimental color-coded spin-wave absorption spectra of the YIG(200 nm)/TmIG(200 nm) bilayer for the first three resonance modes of TmIG (n=0, 1, 2) and the uniform mode of YIG (n=0).
Figure 3: **Observation of strong magnon-magnon coupling and ultralow damping in YIG/TmIG bilayers.****a,** Resonant absorption peaks of the two hybrid modes as a function of the external magnetic field for the YIG(200 nm)/TmIG(200 nm) bilayer. Solid curves show the hybrid modes fitted with the numerical method. Data points are extracted from the experimental data by reading out the minimum of each resonant peak in Fig. 2(b). **b,** Spin wave spectra at the minimum resonance separation (\(\mu_{0}H_{ext}=10\) mT) for the YIG(200 nm)/TmIG(200 nm) bilayer. **c,** Coupling strength g between the TmIG (n=1, 2) modes and the YIG (n=0) mode as a function of the YIG thickness. Red circles are experimental results and black squares are from theoretical calculations. **d,** Thickness dependence of the Gilbert damping factors of YIG and TmIG in the YIG/TmIG bilayers.
Figure 4: **Summary of dissipation rates in TmIG and ferromagnetic metals versus cooperativities in TmIG- and ferromagnetic metal-based heterostructures.** Star and square symbols are dissipation rates for TmIG and ferromagnetic metals, respectively. All TmIG-related results are from this work and other data points are from refs. [27, 29, 36, 37, 38, 39, 40] (see Supplementary Table 2 for details). |
2309.05567 | Kinematics and Collimation of the Two-Sided Jets in NGC 4261: VLBI Study
on Sub-parsec Scales | We report multi-frequency VLBI studies of the sub-parsec scale structure of
the two-sided jet in the nearby radio galaxy NGC 4261. Our analyses include new
observations using the Source Frequency Phase Referencing technique with the
Very Long Baseline Array at 44 and 88 GHz, as well as archival data at 15 and
43 GHz. Our results show an extended double-sided structure at 43/44 GHz and
provide a clear image of the nuclear region at 88 GHz, showing a core size of
$\sim$0.09 mas and a brightness temperature of $\sim1.3\times10^{9}$ K. Proper
motions are measured for the first time in the two-sided jet, with apparent
speeds ranging from $0.31\pm0.14\,c$ to $0.59\pm0.40\,c$ in the approaching jet
and $0.32\pm0.14\,c$ in the receding jet. The jet-to-counter-jet brightness
ratio allows us to constrain the viewing angle to between $\sim54^{\circ}$ and
$84^{\circ}$ and the intrinsic speed to between $\sim0.30\,c$ and $0.55\,c$. We
confirm the parabolic shape of the upstream jet on both sides of the central
engine, with a power-law index of $0.56\pm0.07$. Notably, the jet collimation
is found to be already completed at sub-parsec scales, with a transition
location of about 0.61 pc, which is significantly smaller than the Bondi radius
of 99.2 pc. This behavior can be interpreted as the initial confinement of the
jet by external pressure from either the geometrically thick, optically thin
advection-dominated accretion flows (ADAF) or the disk wind launched from it.
Alternatively, the shape transition may also be explained by the internal flow
transition from a magnetically dominated to a particle-dominated regime. | Xi Yan, Ru-Sen Lu, Wu Jiang, Thomas P. Krichbaum, Zhi-Qiang Shen | 2023-09-11T15:53:27Z | http://arxiv.org/abs/2309.05567v1 | # Kinematics and Collimation of the Two-Sided Jets in NGC 4261: VLBI Study on Sub-parsec Scales
###### Abstract
We report multi-frequency VLBI studies of the sub-parsec scale structure of the two-sided jet in the nearby radio galaxy NGC 4261. Our analyses include new observations using the Source Frequency Phase Referencing technique with the Very Long Baseline Array at 44 and 88 GHz, as well as archival data at 15 and 43 GHz. Our results show an extended double-sided structure at 43/44 GHz and provide a clear image of the nuclear region at 88 GHz, showing a core size of \(\sim\)0.09 mas and a brightness temperature of \(\sim 1.3\times 10^{9}\) K. Proper motions are measured for the first time in the two-sided jet, with apparent speeds ranging from \(0.31\pm 0.14\,c\) to \(0.59\pm 0.40\,c\) in the approaching jet and \(0.32\pm 0.14\,c\) in the receding jet. The jet-to-counter-jet brightness ratio allows us to constrain the viewing angle to between \(\sim 54^{\circ}\) and \(84^{\circ}\) and the intrinsic speed to between \(\sim 0.30\,c\) and \(0.55\,c\). We confirm the parabolic shape of the upstream jet on both sides of the central engine, with a power-law index of \(0.56\pm 0.07\). Notably, the jet collimation is found to be already completed at sub-parsec scales, with a transition location of about 0.61 pc, which is significantly smaller than the Bondi radius of 99.2 pc. This behavior can be interpreted as the initial confinement of the jet by external pressure from either the geometrically thick, optically thin advection-dominated accretion flows (ADAF) or the disk wind launched from it. Alternatively, the shape transition may also be explained by the internal flow transition from a magnetically dominated to a particle-dominated regime.
galaxies: active -- galaxies: individual (NGC 4261) -- galaxies: nuclei -- radio continuum: galaxies
## 1 Introduction
Relativistic jets in active galactic nuclei (AGN) undergo poorly understood acceleration and collimation processes that are closely linked to their launching mechanisms. Theoretical studies and simulations (e.g., McKinney, 2006; Tchekhovskoy et al., 2011) suggest that jets can originate from either a spinning black hole (Blandford and Znajek, 1977) or an accretion flow (Blandford and Payne, 1982). Moreover, the initial jet is suggested to be magnetically dominated with a parabolic shape due to external pressure (e.g., McKinney et al., 2012). However, as the jet propagates, it transits to a kinetically dominated state, expanding freely in a conical shape.
Very Long Baseline Interferometry (VLBI) is a powerful tool for studying the jet formation, acceleration and collimation processes. It has been extensively applied to several nearby low-luminosity AGN (LLAGN) to study jet collimation, such as M87 (e.g., Asada and Nakamura, 2012; Lu et al., 2023), NGC 6251 (Tseng et al., 2016), NGC 4261 (Nakahara et al., 2018), NGC 1052 (Nakahara et al., 2020) and NGC 315 (Park et al., 2021; Boccardi et al., 2021). Recently, Kovalev et al. (2020) proposed that the transition from a parabolic to conical shape may be a common effect in nearby AGN jets based on their analysis of a sample of 367 AGN. They also noted that the transition location does not necessarily coincide with the Bondi radius. NGC 315 serves as a typical example, where the jet collimation is completed early on sub-parsec scales (Boccardi et al., 2021). This behavior is interpreted as the initial confinement of the jet by the external pressure exerted by either the ADAF or the disk wind launched from it.
Among the above-mentioned sources, the Fanaroff-Riley Class I (FR-I) source, NGC 4261, deserves particular attention. First, the jet is observed at a large viewing angle of \(63^{\circ}\pm 3^{\circ}\)(Piner et al., 2001) and is double-sided (e.g., Jones and Wehrle, 1997). Second, precise core-shift measurements have determined the location of the central supermassive black hole (SMBH, at a distance of \(82\pm 16\,\mu\)as from the 43 GHz core, Haga et al., 2015). This allows an accurate estimate of the de-projected radial distance between the jet and the central SMBH. Furthermore, the proximity of NGC 4261 (31.6 Mpc, Tonry et al., 2001) and its large black hole mass (\(1.62\times 10^{9}M_{\odot}\), Boizelle et al., 2021; Ruffa et al., 2023) make it a valuable laboratory for studying jet properties, with 1 mas corresponding to 0.15 pc or 988 Schwarzschild radii (\(R_{\rm s}\)).
Despite these advantages, the collimation and kinematics of the NGC 4261 jet remain largely unexplored. Although previous observations found parabolic-to-conical transition signatures on the jet width profile, the upstream parabolic shape could not be well sampled due to the limited number of width measurements (see Figures 2-4 in Nakahara et al., 2018). In addition, apart from the work by Piner et al. (2001), who provided only one jet speed measurement, there have been no further kinematic analyses conducted on the NGC 4261 jet. For these reasons, we aim to examine the width profile of the upstream jet and investigate its kinematics.
This paper is organized as follows. In Section 2, we present our observations and data reduction. Section 3 describes the methods used for our kinematic analysis and transverse width measurement. The results are presented in Section 4, followed by a discussion in Section 5. Finally, we summarize in Section 6.
## 2 Observations and Data Reduction
### New VLBA observations
We observed NGC 4261 using the Very Long Baseline Array (VLBA) with the Source Frequency Phase Referencing (SFPR) technique (Rioja and Dodson, 2011) on February 14, 2022. The observations were performed at 44 and 88 GHz, with a data rate of 4 Gbits/s and 2-bit sampling. Both left-hand circular polarization (LCP) and right-hand circular polarization (RCP) were recorded, covering a total bandwidth of 1024 MHz. Each polarization was divided into 4 sub-bands (IFs). We used 3C 279 and 3C 273 as the fringe finder and amplitude/phase calibrator, respectively. A summary of the observations is provided in Table 1.
We calibrated the data using NRAO's Astronomical Image Processing System (AIPS, Greisen, 2003) following the procedures in Jiang et al. (2021). The phase calibration involved several steps. Firstly, we removed the constant single-band delays and phase offsets using high signal-to-noise ratio (SNR) calibrator scans. Then, we performed global fringe fitting to eliminate single- and multi-band residual delays and solve for fringe rates. Afterward, we applied frequency phase transfer (FPT) to the 88 GHz data by multiplying the 44 GHz residual phase solutions by the frequency ratio of 2. For the 88 GHz data, a re-fringe-fitting was run on the calibrator 3C 273 and the solutions were applied to NGC 4261 to further correct the residual ionospheric errors as well as the instrumental offsets between 44 and 88 GHz.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multicolumn{1}{c}{ Freq.} & \multicolumn{1}{c}{P.C.} & Date & Array & Pol & Bandwidth & Beam size & \(I_{\rm peak}\) & \(I_{\rm rms}\) \\ \multicolumn{1}{c}{(GHz)} & & & & & (MHz) & (mas \(\times\) mas, deg) & (Jy beam\({}^{-1}\)) & (Jy beam\({}^{-1}\)) \\ \hline
15 & BM166 & 2002.07.05 & VLBA & Dual & 64 & 0.922\(\times\)0.512, -4.3 & 0.129 & 0.0005 \\
15 & BM175b & 2002.09.27 & VLBA & LCP & 64 & 1.04\(\times\)0.498, -5.1 & 0.133 & 0.0005 \\
15 & BM175c & 2003.05.05 & VLBA & LCP & 64 & 1.01\(\times\)0.456, -5.12 & 0.121 & 0.0005 \\
15 & BM175a & 2003.07.04 & VLBA & LCP & 64 & 1.02\(\times\)0.459, -4.66 & 0.130 & 0.0005 \\ \hline
43 & BM215a & 2004.12.20 & VLBA & Dual & 64 & 0.344\(\times\)0.175, -8.31 & 0.143 & 0.0006 \\ \hline
44 & BY167 & 2022.02.14 & VLBA, -SC,-HN & Dual & 1024 & 0.627\(\times\)0.171, -22.4 & 0.113 & 0.0005 \\
88 & BY167 & 2022.02.14 & VLBA, -SC,-HN & Dual & 1024 & 0.467\(\times\)0.101, -19.2 & 0.0492 & 0.0015 \\ \hline \end{tabular} Note. – Column (1): Observing frequency. Column (2): Project code. Column (3): Date of observation. Column (4): Participating stations. Stations not involved are indicated with a minus sign. Column (5): Polarization. Column (6): Bandwidth. Column (7): Full width at half maximum (FWHM) and position angle of the synthesized beam. Column (8)-(9): Peak intensity and rms noise.
\end{table}
Table 1: Summary of NGC 4261 observations
We performed a prior amplitude calibration using the antenna system temperatures and gain curves with opacity corrections. The band-pass calibration was derived from scans on a bright calibrator source. Once calibration was completed, we averaged the data over frequency and conducted imaging and self-calibration using DIFMAP (Shepherd, 1997).
### Archival VLBA data
We also analyzed archival VLBA data of NGC 4261 at 15 and 43 GHz. The details of these observations are provided in Table 1. The BM166 data (15 GHz) were originally observed for polarimetry (Middelberg, 2004). The three-epoch BM175 datasets were observed at multiple frequencies but we only utilized the 15 GHz data for our analysis. In addition, we noted that the BM175c data were already published (Middelberg et al., 2005). The BM215a data (43 GHz) were also designed for polarimetry. For all these archival observations, we performed data reduction and imaging using AIPS and DIFMAP following standard procedures (e.g., Lu et al., 2023).
## 3 Data analysis
### Model fitting
We performed kinematic analysis using the 15 GHz data. To model the source structure, we fitted several circular Gaussian models to the complex visibilities using the MODELFIT task in DIFMAP. Then the fitted components in the four epochs were cross-identified based on their location, flux density and size (Table 2). To align the images, we used the compact bright core component as the reference position (Figure 1). The error in the fitted parameters was determined by considering the local SNR in the image around each feature (Lee et al., 2008). For positional uncertainties smaller than one-fifth of the minor beam size, we adopted the latter as the error estimate.
### Image analysis
To obtain the transverse structure of the jet, we measured the width of the double-sided jets. For the 15 GHz data, we used the stacked image created after convolving each individual image with a common beam. As for the 43 and 44 GHz data, we used the two individual images. Since the jet is almost along the east-west direction, we sliced the jet along PA=0\({}^{\circ}\) using the AIPS task SLICE and obtained a series of pixel-based transverse intensity profiles. Each transverse intensity profile was fitted with a Gaussian function to determine the full width at half maximum (FWHM), \(W_{\rm fit}\). Then we calculated the deconvolved jet width as \(W^{2}=W_{\rm fit}^{2}-W_{\rm res}^{2}\), where \(W_{\rm res}\) is the resolution along PA=0\({}^{\circ}\). To obtain the radial profile of the jet width, we calculated the distance from the central engine to each slice location, taking into account the measured core-shift relation (Haga et al., 2015).
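A minimal Python sketch of this slice-fitting procedure is given below (an illustrative stand-in, not the actual AIPS/SLICE pipeline); the Gaussian model and the quadrature deconvolution follow the relations above, and the demo profile is synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, x0, fwhm):
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return a * np.exp(-0.5 * ((x - x0) / sigma) ** 2)

def deconvolved_width(x, intensity, w_res):
    """x, intensity: one transverse slice; w_res: beam FWHM along the slice (mas)."""
    (a, x0, w_fit), _ = curve_fit(gaussian, x, intensity,
                                  p0=[intensity.max(), x[np.argmax(intensity)], w_res])
    return np.sqrt(max(w_fit**2 - w_res**2, 0.0))   # W^2 = W_fit^2 - W_res^2

x = np.linspace(-2.0, 2.0, 200)                     # transverse offset (mas)
slice_profile = gaussian(x, 1.0, 0.0, 0.8)          # synthetic slice with W_fit = 0.8 mas
print(deconvolved_width(x, slice_profile, 0.5))     # -> ~0.62 mas
```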
## 4 Results
### Source morphology
Figures 1 and 2 show the uniformly weighted CLEAN images of the NGC 4261 jet observed at 15, 43, 44 and 88 GHz. Two-sided jets were clearly detected at 15, 43 and 44 GHz, with the western side representing the approaching jet and the eastern side representing the receding jet.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline ID & Ep. & \(r\) (mas) & \(S_{\rm v}\) (mJy) & \(d\) (mas) & \(\beta_{\rm app}\) (c) \\ \hline Core & 1 & 0 & \(131\pm 13\) & \(0.29\pm 0.029\) & \\ & 2 & 0 & \(126\pm 12\) & \(0.27\pm 0.026\) & \\ & 3 & 0 & \(130\pm 11\) & \(0.26\pm 0.023\) & \\ & 4 & 0 & \(133\pm 12\) & \(0.26\pm 0.022\) & 0 \\ \hline W1 & 1 & \(4.42\pm 0.32\) & \(16\pm 11\) & \(1.28\pm 0.63\) & \\ & 2 & \(4.75\pm 0.26\) & \(10\pm 7\) & \(1.13\pm 0.52\) & \\ & 3 & \(5.33\pm 0.50\) & \(15\pm 13\) & \(1.89\pm 1.00\) & \\ & 4 & \(5.69\pm 0.54\) & \(15\pm 14\) & \(2.04\pm 1.08\) & \(0.59\pm 0.40\) \\ \hline W2 & 1 & \(2.61\pm 0.22\) & \(23\pm 11\) & \(1.22\pm 0.43\) & \\ & 2 & \(2.75\pm 0.22\) & \(26\pm 12\) & \(1.20\pm 0.44\) & \\ & 3 & \(3.35\pm 0.26\) & \(16\pm 8\) & \(1.27\pm 0.52\) & \\ & 4 & \(3.52\pm 0.28\) & \(17\pm 9\) & \(1.32\pm 0.56\) & \(0.46\pm 0.25\) \\ \hline W3 & 1 & \(1.00\pm 0.10\) & \(45\pm 9\) & \(0.56\pm 0.10\) & \\ & 2 & \(1.27\pm 0.10\) & \(34\pm 8\) & \(0.62\pm 0.13\) & \\ & 3 & \(1.97\pm 0.11\) & \(19\pm 6\) & \(0.76\pm 0.22\) & \\ & 4 & \(2.13\pm 0.13\) & \(16\pm 6\) & \(0.76\pm 0.26\) & \(0.57\pm 0.12\) \\ \hline W4 & 1 & \(0.46\pm 0.10\) & \(88\pm 11\) & \(0.31\pm 0.10\) & \\ & 2 & \(0.76\pm 0.10\) & \(55\pm 8\) & \(0.34\pm 0.10\) & \\ & 3 & \(1.19\pm 0.09\) & \(30\pm 6\) & \(0.49\pm 0.09\) & \\ & 4 & \(1.42\pm 0.09\) & \(21\pm 5\) & \(0.50\pm 0.11\) & \(0.45\pm 0.09\) \\ \hline W5 & 2 & \(0.39\pm 0.10\) & \(65\pm 8\) & \(0.20\pm 0.10\) & \\ & 3 & \(0.76\pm 0.09\) & \(55\pm 7\) & \(0.31\pm 0.09\) & \\ & 4 & \(0.86\pm 0.09\) & \(54\pm 7\) & \(0.41\pm 0.09\) & \(0.31\pm 0.14\) \\ \hline E1 & 1 & \(-1.22\pm 0.13\) & \(30\pm 10\) & \(0.87\pm 0.26\) & \\ & 2 & \(-1.44\pm 0.15\) & \(20\pm 8\) & \(0.89\pm 0.30\) & \\ & 3 & \(-1.69\pm 0.20\) & \(20\pm 8\) & \(1.07\pm 0.40\) & \\ & 4 & \(-1.96\pm 0.20\) & \(12\pm 6\) & \(1.00\pm 0.40\) & \(0.32\pm 0.14\) \\ \hline \end{tabular} Note. – Column (1): Component label. Column (2): Epoch (1: 2002.07.05, 2: 2002.09.27, 3: 2003.05.05, 4: 2003.07.04). Column (3): The radial distance from the core component. Column (4): The flux density. Column (5): The size (FWHM). Column (6): Apparent speed in units of the speed of light \(c\).
\end{table}
Table 2: Properties of the model-fitted Gaussian components
At 43 and 44 GHz, we observe a more extended structure compared to previous studies (Jones et al., 2000; Middelberg et al., 2005), although the apparent structures are slightly different due to the different beam shapes. At 88 GHz, with an angular resolution of 0.467\(\times\)0.101 mas, we obtain a clear image of the nuclear structure on a scale as small as 100 \(R_{\rm s}\). The derived size of the core is 0.09 mas, from which we estimate a brightness temperature (\(T_{\rm B}\)) of \(1.3\times 10^{9}\) K.
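For reference, the brightness temperature can be reproduced with the standard Gaussian-component formula \(T_{\rm B}=1.22\times 10^{12}\,S/(\nu^{2}\theta_{\rm maj}\theta_{\rm min})\) K (S in Jy, \(\nu\) in GHz, \(\theta\) in mas; the redshift correction is negligible at 31.6 Mpc). The sketch below uses the 88 GHz peak intensity from Table 1 as a proxy for the core flux density, which is an approximation on our part.

```python
# Brightness-temperature estimate for the 88 GHz core (circular Gaussian).
S_jy, nu_ghz, theta_mas = 0.0492, 88.0, 0.09       # peak intensity used as flux proxy
T_B = 1.22e12 * S_jy / (nu_ghz**2 * theta_mas**2)
print(f"T_B ~ {T_B:.1e} K")                        # ~1e9 K, same order as the quoted 1.3e9 K
```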
### Jet kinematics
Figure 1 displays the measured proper motions of the NGC 4261 jet. We note that a new component (W5) was ejected between September 2002 and July 2003. By conducting linear fits to the radial distances from the core over time, we determined the apparent speeds of these features (see Figure 3 and Table 2). The measured apparent speeds in the approaching jet range from \(0.31\pm 0.14\,c\) to \(0.59\pm 0.40\,c\), while that in the counter-jet is \(0.32\pm 0.14\,c\).
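Illustratively, these linear fits reduce to a least-squares slope in (epoch, separation) space. The sketch below reproduces the W5 speed from the Table 2 entries, with decimal-year epochs approximated from the observation dates, and the projected scale of 1 mas = 0.15 pc quoted in Section 1.

```python
import numpy as np

epochs = np.array([2002.74, 2003.34, 2003.50])     # 2002.09.27, 2003.05.05, 2003.07.04
r_mas = np.array([0.39, 0.76, 0.86])               # core separation of W5 (mas, Table 2)
mu, _ = np.polyfit(epochs, r_mas, 1)               # proper motion (mas/yr)

beta_app = mu * 0.15 / 0.3066                      # (pc/yr) divided by c in pc/yr
print(f"mu = {mu:.2f} mas/yr -> beta_app = {beta_app:.2f} c")   # ~0.3 c, cf. 0.31 +/- 0.14 c
```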
The intrinsic velocity (\(\beta_{\rm int}\)) and the viewing angle (\(\theta\)) of the jet can be constrained using the apparent velocity (\(\beta_{\rm app}\)) and the jet-to-counter-jet brightness ratio (\(R\)). These relationships can be expressed by the following equations:
Figure 1: Images of NGC 4261 at 15 GHz. These images are centered on the bright core position. The fitted circular Gaussian components are represented by dark violet circles superimposed on the contours. The cross-identified components are labeled at the bottom. The dark violet lines depict the best-fit line of proper motion. The slategrey filled ellipses on the left indicate the synthesized beam for each image. Contours begin at 3 times the rms value and increase by a factor of \(\sqrt{2}\).
Figure 2: Self-calibrated images of the NGC 4261 jet obtained from VLBA observations at 43, 44 and 88 GHz. The synthesized beam is shown at the bottom left corner of each image. Contours start at 3 times the rms value and increase by a factor of \(\sqrt{2}\).
\[\beta_{\rm int}=\frac{\beta_{\rm app}}{\sin\theta+\beta_{\rm app}\cos\theta} \tag{1}\]
and
\[\beta_{\rm int}=\frac{1}{\cos\theta}\left(\frac{R^{1/(2-\alpha)}-1}{R^{1/(2-\alpha)}+1}\right) \tag{2}\]
where \(\beta_{\rm int}\) and \(\beta_{\rm app}\) are in units of \(c\), and \(\alpha\) represents the spectral index of the jet (\(S\propto\nu^{+\alpha}\)). We adopted \(\alpha=-1\) based on the spectral index map from Haga et al. (2015).
We determined the longitudinal intensity profile along the jet within 3 mas from the core in the stacked 15 GHz image for both the approaching and receding jet. As shown in the top panel of Figure 4, the brightness ratio varies from \(\sim\)1 to 4. In the same region, we measured the apparent speeds of the approaching jet, which range from 0.31 \(c\) to 0.57 \(c\). By combining these values with the brightness ratios, we were able to constrain the viewing angle to be \(\theta\gtrsim 46^{\circ}\) (Figure 4, bottom).
To measure the brightness ratio of the approaching jet to the receding jet, we excluded the core region to avoid possible biases. This is because the observed central bright core may suffer from blending effects between the base of the approaching and the receding jet, and the emission from the receding jet may also be absorbed by the accretion flow. In doing so, we employed two approaches. First, we excluded the innermost 1 mas region of the flow, which corresponds to twice the minor axis size of the restoring beam (see, e.g., Mertens et al., 2016). With this exclusion, the brightness ratio is between \(\sim\)1.4 and 3 (Figure 4, top). This range provides an estimate for the viewing angle of about 54\({}^{\circ}\) to 84\({}^{\circ}\) (Figure 4, bottom).
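Setting Eq. (1) equal to Eq. (2), one can verify that the curves intersect at \(\tan\theta=\beta_{\rm app}(1-B)/B\), with \(B=(R^{1/(2-\alpha)}-1)/(R^{1/(2-\alpha)}+1)\). A minimal sketch (our rearrangement, not the authors' code) then recovers the quoted bounds from the extreme (\(\beta_{\rm app}\), \(R\)) pairs:

```python
import numpy as np

def crossing_angle(beta_app, R, alpha=-1.0):
    """Viewing angle (deg) and beta_int where Eq. (1) meets Eq. (2)."""
    q = R ** (1.0 / (2.0 - alpha))
    B = (q - 1.0) / (q + 1.0)             # beta_int * cos(theta), from Eq. (2)
    theta = np.arctan(beta_app * (1.0 - B) / B)
    return np.degrees(theta), B / np.cos(theta)

print(crossing_angle(0.31, 3.0))   # ~ (54 deg, ~0.31 c)
print(crossing_angle(0.57, 1.4))   # ~ (84 deg, ~0.54 c)
```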
Alternatively, we calculated the brightness ratio by considering the clean components in each individual 15 GHz map. By placing two rectangular boxes of the same size and distance from the core on both sides of the jet, we obtained a brightness ratio ranging from 1.6 to 2. Additionally, both the 43 and 44 GHz maps also provided a brightness ratio of about 2. Overall, these results are all within the range of \(1.4\lesssim R\lesssim 3\) and point toward a very similar viewing angle range.
Notably, we also measured an apparent speed of 0.32 \(c\) for the counter-jet at separations from 1 mas to 3 mas. As shown in the bottom panel of Figure 4, this apparent speed intersects with the lines given by the measured brightness ratio. These intersections provide a viewing angle range as well: from \(\sim 64^{\circ}\) (for \(R=3\)) to \(80^{\circ}\) (for \(R=1.4\)). This is highly consistent with the above analysis using the apparent speeds of the approaching jet.
Figure 4: Top: The radial intensity profiles of the jet (green) and counter-jet (blue) are shown, along with their corresponding brightness ratio \(R\) (red). The brightness ratio within the shaded area was used to constrain the jet viewing angle. Bottom: The allowed range of the viewing angle and intrinsic velocity of NGC 4261 jet.
Figure 3: Radial distance from the core versus time for the cross-identified components.
Considering all the above results, we obtain a conservative range of viewing angles from \(54^{\circ}\) to \(84^{\circ}\) and an intrinsic speed range from \(\sim 0.30\,c\) to \(0.55\,c\).
### The inner collimation profile
We analyzed the radial width profile of the upstream jet, including measurements at 15, 43 and 44 GHz (Section 3.2). We also considered the 88 GHz core size as an upper limit for the jet width and estimated its distance to the SMBH to be \(\sim\)0.036 mas based on the core-shift relation (Haga et al., 2015). All measurements were converted to the de-projected physical scales in units of \(R_{\rm s}\).
In the top panel of Figure 5, we present the combined results obtained from both the approaching and the receding jet. The inner width profile exhibits a simple power-law relationship, with the form \(W\propto r^{0.56\pm 0.07}\), where \(W\) is the de-convolved jet width and \(r\) denotes the de-projected distance from the black hole. This power-law relationship corresponds to a parabolic jet shape.
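The power-law index is obtained from a straight-line fit in log-log space; a brief sketch follows, in which the (r, W) pairs are made-up placeholders standing in for the measured 15/43/44/88 GHz widths (the paper's fit to the real data gives \(0.56\pm 0.07\)).

```python
import numpy as np

r = np.array([2e2, 5e2, 1e3, 2e3, 4e3])            # de-projected distance (R_s), placeholder
w = np.array([8.0, 14.0, 21.0, 30.0, 44.0])        # deconvolved width (R_s), placeholder
a, logA = np.polyfit(np.log10(r), np.log10(w), 1)  # W = A * r^a in log-log space
print(f"W ~ r^{a:.2f}")                            # ~0.57 for these placeholder points
```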
We also measured the width of the downstream jet based on previous multi-frequency (1.4, 2.3, 5.0, 8.4, and 22 GHz) VLBA observations (Nakahara et al., 2018). We re-imaged the source and determined the jet width as in Section 3.2. The results are shown in the bottom panel of Figure 5. With these multi-frequency jet width measurements, the width profile clearly shows a transition from a parabolic to a conical shape. We note that this transition is in good agreement with the broken power-law function fitted by Nakahara et al. (2018) (see their Eq.(1) and Table 2) 1. We emphasize that the jet collimation is already completed at sub-parsec scales, with the transition location of \(\sim\)0.61 pc or \(4\times 10^{3}R_{\rm s}\) being significantly smaller than the Bondi radius (99.2 pc or \(r_{\rm B}\sim 6.5\times 10^{5}R_{\rm s}\), Balmaverde et al., 2008) 2.
Footnote 1: We shifted the fitting line to account for the different black hole masses used in their study (\(4.9\times 10^{8}M_{\odot}\)) and our study (\(1.62\times 10^{9}M_{\odot}\)).
Footnote 2: In their original paper, the calculated Bondi radius was 32 pc, based on a black hole mass of \(5.25\times 10^{8}M_{\odot}\), which is 3.1 times smaller than the mass we used. Therefore, we adopted a Bondi radius of 99.2 pc (\(r_{\rm B}\propto M_{\rm BH}\).)
## 5 Discussion
In this study, we presented the first multi-epoch kinematic analysis of the NGC 4261 jet. Previous studies by Piner et al. (2001) reported an apparent speed of \(0.83\pm 0.11\) mas/year at about 5-6 mas from the core based on two-epoch observations. By combining this value with the jet/counter-jet brightness ratio and the spectral index, they derived a jet viewing angle of \(63^{\circ}\pm 3^{\circ}\). We found that the apparent jet speeds in our study are consistent with the previous results. The derived viewing angle by Piner et al. (2001) also falls within our constrained range. In addition, with the caveat that the measured proper motions should not be over-interpreted, the increase in the apparent speeds from \(0.31\,c\) to \(0.59\,c\) suggests that the jet may be undergoing acceleration.
Figure 5: Top: Power-law fit of the jet width versus de-projected distance (assuming a viewing angle of 63°) from the core using data at 15, 43, 44 and 88 GHz. Bottom: Same as the top panel, but including the 1, 2, 5, 8, and 22 GHz data. The black solid line represents the radial width profile fit from Nakahara et al. (2018). The vertical dashed line indicates the location of the structural transition. The black and grey areas represent the size of the event horizon surface for black holes with maximum spin and no spin, respectively.
We note that this acceleration is observed on the sub-parsec scale (de-projected), largely coinciding with the jet collimation region. Future high-resolution and high-cadence observations will allow a more detailed study of this jet acceleration.
Compared to previous studies (Nakahara et al., 2018), we provide a more comprehensive examination of the innermost jet structure using the high-sensitivity data. We confirm that the innermost jet exhibits a parabolic shape. Notably, we found that the transition location of the width profile (0.61 pc or \(\sim 4\times 10^{3}R_{\rm s}\)) is significantly smaller than the corresponding Bondi radius (99.2 pc or \(\sim 6.5\times 10^{5}R_{\rm s}\)). Interestingly, this behavior is similar to that observed in the nearby radio source, NGC 315, where the jet transition location is also at a significantly smaller distance from the core than the Bondi radius (Boccardi et al., 2021; Park et al., 2021).
Similar to NGC 315, we propose that the shape transition in NGC 4261 is influenced by external pressure from the surrounding medium. Following the discussions on NGC 315 by Boccardi et al. (2021), we investigate potential sources of the external pressure in NGC 4261. One possibility is the ADAF itself. Previous observations and theoretical models have shown that the ADAF model is crucial in explaining the X-ray emission in NGC 4261 (Gliozzi et al., 2003; Nemmen et al., 2014). And it is also suggested that the ADAF is truncated by an outer thin disk at a location of \(\sim 10^{3}-10^{4}R_{\rm s}\)(Gliozzi et al., 2003; Nemmen et al., 2014). Notably, this truncation location is comparable to the location of the jet shape transition. Therefore, the parabolic jet profile may be initially collimated by the thick ADAF itself.
Alternatively, the external pressure may be provided by a non-relativistic disk wind rather than the ADAF (e.g., Blandford and Globus, 2022). The disk wind is believed to originate from the ADAF, and its role in shaping the parabolic geometry has been studied in M 87 (e.g., Globus and Levinson, 2016; Nakamura et al., 2018). In the case of NGC 4261, considering reasonable conditions (Boccardi et al., 2021; Globus and Levinson, 2016), the wind may efficiently collimate and confine the jet.
On the other hand, the transition in the internal flow, from a magnetically dominated to a particle-dominated regime, could also account for the observed jet profile transition. A recent semi-analytical model proposed by Kovalev et al. (2020) supports this idea. According to their model, the jet profile transition can occur under the influence of a single power-law external pressure profile. Importantly, the location of the transition point in the profile is closely tied to the initial magnetization of the jet and can lie within the region well below the Bondi radius (see Figure 8 in Kovalev et al., 2020). Based on these considerations, we propose that the initial confinement of the jet is also possibly due to the magnetic pressure that dominates in a region far below the Bondi radius.
Lastly, it is interesting to note that the jet width in NGC 4261 appears to be comparable to that in M 87 on the same physical scales. This contradicts the previous findings of Nakahara et al. (2018), who found the jet width in NGC 4261 to be much larger than that in M 87. However, this discrepancy can be attributed to the use of a smaller black hole mass in their study.
## 6 Summary
In this paper, we presented multi-frequency VLBI studies of the kinematics and collimation of the two-sided jets in NGC 4261 on sub-parsec scales. Our findings are summarized as follows:
1. We obtained VLBI images of NGC 4261 at 15, 43, 44 and 88 GHz. At 43 and 44 GHz, we observed a more extended double-sided structure compared to previous studies. At 88 GHz, we obtained a clear image of the nuclear structure at a scale as small as 100 \(R_{\rm s}\). We found that the core size at 88 GHz is 0.09 mas and that the brightness temperature is \(\sim 1.3\times 10^{9}\) K.
2. We measured proper motions in both the approaching and receding jets on sub-parsec scales. The measured apparent speeds in the approaching jet range from \(0.31\pm 0.14\,c\) to \(0.59\pm 0.40\,c\). The increase in apparent speeds with distance from the core suggests an acceleration of the jet, which will need to be confirmed by future observations. Furthermore, we also observed a jet speed of \(0.32\pm 0.14\,c\) in the counter-jet.
3. Using the measured apparent velocity and the jet-to-counter-jet brightness ratio, we constrained the jet viewing angle to between \(54^{\circ}\) and \(84^{\circ}\). We also found that the intrinsic speed is between \(0.30\,c\) and \(0.55\,c\). Combining these results with the jet collimation profile suggests that the jet acceleration region possibly coincides with the jet collimation region.
4. We found a parabolic shape for the upstream jet on both sides, described by \(W\propto r^{0.56\pm 0.07}\). We emphasize that the jet collimation is already completed at sub-parsec scales. Combining our findings with previous studies, we found that the transition location of the jet structure (0.61 pc or \(\sim 4\times 10^{3}R_{\rm s}\)) is significantly smaller than the corresponding Bondi radius (99.2 pc or \(\sim 6.5\times 10^{5}R_{\rm s}\)).
This behavior is similar to what has been observed in NGC 315. Like NGC 315, we interpret this behavior as the initial confinement of the jet by the external pressure exerted by either the geometrically thick, optically thin ADAF or the disk wind launched from it. Alternatively, the shape transition may also be explained by the internal flow transition from a magnetically dominated to a particle-dominated regime.
We thank the anonymous referee for helpful comments and suggestions. This work was supported by the Key Program of the National Natural Science Foundation of China (grant no. 11933007), the Key Research Program of Frontier Sciences, CAS (grant no. ZDBS-LY-SLH011), the Shanghai Pilot Program for Basic Research, Chinese Academy of Sciences, Shanghai Branch (JCYJ-SHFY-2022-013) and the Max Planck Partner Group of the MPG and the CAS. The Very Long Baseline Array is operated by the National Radio Astronomy Observatory, a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.
|
2309.15012 | Synchrotron X-ray phase-contrast imaging of ultrasonic drop atomization | Ultrasonic atomization is employed to generate size-controllable droplets for
a variety of applications. Here, we minimize the number of parameters dictating
the process by studying the atomization of a single drop pending from an
ultrasonic horn. Spatiotemporally resolved X-ray phase-contrast imaging
measurements show that the number-median sizes of the ejected droplets can be
predicted by the linear Navier-Stokes equations, signifying that the size
distribution is controlled by the fluid properties and the driving frequency.
Experiments with larger pendant water drops indicate that the fluid-structure
interaction plays a pivotal role in determining the ejection onset of the
pendant drop. The atomization of viscoelastic drops is dictated by extended
ligament formation, entrainment of air, and ejection of drop-encapsulated
bubbles. Existing scaling laws are used to explain the required higher input
amplitudes for the complete atomization of viscoelastic drops as compared to
inviscid drops. Finally, we elucidate the differences between capillary
wave-based and cavitation-based atomization and show that inducing cavitation
and strong bubble oscillations quickens the onset of daughter drop ejection but
impedes their size control. | Anunay Prasanna, Luc Biasiori-Poulanges, Ya-Chi Yu, Hazem El-Rabii, Bratislav Lukić, Outi Supponen | 2023-09-26T15:29:07Z | http://arxiv.org/abs/2309.15012v1 | # Synchrotron X-ray phase-contrast imaging of ultrasonic drop atomization
###### Abstract
Ultrasonic atomization is employed to generate size-controllable droplets for a variety of applications. Here, we minimize the number of parameters dictating the process by studying the atomization of a single drop pending from an ultrasonic horn. Spatiotemporally resolved X-ray phase-contrast imaging measurements show that the number-median sizes of the ejected droplets can be predicted by the linear Navier-Stokes equations, signifying that the size distribution is controlled by the fluid properties and the driving frequency. Experiments with larger pendant water drops indicate that the fluid-structure interaction plays a pivotal role in determining the ejection onset of the pendant drop. The atomization of viscoelastic drops is dictated by extended ligament formation, entrainment of air, and ejection of drop-encapsulated bubbles. Existing scaling laws are used to explain the required higher input amplitudes for the complete atomization of viscoelastic drops as compared to inviscid drops. Finally, we elucidate the differences between capillary wave-based and cavitation-based atomization and show that inducing cavitation and strong bubble oscillations quickens the onset of daughter drop ejection but impedes their size control.
keywords: Ultrasonic atomization, Faraday waves, viscoelasticity, Rayleigh-Taylor instability
## 1 Introduction
Atomization is defined as the process of breaking up bulk liquid into smaller droplets. Ultrasonic transducers provide atomization at lower energy costs than mechanical atomizers, and therefore, ultrasonic atomization is typically the desired technique for most atomization applications. These include the preparation of specialty alloy powders (Lierke and Griesshammer, 1967), the creation of aerosols for pulmonary drug delivery (Taylor and Mccallion, 1997), encapsulation in the food and pharmaceutical industry (Klaypradit and Huang, 2008), emulsification processes (Taha et al., 2020) and chemical sonoreactors (Mc Carogher et al., 2021). Applications involving atomization require high throughput in order to create millions of droplets at a time (Tsai et al., 2012). Furthermore, the size distribution of the ejected droplets needs to be predictable and controllable to optimize the efficiency of the different applications (Rajan and Pandit, 2001; Gogate, 2015).
Several attempts have been made to elucidate the physical mechanisms involved in ultrasonic atomization and to predict the resulting end products. Sollner (1936) hypothesized that cavitation occurs in a liquid film subjected to ultrasound excitation, and that hydraulic shock generation in the film leads to the ejection of daughter drops. On the other hand, Lang (1962) showed that capillary waves are first formed on the surface of the liquid film, and the associated hydrodynamic instabilities are the primary mechanisms for the ejection of daughter drops. In that case, the number-median diameter of the ejected drops, \(d_{e}\), could be predicted as a constant fraction of the capillary wavelength (Lang, 1962). More recent studies have increased the parameter space and produced empirical correlations to predict the mean droplet size in ultrasonic atomization (Rajan and Pandit, 2001; Ramisetty et al., 2013). The currently accepted theory is the conjunction theory or the combined cavitation-capillary theory, which states that cavitation events contribute to the creation of capillary waves on the surface, which then rupture to form daughter droplets. A multitude of studies have demonstrated the conjunction theory in practice (Tomita, 2014; Simon et al., 2015; Cailly et al., 2023).
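For context, Lang's prediction can be written as \(d_{e}=0.34\,\lambda\), with the capillary wavelength \(\lambda=(8\pi\sigma/\rho F^{2})^{1/3}\) from the Kelvin equation evaluated at the subharmonic surface-wave frequency \(F=f_{d}/2\). The sketch below is illustrative only, using textbook water properties rather than values from this study.

```python
import numpy as np

def lang_median_diameter(f_drive, sigma, rho):
    """Lang (1962): f_drive in Hz, sigma in N/m, rho in kg/m^3; returns d_e in metres."""
    lam = (8.0 * np.pi * sigma / (rho * (f_drive / 2.0) ** 2)) ** (1.0 / 3.0)
    return 0.34 * lam

print(lang_median_diameter(20e3, 0.072, 998) * 1e6)   # ~90 micrometres for water at 20 kHz
```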
Interestingly, a number of state-of-the-art applications are attempting to employ high-frequency excitation (in the MHz range) on drops attached to acoustic transducer surfaces to produce daughter droplets (Tsai et al., 2012; Simon et al., 2015). The acoustic excitation of pendant drops has mainly been
studied from a fundamental point of view. Research has been dedicated to estimating typical oscillation behavior and frequencies (Strani and Sabetta, 1984; Bostwick and Steen, 2014; Chang et al., 2015), surface capillary wave formation (Wilkes and Basaran, 1997; Vukasinovic et al., 2007b; Tan et al., 2010), and the hysteretic response of the drops to the applied forcing (DePaoli et al., 1995; Wilkes and Basaran, 1999). One of the first notable attempts to explain drop atomization behavior was made by Goodridge et al. (1997), who performed experiments on millimetric sessile drops at low-frequency excitations (\(f_{d}=20-60\) Hz). Their drops showed nonlinear wave-amplitude behavior, with the threshold amplitude for the ejection of daughter droplets being a function of the applied frequency and of either the surface tension or the viscosity, depending on whether the drops were inviscid or viscous, respectively. James et al. (2003b) studied the atomization characteristics of a sessile drop at a higher driving frequency (\(f_{d}=1000\) Hz). They found that the atomization depended on the coupled fluid-structure interaction of the drop and the vibrating surface, with the drop bursting completely into daughter droplets, provided that the natural frequency of the combined structure and drop was in resonance with the driving frequency. Further studies have investigated the ejection mechanism (James et al., 2003a; Vukasinovic et al., 2007b; Deepu et al., 2013) and the required threshold amplitudes for ejection (Deepu et al., 2018).
However, most of these studies are typically undertaken at low frequencies (usually a multiple of the natural frequency of the drop) and do not comprehensively report on higher-frequency daughter drop ejection characteristics, making them difficult to compare with actual applications. Furthermore, depending on the method of application of the ultrasonic excitation, disagreements persist regarding the dominant mechanism of droplet ejection. Previous experiments indicate that the effects are case-specific, with the ejected drop size distribution and the throughput being dictated by a large parameter space, including fluid properties and the characteristics of the applied ultrasound. Antonevich (1959) hypothesized from experiments that the stochastic nature of cavitation would lead to a wider range of ejected drop sizes as compared to having ejection from the rupture of capillary waves only. Recent studies have further attempted to quantify the specific role of cavitation in liquid film atomization (Zhang et al., 2021; Zhang and Yuan, 2022). However, there still remains some debate on the distinct contributions of capillary waves and cavitation-related activities with respect to the ejection process, and therefore, there is a need to outline and compare the roles of
both mechanisms. The complexity involved in resolving all the spatiotemporal scales of the problem implies that the current studies used for predicting ejected drop sizes are only reduced-order solutions. We believe that with simple configurations, this study can shed light not only on which mechanism dominates drop atomization, but also on the end products of the atomization process based on the atomization mechanism. This should also enable us to define which mechanism is better suited for atomization applications.
In this work, the phenomenology of drop atomization is characterized by utilizing advanced experimental techniques. A pendant drop is attached to the tip of an ultrasonic horn capable of nucleating vapor bubbles within a liquid volume (Biasiori-Poulanges et al., 2023). By using different configurations, we initiate capillary waves on the drop interface, both with and without cavitation. To further simplify the problem, we reduce the number of parameters to analyze by fixing the input amplitude and driving frequency of the ultrasonic horn. Time-resolved X-ray phase-contrast imaging coupled with conventional shadowgraphy is employed to elucidate the underlying mechanisms involved. This allows us to overcome the droplet-overlap problems inherent to dense sprays and provides a detailed visualization of surface instabilities, cavitation, and ejection mechanisms underlying the drop fragmentation process.
The outline of the paper is as follows. First, we describe the experimental methods. We then provide a qualitative description of the ejection process for water droplets and quantify the ejected drop sizes. Pendant droplets of 2% chitosan, a viscoelastic fluid, are then subjected to the same periodic forcing, and a possible explanation for their ejection dynamics is provided. Chitosan was chosen as our fluid with viscoelastic properties due to its relevance in ultrasound-enhanced bioadhesive and transdermal drug delivery (Ma et al., 2022), which are topics we work with extensively. Further configurations of droplets are tested to study the effects of cavitation on vibration-induced pendant drop atomization in greater detail.
## 2 Experimental Apparatus and Methodology
### Pendant drop and ultrasonic excitation
Drops of water (at volumes of \(V=50\) and \(100\)\(\mathrm{\SIUnitSymbolMicro L}\)) and \(2\%\) chitosan in a hydrochloric acid (HCl) and water solution are used for the experiments. The preparation of the \(2\%\) chitosan solution is outlined in Appendix A. The drops are then placed on the active face of the horn with the help of a microsyringe. Due to the expected high strain rates, we select a viscosity value closer to the infinite shear rate value expected for the \(2\%\) chitosan (\(\mu_{\infty}=100\) mPa s), as estimated by shear rheometry (Cho et al., 2006). Modeling the nonlinear, viscoelastic nature of chitosan in this case was non-trivial and beyond the scope of the paper.
A \(1/2\)" diameter ultrasonic horn (Branson Ultrasonic Sonifier SFX550, \(550\) W) with a driving frequency of \(20\) kHz is operated at \(40\%\) of maximum amplitude, which corresponds to a maximum peak-to-peak displacement of \(A_{\mathrm{pp}}=57\)\(\mathrm{\SIUnitSymbolMicro m}\). The horn adjusts its power output based on the viscous dissipation of the liquid to provide the same peak-to-peak displacement for a given amplitude percentage irrespective of the liquid attached to its surface. The horn has a transient period of approximately \(16\)\(\mathrm{ms}\) before the maximum amplitude has fully developed and the acoustic field has set in.
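For reference, treating the motion of the active face as simple harmonic gives quick estimates of its peak velocity and acceleration. The following minimal Python sketch, using only the nominal values quoted above (it is not part of the original analysis), illustrates the calculation:

```python
import numpy as np

# Simple harmonic motion of the horn face: x(t) = A*sin(2*pi*f_d*t)
f_d = 20e3       # driving frequency [Hz]
A_pp = 57e-6     # peak-to-peak displacement at 40% amplitude [m]
A = A_pp / 2     # displacement amplitude [m]

omega = 2 * np.pi * f_d
v_peak = omega * A      # peak velocity of the active face [m/s]
a_peak = omega**2 * A   # peak acceleration [m/s^2]

print(f"v_peak = {v_peak:.2f} m/s")    # ~3.6 m/s
print(f"a_peak = {a_peak:.2e} m/s^2")  # ~4.5e5 m/s^2
```

The resulting peak velocity of roughly 3.6 m s\(^{-1}\) is consistent with the 0 - 10 m s\(^{-1}\) decoder range used for the laser scanning vibrometry measurements described below.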
Figure 1: (a) Experimental setup depicting the high-speed imaging approaches employed: Synchrotron X-ray phase-contrast imaging (Camera 1) and shadowgraphy (Camera 2) (b) Laser scanning vibrometry setup to determine the response of the active face of the ultrasonic horn
### High-speed imaging
The experiments are carried out at the ID19 beamline of the European Synchrotron Radiation Facility (Weitkamp et al., 2010). The polychromatic X-ray beam generated by a long-period undulator set to a 20-mm gap is used to probe the fast ultrasound-induced atomization dynamics. The (partial) spatial coherence properties and the high flux of the X-ray beam are leveraged to improve the contrast between dissimilar phases and resolve fluid interfaces while preserving the overall geometrical representation in direct space (Biasiori-Poulanges et al., 2023). The X-ray spectrum used to illuminate the sample is filtered with mandatory optical elements along the 145-m-long vacuum flight tube to provide heat-load moderation (2.2-mm thick diamond window and a series of thin carbon and beryllium windows). The X-ray detector consists of a 1-mm thick LuAG:Ce (Ce-doped Lu\({}_{3}\)Al\({}_{5}\)O\({}_{12}\)) scintillator optically coupled to the Photron SA-Z ultra-fast camera (High-speed camera 1 in Fig. 1(a)) equipped with 2.1\(\times\) magnification (100:210 Hasselblad tandem optic), providing a pixel size of 9.52 µm. Due to a micrometric source size and (almost) parallel illumination, the penumbral blur is orders of magnitude below the pixel size. The detector assembly is positioned 5.5 m downstream of the sample, ensuring that the propagation-based interference between transmitted X-rays results in an increased edge contrast due to (partial) spatial beam coherence while fulfilling the near-field condition (Wilkins et al., 1996).
Shadowgraphy is performed simultaneously on an axis perpendicular to the X-ray phase-contrast imaging, and is captured by a Photron FASTCAM NOVA S12 (High-speed camera 2 in Fig. 1(a)). Both cameras were set at recording frame rates of 80 kHz. The inset of Fig. 1(a) clearly depicts the differences between the X-ray and shadowgraph images. Every ejected daughter droplet is in focus in the phase-contrast images, in contrast to the shadowgraphs. The surface and the interior of the droplet are clearly visible in the phase-contrast images, which allows us to evaluate qualitative differences between the test cases in detail. The clarity of the phase-contrast images enables us to evaluate ejected droplet size distributions accurately - an estimate that would at best be approximate with the shadowgraphs. Further differences between the images from the two techniques can be evaluated by the reader from the Supplementary Videos provided for a water droplet with contact radius, \(R_{c}=1.4\) mm.
### Laser scanning vibrometry
To assess the dynamical behavior of the active face of the ultrasonic horn to which the pendant drop is attached, laser scanning vibrometry is employed. It is an optical method that uses the Doppler effect to measure the velocity of the surface (Drain, 1980). The experimental setup is depicted in Fig. 1(b). A Polytec Scanning Vibrometer (PSV-400) with a controller (OFV-5000), capable of measuring frequencies up to 1 MHz, is used for the measurement. The scanning head (OFV-505) has a Helium-Neon (He-Ne) laser with a wavelength of \(\lambda=633\) nm. The VD-09 decoder with a maximum range of 1000 mm s\({}^{-1}\) V\({}^{-1}\) was used, since the typical velocity range expected for the horn was around 0 - 10 m s\({}^{-1}\). Grid points to be scanned by the vibrometer are defined on the active face of the horn. The ultrasonic horn is activated at different horn amplitudes (\(A=30\%,\;40\%\) and \(50\%\)), and after the transient period of the horn, the vibrometer is set to scan the defined grid points. This is used to obtain the frequency response and the velocity amplitude of the active face of the horn, with the velocity amplitude at the different scan points dictating the dominant mode shape of the active face of the horn.
## 3 Results
### Qualitative description of ejection process for water drops
Vibration-based atomization of water drops of volumes \(V=50\) and 100 \(\mu\)L, corresponding to contact radii \(R_{c}=1.4\) mm and \(R_{c}=2.8\) mm respectively, is investigated. From here on, the contact radius will be used to distinguish the drops, as this can be carried over to other drop configurations used later. Exciting the drop under periodic forcing leads to the formation of capillary waves as reported by other studies for both pendant and sessile drops (Wilkes and Basaran, 1997; Goodridge et al., 1997; Biasiori-Poulanges et al., 2022). The frequency content of wave packets can be measured in a manner similar to Vukasinovic et al. (2007a), by taking the intensity variation of a pixel on the image and calculating its power spectral density over time. The results have been briefly summarized here, with a detailed description available in James et al. (2003b) and Vukasinovic et al. (2007a) for sessile water drops.
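A minimal sketch of this measurement is given below, assuming the recorded frames have already been loaded as a NumPy array; the synthetic trace used here is only a stand-in for the gray-level history of a single pixel. A peak in the spectrum at the driving frequency indicates harmonic waves, while a peak at half the driving frequency indicates subharmonic (Faraday) waves.

```python
import numpy as np
from scipy.signal import periodogram

fps = 80e3   # camera frame rate [Hz]
f_d = 20e3   # driving frequency [Hz]

# In practice: trace = frames[:, row, col] (gray levels of one pixel over time).
# A synthetic stand-in with harmonic and subharmonic content is used here.
t = np.arange(4096) / fps
trace = (np.sin(2 * np.pi * f_d * t)
         + 0.5 * np.sin(2 * np.pi * (f_d / 2) * t)
         + 0.1 * np.random.randn(t.size))

f, Pxx = periodogram(trace, fs=fps)
for target in (f_d, f_d / 2):          # harmonic and subharmonic bands
    i = np.argmin(np.abs(f - target))
    print(f"PSD near {target / 1e3:.0f} kHz: {Pxx[i]:.3e}")
```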
The atomization of the drop can be characterized into six stages as visualized in Fig. 2. First, the drop radially oscillates for a while, before axisymmetric standing waves become visible in Fig. 2(a). These waves require
minimal excitation amplitude to exist (James et al., 2003b) and are harmonic in nature, with the frequency content of the wave packets being the same as the driving frequency (\(f_{d}=20\) kHz). As the excitation amplitude increases, azimuthal waves develop at the contact line of the drop along the horn (seen in Fig. 2(b)). These waves are subharmonic and have a frequency content equivalent to half the driving frequency, implying that they correspond to the classical Faraday instability (Faraday, 1831). They move slowly downwards along the surface, from the contact line to the tip of the drop, and interact with the existing harmonic, axisymmetric waves to create higher-order spatial modes on the drop. As the amplitude of the waves on the drop increases, crests and troughs on the surface grow progressively and become more visible. Over time, the interaction of different wave modes leads to the creation of a time-dependent "lattice" mode on the surface of the drop as seen in Fig. 2(c). The process is chaotic, and it is difficult to distinguish a single dominant frequency for this phase of drop vibration. Fig. 2(d) shows the onset of ejection on the primary drop surface, leading to the formation of several daughter droplets. Here, we define the onset of ejection as the instant when the first daughter droplets are ejected from the primary drop surface. Shortly after the onset,
there are ejection sites over the entire pendant drop interface. Ejection of daughter droplets occurs due to the collapse of troughs on the pendant drop interface. Sometimes ligaments are ejected instead of droplets, which are then susceptible to the Rayleigh-Plateau instability, leading to the formation of satellite droplets (Eggers and Villermaux, 2008). Over time, the number of daughter droplets and ligaments ejected increases as seen in Fig. 2(e), before the pendant drop completely atomizes (see Fig. 2(f)) and only the ejected daughter droplets exist in the field of view.
The overall atomization dynamics of the pendant drop are similar when the volume is increased (\(R_{c}=2.8\) mm) as seen in Fig. 3. Axisymmetric harmonic waves are created first in the droplet. However, unlike in the case of \(R_{c}=1.4\) mm, some locations on the drop have already generated subharmonic waves before the axisymmetric waves have fully set in. Three such locations are depicted in Fig. 3(a). These locations behave as "point sources", generating subharmonic waves that continue to interact with the harmonic waves, clearly influencing the ejection process as shown in Fig. 3(b). The first daughter droplets are ejected from the locations of the point sources, as opposed to the whole drop surface as for the case of \(R_{c}=1.4\) mm.
Since the point sources are not created by the fluid itself, it can be conjectured that the ultrasonic horn plays a role in their creation. Investigations of the frequency response and the mode shapes of the horn face were carried out using laser scanning vibrometry. Fig. 4 depicts the velocity amplitudes of the active face of the horn for three different horn amplitude percentages. The contour represents the dominant mode of the
Figure 3: Wave patterns on a pendant water drop with \(R_{c}=2.8\) mm. The snapshots of X-ray phase-contrast images are labeled with their non-dimensional times, \(t/T_{d}\). (a) Subharmonic semi-circular waves originate from “point sources”, where the sources are depicted by numbers, and the waves are marked by black dotted lines; (b) Both subharmonic semi-circular waves and harmonic waves are visible as indicated by the black dotted box. Droplet ejection is seen from the location of the “point sources” near the contact line.
horn at its fundamental frequency and indicates that there exist regions of high localized displacement with nearly twice the amplitude of the lower displacement regions on the horn face. While the amplitudes of all the regions in the horn are sufficient to create both harmonic and subharmonic waves (Kumar and Tuckerman, 1994), it is evident from our experiments that the subharmonic waves need a longer time to set in than the harmonic waves. With localized regions of high amplitude on the active face of the horn, it is easier to cross the threshold required to generate subharmonic waves on the drop surface. Therefore, these locations on the horn face seem to behave as "point sources", allowing for a quicker transition between the different stages of drop ejection (harmonic waves - subharmonic waves - "lattice" mode - ejection onset). This discussion clearly indicates that the fluid-structure interaction plays an important role in the atomization process, particularly in determining the onset of ejection, and the regions on the primary drop from where the ejection of daughter droplets begins.
### Ejected droplet size distribution
The daughter droplet sizes ejected by the pendant water drop can be counted from the frames of the phase-contrast images. A Canny edge detection algorithm was employed to estimate the droplet size distribution from selected frames of the different test cases (Canny, 1986). A few shadowgraph frames at the same instants as the phase-contrast images were also processed to
Figure 4: The velocity amplitude of the dominant mode of the active face of the ultrasonic horn at three different amplitude percentages obtained by laser scanning vibrometry. The two dotted circles drawn on the shape of \(A=40\%\) show the expected positions of the pendant drops with \(R_{c}=1.4\) mm (white) and \(R_{c}=2.8\) mm (black). It can be noticed that the high amplitude regions on the horn face behave as “point sources” for the pendant drop with \(R_{c}=2.8\) mm
estimate whether the size distributions obtained were consistent. To prevent the edge detection algorithm from detecting the change in image contrast generated by the surface waves, the primary pendant drop was masked and excluded from the field of view. For the case of \(R_{c}=1.4\) mm, 80% of the captured field of view can be isolated to count the droplets, while for \(R_{c}=2.8\) mm, 60% of the field of view is used.
In the earlier stages of ejection where fewer droplets exist in a frame, nearly 90% of the ejected droplets are counted correctly. The miscounted droplets are usually the ones that are still near the surface of the primary drop, and so we expect to count them anyway at a later instant. As the fraction of droplets in a frame increases, the performance of the edge detection algorithm decreases. Once fewer than 50% of the droplets in a frame are counted correctly, we stop our evaluation. With this criterion, we evaluate 620 frames for the case of \(R_{c}=1.4\) mm and 220 frames for \(R_{c}=2.8\) mm. This lets us count millions of droplets for both cases, which we believe is sufficient to correctly estimate the ejected drop size distribution. The error associated with the edge detection algorithm itself is approximated by evaluating the intersection between a defined ground truth and the estimate from the algorithm (Lopez-Molina et al., 2013). Doing so for several cases and averaging the associated error allows us to estimate the overall error as 1 px, which corresponds to 9.52 µm.
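A minimal sketch of such a droplet-sizing step is shown below using scikit-image; the smoothing width, function name, and masking strategy are illustrative assumptions, not the exact processing pipeline used for the reported measurements.

```python
import numpy as np
from scipy import ndimage
from skimage import feature, measure

PX = 9.52e-6  # pixel size [m]

def droplet_radii(frame, drop_mask):
    """Return equivalent-circle radii [m] of daughter droplets in one frame.

    `frame` is a 2D gray-level image; `drop_mask` is True over the masked
    primary pendant drop, which must be excluded from the count.
    """
    edges = feature.canny(frame, sigma=2.0)    # droplet outlines
    edges[drop_mask] = False                   # exclude the primary drop
    blobs = ndimage.binary_fill_holes(edges)   # closed outlines -> filled blobs
    labels = measure.label(blobs)
    return np.array([0.5 * p.equivalent_diameter * PX
                     for p in measure.regionprops(labels)])
```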
The droplet size distribution for \(R_{c}=1.4\) mm is depicted in Fig. 5(a). The ejected droplets are polydisperse and can be modeled as a Gaussian distribution. Lang (1962) predicted that the median drop size scales as \(d_{e}=0.34\lambda_{f}\), where \(\lambda_{f}\) corresponds to the expected Faraday wavelength given by
\[\lambda_{f}=\left(\frac{8\pi\sigma}{\rho f_{d}^{2}}\right)^{1/3} \tag{1}\]
where \(\sigma\) is the surface tension between the liquid and air, \(\rho\) is the density of the fluid, and \(f_{d}\) is the driving frequency. For \(f_{d}=20\) kHz, this corresponds to a value of \(r_{e,\mathrm{th}}=28.11\) µm, with our experimental median being \(r_{e,\mathrm{exp}}=33.1\) µm, which is well within the error associated with edge detection, and shows a good correspondence with Eq. (1).
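A direct numerical evaluation of Eq. (1), with standard assumed values for the surface tension and density of water, reproduces these estimates (a minimal sketch, not part of the original analysis):

```python
import numpy as np

sigma = 0.072  # surface tension of water [N/m] (assumed)
rho = 998.0    # density of water [kg/m^3] (assumed)
f_d = 20e3     # driving frequency [Hz]

lam_f = (8 * np.pi * sigma / (rho * f_d**2)) ** (1 / 3)  # Eq. (1)
r_e = 0.5 * 0.34 * lam_f                                 # Lang (1962): d_e = 0.34*lambda_f

print(f"lambda_f = {lam_f * 1e6:.1f} um")  # ~165 um
print(f"r_e,th   = {r_e * 1e6:.2f} um")    # ~28 um, cf. the quoted 28.11 um
```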
Fig. 5(b) plots the evolution of the mean radius of the ejected droplets evaluated per image frame. Isolating the distribution over different time periods of forcing, it can be seen that the mean droplet size of the distribution initially increases over time. Smaller droplets closer to the median size are
Figure 5: (a) The probability distribution of the radius of the ejected droplets for a pendant water drop with \(R_{c}=1.4\) mm plotted for the driving cycles, \(55\leq t/T_{d}\leq 210\). The mean (indicated by the dashed line) and median radius of the ejected droplets are 42.84 \(\pm\) 9.52 μm and 33.1 \(\pm\) 9.52 μm respectively, where the standard deviation is the error obtained from edge detection. (b) The evolution of the mean radius of the ejected droplets for \(R_{c}=1.4\) mm evaluated per frame, plotted for \(55\leq t/T_{d}\leq 210\). The shaded region shows the error associated with edge detection, with the error increasing over time due to the decreasing percentage of ejected droplets counted.
ejected during the initial cycles, with larger droplets being ejected over later cycles. To estimate the ejected mean droplet size theoretically, one would have to evaluate the eigenfrequencies of the capillary waves of a partially wetting drop subject to (pointlike) forced oscillations. However, modeling this is nontrivial and beyond the scope of this study.
It should be noted that Eq. (1) is accurate in predicting the ejected droplet sizes for low amplitude waves and where the viscosity of the atomized fluid does not play an important role (Rajan and Pandit, 2001). However, the polydisperse distribution and the chaotic spike-like structures on the surface of the primary drop indicate that nonlinear effects, similar to the ones seen for lower oscillation frequencies (Goodridge et al., 1997), are significant, especially at later stages of ejection. Therefore, while predicting the ejected droplet sizes theoretically is challenging, it is still interesting to note that Eq. (1) provides good estimates of the ejected droplet size.
### Ejection behavior of a drop with 2% chitosan solution
As mentioned previously, the 2% chitosan solution is highly shear-thinning and shows considerable viscoelastic behavior. Due to the high strain rates involved with the majority of the flow (\(\dot{\gamma}\sim 2\pi f_{d}\)), an effective viscosity closer to the infinite shear rate viscosity will be employed for further analysis (Evans and Morriss, 2007).
The dynamics are elucidated for a drop with \(R_{c}=1.6\) mm, corresponding to a volume of 50 \(\mathrm{\SIUnitSymbolMicro L}\). The first noticeable difference is that the drop initially spreads on the active face of the horn, indicating that these configurations have dynamic contact lines (DePaoli et al., 1995). This brings the drop in contact with the localized regions of high displacement on the face of the horn, which leads to the existence of both harmonic and subharmonic waves on the drop surface as shown in Fig. 6(a). The subharmonic waves arise from the sides of the drop interface where, as shown previously in Fig. 4, higher amplitudes of the horn are expected. Here, the drop is shown at \(t/T_{d}=263.5\), which is a much later driving cycle than for the water drop with \(R_{c}=1.4\) mm, where the whole process of surface wave formation to the onset of ejection takes place between \(35\leq t/T_{d}\leq 55\). For parametric instabilities, it has been shown theoretically that a viscous fluid stabilizes the interface, and it takes a much higher amplitude to trigger the surface instabilities as compared to an inviscid fluid (Kumar and Tuckerman, 1994; Ebo Adou and Tuckerman, 2016). Given the highly viscous nature of the 2%
Figure 6: The dynamics of a pendant 2% chitosan drop with \(R_{c}=1.6\) mm subjected to a periodic vibration of \(f_{d}=20\) kHz. The snapshots of the X-ray phase-contrast images are labeled with their non-dimensional times, \(t/T_{d}\). (a) Evolution of the different wave modes on the drop: [i] shows both harmonic (blue arrow) and subharmonic waves (black arrows). [ii] shows the formation of long, viscoelastic ligaments; (b) A single ligament and its subsequent breakup from the base; (c) Different instances of air entrainment within the primary drop and its ligaments: [i] intertwining ligaments entrap an air bubble depicted within the black box [ii] Interacting ligaments entrain more air within the primary drop (dark blue box), while large amplitude disturbances can break the surface of ligaments to entrap air in them (black box). [iii] The air entrained in ligaments can be ejected along with daughter droplets. Here, the entrained air in the ligament shown in [ii] is ejected as a drop-encapsulated bubble (black box).
chitosan solution, it is unsurprising that it takes much longer for the different wave modes to set in than for water.
The surface waves form craters and spikes similar to the water drop. However, no daughter droplets are ejected, which is elucidated further in Section 3.4. Indeed, the collapse of troughs leads to the formation of ligaments that oscillate with the horn face as depicted in Fig. 6(a). The pendant drop shows a highly viscoelastic behavior, which can be qualitatively characterized by the Deborah number, \(\mathrm{De}=\kappa/\tau_{f}\sim\kappa/T_{d}\), where \(\kappa\) is the relaxation time for the 2% chitosan solution. For the present case, \(\mathrm{De}>1\) at all times as the relaxation time of the 2% chitosan (Cho et al., 2006) is much higher than the flow time scale, which indicates that the elastic effects of chitosan are relatively important (Bird et al., 1987). In the initial cycles, it is conjectured that the strain rates within the ligaments are quite low, thus keeping the stress in the ligaments below the yield stress and allowing them to react to the oscillation of the ultrasonic horn. Due to the high degree of elasticity, the axial stresses and the strain rates within the ligaments increase, unbounded in time, making these ligaments extend and elongate for a large number of driving cycles (McKinley, 2005). The jetting of the ligaments or drops is an impulsive process, implying that the collapse of troughs needs to be powerful enough to cross a minimum velocity threshold for ejection to occur (Vukasinovic et al., 2007b). Once this threshold has been exceeded, the long ligaments break up from the base rather than from the tip (see Fig. 6(b)), as has also been reported for liquids of higher viscosity (Goodridge et al., 1997; Vukasinovic et al., 2007b). Due to an elasto-inertial-capillary balance on individual ligaments, certain ligaments can also experience breakup from the middle when the axial stress stabilizing the ligament is lower than the force exerted by surface tension (Chang et al., 1999). Furthermore, the high resistive stresses in these ligaments persist even after breakup making the ligaments quite stable to the Rayleigh-Plateau instability (Driessen et al., 2013). Characteristic viscous and viscoelastic effects such as "gobbling" drops (Clasen et al., 2009) and "beads-on-a-string" (Ardekani et al., 2010) are also visible in some of the ligaments as seen in Fig. 6.
Over time, the primary drop starts entrapping ambient air from the surroundings as depicted in Fig. 6(c). The entrapped bubble(s) can oscillate with the primary drop for several cycles, and later break up to form multiple bubbles and enhance mixing within the primary drop. Three mechanisms leading to entrainment have been identified and are described as follows.
Due to the viscoelasticity of the ligament, some of the ejected daughter droplets recoil, provided they are ejected during the negative phase of the ultrasonic horn excitation. Since the ejected droplets have a radius of \(\mathcal{O}(10^{-4})\) m, the effect of gravity is negligible. Therefore, the existing inertia in the ejected droplets pulls them back towards the primary vibrating drop. These droplets impact the primary drop and entrain air, similar to larger drops impacting liquid pools as described by other studies (Hasan and Prosperetti, 1990; Hasan et al., 1995).
Entrainment also occurs when there are multiple ligaments close by and the negative phase of the horn causes these ligaments to swirl and entwine each other (this is the case for Fig. 6(c)[i]). If the time scale of rupture of the air film is larger than the time scale of coalescence between the two ligaments, air can be entrapped. A similar mechanism has been noticed for studies involving entrainment in vibrating liquid-filled vessels (Obenauf et al., 2022) and in piezoelectric inkjet printing (de Jong et al., 2006).
Large amplitude disturbances on the extended ligaments break the surface of the ligament, which can then curl up and entrain air within the ligament itself in a mechanism similar to other breaking surface waves (Kiger and Duncan, 2011). This is depicted in Fig. 6(c)[ii]. These ligaments then jet the entrained air along with a droplet to form drop-encapsulated bubbles that can remain stable for quite a long time, even up to several ms in some cases (see Fig. 6(c)[iii]), before the entrapped air coalesces with the surroundings, leaving just the ejected droplet. A detailed discussion on the benefits and adverse effects of having entrainment in such a configuration is provided in Section 4.
### Threshold acceleration for droplet ejection
The applied acceleration at which the onset of ejection takes place is defined as the threshold acceleration of ejection. The experiments show that the chitosan drop does not initially eject daughter droplets, unlike water drops, and is only capable of ejecting ligaments. Goodridge et al. (1997) and Vukasinovic et al. (2007b) have also provided similar experimental results for glycerin-water mixtures, which have higher viscosities than pure water. Here, we rationalize this behavior using the scaling arguments provided by Goodridge et al. (1997), which are briefly summarized below.
For inviscid fluids, ejection occurs when the height of the surface waves roughly equals their wavelength, \(h\sim\lambda\). Considering that the only counteracting effect against the input acceleration is the surface tension, \(\sigma\), the
critical displacement amplitude (assuming that \(h_{\rm cr}\sim a_{\rm th}/\omega^{2}\)) required to eject droplets can be given as
\[h_{\rm cr}\sim\left(\frac{\sigma}{\rho}\right)^{1/3}\omega^{-2/3} \tag{2}\]
Using the material properties of water and substituting \(\omega=2\pi f_{d}\) gives the critical displacement for water as \(h_{\rm cr}\sim 4\) um, which is reached by our horn with \(A_{\rm max}=28.5\) um for 40% amplitude. Therefore, pendant water drops are able to eject daughter droplets quite easily in this configuration. For viscous fluids, the power applied to the system is balanced by the viscous dissipation of the fluid. Performing a similar scaling analysis as before, but now using viscosity instead of the surface tension gives
\[h_{\rm cr}\sim\left(\frac{\mu}{\rho}\right)^{1/2}\omega^{-1/2} \tag{3}\]
As stated before, using an effective viscosity for the 2% chitosan solution drop (\(\mu_{\rm eff}\approx 100\) mPa s) and setting \(\omega=2\pi f_{d}\) gives the critical displacement value as \(h_{\rm cr}\sim 25\) um, which is very close to the maximum amplitude of our ultrasonic horn. This amplitude cannot be achieved during the transient period of the horn, and therefore, any ejection processes from the drop can only take place after the transient period has passed and the acoustic field has fully developed in the pendant drop. Consequently, the ejection process sets in much later and is probably dictated by other events as well, such as the increase in the surface area occupied by the drop under the horn, and the entrainment of air within the primary drop that can further enhance the ejection of ligaments.
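Both estimates follow directly from the bare scaling relations in Eqs. (2) and (3); the sketch below evaluates them with assumed material properties. Note that the scalings carry \(\mathcal{O}(1)\) prefactors (Goodridge et al. (1997) report a value of roughly 0.26 for the capillary branch), which bring the bare inviscid estimate down to the quoted value of \(\sim 4\) µm.

```python
import numpy as np

f_d = 20e3
omega = 2 * np.pi * f_d
rho = 1000.0    # density [kg/m^3] (assumed)
sigma = 0.072   # surface tension of water [N/m] (assumed)
mu_eff = 0.1    # effective viscosity of 2% chitosan [Pa s]

h_inviscid = (sigma / rho) ** (1 / 3) * omega ** (-2 / 3)  # Eq. (2), bare scaling
h_viscous = (mu_eff / rho) ** 0.5 * omega ** (-0.5)        # Eq. (3), bare scaling

print(f"h_cr (water, bare):  {h_inviscid * 1e6:.1f} um")         # ~17 um
print(f"h_cr (water, x0.26): {0.26 * h_inviscid * 1e6:.1f} um")  # ~4 um
print(f"h_cr (2% chitosan):  {h_viscous * 1e6:.1f} um")          # ~28 um, cf. ~25 um
```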
### Effects of cavitation on drop ejection
Here, we compare cavitation-induced ejection with purely capillary-wave-driven ejection by generating cavitation or cavitation-related activities inside the drop. Time-resolved X-ray phase-contrast imaging proved extremely beneficial in deducing cavitation inception inside drops and enabled us to evaluate the intrinsic bubble dynamics inside the liquid volume. We tested two different configurations to assess the changes induced by cavitation in the onset of ejection and the ejected droplet sizes. We confined water drops between the ultrasonic horn and a flat surface. The rigid
confining surface at the bottom, along with the large impedance mismatch at the lateral gas-liquid interface, acts as a strong reflector for the incoming waves and can lead to cavitation within the trapped drop (Moussatov et al., 2005; Fatjo et al., 2011). Next, we trapped bubbles inside a drop to study specifically how the bubble oscillations and accelerations contribute to the ejection process.
#### 3.5.1 Confined water drops
A confined, wetting water drop is depicted in Fig. 7. It has a contact radius of \(R_{c}\approx 1.4\) mm, similar in volume to the free, pendant drop shown in Fig. 2. The distance between the active face of the horn and the rigid surface is \(h=1.3\) mm. Using the convention provided by Moussatov et al. (2005), this corresponds to \(h/\lambda_{f}=0.018\) and \(R_{c}/\lambda_{f}=0.017\), where \(\lambda_{f}\) is the acoustic wavelength in water for \(f_{d}=20\) kHz. For the above combination of \(h/\lambda_{f}\) and \(R_{c}/\lambda_{f}\) values, not enough pressure amplification is achieved in the liquid volume and no cavitation is generated. Instead, subharmonic, azimuthal waves are triggered without the appearance of axisymmetric, harmonic waves. This is different from the pendant drop, where harmonic waves are triggered first, and subharmonic waves are generated later.
Interestingly, these waves cover only the top half of the drop before they start ejecting daughter droplets. This could simply be due to the unstable nature of the wetting form of the confined water drop. The liquid bridge breaks up, and the lower half of the drop is disconnected from the top half. Fig. 7(c) shows that the ejection behavior of this drop is similar to a free pendant drop having characteristics equivalent to a drop shape indicated by the black dotted line. As the behavior is similar, it is to be expected that the ejected droplet size distribution for this case is equivalent to that of the pendant water drop (see Fig. 9).
Repeating the experiment with a larger drop shows some differences. Fig. 8 shows a confined drop with \(R_{c}=4.2\) mm. The interface for this configuration is out of the field of view of the image. The distance between the horn and the rigid surface is again maintained at \(h=1.3\) mm. The behavior of the generated waves is the same as the free pendant drop with a large contact radius as described in Section 3.1. Harmonic standing waves are coupled with subharmonic, semi-circular waves from point source locations in the horn. However, for this combination of \(h/\lambda_{f}=0.018\) and \(R_{c}/\lambda_{f}=0.056\), a sufficient pressure amplification is achieved within the
Figure 7: The dynamics of a confined water drop with \(R_{c}=1.4\) mm and \(h=1.3\) mm. The snapshots of the X-ray phase-contrast images are labeled with their non-dimensional times, \(t/T_{d}\). (a) The first waves generated in this drop correspond to subharmonic, azimuthal waves. (b) The subharmonic waves continue to grow and cover only the top half of the drop before the ejection of daughter droplets sets in. (c) Over time, only the top half of the confined drop ejects daughter droplets. The top half shows the same ejection dynamics as a free drop indicated by the black dashed line.
liquid volume to nucleate cavitation bubbles, as seen in Fig. 8. These vapor bubbles oscillate and undergo multiple growth and collapse cycles along with the acoustic excitation and are found to affect the ejection process.
The ejected droplet size distribution for both of the confined water drops - with and without cavitation - is plotted in Fig. 9. For \(R_{c}=4.2\) mm, the ejected droplet size distribution was estimated manually due to the difficulty of distinguishing the waves from the ejected droplets by the edge detection algorithm. As can be seen from the figure, the overall trend of the distribution is similar for both cases. This is unsurprising given that the ejection still occurs as a result of the breakup of capillary waves, irrespective of whether cavitation occurs within the liquid or not. However, there is a slight increase in the number of larger ejected droplets (\(r_{e}>0.05\) mm) for the case when cavitation occurs (\(R_{c}=4.2\) mm), shifting the mean of the distribution to a higher value. This suggests that a larger size range exists for droplets produced by the combined effect of cavitation and capillary waves as compared to droplets ejected purely by capillary waves, confirming the hypothesis of Antonevich (1959). A possible explanation for the variation could be the high accelerations associated with the oscillation of the cavitation bubbles within the liquid leading to the formation of higher amplitude capillary waves, which produce droplets of different sizes when they break up, as compared to the capillary waves that are unaffected by the presence of the cavitation bubbles.
A clear distinction cannot be made between the effect of the capillary waves, the drop confinement, and the cavitation-related activities on the ejected drop sizes. However, it must be noted that the only notable difference between \(R_{c}=1.4\) mm and \(R_{c}=4.2\) mm is the creation of vapor bubbles due to cavitation, and this is reflected in the ejected drop size distribution. While the case of \(R_{c}=1.4\) mm is directly comparable to a free pendant water drop,
Figure 8: The dynamics of a confined drop with \(R_{c}=4.2\) mm and \(h=1.3\) mm. The snapshots of the X-ray phase-contrast images are labeled with their non-dimensional times, \(t/T_{d}\). The image on the left depicts one of the first instants when cavitation bubbles become visible, enclosed in black boxes here. After \(50\) μs (1 period), more cavitation bubbles have been nucleated, which are also enclosed in black boxes on the right image.
the case of \(R_{c}=4.2\) mm has a similar trend but with marked differences to the standard pendant drop case.
#### 3.5.2 Bubble entrapped within a drop
The second configuration used to study the effect of cavitation on the ejection process involves trapping air bubbles inside a droplet. Air and 2% chitosan are injected together using the microsyringe and placed on the active face of the ultrasonic horn, inducing trapped bubbles inside the drop. The qualitative results discussed here can equally be translated to bubbles trapped in water drops, apart from the viscoelastic effects. However, maintaining such an unstable bubble-in-water configuration long enough to perform the X-ray imaging was impossible, and therefore, the more stable configuration employing chitosan drops was used instead. The initial configuration obtained is as shown in Fig. 10(a), with the frames on the right showing the behavior of the trapped bubbles upon acoustic excitation.
Figure 9: The probability distribution function of the radius of the ejected droplets plotted for the two confined drop cases. For \(R_{c}=1.4\) mm, the PDF is plotted for \(65\leq t/T_{d}\leq 185\), while for \(R_{c}=4.2\) mm, the PDF is plotted for \(25\leq t/T_{d}\leq 150\). The mean ejected droplet sizes for both cases are indicated with dashed lines (black for \(R_{c}=1.4\) mm and gray for \(R_{c}=4.2\) mm).
We will discuss the behavior of the larger bubble shown in Fig. 10(a) as it has a radius close to the resonant size, \(R\approx R_{\rm res}=150\)\(\mathrm{\SIUnitSymbolMicro m}\), where \(R_{\rm res}\) is the resonant bubble size for \(f_{d}=20\) kHz as predicted from linearizing the Rayleigh-Plesset equation (Brennen, 2013).
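Neglecting surface tension and viscosity, the linearized Rayleigh-Plesset equation reduces to the Minnaert resonance relation, which can be inverted for the resonant radius at the driving frequency. A minimal sketch with assumed ambient conditions is given below; the neglected corrections shift the result toward the quoted value of 150 µm.

```python
import numpy as np

f_d = 20e3     # driving frequency [Hz]
gamma = 1.4    # polytropic exponent for air (adiabatic, assumed)
p0 = 101325.0  # ambient pressure [Pa] (assumed)
rho = 1000.0   # liquid density [kg/m^3]

# Minnaert relation: 2*pi*f_res*R_res = sqrt(3*gamma*p0/rho)
R_res = np.sqrt(3 * gamma * p0 / rho) / (2 * np.pi * f_d)
print(f"R_res ~ {R_res * 1e6:.0f} um")  # ~160 um
```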
Initially, this bubble oscillates in the vertical direction along with the applied excitation. After a few oscillation cycles, the bubble jets toward the active face of the horn as seen in Fig. 10(a)[ii] and [iii]. A higher shape mode instability is developed (purely zonal, corresponding to \((k,l)=(4,0)\) according to the terminology used in Ding and Bostwick (2022)) before a daughter bubble is pinched off (seen in Fig. 10[iv]). The daughter bubble oscillates in tandem with the primary bubble and the out-of-phase oscillation of this bubble creates a highly focused jet toward the primary bubble. Over time, multiple bubbles are pinched off as the surface tension of the bubble is unable to withstand the increasing acceleration of the horn. These bubbles oscillate together, contributing to the interface dynamics of the drop.
The radial acceleration created by the trapped bubble oscillations leads to the formation of a bulge in the drop interface on the left side as seen in Fig. 10(b)[ii]. This bulge translates along the drop interface as the bubbles translate along the surface of the horn. The high acceleration appears to break the drop into two sections, with the part of the drop with the bubbles and the bulge showing a different behavior as compared to the rest of the drop. The continued cycles of growth and collapse of the bubbles create large radial accelerations within the liquid volume, leading to the onset of the Rayleigh-Taylor instability on the bulged part of the droplet, with the initial ripples indicated by the black arrow in Fig. 10(b)[iii]. The geometry of the problem implies that this can be classified as a spherical Rayleigh-Taylor instability (Plesset, 1954). It must also be mentioned that the high viscoelasticity of the chitosan solution stabilizes the drop interface against the Rayleigh-Taylor instability (Prosperetti, 1977; Zeng et al., 2018), and only after a certain threshold amplitude is exceeded does the fluid yield and the drop interface become unstable. Due to the amplitude of the horn increasing over time during the transient period, the Weber (\(\rho u_{\rm in}^{2}R_{c}/\sigma\), where \(u_{\rm in}\) is the time-dependent velocity of the drop interface) and Reynolds numbers (\(\rho u_{\rm in}R_{c}/\mu_{\rm eff}\)) at the interface increase with time, causing the initial ripples to appear. With further increase in the amplitude over time, higher modes set in, creating more ripples on the drop interface (see Fig. 10(b)[iv]).
The instability of the interface leads to the ejection of ligaments and daughter droplets in this part of the pendant drop. The ejection process
Figure 10: (a) The left frame shows two bubbles entrapped within a chitosan drop. The frames on the right show the zoomed-in section of the square box on the left frame. The inter-frame time is \(t/T_{d}=0.5\) with frame [i] corresponding to \(t=0\). [ii] The violent collapse of one of the bubbles leads to the formation of a jet and other bubbles. [iii] The jet is clearly visible here [iv] Continuous jetting leads to the formation of daughter bubbles. [v] The daughter bubble oscillates in tandem with the attached bubble [vi] the interaction between these bubbles causes further jets (also seen in [v]), and multiple daughter bubbles are created. (b) The evolution of the drop interface in tandem with the trapped bubble activity. The interframe time is \(t/T_{d}=25\). Frame [i] shows the system at \(t/T_{d}=14.5\). [ii] The drop bulges to the left and migrates as the bubbles translate along the horn face [iii] Initial ripples on the drop surface (black arrow), as it experiences Rayleigh-Taylor instability due to high radial accelerations [iv] Higher modes of the spherical Rayleigh-Taylor instability on the drop surface [v] Ejection of ligaments and droplets with the development of subharmonic waves in the part of the drop without the bubbles (black arrow) [vi] Co-evolution of ligament ejection on the left and subharmonic waves on the right of the pendant drop
here starts at much earlier driving cycles as compared to a pendant drop without cavitation (refer to Fig. 6). This implies that the large inertia and accelerations created by cavitation-related activities hasten the onset of ejection in the pendant drop. These accelerations are also responsible for the direct ejection of daughter droplets along with the ligaments. Furthermore, the ligaments eject satellite droplets due to the Rayleigh-Plateau instability. Both of these scenarios are not present when there are no trapped bubbles in the chitosan drop as seen in Fig. 6. The consequences of having trapped bubbles are discussed further in Section 4.
The part of the drop without the bubble is still attached to the horn and shows the classic behavior as described previously for the other pendant drops. The black arrow in Fig. 10(b)[v] shows the onset of subharmonic capillary waves, which travel and spread across the whole attached drop as seen in Fig. 10(b)[vi]. The waves appear to be inclined because of the tilted orientation of the drop on the face of the horn. Since the whole drop is in focus in the phase-contrast images, the projection of a tilted drop onto a plane while imaging makes it look like the waves themselves are inclined, even though this is not the case. The waves in this part of the drop co-evolve with the ejecting part and transition to ejecting ligaments at similar driving cycles as the chitosan drop without any trapped bubbles (see Section 3.3).
## 4 Discussion
A variety of characteristics are found in the drop atomization process based on the different tested configurations. The dependence on the fluid-structure interaction was visualized clearly when a drop with a larger contact radius with the horn was used. While fluid-structure interaction in this context was briefly studied by James et al. (2003b), the fundamental mode shape of their transducer implied that they could not see localized variations on the drop surface. With the fundamental mode of a nonlinear ultrasonic horn, however, it is visibly clear that the ejection onset is affected. While this did not have an effect on the sizes of the ejected droplets in our case, further study is required to characterize whether localized regions of high-displacement on the surface of the transducer also have an impact on the products of atomization. The interaction could then be used in a beneficial manner to create daughter droplets with lower input power.
The ejected droplet size distribution was estimated for water drops. While the overall distribution is polydisperse, the median size of ejected droplets
correlated well with theoretical predictions. This implies that a very quick estimate of the ejected droplet size can be calculated from the driving frequency and the fluid properties. The transition from ejecting smaller to larger droplets over time implies that nonlinear effects dominate after a certain point. Further research is required to distinguish when and how this transition occurs.
Water drops were confined between the ultrasonic horn and the rigid surface to utilize the ability of the horn to generate cavitation in confined thin liquid layers. Only certain combinations of \(h/\lambda_{f}\) and \(R_{c}/\lambda_{f}\) can produce the necessary pressure amplification to nucleate vapor bubbles within the liquid. No cavitation was produced for the case of \(R_{c}=1.4\) mm, but the confinement alone affected the atomization. The configuration generated subharmonic capillary waves only within the top half of the drop, and only this part ejected daughter droplets. It would be interesting to see whether similar capillary wave formation and ejection behavior is observed for drops that are non-wetting, as they would fundamentally have a different shape to the drop depicted in Fig. 7. For \(R_{c}=4.2\) mm on the other hand, the pressure amplification within the droplet was large enough to cavitate vapor bubbles that experience multiple growth and collapse cycles and create large amplitude capillary waves. As mentioned previously, it is difficult to distinguish between the effects of confinement and cavitation in the ejection process. However, our results suggest that the confinement alone is insufficient to change the size distribution of the ejected droplets, while cavitation seems to slightly modify the daughter droplet size distribution. Optimizing \(h/\lambda_{f}\) and \(R_{c}/\lambda_{f}\) could create localized regions of high pressure (Moussatov et al., 2005), which provides another method to modify the size distribution of the daughter droplets.
Using a viscoelastic drop to study the atomization process showed interesting features. The requirement of higher input amplitude to atomize viscoelastic drops was shown to correspond quite well with the existing theory. Extended ligament formation and air entrainment within the primary drop were visualized succinctly. Again, the oscillation of the trapped bubbles due to entrainment has an effect on the ejection process. From the Supplementary Video corresponding to Fig. 6, it seems that the ejection of ligaments and daughter droplets is quickened by the oscillation of the entrained bubbles in tune with the acoustic excitation. The case of the initially trapped air bubbles within the chitosan drop provides further proof that this is true. The onset of ejection here is faster as compared to a pure chitosan drop due to
the presence of trapped bubbles that generate high radial accelerations and accelerate the drop interface. This implies that cavitation and cavitation-related activities can easily lower the threshold required to eject daughter droplets, even though the generation of cavitation within the liquid, and the resulting products of ejection are not controllable. For both chitosan cases, it can be seen qualitatively that a wide range of droplet sizes are ejected. Furthermore, quite a few drop-encapsulated bubbles that are stable for relatively long periods of time are ejected as well, which raises the question of the homogeneity of the atomization products. This could have consequences in several applications involving atomization such as inkjet printing, fertilizer sprays, or pulmonary drug delivery.
The configurations used here are too limited to elucidate all the parameters at play in the atomization process. It would be interesting to see how the dynamics change as the input power and driving frequency are varied as well. While some differences between capillary and cavitation-based ejection have been highlighted, the current setup does not fully allow us to separate the effects and comment on the major distinctions brought about by the two different phenomena. Further research needs to be carried out to study the performance of only capillary or cavitation-based ejection, and the conjunction of both of them. For viscoelastic drops, it would be interesting to separate the role of the viscous and the elastic effects in the atomization process, especially when air entrainment and cavitation-related activities are involved. The interplay between the viscous, capillary, and elastic effects in stabilizing the primary drop interface and the ligaments could play a crucial role in determining the nature of the ejection products and further open up a wide range of applications. If the dynamics are well understood, it could be beneficial to dilute or concentrate liquid solutions with polymers to obtain the exact type of ejection behavior required for a certain application, which makes it a promising avenue to explore.
Finally, we would like to highlight the benefits of performing spatiotemporally resolved X-ray phase-contrast imaging for these experiments. The advanced imaging technique proved critical for most of our reported findings. It allowed us to show that cavitation does not occur inside a free pendant drop and helped us highlight the consequences of cavitation and large bubble oscillations on the drop atomization process.
## 5 Conclusion
In this study, we characterize the atomization behavior of pendant drops subjected to ultrasonic excitation using advanced experimental techniques. The goal was to study the dependence of drop atomization on parameters such as drop volume and fluid viscosity, by performing spatiotemporally resolved imaging measurements and leveraging the virtually infinite depth of field of X-ray phase-contrast imaging as compared to conventional imaging. Our results also shed light on the modifications to the atomization of the drop when cavitation and cavitation-related activities are generated. Experiments were carried out with water and chitosan drops. Atomization was found to be dictated by the creation of surface waves on the drop. Both harmonic and subharmonic waves were observed, with the subharmonic waves requiring more time to set in on the drop than the harmonic waves. The observed transition from one wave mode to another was dictated by the fluid-structure interaction between the pendant drop and the horn. This was evident when drops had a large contact radius (water drop with \(R_{c}=2.8\) mm), as well as when they had a pronounced dynamic contact line and subsequently underwent spreading (chitosan drop with \(R_{c}=1.6\) mm). The larger pendant water drop experienced a quicker onset of ejection due to this fluid-structure interaction.
The amplitude of the surface waves progressively increases over time, creating crests and troughs on the surface of the primary drop. Once the identified threshold displacement is reached, the oscillating pendant drop atomizes and ejects daughter droplets. The threshold value was determined from scaling relations that were different based on the inviscid or highly viscous nature of the fluid. The median and mean ejected droplet sizes for the pendant water drop correspond well with the predictions from previous experiments as well as theory (Lang, 1962).
In addition, configurations were devised to study the effect of cavitation and the combined effects of both cavitation and capillary waves on drop atomization. Drops were confined between the horn and a rigid surface, or bubbles were initially trapped within the drop volume. Although no apparent change in the mean daughter droplet size was noticed, the confined drop experiments indicate a much wider range of ejected droplet sizes, provided cavitation is generated within the drop. This could be attributed to the formation of high-amplitude capillary waves from the radial oscillations of the generated vapor bubbles that slightly increase the sizes and range of the ejected
droplets. However, inducing cavitation within the drop cannot be controlled or localized in this case.
Finally, when entrapped air bubbles were introduced in the chitosan drop, the cavitation-related activities were observed to substantially quicken the onset of ejection in contrast to having only surface waves. The large radial accelerations generated by the expansion and collapse of the bubbles resulted in an unstable drop interface, which led to the formation of the Rayleigh-Taylor instability. The instability stimulated an earlier ejection of ligaments and daughter droplets from the pendant drop. Both the cavitation-based drop atomization studies imply that with cavitation, the creation of daughter droplets is easier. However, controlling the ejected size distribution and the end products with cavitation proved challenging as compared to having only capillary waves on the surface of the drop. Although further research is required if one wants to use cavitation to accelerate the ejection process and obtain controlled products from it, this work establishes promising avenues based on rich results.
## CRediT authorship contribution statement
**Anunay Prasanna** - Methodology, Formal analysis, Investigation, Data curation, Visualization, Writing - original draft, Writing - review & editing; **Luc Biasiori-Poulanges** - Conceptualization, Methodology, Investigation, Writing - review & editing; **Ya-Chi Yu** - Formal analysis, Investigation, Writing - review & editing; **Hazem El-Rabii** - Conceptualization, Writing - review & editing; **Bratislav Lukic** - Methodology, Resources, Writing - original draft, Writing - review & editing; **Outi Supponen** - Conceptualization, Methodology, Resources, Investigation, Visualization, Writing - review & editing, Supervision, Project administration, Funding acquisition
## Declaration of competing interest
The authors declare that they have no known competing financial or personal interests that could have influenced the work reported in this manuscript.
## Acknowledgements
The authors would like to acknowledge the Swiss National Science Foundation (SNSF project grant number 200021_200567), ETH Zurich Postdoctoral
Fellowship, ETH Zurich, and the European Synchrotron Radiation Facility. The results presented here were gathered during the allocated proposal beamtime ME-1599 on beamline ID19. The authors would like to thank Dr. Claire Bourquard for helping with the preparation of the chitosan solution and Dr. Dhananjay Deshmukh for his help with the laser scanning vibrometry experiments. Gratitude is also extended to colleagues at McGill University, Dr. Zhenwei Ma and Prof. Jianyu Li, for giving the authors the initial idea through their accidental observations of drop atomization.
## Appendix A Preparation of 2% chitosan solution
To make the 2% chitosan solution (10 mL), 200 mg of low molecular weight chitosan powder (Sigma Aldrich) is added to 9 mL of deionized water. The solution is stirred to obtain a suspension of chitosan particles. Then, 1 mL of 1M HCl solution (Sigma Aldrich) is added into the suspension. The chitosan-HCl-water solution is then left overnight on a rotating mixer to allow the complete dissolution of chitosan and to obtain a homogeneous solution.
## Appendix B Droplet size distribution for pendant water drop with \(R_{c}=2.8\) mm
Fig. 11 plots the probability distribution function for the daughter droplets of the pendant water drop with \(R_{c}=2.8\) mm. The distribution, median, and mean values are similar to those obtained for the smaller pendant drop. The median value predicted by Eq. (1) is dependent only on the driving frequency and the fluid properties, and therefore, the similarity in the distribution is as expected.
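For a quick numerical check of this frequency scaling, the following is a minimal sketch that assumes Eq. (1) is Lang's (1962) correlation, in which the median droplet diameter is 0.34 times the capillary wavelength from Kelvin's equation; the fluid properties and the 20 kHz driving frequency below are illustrative values, not the parameters of these experiments.

```python
import math

# Hedged sketch: assumes Eq. (1) is Lang's (1962) correlation,
# d_median = 0.34 * lambda, with lambda = (8*pi*sigma / (rho * f**2))**(1/3),
# the capillary wavelength from Kelvin's equation evaluated at the
# subharmonic frequency f/2 (the factor 8*pi absorbs the halving of f).
def lang_median_diameter(sigma, rho, f_drive):
    """Median ejected droplet diameter [m] for driving frequency f_drive [Hz]."""
    lam = (8.0 * math.pi * sigma / (rho * f_drive**2)) ** (1.0 / 3.0)
    return 0.34 * lam

# Example: water (sigma = 0.072 N/m, rho = 998 kg/m^3) at an assumed 20 kHz driving.
d_med = lang_median_diameter(sigma=0.072, rho=998.0, f_drive=20e3)
print(f"median droplet diameter ~ {d_med*1e6:.1f} um")
```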
2309.13463 | A treatment of particle-electrolyte sharp interface fracture in solid-state batteries with multi-field discontinuities | In this work, we present a computational framework for coupled electro-chemo-(nonlinear) mechanics at the particle scale for solid-state batteries. The framework accounts for interfacial fracture between the active particles and solid electrolyte due to intercalation stresses. We extend discontinuous finite element methods for a sharp interface treatment of discontinuities in concentrations, fluxes, electric fields and in displacements, the latter arising from active particle-solid electrolyte interface fracture. We model the degradation in the charge transfer process that results from the loss of contact due to fracture at the electrolyte-active particle interfaces. Additionally, we account for the stress-dependent kinetics that can influence the charge transfer reactions and solid state diffusion. The discontinuous finite element approach does not require a conformal mesh. This offers the flexibility to construct arbitrary particle shapes and geometries that are based on design, or are obtained from microscopy images. The finite element mesh, however, can remain Cartesian, and independent of the particle geometries. We demonstrate this computational framework on micro-structures that are representative of solid-state batteries with single and multiple anode and cathode particles. | Xiaoxuan Zhang, Tryaksh Gupta, Zhenlin Wang, Amalie Trewartha, Abraham Anapolsky, Krishna Garikipati | 2023-09-23T19:34:51Z | http://arxiv.org/abs/2309.13463v1 | A treatment of particle-electrolyte sharp interface fracture in solid-state batteries with multi-field discontinuities
###### Abstract
In this work, we present a computational framework for coupled electro-chemo-(nonlinear) mechanics at the particle scale for solid-state batteries. The framework accounts for interfacial fracture between the active particles and solid electrolyte due to intercalation stresses. We extend discontinuous finite element methods for a sharp interface treatment of discontinuities in concentrations, fluxes, electric fields and in displacements, the latter arising from active particle-solid electrolyte interface fracture. We model the degradation in the charge transfer process that results from the loss of contact due to fracture at the electrolyte-active particle interfaces. Additionally, we account for the stress-dependent kinetics that can influence the charge transfer reactions and solid state diffusion. The discontinuous finite element approach does not require a conformal mesh. This offers the flexibility to construct arbitrary particle shapes and geometries that are based on design, or are obtained from microscopy images. The finite element mesh, however, can remain Cartesian, and independent of the particle geometries. We demonstrate this computational framework on micro-structures that are representative of solid-state batteries with single and multiple anode and cathode particles.
**Keywords** Interface phenomena; fracture; stress-dependent kinetics; discontinuous finite elements
## 1 Introduction
Solid-state batteries (SSBs) are gaining in interest due to their high energy density and improved safety over liquid electrolyte-based systems. However, the development of SSBs also faces challenges. Some of them, such as dendrite formation or interphases growing at solid electrolyte-active particle interfaces, are driven by complex electrochemical coupling [1]. However, several others stem from higher mechanical stresses relative to liquid electrolyte systems, which develop in the all solid system of electrode, electrolyte, binder and current collector. In most SSB chemistries, the stress arises from intercalation strains and the confined all-solid battery configuration. As the strains cycle with charge and discharge, so do the stresses, and can lead to many of the failure phenomena that are well-known in solid mechanics.
Possibly the most important of these is fracture, which can arise at many locations including the active cathode and anode particles, and brittle electrolytes such as the ceramics \(\beta\)-Li\({}_{3}\)PS\({}_{4}\) or lithium lanthanum zirconium oxides (LLZO). Being an interface, the electrode particle-electrolyte junction is particularly susceptible to fracture. New surfaces are created at this interface with a loss of contact. Reactions, which rely upon proximity between components, can be compromised by the introduction of physical gaps between materials on either side of the fracture surfaces. This can become a major concern for the solid electrolyte-active particle interface, which, of course, is critical for charge transfer. The suppression of charge transfer across this fractured interface causes a degradation in capacity, which over many charge-discharge cycles can lead to loss of stability of electrochemical performance. The details of this coupling between mechanics and electrochemistry, specifically how the charge transfer kinetics falls off with growth in the fracture-induced gap, remain poorly understood. However, it could be modelled phenomenologically, and allow access to computational studies of the loss in electrochemical performance with interface crack growth over charge-discharge cycles.
This is a multiphysics problem: (a) Intercalation strains drive the stress. Depending upon the chemistry, there is local shrinkage of the cathode or anode particle during lithiation or delithiation (assuming a lithium battery). (b) The tensile stress during particle shrinkage can cause fracture of the solid electrolyte-cathode/anode interface. (c) The change in
charge transport due to loss in contact at the interface either transfers less lithium to the cathode/anode or extracts less during charge/discharge, all depending on the cathode/anode chemistry. (d) The loss of load-carrying capacity across the fractured interface leads to a lowering of the tensile stress throughout the solid state battery. The altered charge transfer, and therefore (de)lithiation-driven (de)intercalation also changes the stress as cycles progress. (e) There is an additional effect of stress-mediated kinetic phenomena: on Li and Li\({}^{+}\) transport in the active particles and electrolyte, respectively, and on charge transfer reactions across the interface. As the stress changes with cycles so do these stress-mediated kinetics.
Over hundreds of cycles, there is a progressive capacity loss, that is often manifested by a steep degradation past some threshold. The complexities of this process make it difficult to gain insight purely from experiment. Multiphysics modelling and computation are indispensable, and have led to a number of lines of investigation. In a pair of papers, Klinsmann et al. developed a phase field fracture-based model for crack growth during extraction [2] and insertion [3] of Li in active particles. Their models included diffusive transport, linearized mechanics and the phase field formulation of damage and fracture in a coupled treatment, but without the electric fields. The electrolytes were not solid state, and therefore intra-particle fracture was of interest to the authors rather than particle-electrolyte interface fracture. Nevertheless, these works mark an important step toward the treatment of intercalation stress-induced fracture. Ganser et al. presented a free energy-based coupled treatment of electrostatics, mass/charge transport and nonlinear elasticity for fully solid state batteries [4]. The electrolyte in their numerical models was a solid polymer. The same group of authors used this model to study how the elastic properties of the solid polymer electrolyte influence the stability of its interface with metal active particles to perturbations that could grow into protrusions [5, 6]. Debonding at the active particle-binder interface was treated by Iqbal et al. using cohesive zone elements, a chemo-mechanical model with linearized elasticity for the particle and Neo-Hookean elasticity for the binder [7]. Rezaei et al. also carried out a similar treatment of chemo-mechanically driven fracture in solid state batteries using cohesive zone models [8]. Also related is these authors' extension of this model to phase field fracture to study active particle fracture [9]. Electrolyte, intra-particle and active particle-solid electrolyte interface fracture were modelled using linearized elasticity, damage and cohesive zone elements, and the accumulation of damage (degradation) with cycling was demonstrated. Bistri and Di Leo developed a novel surface element to resolve the chemo-mechanics at the active particle-electrolyte interface [10]. Of interest in their work is the modelling of multi-particle configurations and their relation to the development of capacity loss.
Other work in the literature has accounted for the influence of mechanics on the charge transfer kinetics at the active particle-electrolyte interface. Ganser et al. worked within the framework of transition state theory to propose extensions of the classical Butler-Volmer model [11]. Zhao et al. provided an electro-chemo-mechanical treatment considering void formation and growth at active particle-solid electrolyte interfaces. The elasto-viscoplastic response of Li was accounted for, with phase field models for the formation of voids from vacancies [12]. Afshar and Di Leo have presented a thermodynamically based chemo-mechanics treatment for resolving interface phenomena using phase field methods [13]. A useful experimental study was carried out by Han et al., who performed cycling of Li NMC-anode, argyrodite-electrolyte solid state batteries, finding stresses in the mega Pascal range [14]. This is an important marker for the ceramic electrolytes that are modelled in this communication.
Surveys of the range of coupled phenomena and failure mechanisms in all solid state batteries have also appeared recently. These include the reviews by Bistri et al. [15] and by Tian et al. [16], which focused on interface stability, interphase fracture and the chemo-mechanics of composite electrodes. A comprehensive review of the open questions in the coupled electro-chemo-mechanics of solid state batteries by Deshpande and McMeeking [17] focused attention on the inadequacy of models that neglect the viscoplasticity of lithium metal electrodes, arguing for the importance of this effect on void formation and the growth of lithium into cracks.
In this communication we propose an electro-chemo-mechanically coupled model to simulate SSBs by resolving the phenomena listed in (a-e) above at the scale of individual particles. Cathode/anode particle sizes and geometries naturally have strong influences on these physics, and it becomes important to carry out computational studies accounting for multi-particle configurations. Stiff solid electrolytes, such as the ceramics LLZO and even \(\beta\)-Li\({}_{3}\)PS\({}_{4}\), in combination with stiff cathode particles such as Li NMC and graphite anodes lead to higher stresses. Fracture occurs with high probability at the active particle-solid electrolyte interface in such systems. With this motivation, a focus of this communication is on modeling fracture at the active particle-solid electrolyte interfaces. The modelling of interface fracture in realistic and experimental image-based microstructures with a distribution of particle shapes and sizes would be limited by meshes that conform to complicated microstructures by following electrolyte-particle interfaces. The creation of such meshes is a tedious and expensive undertaking, and could become a bottleneck as is well understood in computational engineering. To surmount this difficulty, we propose to use a scalar field to define each particle's location and geometry. The particle-electrolyte interface is implicitly modeled via the so-called embedded interface method. Such a treatment allows the imposition of interface conditions on electrostatic, species density and deformation fields within two- or three-dimensional elements by extending discontinuous finite element methods
[18, 19, 20, 21, 22, 23]. Thus, while our treatment bears clear similarities to the cohesive zone-based treatments of fracture discussed above [7, 8] and to the treatment that introduced surface elements for chemo-mechanics [10], the non-conforming meshes enabled by discontinuous finite elements allows greater flexibility for arbitrary multi-particle configurations. We exploit the natural treatment of discontinuous fields afforded by this approach to exactly account for the discontinuity of Li/Li\({}^{+}\) concentration fields and in the displacement field post-fracture at the particle-electrolyte interface. We additionally define distinct particle and electrolyte electric potentials and allow them to change discontinuously at the interface. We use this direct representation of discontinuities to drive interface charge transfer reactions and traction-displacement relations.
This work is organized as follows: In Section 2, we describe the standard governing equations, kinematics and constitutive relations of the coupled electro-chemo-mechanics in solid-state batteries in the traditional setting, i.e., without accounting for interfaces. In doing so, we make connections to the derivations in Ganser et al. [4]. In Section 3 we introduce the treatment of interfaces in continuum physics, including the idea of implicitly representing the particle's boundary with a scalar field. We also describe the numerical treatment of interface conditions, drawing upon finite element design. In Section 4, we describe the coupling between electrochemistry and nonlinear mechanics, including the degradation of interface charge transfer with crack opening and stress-dependent kinetics. In Section 5, we briefly describe our workflow for efficiently generating microstructures on a regular, Cartesian mesh based on the proposed approach. In Section 6, we present the results of simulations under a variety of coupling conditions for two- and multi-particle microstructures. In Section 7, we offer a discussion of our treatment, place it in context and suggest directions for its extension.
## 2 The electro-chemo-mechanics of solid state batteries
Newman's work has laid the foundation for modeling electrochemical systems and has been widely used in battery problems [24, 25, 26, 27, 28, 29]. In the past, we, among others, have extended this body of work to couple nonlinear mechanics with electrochemistry at both the homogenized and particle-resolved scales [30, 31]; however, our previous work has been for a system with a liquid electrolyte. We lay out the electro-chemo-mechanical problem for the case of a solid electrolyte. Rather than repeating the derivation from first principles that has been presented in the literature, we make connections to that work [4].
We denote the continuum domain in its reference configuration by \(\Omega_{0}\) (Figure 1) and allow it to contain closed interfaces \(\Gamma^{\mathrm{c}}_{01},\ldots,\Gamma^{\mathrm{c}}_{0m}\) and \(\Gamma^{\mathrm{a}}_{01},\ldots,\Gamma^{\mathrm{a}}_{0n}\). Each \(\Gamma^{\mathrm{c}}_{0i},\ i\in\{1,\ldots,m\}\) is the boundary of an open subdomain \(\Omega^{\mathrm{-c}}_{0i}\) that represents a cathode particle; i.e., \(\overline{\Omega^{\mathrm{-c}}_{0i}}=\Omega^{\mathrm{-c}}_{0i}\cup\Gamma^{\mathrm{c}}_{0i}\), and each \(\Gamma^{\mathrm{a}}_{0j},\ j\in\{1,\ldots,n\}\) is the boundary of an open subdomain \(\Omega^{\mathrm{-a}}_{0j}\) that represents an anode particle; i.e., \(\overline{\Omega^{\mathrm{-a}}_{0j}}=\Omega^{\mathrm{-a}}_{0j}\cup\Gamma^{\mathrm{a}}_{0j}\). The complement is \(\Omega^{+}_{0}=\Omega_{0}\setminus\left(\cup_{i=1}^{m}\overline{\Omega^{\mathrm{-c}}_{0i}}\,\cup\,\cup_{j=1}^{n}\overline{\Omega^{\mathrm{-a}}_{0j}}\right)\). The simplest rendering of the solid state battery is with \(\Omega^{+}_{0}\) being the multiply connected solid electrolyte subdomain, \(\Omega^{\mathrm{-c}}_{0i},\Omega^{\mathrm{-a}}_{0j}\) being the cathode/anode particles and \(\Gamma^{\mathrm{c}}_{0i},\Gamma^{\mathrm{a}}_{0j}\) being the corresponding cathode/anode-electrolyte
interfaces. Additional subdomains of binders and current collectors will be made for numerical examples, but we avoid the tedious details here in the interest of brevity. Additionally, since the mathematical development is partly agnostic to the distinction between cathode and anode, we will use \(\Omega_{0}^{-}\) for an active particle and \(\Gamma_{0}\) as its interface, wherever the difference is inconsequential.
### Coupling conditions and governing equations
#### 2.1.1 Lithiation, intercalation and kinematics
The solid electrolyte, \(\Omega^{+}\), hosts Li\({}^{+}\) cations, and the active particles, \(\Omega^{-}\), are intercalated by Li. The discharge/charge reactions at the interfaces \(\Gamma_{0}\) are:
\[\text{Discharge:}\quad\text{Li}\to\text{Li}^{+}+\text{e}^{-}\text{ at }\Gamma^{\text{a}},\qquad\text{Li}^{+}+\text{e}^{-}\to\text{Li at }\Gamma^{\text{c}} \tag{1a}\] \[\text{Charge:}\quad\text{Li}^{+}+\text{e}^{-}\to\text{Li at }\Gamma^{\text{a}},\qquad\text{Li}\to\text{Li}^{+}+\text{e}^{-}\text{ at }\Gamma^{\text{c}} \tag{1b}\]
Lithium intercalation causes lattice expansion or contraction depending on the active particle chemistry. This chemically driven deformation must be incorporated with the total deformation gradient, \(\mathbf{F}=\partial\mathbf{\varphi}/\partial\mathbf{X}=\mathbf{1}+\partial\mathbf{u}/\partial\mathbf{X}\), where \(\mathbf{X}\) is the reference position, \(\mathbf{\varphi}\) is the deformation and \(\mathbf{u}\) is the displacement field. The multiplicative decomposition \(\mathbf{F}=\mathbf{F}^{\text{e}}\mathbf{F}^{\text{c}}\) achieves this via the elastic and chemical parts of the deformation gradient \(\mathbf{F}^{\text{e}}\) and \(\mathbf{F}^{\text{c}}\), respectively. Here, we will consider intercalation strain in the active particles only, and \(\mathbf{F}^{\text{c}}\) will be a function of the Li molar concentration \(c_{\text{Li}}\), which is defined on the deformed configuration of active particles, \(\Omega^{-}=\mathbf{\varphi}(\Omega_{0}^{-})\). Cation molar concentrations \(c_{\text{Li}^{+}}\) are defined on the deformed configuration of the electrolyte \(\Omega^{+}=\mathbf{\varphi}(\Omega_{0}^{+})\). The corresponding fluxes are \(\mathbf{j}_{\text{Li}}\) and \(\mathbf{j}_{\text{Li}^{+}}\) on \(\Omega^{-}\) and \(\Omega^{+}\), respectively. For consistency with the preceding treatment, the electric potential will be denoted by \(\phi_{\text{e}}\) in the solid electrolyte \(\Omega^{+}\), and \(\phi_{\text{p}}\) in the active particle subdomains \(\Omega^{-}\), respectively. In what follows, the electro-chemical governing equations will be posed in the deformed configurations \(\Omega^{\pm}\) with interfaces \(\Gamma^{\text{a}},\Gamma^{\text{c}}\), while those for mechanics will be in the reference configuration \(\Omega_{0}\) with interfaces \(\Gamma^{\text{a}}_{0},\Gamma^{\text{c}}_{0}\). Transformations will be invoked only as needed, and not broadly.
#### 2.1.2 Mass and charge transport
In the active particles, \(\Omega^{-}\), Li transport reduces to a conservation equation
\[\frac{\partial c_{\text{Li}}}{\partial t}+\nabla\cdot\mathbf{j}_{\text{Li}}=0 \qquad\text{with}\qquad\mathbf{j}_{\text{Li}}=-D_{\text{Li}}\nabla c_{\text{Li}} \quad\text{in }\Omega^{-} \tag{2}\]
where \(D_{\text{Li}}\) is the diffusivity.
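As a minimal illustration of how the conservation equation (2) behaves, the sketch below advances its 1D Fickian limit with an explicit finite-difference scheme; the diffusivity, slab size and boundary values are assumed for illustration only and do not correspond to a specific chemistry.

```python
import numpy as np

# Minimal 1D illustration of the Li conservation equation (2),
# dc/dt = D * d^2c/dx^2, with an explicit FTCS update, a fixed surface
# concentration at the left face, and a zero-flux right end.
D_Li = 1e-13          # m^2/s, assumed diffusivity
L, n = 1e-6, 101      # 1-um particle slab, grid points
dx = L / (n - 1)
dt = 0.4 * dx**2 / D_Li   # stable: dt <= dx^2 / (2 D)

c = np.zeros(n)
c[0] = 1.0            # lithiated at the left face (arbitrary units)
for _ in range(2000):
    lap = np.zeros(n)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    lap[-1] = 2.0 * (c[-2] - c[-1]) / dx**2   # zero-flux (mirror) right end
    c += dt * D_Li * lap
    c[0] = 1.0        # hold surface concentration

print(f"centre concentration after {2000*dt:.2e} s: {c[n//2]:.3f}")
```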
The Li\({}^{+}\) cations are also governed by a conservation equation over \(\Omega^{+}\) that has the same form as (2):
\[\frac{\partial c_{\text{Li}^{+}}}{\partial t}+\nabla\cdot\mathbf{j}_{ \text{Li}^{+}} =0\qquad\text{with}\qquad\mathbf{j}_{\text{Li}^{+}}=-D_{\text{Li}^{+}} \nabla c_{\text{Li}^{+}}+\frac{t_{+}}{F}\mathbf{i}^{+}\quad\text{in }\Omega^{+}, \tag{3a}\] \[\mathbf{j}_{\text{Li}^{+}}\cdot\mathbf{n}^{+} =0,\quad\text{on }\partial\Omega^{+}\backslash\Gamma\] (3b) \[-\mathbf{i}^{+}\cdot\mathbf{n}^{+} =i_{\text{ext}}\quad\text{on }\partial\Omega^{+}\backslash\Gamma \tag{3c}\]
where \(D_{\text{Li}^{+}}\) is the cation diffusivity, \(t_{+}\) is the transference number (the fraction of the total current carried by the cations) and \(F\) is the Faraday constant. The current is given by
\[\mathbf{i}^{+}=-\kappa_{\text{e}}\nabla\phi_{\text{e}}-\frac{2R\theta\kappa_{\text{e}}}{F}(1-t_{+})\nabla\ln c_{\text{Li}^{+}} \tag{4}\]
where \(\kappa_{\text{e}}\) is the electrolyte's conductivity, \(R\) is the universal gas constant and \(\theta\) is the temperature. Equation (4) corresponds to the general form [4] reduced to a single charged species, dilute in terms of \(c_{\text{Li}^{+}}\).
We turn to the question of boundary conditions for the Li transport equation (2) over \(\Omega^{-}\). Instead of boundary conditions, interface conditions hold on \(\Gamma=\partial\Omega^{-}\), for Li\({}^{+}\) transport, and are discussed below. However, the vanishing flux boundary condition (3b) and the current continuity boundary condition (3c) hold on \(\partial\Omega^{+}\backslash\Gamma\); this is the external boundary of the electrolyte where it connects to the current collector.
#### 2.1.3 Electrostatics
The electric potential in the active particles is governed by Ohm's law-alternately Gauss' law subject to the electroneutrality condition:
\[\nabla\cdot(-\kappa_{\text{p}}\nabla\phi_{\text{p}})=0\quad\text{in }\Omega^{-} \tag{5}\]
where \(\kappa_{\text{p}}\) is the active particle's conductivity. The electric potential in the electrolyte also satisfies the electroneutrality condition:
\[\nabla\cdot\left(-\kappa_{\text{e}}\nabla\phi_{\text{e}}-\frac{2R \theta\kappa_{\text{e}}}{F}(1-t_{+})\nabla\ln c_{\text{Li+}}\right) =0\quad\text{in}\ \Omega^{+} \tag{6a}\] \[-\left(-\kappa_{\text{e}}\nabla\phi_{\text{e}}-\frac{2R\theta \kappa_{\text{e}}}{F}(1-t_{+})\nabla\ln c_{\text{Li+}}\right)\cdot\mathbf{n}^{+} =i_{\text{ext}}\quad\text{on}\ \partial\Omega^{+} \tag{6b}\]
As is the case for mass transport over \(\Omega^{-}\), boundary conditions on (5) are replaced by interface conditions on \(\Gamma=\partial\Omega^{-}\). These will involve \(\mathbf{j}^{-},\mathbf{j}^{+}\) and depend on \(\phi_{\text{p}},\phi_{\text{e}}\), thus providing the additional condition on \(\phi_{\text{e}}\) needed at \(\Gamma\) and coupling the mass/charge transport and electrostatic equations.
#### 2.1.4 Chemo-mechanics driven by finite intercalation strains
Here, we restrict our model chemistries to those in which intercalation strain arises only in the active particles. The kinematics of intercalation strain are modelled by a multiplicative decomposition of the deformation gradient. This is a treatment that is common to models of chemically induced strain such as from thermal oxidation of silicon [19, 21], lithium intercalation in liquid and solid electrolyte batteries [30, 31, 4, 5, 6, 7, 8, 9, 10], as well as to phenomena of biological growth [32, 33, 34]. The multiplicative decomposition is introduced as:
\[\mathbf{F} =\mathbf{F}^{\text{e}}\mathbf{F}^{\text{c}} \tag{7a}\] \[\mathbf{F}^{\text{c}} =\left(g(c_{\text{Li}})\right)^{1/3}\mathbf{1}. \tag{7b}\]
where \(\mathbf{F}^{\text{c}}\) is the chemical component, which in general introduces incompatibility to the deformation gradient field, and the intercalation function \(g(c_{\text{Li}})\) is specified below. Compatibility is restored by the elastic component of the deformation gradient, \(\mathbf{F}^{\text{e}}\), which is also incompatible in general. The multiplicative decomposition is local by definition, as suggested by its illustration for a neighborhood in Figure 1. Generally non-uniform fields \(c_{\text{Li}}\) introduce inhomogeneous \(\mathbf{F}^{\text{c}}\). For chemo-mechanics of hyperelastic solids, the stresses depend on \(\mathbf{F}^{\text{e}}\) in an objective manner. The strain energy density function is a component of the free energy density: \(\psi_{\text{m}}(\mathbf{F}^{\text{e}})=\widehat{\psi}_{\text{m}}(\mathbf{E}^{\text{e}})\) for the elastic Green-Lagrange strain tensor \(\mathbf{E}^{\text{e}}=\frac{1}{2}(\mathbf{F}^{\text{e}^{\text{T}}}\mathbf{F}^{\text{e}}-\mathbf{1})\). Invoking the right Cauchy-Green tensor \(\mathbf{C}=\mathbf{F}^{\text{T}}\mathbf{F}\) we also have \(\mathbf{E}^{\text{e}}=\frac{1}{2}(\mathbf{F}^{\text{c}^{-\text{T}}}\mathbf{C}\mathbf{F}^{\text{c}^{-1}}-\mathbf{1})\). We therefore write \(\psi_{\text{m}}=\widehat{\psi}_{\text{m}}(\mathbf{C},c_{\text{Li}})\), making the chemo-mechanical coupling clear in the strain energy density.
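The split of (7a)-(7b) is straightforward to evaluate numerically. The sketch below computes \(\mathbf{F}^{\text{e}}\) and \(\mathbf{E}^{\text{e}}\) from a given total deformation gradient and concentration; the linear swelling function \(g\) used here is an assumption standing in for the intercalation function specified in the paper.

```python
import numpy as np

# Sketch of the elasto-chemical split F = F^e F^c with a spherical
# chemical part F^c = g(c_Li)^(1/3) * I, Eqs. (7a)-(7b). The swelling
# function g is an assumed linear model, g = 1 + beta * c/c_max.
def elastic_strain(F, c_Li, c_max=1.0, beta=0.1):
    g = 1.0 + beta * c_Li / c_max
    Fc = g ** (1.0 / 3.0) * np.eye(3)
    Fe = F @ np.linalg.inv(Fc)                 # F^e = F (F^c)^-1
    Ee = 0.5 * (Fe.T @ Fe - np.eye(3))         # elastic Green-Lagrange strain
    return Fe, Ee

# Example with a small random perturbation of the identity as F.
F = np.eye(3) + 0.02 * np.random.default_rng(0).standard_normal((3, 3))
Fe, Ee = elastic_strain(F, c_Li=0.5)
print("tr(E^e) =", np.trace(Ee))
```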
The first Piola-Kirchhoff stress is \(\mathbf{P}=\mathbf{F}^{\text{e}}(\partial\widehat{\psi}_{\text{m}}/\partial\mathbf{E}^{\text{e}})\mathbf{F}^{\text{c}^{-\text{T}}}\) and satisfies the equilibrium equation with boundary conditions
\[\mathrm{DIV}[\mathbf{P}]=\mathbf{0} \text{in}\ \Omega_{0} \tag{8a}\] \[\mathbf{u}=\bar{\mathbf{u}} \text{on}\ \partial\Omega_{0u}\] (8b) \[\mathbf{P}\mathbf{N}=\mathbf{T} \text{on}\ \partial\Omega_{0P} \tag{8c}\]
Here, Dirichlet boundary conditions on \(\mathbf{u}\) are \(\bar{\mathbf{u}}=\mathbf{0}\) applied where the current collector connects to the solid electrolyte or an active particle. Neumann boundary conditions \(\mathbf{T}=\mathbf{0}\) are applied on lateral surfaces of the configurations presented in Section 6. The above description holds in the absence of interfaces, the case of interest, which will be developed in Section 3.2.
### Electro-chemo-mechanical coupling in the free energy
The free energy density defined on \(\Omega_{0}\) has contributions from the electrostatic displacement and the Li/Li\({}^{+}\) concentrations in addition to its mechanical component from the strains. The coupled electro-chemo-mechanics is a consequence and is reflected in governing equations and constitutive relations. We begin by writing
\[\psi=\psi_{\text{e}}+\psi_{\text{c}}+\psi_{\text{m}} \tag{9}\]
for electrostatic and chemical components \(\psi_{\text{e}}\) and \(\psi_{\text{c}}\), respectively.
For the electrostatic component we write
\[\psi_{\text{e}}=\widehat{\psi}_{\text{e}}(\mathbb{D})=\frac{1}{2\epsilon_{0}} \mathbb{D}\cdot\mathbf{\epsilon}_{r}^{\pm^{-1}}\mathbb{D} \tag{10}\]
where \(\epsilon_{0}\) is the permittivity of vacuum, \(\mathbf{\epsilon}_{r}^{\pm}\) is the relative permittivity of the active particle/electrolyte and \(\mathbb{D}\) is the electric displacement vector on \(\Omega_{0}\). The electric field on \(\Omega_{0}\) is
\[\mathbb{E}=\frac{\partial\psi_{\text{e}}}{\partial\mathbb{D}}. \tag{11}\]
The electric field on \(\Omega\) is \(\mathbf{e}=\mathbf{F}^{-\text{T}}\mathbb{E}\) and satisfies \(\mathbf{e}_{\text{e/p}}=-\nabla\phi_{\text{e/p}}\). Here we solve the governing electrostatics directly in terms of \(\phi_{\pm}\) in (5) and (6a) by defining the current:
\[\mathbf{i}^{-}=-\kappa_{\text{p}}\nabla\phi_{\text{p}},\quad-\nabla \cdot\mathbf{i}^{-}=0 \tag{12a}\] \[\mathbf{i}^{+}=-\kappa_{\text{e}}\nabla\phi_{\text{e}}-\frac{2R\theta \kappa_{\text{e}}}{F}(1-t_{+})\nabla\ln c_{\text{Li}+},\quad-\nabla\cdot\mathbf{i} ^{+}=0 \tag{12b}\]
Equations (10)-(12b) relate the electrostatic governing equations to the electrostatic free energy density. Furthermore, (12a), (12b) can be obtained as the Euler-Lagrange equations arising from the extremization of \(\widehat{\psi}_{\text{e}}(\mathbb{D})\).
The chemical component of the free energy is
\[\psi_{\text{c}}=\widehat{\psi}_{\text{c}}(c_{\text{Li}},c_{\text{Li}^{+}})=\text{det}[\mathbf{F}]c_{\text{Li}}\mu_{\text{Li}}^{\text{ref}}+R\theta\int\limits_{c_{\text{Li}}^{\text{ref}}}^{c_{\text{Li}}}\ln\left(\frac{c_{\text{Li}}}{c_{\text{Li}}^{\text{ref}}}\right)\text{d}c_{\text{Li}}+\text{det}[\mathbf{F}]c_{\text{Li}^{+}}\mu_{\text{Li}^{+}}^{\text{ref}}+R\theta\int\limits_{c_{\text{Li}^{+}}^{\text{ref}}}^{c_{\text{Li}^{+}}}\ln\left(\frac{c_{\text{Li}^{+}}}{c_{\text{Li}^{+}}^{\text{ref}}}\right)\text{d}c_{\text{Li}^{+}} \tag{13}\]
where \(\mu_{\text{Li}}^{\text{ref}}\) and \(\mu_{\text{Li}^{+}}^{\text{ref}}\) are molar reference chemical potentials that are independent of \(c_{\text{Li}},c_{\text{Li}^{+}},\phi_{\pm},\mathbf{F}\). The chemical potentials of Li and Li\({}^{+}\) have contributions \(\mu_{c_{\text{Li}}}=\partial\psi_{\text{c}}/\partial c_{\text{Li}}\) and \(\mu_{c_{\text{Li}^{+}}}=\partial\psi_{\text{c}}/\partial c_{\text{Li}^{+}}\):

\[\mu_{c_{\text{Li}}} =\text{det}[\mathbf{F}]\mu_{\text{Li}}^{\text{ref}}+R\theta\ln\left(\frac{c_{\text{Li}}}{c_{\text{Li}}^{\text{ref}}}\right) \tag{14a}\] \[\mu_{c_{\text{Li}^{+}}} =\text{det}[\mathbf{F}]\mu_{\text{Li}^{+}}^{\text{ref}}+R\theta\ln\left(\frac{c_{\text{Li}^{+}}}{c_{\text{Li}^{+}}^{\text{ref}}}\right) \tag{14b}\]
of which, the first sub-equation yields the form of the Fickian diffusion term in (2) for the dilute limit of \(c_{\text{Li}}\). For the solid electrolyte, we introduce the transference number, \(t_{+}\), representing the fraction of current carried by Li\({}^{+}\) in the absence of diffusion. Some algebra brings us to the form of the Li\({}^{+}\) flux in (3a), which is in agreement with the treatment in [4].
The forms of the equations (2) and (3a) are obtained from "purely electro-chemical" contributions to \(\mu_{c_{\text{Li}}}\) and \(\mu_{c_{\text{Li}^{+}}}\). Chemo-mechanical coupling furnishes a further driving force, whose form depends on the strain energy density function. Here, we use the St. Venant-Kirchhoff model for the solid electrolyte and active particles, with Lamé constant \(\lambda_{\text{e/p}}\) and shear modulus \(G_{\text{e/p}}\)
\[\widehat{\psi}_{\text{m}}(\mathbf{C},c_{\text{Li}})=\frac{1}{2}\lambda_{\text{e/p }}\left(\text{tr}\left[\mathbf{E}^{\text{e}}\right]\right)^{2}+G_{\text{e/p}}\mathbf{E }^{\text{e}}\colon\mathbf{E}^{\text{e}}\]
which on accounting for the elasto-chemo decomposition of \(\mathbf{F}\) in (7a) and (7b) yields the following form in the active particles:
\[\widehat{\psi}_{\text{m}}^{-}(\mathbf{C},c_{\text{Li}})=\frac{1}{2}\lambda_{\text{p }}\left(\frac{\left(g(c_{\text{Li}})\right)^{-2/3}}{2}\text{tr}\left[\mathbf{C} \right]-\frac{3}{2}\right)^{2}+G_{\text{p}}\left[\frac{1}{2}(\left(g(c_{\text{ Li}})\right)^{-2/3}\mathbf{C}-\mathbf{1})\right]\colon\,\left[\frac{1}{2}( \left(g(c_{\text{Li}})\right)^{-2/3}\mathbf{C}-\mathbf{1})\right]. \tag{15}\]
The total chemical potential in the active particles therefore is
\[\mu_{c_{\text{Li}}}^{-}(\mathbf{C},c_{\text{Li}})=\text{det}[\mathbf{F}]\mu_{\text{Li}}^{\text{ref}}+R\theta\ln\left(\frac{c_{\text{Li}}}{c_{\text{Li}}^{\text{ref}}}\right)+\left(\lambda_{\text{e/p}}+\frac{2G_{\text{e/p}}}{3}\right)\left(\frac{\left(g(c_{\text{Li}})\right)^{-2/3}}{2}\text{tr}\left[\mathbf{C}\right]-\frac{3}{2}\right)\left(-\frac{1}{3}\left(g(c_{\text{Li}})\right)^{-4/3}\text{tr}\left[\mathbf{C}\right]\right), \tag{16}\]
making explicit the chemo-mechanical coupling in terms of \(\mathbf{C}\) and \(c_{\text{Li}}\). A more transparent form, which can be arrived at via elementary tensor calculus and by mapping of stress measures between \(\Omega_{0}\) and \(\Omega\) is
\[\mu_{c_{\text{Li}}}^{-}(\mathbf{C},c_{\text{Li}})=\text{det}[\mathbf{F}]\mu_{\text{Li}}^{\text{ref}}+R\theta\ln\left(\frac{c_{\text{Li}}}{c_{\text{Li}}^{\text{ref}}}\right)-\frac{1}{3}\text{det}[\mathbf{F}]\left(g(c_{\text{Li}})\right)^{-2/3}\text{tr}[\mathbf{\sigma}], \tag{17}\]
where \(\mathbf{\sigma}\) is the Cauchy stress.
The Li flux in active particles is
\[\mathbf{j}_{\text{Li}}=-M_{\text{Li}}c_{\text{Li}}\left(1-\frac{c_{\text{Li}}}{c_{\text{Li}}^{\text{max}}}\right)\nabla\mu_{c_{\text{Li}}}^{-}\]
where \(M_{\text{Li}}c_{\text{Li}}(1-c_{\text{Li}}/c_{\text{Li}}^{\text{max}})\) is the mobility. For dilute conditions, \(c_{\text{Li}}\ll c_{\text{Li}}^{\text{max}}\), we have
\[\mathbf{j}_{\text{Li}}=-M_{\text{Li}}R\theta\nabla c_{\text{Li}}-M_{\text{Li}}c_{ \text{Li}}\nabla\left(\text{det}[\mathbf{F}]\left(g(c_{\text{Li}})\right)^{-2/3} \frac{1}{3}\text{tr}[\mathbf{\sigma}]\right), \tag{18}\]
where \(M_{\text{Li}}R\theta=D_{\text{Li}}\). Noting that \(\frac{1}{3}\text{tr}[\mathbf{\sigma}]\) is the hydrostatic stress, the traditional form of pressure gradient-driven mass transport is seen in the flux relation. This constitutive model satisfies the dissipation inequality; see Ganser et al. [4] for a comprehensive treatment, which we do not revisit here.
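A minimal sketch of evaluating the stress-coupled chemical potential of Eq. (17) follows; the swelling function \(g\), the reference potential and the stress state are illustrative assumptions, and the expression is transcribed directly from the printed form of (17).

```python
import numpy as np

# Sketch of the stress-coupled chemical potential of Eq. (17):
# mu = det(F)*mu_ref + R*theta*ln(c/c_ref) - (1/3)*det(F)*g(c)^(-2/3)*tr(sigma).
# The linear swelling model g and all parameter values are assumptions.
R, theta = 8.314, 298.0        # J/(mol K), K

def mu_Li(c, sigma, F, mu_ref=0.0, c_ref=1.0, beta=0.1, c_max=1.0):
    g = 1.0 + beta * c / c_max
    J = np.linalg.det(F)
    return (J * mu_ref
            + R * theta * np.log(c / c_ref)
            - (1.0 / 3.0) * J * g ** (-2.0 / 3.0) * np.trace(sigma))

sigma = np.diag([50e6, 50e6, 50e6])   # assumed 50 MPa hydrostatic tension
print(mu_Li(c=0.5, sigma=sigma, F=np.eye(3)))
```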
## 3 Continuum treatment of interfaces
The treatment of interfaces in continuum physics is natural, especially when approached in the integral form of the governing equations [35, 36]. We lay out the equations in weak form beginning with mass balance.
### Mass balance for a scalar field in the presence of an interface
For mass balance of a single component whose concentration is denoted by \(c(\mathbf{x},t)\) the strong form in the presence of an interface \(\Gamma\subset\Omega\) is:
\[\begin{split}\frac{\partial c}{\partial t}+\nabla\cdot\mathbf{j}=0&\text{in}\quad\Omega\backslash\Gamma,\\ c(\mathbf{x},0)=\bar{c}_{0}(\mathbf{x})&\text{on}\quad\Omega\backslash\Gamma,\\ c(\mathbf{x},t)=\bar{c}(\mathbf{x},t)&\text{on}\quad\partial\Omega^{c}\times[0,T],\\ -\mathbf{j}\cdot\mathbf{n}=\widetilde{j}(\mathbf{x},t)&\text{on}\quad\partial\Omega^{j}\times[0,T].\end{split} \tag{19}\]
Additionally, we allow for interface reactions on \(\Gamma\) with a rate \(R(\mathbf{x},c^{+},c^{-})\).
#### 3.1.1 The weak form of scalar transport equations with an interface
The weak form of mass balance equation is written as: Given \(w\in\mathcal{V}\) find \(c\in\mathcal{S}\) such that
\[\int_{\Omega}w\frac{\partial c}{\partial t}dV=\int_{\Omega}\nabla w\cdot\mathbf{j}dV+\int_{\partial\Omega}w\widetilde{j}dS-\int_{\Gamma}w\llbracket\mathbf{j}\cdot\mathbf{n}\rrbracket\mathrm{d}S \tag{20}\]

where the flux discontinuity is \(\llbracket\mathbf{j}\cdot\mathbf{n}\rrbracket=\mathbf{j}^{+}\cdot\mathbf{n}^{+}+\mathbf{j}^{-}\cdot\mathbf{n}^{-}\). We recall the important technical point that the integral over \(\Omega\) is strictly over \(\Omega^{+}\cup\Omega^{-}=\Omega\backslash\Gamma\). However for regular (non-singular) integrands, the integrals over either domain are equal since \(\Gamma\) is a set of zero measure in \(\mathbb{R}^{3}\). Here, we are interested in interface reactions of the form introduced above, that drive the flux discontinuity:
\[[\mathbf{j}\cdot\mathbf{n}]=R(\mathbf{x},c^{+},c^{-}). \tag{21}\]
The flux continuity condition across \(\Gamma\) is \(\llbracket\mathbf{j}\cdot\mathbf{n}\rrbracket=0\) for \(R=0\). Since Equations (20) and (21) are restricted to being only in terms of \(c\), we have not introduced additional fields in the functional dependence of \(R\). An extension of that functional dependence is natural for electro-chemical coupling and is considered below. We first rewrite (20) as weak forms over \(\Omega^{+}\) and \(\Omega^{-}\), specifying that the transported species in an active particle, \(\Omega^{-}\), is Li with concentration \(c_{\text{Li}}\), and in the solid electrolyte, \(\Omega^{+}\), it is Li\({}^{+}\) with concentration \(c_{\text{Li}^{+}}\). We have [31]:
\[\begin{split}\int_{\Omega^{-}}w\frac{\partial c_{\text{Li}}}{ \partial t}dV&=\int_{\Omega^{-}}\nabla w\cdot\mathbf{j}_{\text{Li}} dV-\int_{\partial\Omega^{-}}w\mathbf{j}_{\text{Li}}\cdot\mathbf{n}dS-\int_{ \Gamma}w\mathbf{j}_{\text{Li}}\cdot\mathbf{n}^{+}dS\\ \int_{\Omega^{+}}w\frac{\partial c_{\text{Li}^{+}}}{\partial t} dV&=\int_{\Omega^{+}}\nabla w\cdot\mathbf{j}_{\text{Li}^{+}} dV-\int_{\partial\Omega^{+}}w\mathbf{j}_{\text{Li}^{+}}\cdot\mathbf{n}dS-\int_{ \Gamma}w\mathbf{j}_{\text{Li}^{+}}\cdot\mathbf{n}^{-}dS,\end{split} \tag{22}\]
where the fluxes \(\mathbf{j}_{\text{Li}}\) and \(\mathbf{j}_{\text{Li}^{+}}\) satisfy (2), (3a) and (4). We similarly rewrite the interface reaction equation:
\[\mathbf{j}_{\text{Li}^{+}}\cdot\mathbf{n}^{+}+\mathbf{j}_{\text{Li}}\cdot\mathbf{n}^{-}=R(\mathbf{ x},c^{+},c^{-})\text{ on }\Gamma. \tag{23}\]
#### 3.1.2 The embedded interface treatment
The mathematical treatment of the interface is central to this work. We use finite element methods, and Figure 2 illustrates a feature-conforming finite element mesh for circular particles. While following the interface is relatively easy for simple geometries, arbitrary particle shapes lead to increasingly complicated meshes. Here, rather than adopt
conforming meshes to resolve active particle geometries, we represent the particle-electrolyte interface, \(\Gamma\), as the zero levelset of a scalar function, \(\eta(\mathbf{x})\), against a background Cartesian mesh, as also illustrated in Figure 2. Thus, the following will be implied whenever \(\Gamma\) is referred to, especially in the mesh-based context:
\[\mathbf{x}\in\Gamma,\quad\forall\mathbf{x}\text{ s.t. }\eta(\mathbf{x})=0 \tag{24}\]
An element \(\Omega_{e}\) that is intersected by a subset of the interface \(\Gamma_{e}\subset\Gamma\) has subsets \(\Omega_{e}^{+}\) and \(\Omega_{e}^{-}\) such that \(\overline{\Omega_{e}}=\overline{\Omega_{e}^{+}\cup\Omega_{e}^{-}\cup\Gamma_{e}}\). Furthermore the fluxes local to this element admit jumps
\[\llbracket\mathbf{j}_{\text{Li}^{+}}\rrbracket =\mathbf{j}_{\text{Li}^{+}}^{+}\cdot\mathbf{n}^{+}+\mathbf{j}_{\text{Li}^{+}}^{-}\cdot\mathbf{n}^{-}\text{ on }\Gamma,\quad\text{s.t. }\mathbf{j}_{\text{Li}^{+}}^{-}=\mathbf{0}\text{ in }\Omega_{e}^{-}, \tag{25a}\] \[\llbracket\mathbf{j}_{\text{Li}}\rrbracket =\mathbf{j}_{\text{Li}}^{+}\cdot\mathbf{n}^{+}+\mathbf{j}_{\text{Li}}^{-}\cdot\mathbf{n}^{-}\text{ on }\Gamma,\quad\text{s.t. }\mathbf{j}_{\text{Li}}^{+}=\mathbf{0}\text{ in }\Omega_{e}^{+}. \tag{25b}\]

The above equations mean that the flux of Li\({}^{+}\) vanishes in the particle and the flux of Li vanishes in the electrolyte. However, the respective quantities \(\mathbf{j}_{\text{Li}^{+}}^{-}\) and \(\mathbf{j}_{\text{Li}}^{+}\) must be represented in \(\Omega_{e}^{-}\) and \(\Omega_{e}^{+}\). This also implies that the concentrations \(c^{+}\) and \(c^{-}\) must be represented in \(\Omega_{e}^{-}\) and \(\Omega_{e}^{+}\), respectively, and that these fields also suffer discontinuities

\[\llbracket c_{\text{Li}^{+}}\rrbracket =c_{\text{Li}^{+}}^{+}-c_{\text{Li}^{+}}^{-}\text{ on }\Gamma,\quad\text{s.t. }c_{\text{Li}^{+}}^{-}=0\text{ in }\Omega_{e}^{-}, \tag{26a}\] \[\llbracket c_{\text{Li}}\rrbracket =c_{\text{Li}}^{+}-c_{\text{Li}}^{-}\text{ on }\Gamma,\quad\text{s.t. }c_{\text{Li}}^{+}=0\text{ in }\Omega_{e}^{+}. \tag{26b}\]
The combination of (23), with jump conditions (25a) and (25b), is to be imposed on \(\Gamma\cap\Omega_{e}\). For brevity we introduce the notation \(\Omega_{e}^{\Gamma}\) for an element that satisfies \(\Gamma\cap\Omega_{e}\neq\emptyset\).
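The bookkeeping behind \(\Omega_{e}^{\Gamma}\) is simple to implement. The following is a minimal sketch, assuming a circular particle defined by a signed-distance levelset on a uniform Cartesian grid: an element is flagged as cut by \(\Gamma\) when \(\eta\) changes sign over its corner nodes.

```python
import numpy as np

# Sketch of the embedded-interface bookkeeping of Eq. (24): a circular
# particle is defined by the zero levelset of eta(x) = |x - x0| - r on a
# uniform Cartesian grid; an element is an interface element Omega_e^Gamma
# when eta changes sign over its corner nodes. Geometry is assumed.
nx = ny = 20
xs = np.linspace(0.0, 1.0, nx + 1)
ys = np.linspace(0.0, 1.0, ny + 1)
eta = lambda x, y: np.hypot(x - 0.5, y - 0.5) - 0.3   # particle of radius 0.3

cut_elements = []
for i in range(nx):
    for j in range(ny):
        corners = [eta(xs[i + a], ys[j + b]) for a in (0, 1) for b in (0, 1)]
        if min(corners) < 0.0 < max(corners):          # sign change -> cut by Gamma
            cut_elements.append((i, j))

print(f"{len(cut_elements)} of {nx*ny} elements intersect Gamma")
```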
In this work we treat the above discontinuous fields by the strong discontinuity approach, which has been used in reaction-transport problems previously [19, 21]. Originally introduced for displacement discontinuities in inelastic solids [18, 37, 38, 23, 39, 40], here we apply it more widely to the reaction-transport, nonlinear elastic fracture, as well as electrostatic problems, that is, to all of the electro-chemo-mechanics of solid state batteries. Starting with the reaction-transport problem we consider the treatment of a concentration field, \(c\), admitting a discontinuity:
\[c=\bar{c}+\llbracket c\rrbracket,\quad\text{where }\llbracket c\rrbracket =H_{\Gamma}\xi \tag{27}\]
with \(\bar{c}\) being the continuous component, \(H_{\Gamma}\) the Heaviside on \(\Gamma\) defined by
\[H_{\Gamma}(\mathbf{x})=\begin{cases}1&\text{if }\mathbf{x}\in\ \Omega^{+}\\ 0&\text{if }\mathbf{x}\in\ \Omega^{-}\end{cases} \tag{28}\]
and \(\xi\) a scalar. Equation (27) can be rewritten over \(\Omega_{e}^{\Gamma}\) as
\[c=\bar{c}+M_{\Gamma}\xi,\quad\text{in }\Omega_{e}^{\Gamma} \tag{29}\]
Figure 2: Left: conforming finite element mesh resolving an idealized two-dimensional particle geometry; right: uniform Cartesian mesh, with the interface \(\Gamma\) passing through the elements \(\Omega_{e_{1}}^{\Gamma},\Omega_{e_{2}}^{\Gamma},\dots\).
where \(\bar{c}\) is the representation from continuous basis functions and \(M_{\Gamma}\) satisfies
\[M_{\Gamma}(\mathbf{x})=H_{\Gamma}(\mathbf{x})-\chi(\mathbf{x}). \tag{30}\]
This accounts for the difference between the true discontinuity \(H_{\Gamma}(\mathbf{x})\) and its approximation in the continuous basis \(\chi(\mathbf{x})\). The latter function has the property
\[\chi(\mathbf{x})=\begin{cases}1&\text{if }\mathbf{x}\in\ \Omega^{+}\backslash\Omega_{e}^{ \Gamma}\\ 0&\text{if }\mathbf{x}\in\ \Omega^{-}\backslash\Omega_{e}^{\Gamma},\end{cases} \tag{31}\]
which enforces \(M_{\Gamma}(\mathbf{x})=0\) in elements that do not intersect \(\Gamma\), and is only non-zero in elements intersecting \(\Gamma\).
In the simplest case, which we adopt here, \(\llbracket c(\mathbf{x})\rrbracket\) does not vary along \(\Gamma\) within \(\Omega_{e}^{\Gamma}\). See Fig. 3 for an illustration of this case. The treatment here follows the non-conforming finite element approach where variations along \(\Gamma\) are obtained by \(\xi\) taking on different values in adjacent elements \(\Omega_{e_{1}}^{\Gamma}\) and \(\Omega_{e_{2}}^{\Gamma}\) both of which have non-empty intersections with \(\Gamma\). Then restricting to Lagrange polynomial basis functions \(N^{A}(\mathbf{x})\) that have the Kronecker-delta property \(N^{A}(\mathbf{x}_{B})=\delta_{AB}\) at finite element nodes placed at \(\mathbf{x}_{B}\in\Omega_{e}^{\Gamma}\) a convenient representation for \(\chi(\mathbf{x})\) is
\[\chi(\mathbf{x})=\sum_{A,\ \text{s.t. }\mathbf{x}_{A}\in\Omega_{e}^{\Gamma^{+}}}N^{A}(\mathbf{x}), \tag{32}\]

where \(\Omega_{e}^{\Gamma^{+}}\) is the subdomain into which \(\mathbf{n}^{+}\) points.
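To make the construction of \(M_{\Gamma}\) concrete, the sketch below evaluates Eqs. (28)-(32) on a single 1D linear element cut at an assumed location \(x=0.4\); it shows that \(M_{\Gamma}\) vanishes at the element nodes while carrying a unit jump across \(\Gamma\), so the enrichment does not disturb the nodal values of the continuous interpolant.

```python
import numpy as np

# Sketch of the discontinuous basis M_Gamma = H_Gamma - chi of Eq. (30) on a
# single 1D linear element [0, 1] cut by Gamma at an assumed x = 0.4, with
# Omega^+ on the right. Per Eq. (32), chi is the Lagrange shape function of
# the node on the Omega^+ side, here N(x) = x for the node at x = 1.
x = np.linspace(0.0, 1.0, 11)
x_gamma = 0.4
H = (x > x_gamma).astype(float)     # Heaviside: 1 in Omega^+, 0 in Omega^-
chi = x                             # shape function of the x = 1 node
M = H - chi

# The enriched field c = c_bar + M * xi jumps by xi across Gamma while
# leaving the nodal values of the continuous interpolant c_bar untouched.
for x_, M_ in zip(x, M):
    print(f"x = {x_:.1f}  M_Gamma = {M_:+.2f}")
```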
#### 3.1.3 The weak forms for Li transport with reactions across a particle-electrolyte interface
The treatment in terms of weak forms in Section 3.1.1 is made more specific for Li\({}^{+}\) transport through the electrolyte, reaction with electrons \(e^{-}\) at the particle-electrolyte interface and transport of Li through the particle:
\[\int_{\Omega^{-}}w\frac{\partial c_{\text{Li}}}{\partial t}dV =\int_{\Omega^{-}}\nabla w\cdot\mathbf{j}_{\text{Li}}dV-\int_{\partial\Omega^{-}}w\mathbf{j}_{\text{Li}}\cdot\mathbf{n}dS-\int_{\Gamma}w\mathbf{j}_{\text{Li}}\cdot\mathbf{n}^{+}dS \tag{33a}\] \[\mathbf{j}_{\text{Li}} =-D_{\text{Li}}\nabla c_{\text{Li}}-M_{\text{Li}}c_{\text{Li}}\nabla\left(\text{det}[\mathbf{F}]\left(g(c_{\text{Li}})\right)^{-2/3}\frac{1}{3}\text{tr}[\mathbf{\sigma}]\right)\] (33b) \[\int_{\Omega^{+}}w\frac{\partial c_{\text{Li}^{+}}}{\partial t}dV =\int_{\Omega^{+}}\nabla w\cdot\mathbf{j}_{\text{Li}^{+}}dV-\int_{\partial\Omega^{+}}w\mathbf{j}_{\text{Li}^{+}}\cdot\mathbf{n}dS-\int_{\Gamma}w\mathbf{j}_{\text{Li}^{+}}\cdot\mathbf{n}^{-}dS\] (33c) \[\mathbf{j}_{\text{Li}^{+}} =-D_{\text{Li}^{+}}\nabla c_{\text{Li}^{+}}-\frac{t_{+}}{F}\left(\kappa_{\text{e}}\nabla\phi_{\text{e}}+\frac{2R\theta\kappa_{\text{e}}}{F}(1-t_{+})\nabla\ln c_{\text{Li}^{+}}\right) \tag{33d}\]
Additionally, charge transfer kinetics are imposed as an interface condition on \(\Gamma\) as detailed in (39).
Figure 3: Illustration of the discontinuous basis function, \(M_{\Gamma}\).
#### 3.1.4 The weak form for the Poisson equation for electric fields with a particle-electrolyte interface
The electric potential in the active particles and the electrolyte also suffers jumps at \(\Gamma\) when modelled within the embedded interface treatment of Section 3.1.2:
\[\llbracket\phi_{\text{e}}\rrbracket =\phi_{\text{e}}^{+}-\phi_{\text{e}}^{-}\text{ on }\Gamma,\quad\text{s.t. }\phi_{\text{e}}^{-}=0\text{ in }\Omega_{e}^{-} \tag{34}\] \[\llbracket\phi_{\text{p}}\rrbracket =\phi_{\text{p}}^{+}-\phi_{\text{p}}^{-}\text{ on }\Gamma,\quad\text{s.t. }\phi_{\text{p}}^{+}=0\text{ in }\Omega_{e}^{+}. \tag{35}\]
and gradient conditions
\[\llbracket\nabla\phi_{\text{e}}\rrbracket =\nabla\phi_{\text{e}}^{+}\cdot\mathbf{n}^{+}+\nabla\phi_{\text{e}}^{-}\cdot\mathbf{n}^{-}\text{ on }\Gamma,\quad\text{s.t. }\nabla\phi_{\text{e}}^{-}\cdot\mathbf{n}^{-}=0\text{ in }\Omega_{e}^{-}, \tag{36a}\] \[\llbracket\nabla\phi_{\text{p}}\rrbracket =\nabla\phi_{\text{p}}^{+}\cdot\mathbf{n}^{+}+\nabla\phi_{\text{p}}^{-}\cdot\mathbf{n}^{-}\text{ on }\Gamma,\quad\text{s.t. }\nabla\phi_{\text{p}}^{+}\cdot\mathbf{n}^{+}=0\text{ in }\Omega_{e}^{+}, \tag{36b}\]
The discontinuous scalar fields \(\phi_{\text{p}}\) and \(\phi_{\text{e}}\) can be represented by the basis function \(M_{\Gamma}\) over elements \(\Omega_{e}^{\Gamma}\). The corresponding weak forms are:
\[\int\limits_{\Omega^{-}}\kappa_{\text{p}}\nabla w\cdot\nabla\phi_{\text{p}}dV =0 \tag{37}\] \[\int\limits_{\Omega^{+}}\kappa_{\text{e}}\nabla w\cdot\left(\nabla\phi_{\text{e}}+\frac{2R\theta}{F}(1-t_{+})\nabla\ln c_{\text{Li}^{+}}\right)dV =\int\limits_{\partial\Omega^{+}}w\,i_{\text{ext}}\mathrm{d}S, \tag{38}\]
where the Neumann boundary condition imposes current continuity from (3c). This treatment of \(\phi_{\text{p}}\) and \(\phi_{\text{e}}\) is combined with that of \(j_{\text{Li}}\) and \(j_{\text{Li}^{+}}\) in Section 3.1.3 into interface conditions that are given by Butler-Volmer charge transfer kinetics, now reinterpreted in terms of discontinuous fields
\[\mathbf{j}_{\text{Li}^{+}}\cdot\mathbf{n}^{+}+\mathbf{j}_{\text{Li}}\cdot\mathbf{n}^{-}=j_{0}\left(\exp\left(\frac{\alpha_{a}F}{R\theta}(\phi_{\text{p}}^{-}-\phi_{\text{e}}^{+}-U)\right)-\exp\left(-\frac{\alpha_{a}F}{R\theta}(\phi_{\text{p}}^{-}-\phi_{\text{e}}^{+}-U)\right)\right)\quad\text{on }\Gamma \tag{39}\]
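A minimal numerical sketch of the Butler-Volmer condition (39) follows; the exchange current density \(j_{0}\), open-circuit potential \(U\) and transfer coefficient are illustrative values, not the parameters used in the simulations.

```python
import math

# Sketch of the Butler-Volmer charge-transfer condition of Eq. (39),
# evaluated with the interface-side potentials phi_p^- and phi_e^+.
# j0, U, alpha_a and theta are illustrative assumptions.
F, R = 96485.0, 8.314      # C/mol, J/(mol K)

def butler_volmer(phi_p, phi_e, U=3.8, j0=1.0, alpha_a=0.5, theta=298.0):
    eta = phi_p - phi_e - U                      # surface overpotential [V]
    a = alpha_a * F / (R * theta)
    return j0 * (math.exp(a * eta) - math.exp(-a * eta))

print(butler_volmer(phi_p=3.9, phi_e=0.0))       # anodic: positive flux
```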
### Finite strain kinematics with a discontinuous displacement vector field
The interface between solid electrolytes and active particles is susceptible to fracture, especially with brittle ceramics such as LLZO. The equation of mechanical equilibrium translates to traction continuity on \(\Gamma\):
\[\llbracket\mathbf{PN}\rrbracket=\mathbf{P}^{+}\mathbf{N}^{+}+\mathbf{P}^{-}\mathbf{N}^{-}=\mathbf{0} \quad\text{on }\Gamma. \tag{40}\]
In this work, the treatment of interface fracture is not via the creation of free surfaces, which bear zero traction, but as a sharp interface with degraded traction. The finite separation of the free surfaces is represented by a displacement discontinuity. This treatment of interface fracture and the ensuing mechanics follows the strong discontinuity approach [18, 37, 38, 23, 39, 40, 41]. We have the following decomposition of the deformation into continuous and discontinuous components:
\[\mathbf{\varphi}(\mathbf{X})=\bar{\mathbf{\varphi}}+\llbracket\mathbf{\varphi}\rrbracket H_{ \Gamma_{0}}(\mathbf{X}) \tag{41}\]
where \(H_{\Gamma_{0}}(\mathbf{X})\) is the Heaviside function with respect to the reference configuration. This leads to the deformation gradient:
\[\mathbf{F}=\bar{\mathbf{F}}+\llbracket\mathbf{\varphi}\rrbracket\otimes\mathbf{N}\delta_{ \Gamma_{0}} \tag{42}\]
where \(\bar{\mathbf{F}}\) is the regular (non-singular) component, \(\delta_{\Gamma_{0}}\) is the one-dimensional Dirac-delta at \(\Gamma_{0}\), \(\mathbf{N}\) is the normal vector to \(\Gamma_{0}\) in the reference configuration, and the tensor product defines the singular component of \(\mathbf{F}\). Following the approach in Section 3.1.2 as well as in [20, 40] we first write
\[\bar{\mathbf{\varphi}}^{h}=\sum_{A}N^{A}(\mathbf{X})\mathbf{d}^{A} \tag{43}\]
for basis functions \(N^{A}\), which allows us to rewrite the deformation gradient for elements \(\Omega_{e}^{\Gamma}\):
\[\mathbf{F}^{h}=\text{Grad}[\bar{\mathbf{\varphi}}^{h}]+\tilde{\mathbf{F}}^{h} \tag{44}\]
where \(\text{Grad}\bar{\mathbf{\varphi}}^{h}=\sum_{A}\mathbf{d}^{A}\otimes\text{Grad}N^{A}\). Here, \(\tilde{\mathbf{F}}^{h}\) can be considered either as an enhanced strain as in the original strong discontinuity treatment [18] or a fine scale strain in the variational multiscale setting [20]. In either case it can be expressed in the form
\[\tilde{\mathbf{F}}^{h}=-\mathbf{\alpha}\otimes\frac{\partial\chi(\mathbf{X})}{\partial\mathbf{ X}}+\mathbf{\alpha}\otimes\mathbf{N}\delta_{\Gamma_{0}}. \tag{45}\]
The enhanced strain and variational multiscale treatments lead to two equations in weak form:
\[\int\limits_{\Omega_{0}}\text{Grad}\,\boldsymbol{w}^{h}\colon\boldsymbol{P}\,\mathrm{d}V=0 \tag{46a}\] \[\sum\limits_{e\in\{e_{1},\ldots,e_{n}\}}\ \int\limits_{\Omega_{e}^{\Gamma}}\tilde{\boldsymbol{H}}\colon\boldsymbol{P}\,\mathrm{d}V=0, \tag{46b}\]

where \(\tilde{\boldsymbol{H}}\) denotes the variation of the enhanced (fine scale) strain \(\tilde{\boldsymbol{F}}^{h}\).
In particular (46b) leads to
\[\sum\limits_{e\in\{e_{1},\ldots,e_{n}\}}\int\limits_{\Omega_{e}^{\Gamma}}\boldsymbol{\beta}\cdot\boldsymbol{P}\frac{\partial\chi(\boldsymbol{X})}{\partial\boldsymbol{X}}\mathrm{d}V=\sum\limits_{e\in\{e_{1},\ldots,e_{n}\}}\int\limits_{\Gamma_{e}}\boldsymbol{\beta}\cdot\boldsymbol{P}\boldsymbol{N}\mathrm{d}S \tag{47}\]

with \(\boldsymbol{\beta}\) being the variation associated with the enhanced strain or with the fine scale strain, depending on the treatment, and \(\Gamma_{e}=\Gamma\cap\Omega_{e}^{\Gamma}\). For \(\boldsymbol{\beta}\) uniform over \(\Omega_{e}^{\Gamma}\), and writing \(\boldsymbol{P}\boldsymbol{N}=\boldsymbol{T}_{\Gamma}\), the traction on \(\Gamma\), this leads to
\[\boldsymbol{\underset{e\in\{e_{1},\ldots e_{n}\}}{\boldsymbol{A}}}\int\limits_{\Omega_{e}^{\Gamma}}\boldsymbol{P}\frac{\partial\chi(\boldsymbol{X})}{\partial\boldsymbol{X}}\mathrm{d}V=\boldsymbol{\underset{e\in\{e_{1},\ldots e_{n}\}}{\boldsymbol{A}}}\int\limits_{\Gamma_{e}}\boldsymbol{T}_{\Gamma}\mathrm{d}S, \tag{48}\]
where \(\boldsymbol{A}\) is the finite element assembly operator. Here we assume the shear component of the traction \(\boldsymbol{T}_{\Gamma}\) to vanish
\[\boldsymbol{T}_{\Gamma}=T_{\Gamma N}\boldsymbol{N} \tag{49}\]
and \(\boldsymbol{\alpha}=\xi_{N}\boldsymbol{N}\) in (45), that is, the crack opening on \(\Gamma\) is only in the normal direction. For the traction-separation relationship we use a linear law [18, 22, 39, 41]:
\[T_{\Gamma N}=\text{max}\left\{0,f_{t}-K\xi_{N}\right\}, \tag{50}\]
where \(K\) is the softening modulus and \(f_{t}\) is the maximum stress threshold at which the interface begins to soften.
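The traction-separation law (50) is a one-line function; the sketch below evaluates it for a few crack openings with assumed values of \(f_{t}\) and \(K\), showing full decohesion once \(\xi_{N}\) exceeds \(f_{t}/K\).

```python
# Sketch of the linear traction-separation law of Eq. (50): the normal
# traction starts at the strength f_t and softens linearly with crack
# opening xi_N until full decohesion at xi_N = f_t / K. The values of
# f_t and K below are illustrative assumptions.
def traction_normal(xi_N, f_t=100e6, K=2e14):    # Pa, Pa/m
    return max(0.0, f_t - K * xi_N)

for xi in (0.0, 2e-7, 5e-7, 1e-6):               # openings in metres
    print(f"xi_N = {xi:.1e} m -> T_N = {traction_normal(xi)/1e6:.1f} MPa")
```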
### Solution of finite element equations
The weak forms for mass transport (33a), charge transport (33c), electrostatics (37) and (38), and mechanics (46a), (46b) involve discontinuous fields \(\llbracket c_{\text{Li}}\rrbracket\), \(\llbracket c_{\text{Li}^{+}}\rrbracket\), \(\llbracket\phi_{\text{p}}\rrbracket\) and \(\llbracket\phi_{\text{e}}\rrbracket\), and \(\llbracket\boldsymbol{\varphi}\rrbracket\), respectively. These fields correspond to scalar unknowns denoted by \(\xi\) in (29) for \(\llbracket c_{\text{Li}}\rrbracket\), \(\llbracket c_{\text{Li}^{+}}\rrbracket\), \(\llbracket\phi_{\text{p}}\rrbracket\) and \(\llbracket\phi_{\text{e}}\rrbracket\), and to \(\boldsymbol{\alpha}=\xi_{N}\boldsymbol{N}\) in (45). The corresponding weak forms are implemented as finite element residuals
\[\left\{\begin{array}{l}\bar{\boldsymbol{R}}_{\mathrm{Li}}\\ \bar{\boldsymbol{R}}_{\mathrm{Li}^{\mathrm{+}}}\\ \bar{\boldsymbol{R}}_{\mathrm{p}}\\ \bar{\boldsymbol{R}}_{\mathrm{e}}\\ \bar{\boldsymbol{R}}_{\mathrm{\varphi}}\end{array}\right\}=\boldsymbol{0} \tag{51}\]
where the respective residual vectors are \(\bar{\boldsymbol{R}}_{\mathrm{Li}},\ldots,\bar{\boldsymbol{R}}_{\varphi}\). The local element residuals for element \(e\) have the form
\[\left\{\begin{array}{l}\bar{\boldsymbol{R}}^{e}\\ \tilde{\boldsymbol{R}}^{e}\end{array}\right\}=\boldsymbol{0}. \tag{52}\]

Of these components, \(\tilde{\boldsymbol{R}}^{e}=\boldsymbol{0}\) represents the contribution imposing the conditions in (25a), (25b), (36a), (36b) and (46b), respectively. The corresponding finite element degrees of freedom \(\xi\) and \(\boldsymbol{\alpha}\) are local to the element \(e\) and therefore so are their respective variations. Therefore, \(\tilde{\boldsymbol{R}}^{e}=\boldsymbol{0}\) can be solved locally without assembly into the global residual. In our implementation these nonlinear finite element equations are solved by generating the corresponding jacobians via automatic differentiation using the Sacado library of the Trilinos project [42]. The jacobian submatrices are extracted from the automatic differentiation implementation and used to iteratively update the jump degrees of freedom, \(\xi\) and \(\boldsymbol{\alpha}\), locally at the element level. These local iterates yield updated residuals \(\boldsymbol{R}_{\text{Li}},\boldsymbol{R}_{\text{Li}^{+}},\boldsymbol{R}_{\text{p}},\boldsymbol{R}_{\text{e}},\boldsymbol{R}_{\varphi}\) and jacobian submatrices that are reassembled into a global residual represented as
\[\boldsymbol{R}(\boldsymbol{d})=\left\{\begin{array}{l}\boldsymbol{R}_{\text{Li}}\\ \boldsymbol{R}_{\text{Li}^{+}}\\ \boldsymbol{R}_{\text{p}}\\ \boldsymbol{R}_{\text{e}}\\ \boldsymbol{R}_{\varphi}\end{array}\right\}(\boldsymbol{d}) \tag{53}\]

for a global degrees of freedom vector \(\boldsymbol{d}\) that includes nodal values corresponding to all the fields: \(c_{\text{Li}},c_{\text{Li}^{+}},\phi_{\text{p}},\phi_{\text{e}},\boldsymbol{u}\), and solved as
\[\boldsymbol{R}(\boldsymbol{d})+\frac{\partial\boldsymbol{R}}{\partial \boldsymbol{d}}\delta\boldsymbol{d}=\boldsymbol{0} \tag{54}\]
where \(\partial\boldsymbol{R}/\partial\boldsymbol{d}\) is computed by automatic differentiation.
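The overall strategy of local condensation of the jump degrees of freedom followed by a global Newton update can be illustrated on a contrived two-field toy problem. The sketch below is not the paper's solver: the residuals are invented, and finite-difference jacobians stand in for the Sacado automatic differentiation, but the nesting of the local solve inside the global iteration mirrors Eqs. (51)-(54).

```python
import numpy as np

# Toy sketch of the solution strategy of Eqs. (51)-(54): a jump dof xi,
# local to a cut element, is solved (condensed) inside the element loop;
# a global Newton update is then applied to d. The equations are invented
# for illustration only.
def local_solve(d):
    # element-level equation R_xi(d, xi) = xi**3 + xi - d[0] = 0
    xi = 0.0
    for _ in range(20):
        r = xi**3 + xi - d[0]
        xi -= r / (3.0 * xi**2 + 1.0)
    return xi

def global_residual(d):
    xi = local_solve(d)                      # condensed jump dof
    return np.array([d[0] + xi - 2.0, d[1] - d[0]**2])

d = np.zeros(2)
for _ in range(25):
    Rv = global_residual(d)
    if np.linalg.norm(Rv) < 1e-10:
        break
    J = np.empty((2, 2))                     # finite-difference jacobian
    for k in range(2):
        dp = d.copy(); dp[k] += 1e-7
        J[:, k] = (global_residual(dp) - Rv) / 1e-7
    d -= np.linalg.solve(J, Rv)

print("converged d =", d, " xi =", local_solve(d))
```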
## 4 Coupling between electrochemistry and nonlinear mechanics
### Degradation of interface charge transfer kinetics with crack opening
The normal crack opening displacement creates a loss of contact between the electrolyte and active particle at their interface, \(\Gamma\). The degradation of charge transfer kinetics is modelled here by replacing the Butler-Volmer prefactor \(j_{0}\) with a sigmoid function of the form
\[j_{0}(\xi_{N})=\frac{j_{0}}{1+\exp((\xi_{N}-\xi_{\text{max}})/l)}, \tag{55}\]
See Fig. 4. This represents one aspect of the coupling between mechanics and electrochemical degradation.
### Stress dependent kinetics
In addition to electrochemical degradation due to interface fracture, we account for stress dependent kinetics following Ref [21]. The diffusivity in the solid electrolyte can be stress dependent. Tensile stresses cause a local expansion of the solid electrolyte's crystal structure. This typically lowers the energy barrier for diffusive hops of Li or Li\({}^{+}\) and vacancies and enhances diffusion. In terms of transition state theory, the activation energy for diffusion is modified by a work-like term of the form \(\mathbf{\sigma}:\mathbf{V_{\text{D}}}\)[43, 44, 11], where \(\mathbf{\sigma}\) is the Cauchy stress and \(\mathbf{V_{\text{D}}}\) is the activation volume tensor for diffusion. In the absence of detailed experimental or first principles computations on its tensorial character, we adopt an isotropic model: \(\mathbf{V_{\text{D}}}=V_{\text{D}}\mathbf{1}\). This leads to the following stress-dependent diffusivity [43, 11], which is applied to \(D_{\text{Li}}\) and \(D_{\text{Li}^{+}}\) in (33b) and (33d), respectively:
\[D(\mathbf{\sigma})=D_{0}\exp\left(\frac{\text{tr}[\mathbf{\sigma}]V_{\text{D}}}{kT} \right). \tag{56}\]
The reaction rate also has a stress dependence with similar origins. However, since charge transfer occurs on the interface, \(\Gamma\), we model the corresponding activation volume tensor to have the form \(\mathbf{V_{\text{R}}}=V_{\text{R}}\mathbf{n}\otimes\mathbf{n}\), which differs from the hydrostatic stress-dependence in Ref [11]. The prefactor in the Butler-Volmer model is further modified to
\[j_{0}(\mathbf{\sigma})=j_{0}\exp\left(\frac{\mathbf{n}\cdot\mathbf{\sigma n}V_{\text{R}}}{ kT}\right). \tag{57}\]
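The two stress factors of Eqs. (56) and (57) can be evaluated as in the sketch below; the 100 MPa tensile stress state is an illustrative assumption, while the activation volume is the value from Table 1, and SI units (J, Pa, m\({}^{3}\)) are assumed throughout.

```python
import numpy as np

kT = 1.380649e-23 * 298.0     # thermal energy at 298 K (J)
V_act = 5.807e-30             # activation volume from Table 1 (m^3)

def D_stress(sigma, D0=0.5, V_D=V_act):
    """Stress-dependent diffusivity, Eq. (56); sigma is the Cauchy stress (Pa)."""
    return D0 * np.exp(np.trace(sigma) * V_D / kT)

def j0_stress(sigma, n, j0=1.0, V_R=V_act):
    """Stress-dependent reaction prefactor, Eq. (57); n is the interface normal."""
    return j0 * np.exp(n @ sigma @ n * V_R / kT)

sigma = np.diag([1e8, 1e8, 1e8])   # 100 MPa hydrostatic tension (assumed)
n = np.array([1.0, 0.0, 0.0])
print(D_stress(sigma), j0_stress(sigma, n))  # both factors > 1 in tension
```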
Figure 4: The sigmoid degradation function of charge transfer kinetics with \(l=0.0001\)\(\mu\)m and \(\xi_{\text{max}}=0.0005\)\(\mu\)m.
The final form of the charge transfer kinetics with fracture-induced degradation and stress-dependence is obtained from (39), (55) and (57):
\[\begin{split}\boldsymbol{j}_{\mathrm{Li}^{+}}\cdot\boldsymbol{n}^{+}+\boldsymbol{j}_{\mathrm{Li}}\cdot\boldsymbol{n}^{-}&=\frac{j_{0}}{1+\exp((\xi_{N}-\xi_{\mathrm{max}})/l)}\\ &\quad\times\exp\left(\frac{\boldsymbol{n}\cdot\boldsymbol{\sigma}\boldsymbol{n}\,V_{\mathrm{R}}}{kT}\right)\\ &\quad\times\left(\exp\left(\frac{\alpha_{a}F}{R\theta}(\phi_{\mathrm{p}}^{-}-\phi_{\mathrm{e}}^{+}-U)\right)-\exp\left(-\frac{\alpha_{a}F}{R\theta}(\phi_{\mathrm{p}}^{-}-\phi_{\mathrm{e}}^{+}-U)\right)\right)\quad\text{on }\Gamma.\end{split} \tag{58}\]
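A sketch composing Eqs. (55), (57) and the Butler-Volmer term into the interface current of Eq. (58); the overpotential, traction, and opening supplied at the bottom are illustrative values, while \(\alpha_{a}\), \(F\) and \(R\) are taken from Table 1.

```python
import numpy as np

F, Rgas, theta = 96487.0, 8.3143, 298.0   # pC/pmol, pJ/(pmol K), K (Table 1)
alpha_a = 0.5

def interface_current(j0, xi_N, xi_max, l, sigma_nn, V_R, kT, eta):
    """Charge transfer current of Eq. (58); eta = phi_p^- - phi_e^+ - U."""
    degradation = j0 / (1.0 + np.exp((xi_N - xi_max) / l))   # Eq. (55)
    stress_factor = np.exp(sigma_nn * V_R / kT)              # Eq. (57)
    bv = (np.exp(alpha_a * F * eta / (Rgas * theta))
          - np.exp(-alpha_a * F * eta / (Rgas * theta)))     # Butler-Volmer
    return degradation * stress_factor * bv

kT = 1.380649e-23 * 298.0   # J; sigma_nn and V_R below are in Pa and m^3
print(interface_current(j0=1.0, xi_N=0.0, xi_max=5e-4, l=1e-4,
                        sigma_nn=1e8, V_R=5.807e-30, kT=kT, eta=0.05))
```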
## 5 Efficient generation of multi-particle configurations
Fig 5 illustrates the workflow by which we generate multi-particle configurations. While this example begins with the definition of elliptical particles in an idealized geometry, it can be extended to work with micrographs of active particles in a solid state electrolyte by an easy replacement of the steps in the bottom row of Fig 5. The image processing feature embedded in the workflow allows the seamless recognition of arbitrary particle shapes and fits an elliptical interface around each particle. The workflow also recognizes the binder strokes between the particles, represented by the Chartreuse green color in the top row of Fig 5, and defines the sub-domain for additive material. Given Cartesian mesh data which includes only the number of elements in each direction, the workflow then maps each element to sub-domains recognized from the image. Figure 6 illustrates these six sub-domains, namely (i) Anode particle, (ii) Cathode particle, (iii) Solid Electrolyte (SE), (iv) Anode-SE interface, (v) Cathode-SE interface, and (vi) Additive. As discussed in Section 3.1.2, the mathematical treatment of the interfaces allows the meshes around these sub-domains to be Cartesian, with the quadrilateral elements holding information for each sub-domain.
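The element-to-sub-domain mapping is essentially a label lookup; the following is a minimal sketch assuming a pixel-label image (integer labels 0-5 for the six sub-domains of Fig. 6) has already been produced by the image-processing steps, and simply samples the label under each element centroid.

```python
import numpy as np

def assign_subdomains(label_image, nx, ny):
    """Map each element of an nx-by-ny Cartesian mesh to a sub-domain ID."""
    h, w = label_image.shape
    ids = np.empty((ny, nx), dtype=int)
    for j in range(ny):
        for i in range(nx):
            # pixel under the element centroid
            px = int((i + 0.5) / nx * w)
            py = int((j + 0.5) / ny * h)
            ids[j, i] = label_image[py, px]
    return ids

# Example: a 100x100-pixel label image mapped onto a 20x20 element mesh.
labels = np.zeros((100, 100), dtype=int)
labels[30:70, 30:70] = 1   # a square "particle" region (assumed labels)
print(np.unique(assign_subdomains(labels, 20, 20)))
```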
Figure 5: The image-based mesh generation workflow.
## 6 Multiphysics computations on solid state batteries
We demonstrate the multiphysics computational framework on a solid state battery chemistry, first by considering idealized configurations with single anode and cathode particles, and next on a multi-particle configuration. With the single anode/cathode particle cases, we separate out the effects of stress-dependent kinetics and interface fracture on Li and Li\({}^{+}\) concentration fields, electrode and electrolyte potentials. We consider a single discharge-charge cycle, postponing a detailed study of cycling to a future communication.
We adopt the St. Venant-Kirchhoff model of nonlinear elasticity. The first Piola-Kirchhoff stress, expressed in terms of the elastic deformation gradient and Green-Lagrange strain, \(\bar{\mathbf{E}}^{\text{e}}\), is:
\[\mathbf{P}=\bar{\mathbf{F}}^{\text{e}}\left(\lambda\text{tr}[\bar{\mathbf{E}}^{\text{e}}] \mathbf{1}+2G\bar{\mathbf{E}}^{\text{e}}\right)\bar{\mathbf{F}}^{\text{e}^{-1}} \tag{59}\]
for the Lamé parameter \(\lambda\) and shear modulus \(G\) of the respective materials (electrolyte, active particles, binder), obtained from the reported Young's modulus \(E\) and Poisson ratio \(\nu\) of the solid materials: \(\lambda=\nu E/((1+\nu)(1-2\nu))\) and \(G=E/(2(1+\nu))\).
Equation (7b) is now modified so that the chemical component of the regular part of the deformation gradient, \(\bar{\mathbf{F}}^{\text{c}}\), is specified by the intercalation function, \(g(c_{\text{Li}})\), which is defined as:
\[g(c_{\text{Li}})=\left(\left(\frac{c_{\text{Li}}-c_{0}}{c_{\text{Li}}^{\text{ max}}-c_{\text{Li}}^{\text{min}}}\right)r_{\text{s}}+1\right)^{3} \tag{60}\]
where \(c_{0}\) is the initial Li concentration and the swell ratio \(r_{\text{s}}\) is obtained from the maximum volume change \(\Delta V\) reported for the individual electrodes: \(r_{\text{s}}=\left(1+\Delta V\right)^{1/3}-1\).
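These material conversions are simple enough to verify directly; the sketch below uses the cathode values from Table 1 as an illustrative choice, and recovers \(g=1+\Delta V\) at full lithiation as a consistency check.

```python
def lame_parameters(E, nu):
    """Convert Young's modulus and Poisson ratio to (lambda, G)."""
    lam = nu * E / ((1 + nu) * (1 - 2 * nu))
    G = E / (2 * (1 + nu))
    return lam, G

def intercalation_swelling(c_Li, c0, c_min, c_max, dV):
    """Chemical volume change g(c_Li) of Eq. (60), with r_s from Delta V."""
    r_s = (1 + dV) ** (1 / 3) - 1
    return (((c_Li - c0) / (c_max - c_min)) * r_s + 1) ** 3

# Cathode values from Table 1: E = 190 GPa, nu = 0.3, Delta V = 1.9%.
print(lame_parameters(190.0, 0.3))         # ~ (109.6, 73.1) GPa
print(intercalation_swelling(0.03675, 0.000825, 0.000825, 0.03675, 0.019))
# -> 1.019 = 1 + Delta V at full lithiation
```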
### An idealized single anode/cathode particle configuration
We consider a domain \(80\times 80\)\(\mu\)m to define the cell with single anode and cathode particles. In all the figures that follow the anode particle is to the left and the cathode to the right. The simulations are of a Lithium titanate (LTO) anode, LCO cathode, and \(\beta\)-Li\({}_{3}\)PS\({}_{4}\) solid electrolyte. The properties of these materials and parameters used for the
Figure 6: Material labels of the multi-particle configuration generated in Fig 5.
simulations have been summarised in Table 1. To demonstrate the effect of stress-dependence on the kinetics in the absence of fracture, we present examples of (i) stress-independent kinetics, (ii) stress-dependent diffusion, (iii) stress-dependent reaction, and (iv) combined effects of stress-dependent diffusion and reaction. We also discuss the effect of stress and stress-induced fracture on the charge transfer process.
Fig 9 shows the distribution of Li initially, at the end of the discharge and at the fully recharged state, computed with a version of the model in which stress effects on kinetics have been suppressed in Eqs (56-58) by setting the activation volumes \(V_{\rm D},V_{\rm R}=0\). Figure 7 shows the jump in electrolyte potential \(\phi_{\rm e}\) at \(\Gamma\) from non-zero values in the electrolyte to zero in the active particles. The discontinuity is imposed by the basis function \(M_{\Gamma}\). However, in the plot, this discontinuity undergoes a smooth interpolation over a single element \(\Omega_{\rm e}^{\Gamma}\). Similarly, the electrode potential \(\phi_{\rm p}\) also suffers a jump from non-zero in the active particles to zero everywhere in the electrolyte, as shown in Figure 8.
To represent the stress effects on the distribution of Li, the simulation results shown in Fig 9 are considered as a baseline to which stress effects on diffusion and/or reaction rate are compared in the absence of fracture. We draw attention to the discontinuous \(c_{\rm Li}\) field. It has the same smoothed interpolation of discontinuities seen in Figures 7 and 8. For comparison of stress effects on the kinetics, we define \(\Delta c_{\rm Li}\) as the deviation in Li concentration, \(c_{\rm Li}\), from the baseline case with stress-independent kinetics. Fig 10 shows a detailed comparison of the distribution of \(\Delta c_{\rm Li}\) in the cathode active particle at the end of the discharge half-cycle. The distribution shown in the left particle of Fig 10 is obtained by setting \(V_{\rm D}=5.807\times 10^{-30}\) m\({}^{3}\)[47], \(V_{\rm R}=0\), activating stress-dependent diffusion and suppressing the stress-dependent reaction rate. Due to the Dirichlet boundary condition near the current collector and intercalation strain in
\begin{table}
\begin{tabular}{c c c c c c} \hline
**Symbol** & **Name** & **Unit** & **Anode** & **SE** & **Cathode** \\ \hline \multicolumn{6}{c}{**Constants**} \\ \(F\) & Faraday’s constant & pC/pmol & - & 96487 & - \\ \(R\) & Universal gas constant & pJ/(pmol-K) & - & 8.3143 & - \\ \(\theta\) & Temperature & K & - & 298 & - \\ \multicolumn{6}{c}{**Cell Geometry**} \\ \(L\) & Cell length & \(\mu\)m & - & 80 & - \\ \(W\) & Cell width & \(\mu\)m & - & 80 & - \\ \multicolumn{6}{c}{**Electrochemical Parameters**} \\ \(\alpha_{a}\) & Transfer coeff [30][31] & - & 0.5 & - & 0.5 \\ \(\kappa_{\rm e}\) & Conductivity of Li\({}^{+}\) [45] & p(\(\Omega\mu\)m)\({}^{-1}\) & - & \(1.6\times 10^{4}\) & - \\ \(D_{\rm Li}\) & Diffusivity of Li [30][31] & \(\mu\)m\({}^{2}\)/s & 0.5 & - & 0.5 \\ \(D_{\rm Li^{+}}\) & Diffusivity of Li\({}^{+}\) [46] & \(\mu\)m\({}^{2}\)/s & - & 1000 & - \\ \(t^{+}\) & Transference number [30][31] & - & - & 0.2 & - \\ \(c_{\rm Li}^{\rm max}\) & Maximum Li conc (est.) [30][31] & pmol/\(\mu\)m\({}^{3}\) & 0.0262605 & - & 0.03675 \\ \(c_{\rm Li}^{\rm min}\) & Minimum Li conc (est.) [30][31] & pmol/\(\mu\)m\({}^{3}\) & 0.000574 & - & 0.000825 \\ \(c_{\rm Li}^{0}\) & Initial Li conc & pmol/\(\mu\)m\({}^{3}\) & 0.0262605 & - & 0.000825 \\ \(c_{\rm Li^{+}}^{0}\) & Initial Li\({}^{+}\) conc (est.) & pmol/\(\mu\)m\({}^{3}\) & - & 0.002 & - \\ \(V_{\rm R},V_{\rm D}\) & Activation volumes [47] & m\({}^{3}\) & \(5.807\times 10^{-30}\) & - & \(5.807\times 10^{-30}\) \\ \(\Delta V\) & Maximum volume change [48][49][50][51] & \% & 0 & - & 1.9 \\ \multicolumn{6}{c}{**Elasticity Parameters**} \\ \(E\) & Young’s modulus [52][53][54] & GPa & 30 & 10 & 190 \\ \(\nu\) & Poisson’s ratio & - & 0.3 & 0.3 & 0.3 \\ \(f_{t}\) & Fracture strength [55][56] & GPa & 0.3 & - & 0.3 \\ \hline \end{tabular}
\end{table}
Table 1: Electro-chemo-mechanical parameters.
stiff cathode particles, the tensile stresses are higher where the current collector connects to the solid electrolyte or an active particle. Eq 56 indicates that for \(V_{\rm D}>0\), a state of tensile hydrostatic stress will enhance transport by diffusion. As a result, both Li and Li\({}^{+}\) transport by diffusion is enhanced in this region, providing more Li in the core of the cathode particle, as seen in the left particle of Fig 10. Furthermore, the distribution of \(\Delta c_{\rm Li}\) in the middle particle of Fig 10 can also be explained as a stress-driven enhancement. In this case, we activate stress-dependent reaction and suppress any stress effects on diffusion by setting \(V_{\rm R}=5.807\times 10^{-30}\) m\({}^{3}\)[47], \(V_{\rm D}=0\). For \(V_{\rm R}>0\), Eq 57 indicates that the higher tensile stresses in the interface near the current collector accelerate the interface charge transfer kinetics, which can be seen in the form of two hotspots of additional Li in the middle particle of Fig 10. Lastly, we activate the stress effects on both diffusion and reaction by setting \(V_{\rm R}=V_{\rm D}=5.807\times 10^{-30}\) m\({}^{3}\). As expected, the distribution of \(\Delta c_{\rm Li}\) shown in the right particle of Fig 10 is the result of enhanced diffusion and accelerated interface charge transfer kinetics, and manifests as additional Li on the interface near the current collector in comparison with the previous two cases.
The central electrochemical phenomenon that we seek to capture with the fracture model is the degradation of charge transfer. To demonstrate this effect we extend the computations by activating the fracture model. Figure 11 shows in red the elements that fractured during the discharge cycle. As discussed above, the higher tensile stresses on the interface near the current collector result in crack initiation at the top edge of the elliptical particle and propagation along the interface nearer to the current collector. To show the degradation in Li transfer across the fractured interface as a result of Eq 55, Fig 12 presents the distribution of differences \(\Delta_{\rm f}c_{\rm Li}\) in the cathode particle, which is the deviation
Figure 8: The discontinuous electric potential field \(\phi_{\rm p}\) (V) at the end of the first discharge from a computation run with stress-independent kinetics and fracture suppressed.
Figure 7: The discontinuous electric potential field \(\phi_{\rm e}\) (V) at the end of the first discharge from a computation run with stress-independent kinetics and fracture suppressed.
of \(c_{\rm Li}\) from the case with stress-dependent kinetics but fracture suppressed. Notably, even though the cracking is along a relatively small contour length of the interface, the opening induces degradation of the charge transfer kinetics, as shown in Fig 12. Since the traction decreases with crack opening, we expect that the tensile stress levels are lower in the particles and electrolyte. With additional discharge-charge cycles (not simulated here), this could lead to lower diffusivity enhancement, fewer Li\({}^{+}\) ions arriving at the interface, and a further decrease in Li levels within the cathode particle.
In addition to the loss in Li transfer across the interface in Fig 12, we also demonstrate the increase in the effective internal resistance \(R_{\rm eff}\) due to the interface opening. To evaluate \(R_{\rm eff}\), we consider the drop in potential difference \(\Delta\phi=U_{\rm OCV}-V_{\rm T}\). Here, the terminal voltage \(V_{\rm T}\) is the potential difference across the boundaries of the cell, given by \(V_{\rm T}=\phi_{\rm p_{0}}-\phi_{\rm p_{L}}\), where the subscripts \(0,L\) correspond to the left and right terminals of the cell. The open circuit voltage is \(U_{\rm OCV}=U^{+}-U^{-}\). The half-cell potentials for the cathode and anode, \(U^{+}\) and \(U^{-}\) respectively, are written as fits [31]:
\[U^{+}=\frac{-0.0923-7.8680x+50.0722x^{2}-122.2816x^{3}+82.9851x^{4}+140.2939x^ {5}-374.7350x^{6}+403.2464x^{7}-221.1915x^{8}+49}{-0.0218-1.9007x+11.7264x^{2}- 28.7848x^{3}+27.5427x^{4}-8.6343x^{5}} \tag{61a}\] \[U^{-}=0.2657+0.5551e^{-178.9799x}-0.0124\tanh\left(\frac{x-0.5573}{0.0282} \right)-0.0117\tanh\left(\frac{x-0.2393}{0.0486}\right)-0.0129\tanh\left(\frac {x-0.1749}{0.0348}\right)-0.01 \tag{61b}\]
where \(x=\bar{c}_{\rm Li}/c_{\rm Li}^{\rm max}\), the ratio of the volume-averaged concentration to the maximum concentration for the respective electrode. The current density is the applied charge flux in the single-particle simulations, \(i_{\rm ext}=15\) pA \(\mu\)m\({}^{-2}\).
Figure 10: Comparison plots showing the distribution of the differences \(\Delta c_{\rm Li}\) (pmol/\(\mu\)m\({}^{3}\)) in the cathode particle compared to the baseline simulation with stress-independence in the kinetics. Left: stress-dependent diffusion. Middle: stress-dependent reaction. Right: combined effect of stress-dependent diffusion and reaction. All results are at the end of the first discharge and in the absence of interfacial fracture.
Figure 9: Li concentration field (pmol/\(\mu\)m\({}^{3}\)) in a single-particle cell with stress-independent kinetics and no fracture. Left: initial state. Middle: end of \(1^{\rm st}\) discharge. Right: end of \(1^{\rm st}\) charge
As relevant cell dimensions, for instance in a "jelly-roll" structure, we use the cell's charge transfer area \(A_{\text{cell}}=10^{3}\) cm\({}^{2}\), leading to the applied current \(i_{\text{app}}=1.5\) A. Ohm's law for an effective resistance \(R_{\text{eff}}\) is \(i_{\text{app}}R_{\text{eff}}=\Delta\phi\). Using the values of \(x\), as defined above, obtained from the computations at the end of discharge for the cathode with/without fracture and the corresponding anode conditions, we find \(U_{\text{OCV}}^{\text{no frac}}=3.4110\) V, \(V_{\text{T}}^{\text{no frac}}=3.0515\) V, yielding \(\Delta\phi^{\text{no frac}}=0.3595\) V and \(R_{\text{eff}}^{\text{no frac}}=0.2396\)\(\Omega\). For the fractured single particle, these quantities as extracted from the computations were: \(U_{\text{OCV}}^{\text{frac}}=3.4110\) V, \(V_{\text{T}}^{\text{frac}}=3.0484\) V, yielding \(\Delta\phi^{\text{frac}}=0.3626\) V and \(R_{\text{eff}}^{\text{frac}}=0.2418\)\(\Omega\). Notably, while the computations clearly demonstrate a degradation of charge transfer into the cathode at the end of discharge (Figure 12), the average concentration difference is small, and the \(U_{\text{OCV}}\) is unchanged to the fourth decimal. However, the resulting potential field \(\phi_{\text{p}}\), being coupled with \(c_{\text{Li}}\) via the field equations (33a-38) and (58), is disturbed sufficiently by fracture, leading to a difference in \(\Delta\phi\) and therefore in \(R_{\text{eff}}\). While small, we note that fracture at the cathode-electrolyte interface does lead to an increase in Ohmic resistance of \(2.2\) m\(\Omega\). This will grow as fracture progresses with cycling.
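The effective-resistance arithmetic can be reproduced directly from the reported numbers; the snippet below is only a check of the values quoted above, not part of the simulation code.

```python
# i_app and the potentials are taken directly from the text.
i_app = 1.5  # A

for label, U_ocv, V_T in [("no fracture", 3.4110, 3.0515),
                          ("fracture",    3.4110, 3.0484)]:
    dphi = U_ocv - V_T
    print(f"{label}: dphi = {dphi:.4f} V, R_eff = {dphi / i_app:.4f} Ohm")
# -> 0.3595 V / 0.2397 Ohm and 0.3626 V / 0.2417 Ohm, a 2.2 mOhm increase
```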
### Simulations with multiple particles
We demonstrate the robustness of the framework in its extension to modeling the stress-mediated electro-chemo-mechanics, including fracture, of multiple particles. Fig 6 is a multi-particle configuration generated by the workflow in Fig 5. Fig 13 is a detail from a computation on this configuration showing the Li concentration profiles at the initial state and at the end of the first discharge. Fig 14 shows the elements that fractured in white on the interfaces of cathode active particles during the discharge half-cycle. In addition to the mechanics of each particle, the deformation of the surrounding particles in the multiple-particle cell also enhances the tensile stresses at each particle interface, resulting in further fracture developing in comparison to the single-particle cell. To show the degradation in charge transfer across the interfaces of the particles as a result of Eq (55), in Figure 15 we plot the distribution of differences \(\Delta_{\rm f}c_{\rm Li}\), similar to the fracture-activated simulation in Section 6.1. The degradation of Li transfer across the fractured interfaces of each particle is evident.
## 7 Conclusions

Our computational framework resolves the discontinuities of the Li and Li\({}^{+}\) cation concentrations, and of the electric potential fields at the particle-electrolyte interfaces. It also naturally extends to the treatment of interface fracture via the strong discontinuity treatment. We have focused on interface fracture, motivated by the greater susceptibility to this mode of failure in solid state batteries with stiff ceramic electrolytes and active particles. Also of some interest in this regard is the recent work by Van der Ven et al. on the possible role of ferroelastic toughening mechanisms in preventing interface and intra-particle or electrolyte fracture [57]. To enable a coupled solution of the electro-chemo-mechanical equations with interface fracture, we have extended the discontinuous finite element treatment that has appeared previously under the strong discontinuity and variational multiscale frameworks.
We have addressed several aspects of electro-chemo-mechanical coupling: The intercalation strains drive the mechanics with lithiation and delithiation during discharge-charge cycles. The resulting stresses throughout the solid state battery influence the kinetics of diffusion as well as the interface reactions of charge transfer. Additionally, we have accounted for the degradation of charge transfer reactions due to interface fracture and separation of the electrolyte-particle surfaces. Our numerical simulations have demonstrated all these effects: discontinuous concentration and
Figure 12: Comparison of deviation in Li concentration \(\Delta_{\rm f}c_{\rm Li}\) (pmol/\(\mu\)m\({}^{3}\)) in the cathode active particle between the fracture and fracture-suppressed cases at the end of the first discharge. Left: fracture-suppressed. Right: Fracture with cracked elements in white.
Figure 13: Li concentration field (pmol/\(\mu\)m\({}^{3}\)) in a multi-particle cell with stress-dependent kinetics and fracture enabled. Left: initial state. Right: end of first discharge.
electric potential fields; normal crack opening with fracture; and stress-influenced transport and reaction, both in the absence of fracture and with its effects accounted for.
In this first communication of our framework we have focused on the computational methods and demonstrated the physics that they resolve over a single discharge-charge cycle. The work and results presented here are an early step toward simulating capacity fade over hundreds of cycles driven by the above phenomena and ultimately toward incorporating other coupled electro-chemo-mechanics.
Figure 14: Fractured elements in white on the interface between solid electrolyte and cathode active particles at the end of the first discharge.
Figure 15: Comparison of the deviation in Li concentration \(\Delta_{\mathrm{f}}c_{\mathrm{Li}}\) (pmol/\(\mu\)m\({}^{3}\)) in cathode active particle between cases with fracture and with fracture suppressed. Left: the end of the first discharge with fracture suppressed, for which \(\Delta_{\mathrm{f}}c_{\mathrm{Li}}=0\) by definition. Right: Fracture with cracked elements in white.
## Acknowledgements
We gratefully acknowledge the support of Toyota Research Institute, Award #849910: "Computational framework for data-driven, predictive, multi-scale and multi-physics modeling of battery materials". This work also used the Extreme Science and Engineering Discovery Environment (XSEDE) Comet at the San Diego Supercomputer Center and Stampede2 at The University of Texas at Austin's Texas Advanced Computing Center through allocation TG-MSS160003 and TG-DMR180072.
|
2309.07416 | M3-AUDIODEC: Multi-channel multi-speaker multi-spatial audio codec | We introduce M3-AUDIODEC, an innovative neural spatial audio codec designed
for efficient compression of multi-channel (binaural) speech in both single and
multi-speaker scenarios, while retaining the spatial location information of
each speaker. This model boasts versatility, allowing configuration and
training tailored to a predetermined set of multi-channel, multi-speaker, and
multi-spatial overlapping speech conditions. Key contributions are as follows:
1) Previous neural codecs are extended from single to multi-channel audios. 2)
The ability of our proposed model to compress and decode for overlapping
speech. 3) A groundbreaking architecture that compresses speech content and
spatial cues separately, ensuring the preservation of each speaker's spatial
context after decoding. 4) M3-AUDIODEC's proficiency in reducing the bandwidth
for compressing two-channel speech by 48% when compared to individual binaural
channel compression. Impressively, at a 12.6 kbps operation, it outperforms
Opus at 24 kbps and AUDIODEC at 24 kbps by 37% and 52%, respectively. In our
assessment, we employed speech enhancement and room acoustic metrics to
ascertain the accuracy of clean speech and spatial cue estimates from
M3-AUDIODEC. Audio demonstrations and source code are available online at
https://github.com/anton-jeran/MULTI-AUDIODEC . | Anton Ratnarajah, Shi-Xiong Zhang, Dong Yu | 2023-09-14T04:04:50Z | http://arxiv.org/abs/2309.07416v3 | # M\({}^{3}\)-AUDIODEC: Multi-Channel Multi-Speaker Multi-Spatial Audio Codec
###### Abstract
We introduce M\({}^{3}\)-AUDIODEC, an innovative neural spatial audio codec designed for efficient compression of multi-channel (binaural) speech in both single and multi-speaker scenarios, while retaining the spatial location information of each speaker. This model boasts versatility, allowing configuration and training tailored to a predetermined set of multi-channel, multi-speaker, and multi-spatial overlapping speech conditions. Key contributions are as follows: 1) Previous neural codecs are extended from single to multi-channel audios. 2) The ability of our proposed model to compress and decode for overlapping speech. 3) A groundbreaking architecture that compresses speech content and spatial cues separately, ensuring the preservation of each speaker's spatial context after decoding. 4) M\({}^{3}\)-AUDIODEC's proficiency in reducing the bandwidth for compressing two-channel speech by 48% when compared to individual binaural channel compression. Impressively, at a 12.6 kbps operation, it outperforms Opus at 24 kbps and AUDIODEC at 24 kbps by 37% and 52%, respectively. In our assessment, we employed speech enhancement and room acoustic metrics to ascertain the accuracy of clean speech and spatial cue estimates from M\({}^{3}\)-AUDIODEC. Audio demonstrations and source code are available online1.
Footnote 1: [https://anton-jeran.github.io/MAD/](https://anton-jeran.github.io/MAD/)
Anton Ratnarajah\({}^{1}\), Shi-Xiong Zhang\({}^{2}\), Dong Yu\({}^{2}\)

\({}^{1}\) University of Maryland, College Park, MD, USA; \({}^{2}\) Tencent AI Lab, Bellevue, WA, USA

**Index Terms:** binaural audio codec, spatial audio codec
## 1 Introduction
Neural audio codecs (NACs) compress audio signals to minimize data storage and transmission. Present-day NACs can be grouped into hybrid techniques, which fuse conventional audio coding with neural speech synthesis [1, 2, 3], and end-to-end approaches [4, 5, 6, 7]. The latter offers notable enhancements in audio quality and adapts to varying audio content. However, most existing NACs target single-channel audio and single-speaker optimization [7]. Recognizing these gaps, our work introduces M\({}^{3}\)-AUDIODEC, a spatial audio codec tailored for efficient compression in multi-channel and multi-speaker contexts.
A key difference between single-channel and multi-channel audio is the latter's encapsulation of spatial localization alongside pure speech content (\(S[t]\)) [8]. This spatial context manifests in various acoustic facets such as early reflections, late reverberations, interaural time difference (ITD) and interaural level difference (ILD) between microphones. Mathematically, these aspects can be represented through the impulse response (IR) function, allowing the breakdown of speech content and multi-channel effects as:
\[B[t]=S[t]\otimes IR[t]. \tag{1}\]
For overlapped multi-channel speech (\(OB[t]\)) with a fixed number of multiple speakers (\(M_{S}\)) in multiple different spatial locations, we can decompose their clean speech content \(S_{i}[t]\) and their multi-channel IRs \(IR_{i}[t]\) separately as follows:
\[OB[t]=\sum_{i=1}^{M_{S}}(S_{i}[t]\otimes IR_{i}[t]). \tag{2}\]
**Main Contribution:** We present a pioneering NAC architecture optimized for multi-channel multi-speaker overlapped speech, crucially retaining each speaker's spatial details. This architecture is illustrated in Fig. 1. In contrast to the existing AUDIODEC model [7], our key contributions are: 1) expanding codec capabilities to multi-channel audio; 2) efficient overlapping speech compression; 3) separate compression of speech content and spatial cues; 4) achieving a high compression rate for multi-channel audio. Our model, operating at 12.6 kbps, can reconstruct a 48 kHz binaural speech signal with two spatially distinct speakers, significantly surpassing AUDIODEC and outperforming Opus [9] and Encodec [5]. Supplementary materials, including speech samples, spectrograms, and source code, are provided for future research1.
Footnote 1: [https://anton-jeran.github.io/MAD/](https://anton-jeran.github.io/MAD/)
## 2 Related Work
**Traditional audio codecs:** Linear predictive coding-based audio codecs [10, 11] and model-based audio codecs [12] have been proposed in the past for speech coding, but their quality is limited. Among the traditional methods, Opus [9] and EVS [13] are state-of-the-art traditional audio codec architecture, and they can support different bitrates and sampling rates at high coding efficiency in real-time.
**Neural audio codecs:** End-to-end data-driven architectures are proposed to code mono and stereo audio with impressive performance [4, 5, 6]. The Encodec [5] can compress stereo audio by separately processing the left and right channels. This approach results in poor compression of stereo audio because the same speech content in both channels is coded twice. Our M\({}^{3}\)-AUDIODEC (MAD) can significantly reduce
the bandwidth by coding the speech content only once. Also, our network is efficiently designed to compress overlapped speech while preserving individual speakers' speech content and spatial acoustic features.
**Speech dereverberation and RIR Estimation:** Recently, NAC-based architectures have been proposed for audio-visual speech enhancement [14]. Similarly, in our work, we decode clean speech from multi-channel speech. Generative architectures have recently been proposed to estimate IRs for given spatial information [15, 16]. Encoder-decoder architectures have shown promising results in estimating IRs from the reverberant speech signal [17, 18]. We propose a NAC to estimate IRs of one-second duration from multi-channel speech.
## 3 M\({}^{3}\)-AUDIODEC
We propose a NAC to compress multi-channel (\(M_{C}\)) speech recordings \(B(x)\) with a sampling rate of 48 kHz. Similar to typical NACs [4, 7], our model consists of encoder, projector, quantizer and decoder modules. We propose simple and complex decoder architectures for the single-speaker and multi-speaker (\(M_{S}\)) scenarios, respectively. Our proposed encoder architecture is the same for the single-speaker and multi-speaker cases. We adapt the projector and quantizer from AUDIODEC [7].
### Encoder Architecture
We pass the multi-channel speech to a common encoder consisting of a 1D convolutional layer (CONV) with a kernel size (K) of 3, stride (S) 1, and the same number of input channels (IC = \(M_{C}\)) and output channels (OC). We pass the common encoder network output to the speech encoder and the multi-channel IR encoder. The speech encoder follows the same architecture as AUDIODEC [7] and SoundStream [4]. The speech encoder has a CONV (K = 7, S = 1, IC = \(M_{C}\), OC = 8 * \(M_{C}\)) followed by convolution blocks \(C_{B1}\). Each \(C_{B1}\) has three residual units (RU) with dilated CONVs (dilation rates 1, 3 and 9) followed by a CONV (K = 2 * S, S = S, IC = IC, OC = 2 * IC). We have 5 \(C_{B1}\)s with S = (2, 2, 3, 5, 5). Therefore, we downsample the speech content by a factor of 300. Our IR encoder is motivated by the IR estimator network [17]. The IR encoder has three CONV blocks \(C_{B2}\). Each \(C_{B2}\) has a CONV followed by batch normalization (BN) and leaky ReLU. The first \(C_{B2}\) does not have BN. The three \(C_{B2}\)s have OC = (\(M_{C}\) * 64, \(M_{C}\) * 128, \(M_{C}\) * 256), K = (96001, 41, 41), S = (1500, 2, 2) and padding (P) = (48000, 20, 20). We significantly downsample the IR content, by a factor of 6000. All the CONVs are causal to make the network work in real time. The speech encoder and IR encoder outputs are projected to a multi-dimensional space separately and quantized into codes using the projector and quantizer modules proposed in AUDIODEC.
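A minimal PyTorch sketch of one residual unit and one downsampling block \(C_{B1}\) of the speech encoder follows; the activation choice, the padding, and the use of non-causal convolutions here are our simplifying assumptions for brevity, since the text above specifies kernel sizes, strides and channel counts but uses causal CONVs.

```python
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    def __init__(self, channels, dilation):
        super().__init__()
        self.net = nn.Sequential(
            nn.ELU(),
            nn.Conv1d(channels, channels, kernel_size=7,
                      dilation=dilation, padding=3 * dilation),
            nn.ELU(),
            nn.Conv1d(channels, channels, kernel_size=1),
        )

    def forward(self, x):
        return x + self.net(x)  # residual connection

class ConvBlockB1(nn.Module):
    """Three dilated residual units, then a stride-S CONV doubling channels."""

    def __init__(self, in_ch, stride):
        super().__init__()
        self.net = nn.Sequential(
            *[ResidualUnit(in_ch, d) for d in (1, 3, 9)],
            nn.Conv1d(in_ch, 2 * in_ch, kernel_size=2 * stride,
                      stride=stride, padding=(stride + 1) // 2),
        )

    def forward(self, x):
        return self.net(x)

# Five C_B1 blocks with strides (2, 2, 3, 5, 5): 2*2*3*5*5 = 300x downsampling.
x = torch.randn(1, 16, 3000)   # 8 * M_C = 16 channels for M_C = 2
for s in (2, 2, 3, 5, 5):
    x = ConvBlockB1(x.shape[1], s)(x)
print(x.shape)                 # time axis reduced from 3000 to 10
```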
### Decoder Architecture
We propose two different architectures for the single-speaker and multi-speaker scenarios as follows:
**Single speaker:** We propose a speech decoder architecture to decode the clean speech and an IR decoder to decode the multi-channel IR. We reconstruct the multi-channel speech from the estimated clean speech and IR using Eq. 1. Both the speech and IR decoders adapt the SoundStream decoder. Before inputting to the decoder modules, we pass the code to a CONV (IC = \(M_{C}\) * 32, OC = 512, K = 7, S = 1). The speech decoder has 5 CONV blocks \(C_{B3}\) with S = (5, 5, 3, 2, 2), followed by a CONV with OC = 1, K = 7 and S = 1. Each \(C_{B3}\) has a transposed convolutional layer (IC = IC, OC = 0.5 * IC, K = 2 * S, S) followed by three RUs similar to the encoder. The IR decoder has a similar network to the speech decoder except for the number of \(C_{B3}\) blocks. The IR decoder has 6 \(C_{B3}\)s with S = (5, 5, 5, 4, 3, 2), and the final CONV has \(M_{C}\) output channels. We reconstruct clean speech of two-second duration and a one-second multi-channel IR at a sampling rate of 48 kHz.
**Multi speakers:** For the multi-speaker scenario, we replicate the speech decoder of the single-speaker scenario \(M_{S}\) times to decode the clean speech of the \(M_{S}\) speakers. We perform speech separation in the decoder to force the network to preserve the speech content of individual speakers in the code. Instead of directly passing the output \(C\) of the CONV layer, we learn the representation of each speaker \(S_{i}\) by learning a mask vector \(M_{i}\in[0,1]\). Similar to Conv-TasNet [19], \(S_{i}\) is calculated by element-wise multiplication of \(C\) and \(M_{i}\). We pass \(S_{i}\) to the speech decoder modules to estimate the clean speech of each speaker. We use the same IR decoder as in the single-speaker network, but increase the number of channels of each layer by a factor of \(M_{S}\). Fig. 1 shows our model for two-speaker binaural speech (\(M_{S}\) = \(M_{C}\) = 2).
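A sketch of this masking step: each speaker representation \(S_{i}\) is an element-wise product of the shared features \(C\) with a learned mask \(M_{i}\in[0,1]\). The 1x1-convolution-plus-sigmoid mask estimator below is an assumption for illustration; the text does not specify the mask network.

```python
import torch
import torch.nn as nn

class SpeakerMasks(nn.Module):
    def __init__(self, channels, num_speakers):
        super().__init__()
        self.mask_nets = nn.ModuleList(
            [nn.Sequential(nn.Conv1d(channels, channels, 1), nn.Sigmoid())
             for _ in range(num_speakers)]
        )

    def forward(self, C):
        # S_i = C * M_i : one masked representation per speaker
        return [C * net(C) for net in self.mask_nets]

C = torch.randn(1, 512, 100)        # shared features after the first CONV
s1, s2 = SpeakerMasks(512, 2)(C)    # inputs to the two speech decoders
```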
### Training Objective
We adapt the training paradigm proposed in AUDIODEC. We train the end-to-end network with the metric loss for 200k iterations. Then we replace our speech decoders with HiFi-GAN [20] vocoders and train with the metric and adversarial losses for 500k iterations using HiFi-GAN-based multi-period and multi-scale discriminators [7]. For the multi-speaker scenario, we continue to train our end-to-end network with the adversarial and metric losses after 200k iterations for an additional 160k iterations. Let \(B(x)\) and \(\hat{B}(x)\) denote the input and reconstructed multi-channel speech. We denote the ground truth (GT) and reconstructed clean speech of speaker \(i\) using \(S_{i}(x)\) and \(\hat{S}_{i}(x)\), respectively. \(IR_{i}(x)\) and \(\hat{IR}_{i}(x)\) represent their corresponding GT and reconstructed multi-channel IRs.
**Metric Loss:** We use the mel spectral loss (Eq. 3) and spectrogram loss as our metric loss for clean and multi-channel speech. In Eq. 3, \(MEL\) denotes the extraction of the mel spectrogram. \(\mathbb{E}\) denotes the expectation, and L1-norm and L2-norm are denoted by \(||.||_{1}\) and \(||.||_{2}\) respectively.
\[\mathcal{L}_{MEL}(x,\hat{x})=\mathbb{E}[||MEL(x)-MEL(\hat{x})||_{1}]. \tag{3}\]
For spectrogram loss, we calculate the mean square difference of the log magnitude of the GT speech spectrogram
(\(M_{spec}(x)\)) and estimated speech spectrogram (\(M_{spec}(\hat{x})\)) (Eq. 4).
\[\mathcal{L}_{MAG}(x,\hat{x})=\mathbb{E}[||M_{spec}(x)-M_{spec}(\hat{x})||_{2}^{2}]. \tag{4}\]
We calculate the time-domain mean squared error (MSE) between the GT and estimated multi-channel IRs as their metric loss (Eq. 5):
\[\mathcal{L}_{IR}(b,\hat{b})=\mathbb{E}[||b-\hat{b}||_{2}^{2}]. \tag{5}\]
Our total metric loss is defined as follows:
\[\mathcal{L}_{MET}(x)=(\mathcal{L}_{MEL}(B(x),\hat{B}(x))+\mathcal{ L}_{MAG}(B(x),\hat{B}(x)))\] \[+\sum_{i=1}^{M_{S}}(\mathcal{L}_{MEL}(S_{i}(x),\hat{S}_{i}(x))+ \mathcal{L}_{MAG}(S_{i}(x),\hat{S}_{i}(x))\] \[+\mathcal{L}_{IR}(IR_{i}(x),\hat{IR}_{i}(x))), \tag{6}\]
where \(M_{S}\) is the total number of speakers in the speech.
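A sketch of the per-term metric losses of Eqs. (3)-(5); the STFT/mel settings (n_fft, hop length, number of mel bins) are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torchaudio

# Mel analysis for Eq. (3); 48 kHz matches the codec's sampling rate.
mel = torchaudio.transforms.MelSpectrogram(sample_rate=48000, n_fft=2048,
                                           hop_length=300, n_mels=80)
window = torch.hann_window(2048)

def mel_loss(x, x_hat):                   # Eq. (3): L1 on mel spectrograms
    return (mel(x) - mel(x_hat)).abs().mean()

def mag_loss(x, x_hat, eps=1e-7):         # Eq. (4): MSE on log magnitudes
    def logmag(s):
        spec = torch.stft(s, n_fft=2048, hop_length=300,
                          window=window, return_complex=True)
        return torch.log(spec.abs() + eps)
    return ((logmag(x) - logmag(x_hat)) ** 2).mean()

def ir_loss(b, b_hat):                    # Eq. (5): time-domain MSE on IRs
    return ((b - b_hat) ** 2).mean()

x, x_hat = torch.randn(2, 96000), torch.randn(2, 96000)
print(mel_loss(x, x_hat).item(), mag_loss(x, x_hat).item())
```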
**Adversarial Loss:** We train two HiFi-GAN discriminators for multi-channel and clean speech by optimizing the following objective function:
\[\begin{split}\mathcal{L}_{D}(x)=\mathbb{E}[&\max(0,1-D_{B}(B(x)))+\max(0,1+D_{B}(\hat{B}(x)))\\ &+\sum_{i=1}^{M_{S}}(\max(0,1-D_{S}(S_{i}(x)))+\max(0,1+D_{S}(\hat{S}_{i}(x))))],\end{split} \tag{7}\]
where \(D_{B}\) and \(D_{S}\) are the discriminators of multi-channel speech and clean speech respectively. We train our M\({}^{3}\)-AUDIODEC (MAD) with the following adversarial loss.
\[\mathcal{L}_{ADV}=\mathbb{E}[\max(0,1-D_{B}(\hat{B}(x)))+\sum_{i=1}^{M_{S}}\max(0,1-D_{S}(\hat{S}_{i}(x)))]. \tag{8}\]
In addition to the \(\mathcal{L}_{MET}(x)\) and \(\mathcal{L}_{ADV}(x)\), we train our network with \(\mathcal{L}_{VQ}\)[21] applied to the VQ codebook. Our overall generator loss \(\mathcal{L}_{GEN}(x)\) is as follows:
\[\mathcal{L}_{GEN}(x)=\mathcal{L}_{MET}(x)+\lambda_{ADV}\mathcal{L}_{ADV}+ \lambda_{VQ}\mathcal{L}_{VQ}, \tag{9}\]
where \(\lambda_{ADV}\) and \(\lambda_{VQ}\) are the weights.
## 4 Experiments

**Dataset:** We split our dataset into 33975 training, 750 validation and 752 test sets.
**Baselines:** Opus is the widely used audio codec in Zoom, Microsoft Teams, Google Meet and YouTube, and was standardized by the IETF in 2012. We use Opus as our traditional audio codec baseline. We also compare our approach against the state-of-the-art NAC for two-channel audio (Encodec) [5]. HiFi-Codec [6] and AUDIODEC [7] support only single-channel speech compression. Therefore, we separately compressed the left and right channels using HiFi-Codec and AUDIODEC. The pre-trained AUDIODEC in their official GitHub is trained only on clean speech. For a fair comparison, we trained AUDIODEC using our dataset. AUDIODEC is an improved version of SoundStream [4] for speech coding. More details on our baselines are shown in Table 1.
**Ablation:** We evaluated three variations of our architecture to choose the best model for the single-speaker case. We evaluate the benefit of the HiFi-GAN vocoder by continuously training our network for 700k iterations with our simple speech decoder described in § 3.2 (MAD-V1). In AUDIODEC, only the mel spectral loss is used as a metric loss. Therefore, we train the network without Eq. 4 to evaluate the benefit of the spectrogram loss (MAD-V2). MAD is trained with our proposed approach in § 3. Due to computational complexity, we don't use the HiFi-GAN vocoder for our two-speaker model.
**Evaluation Metrics:** We evaluate our model by measuring the clean speech estimation quality using the widely used speech enhancement metric STOI [25], and the binaural impulse response (BIR) estimation quality using a set of BIR acoustic parameters. Reverberation time (\(T_{60}\)), direct-to-reverberant ratio (DRR), early-decay-time (EDT), and early-to-late index (CTE) are commonly used acoustic parameters to measure IRs [26, 27]. We calculate the mean absolute difference of the BIR acoustic parameters between the estimated and the ground truth BIRs.
We also measure the ability of our model to preserve interaural time difference (ITD) and interaural level difference (ILD) in reconstructed binaural speech. As proposed in previous work [28], we use the generalized cross-correlation phase transform (GCC-PHAT) algorithm [29] to calculate the ITD error (Eq. 10) between the left and right channels of the ground truth speech (\(B^{L}\), \(B^{R}\)) and the reconstructed speech (\(\hat{B}^{L}\), \(\hat{B}^{R}\)).
\[\mathbf{E_{ITD}}=\mathbb{E}[\|ITD(B^{L},B^{R})-ITD(\hat{B}^{L},\hat{B}^{R})\|]. \tag{10}\]
We define the ILD error for left channel (\(\mathbf{E_{ILDL}}\)) and right channel (\(\mathbf{E_{ILDR}}\)) as follows:
\[\mathbf{E_{ILDL}}=\mathbb{E}[20\log_{10}\frac{\|\hat{B}^{L}\|_{2}^{2}}{\|B^{L }\|_{2}^{2}}]. \tag{11}\]
\[\mathbf{E_{ILDR}}=\mathbb{E}[20\log_{10}\frac{\|\hat{B}^{R}\|_{2}^{2}}{\|B^{R }\|_{2}^{2}}]. \tag{12}\]
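A minimal numpy sketch of the GCC-PHAT delay estimate used inside Eq. (10) and of the level difference inside Eqs. (11)-(12); the sinusoidal test signals, the small dither noise, and the 0.5 ms shift are illustrative assumptions.

```python
import numpy as np

def itd_gcc_phat(left, right, fs):
    """Inter-channel time delay via GCC-PHAT, in seconds."""
    n = len(left) + len(right)
    X = np.fft.rfft(left, n) * np.conj(np.fft.rfft(right, n))
    cc = np.fft.irfft(X / (np.abs(X) + 1e-12), n)   # phase transform
    cc = np.concatenate((cc[-n // 2:], cc[:n // 2]))
    return (np.argmax(np.abs(cc)) - n // 2) / fs

def ild(ref, est):
    """Level difference in dB, the quantity inside Eqs. (11)-(12)."""
    return 20 * np.log10(np.sum(est ** 2) / np.sum(ref ** 2))

fs = 48000
t = np.arange(fs) / fs
left = np.sin(2 * np.pi * 440.0 * t) + 1e-3 * np.random.randn(fs)
right = np.roll(left, 24)          # 24 samples = 0.5 ms inter-channel delay
print(abs(itd_gcc_phat(left, right, fs)))   # ~5.0e-4 s
print(ild(left, 0.5 * left))                # ~ -12 dB
```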
**Results:** Table 2 presents the ITD and ILD errors of the binaural speech reconstructed by the different baselines and our approach. We can see that our approach gives the lowest ITD error for both the single-speaker and two-speaker cases. Also, our approach achieves lower ILD errors (\(\mathbf{E_{ILDL}}\), \(\mathbf{E_{ILDR}}\)) than every baseline except Encodec-48. Encodec-48 needs four times more bandwidth, its compression rate is around 50 times lower than that of our approach, and it is only suitable for non-streamable usage. For a fair comparison, we compare our model with Encodec-12, and we observe that our approach outperforms it by 18% and 3% for the single-speaker and two-speaker cases, respectively. We also compare three different variations of our single-speaker model and observe that replacing the simple speech decoder with a HiFi-GAN vocoder improves the ITD error by 29%, and adding the spectrogram loss improves the clean speech estimation quality (STOI) by 15%.
Table 3 shows the BIR estimation error of our approach. We observe that improving the binaural speech estimation quality by using the HiFi-GAN vocoder and the spectrogram loss indirectly contributes to improved BIR estimation, reducing the overall errors of \(T_{60}\), DRR, EDT, and CTE by 6.9%, 62.1%, 56.6% and 64%, respectively, in the single-speaker scenario. We can see that the performance of our two-speaker model is comparable to our single-speaker model MAD-V1.
## 5 Conclusion and Future Work
We introduced M\({}^{3}\)-AUDIODEC, an innovative multi-channel neural audio codec designed for both single-speaker and multi-speaker multi-spatial overlapped speech. Our approach outperforms traditional and neural audio codecs of similar bandwidth in preserving binaural acoustic effects by up to 52%. We propose a novel approach to compress the speech content and spatial details separately and show that our approach can significantly reduce the bandwidth of compressing binaural speech, by 48% when compared to compressing each channel using AUDIODEC. Given the intricacy of this domain and the need to experience output speech via headphones, our evaluations centered on binaural two-speaker overlapped speech. Future endeavors will expand to encompass compression and decoding of overlapped speech involving varied speaker counts and spatial configurations.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**Speakers** & **Channel** & **Method** & \(\mathbf{T_{60}}\) **(ms)** \(\downarrow\) & **DRR (dB)** \(\downarrow\) & **EDT (ms)** \(\downarrow\) & **CTE (dB)** \(\downarrow\) \\ \hline
Single & Left & MAD-V1 & 25.3 & 2.79 & 86.7 & 2.23 \\
Single & Left & MAD-V2 & **20.9** & 2.21 & 67.0 & 1.44 \\
**Single** & **Left** & **MAD (ours)** & 22.7 & **1.08** & **39.4** & **0.79** \\ \hline
**Two** & **Left** & **MAD (ours)** & **25.2** & **3.41** & **80.1** & **2.52** \\ \hline \hline
Single & Right & MAD-V1 & 23.8 & 2.84 & 84.7 & 2.09 \\
Single & Right & MAD-V2 & **21.4** & 2.35 & 64.9 & 1.33 \\
**Single** & **Right** & **MAD (ours)** & 23.0 & **1.05** & **35.0** & **0.77** \\ \hline
**Two** & **Right** & **MAD (ours)** & **25.6** & **3.30** & **83.3** & **2.09** \\ \hline \hline
\end{tabular}
\end{table}
Table 3: BIR estimation error of our approach for single-speaker and two-speaker scenarios. Training binaural speech with spectrogram loss indirectly improves the BIR estimation significantly. BIR estimation of our network for a two-speaker scenario is comparable to a single-speaker scenario with a simple speech decoder (MAD-V1). We report the average error of two speakers for the two-speaker case. |
2309.08725 | Three-dimensional magnetic resonance tomography with sub-10 nanometer
resolution | We demonstrate three-dimensional magnetic resonance tomography with a
resolution down to 5.99 +- 0.07 nm. Our measurements use lithographically
fabricated microwires as a source of three-dimensional magnetic field
gradients, which we use to image NV centers in a densely doped diamond by
Fourier-accelerated magnetic resonance tomography. We also present a compressed
sensing scheme for imaging of a spatially localized ensemble from undersampled
data, which allows for a direct visual interpretation without numerical
optimization. The resolution achieved in our work approaches the positioning
accuracy of site-directed spin labeling, paving the way to three-dimensional
structure analysis by magnetic-gradient based tomography. | Mohammad T Amawi, Andrii Trelin, You Huang, Paul Weinbrenner, Francesco Poggiali, Joachim Leibold, Martin Schalk, Friedemann Reinhard | 2023-09-15T19:24:36Z | http://arxiv.org/abs/2309.08725v1 | # Three-dimensional magnetic resonance tomography with sub-10 nanometer resolution
###### Abstract
We demonstrate three-dimensional magnetic resonance tomography with a resolution down to \(5.99\pm 0.07\) nm. Our measurements use lithographically fabricated microwires as a source of three-dimensional magnetic field gradients, which we use to image NV centers in a densely doped diamond by Fourier-accelerated magnetic resonance tomography. We also present a compressed sensing scheme for imaging of a spatially localized ensemble from undersampled data, which allows for a direct visual interpretation without numerical optimization. The resolution achieved in our work approaches the positioning accuracy of site-directed spin labeling, paving the way to three-dimensional structure analysis by magnetic-gradient based tomography.
In recent years, various nano-sensors, most prominently magnetic resonance force microscopy (MRFM) and nitrogen-vacancy (NV) centers, have enabled the detection of small ensembles of electron [1; 2] and nuclear [3; 4; 5] spins, partially down to the level of single spins. Translating this power from mere detection to a three-dimensional imaging technique promises transformative applications. Three-dimensional imaging of color centers would enable selective addressing and readout of networks of coherently coupled color centers in densely doped samples [6; 7], or the detection of elementary particles by high-resolution mapping of the crystal strain they induce upon impact [8]. Applied to electron spins, it would enable three-dimensional imaging of spin-labeled proteins. Such a technique would in particular provide distance constraints for label distances of \(>80\) Å and for proteins labeled with arbitrarily many electron spins, filling two blind spots of present electron spin resonance spectroscopy. Applied to nuclear spins, it would provide an ultimate microscope, able to image within opaque samples with label-free chemical contrast.
Conceptually, the step from detection to imaging is straightforward. Applying a magnetic field gradient is all it takes to turn a magnetic resonance spectrum into a one-dimensional image. Multiple gradients along linearly independent directions can encode multi-dimensional images, most beautifully illustrated in the output of clinical magnetic resonance imaging scanners. If the gradients can be switched faster than the duration of the spectroscopy sequence, Fourier-accelerated techniques can acquire extended volumes in reasonable time [9]. In clinical scanners, these comprise thousands of voxels, and similar data volumes are expected for particle detectors or imaging of a densely spin-labeled protein. Yet, three-dimensional Fourier-accelerated imaging at the nanoscale has remained elusive.
Imaging by less scalable techniques has been demonstrated multiple times. Three-dimensional imaging of nuclear spins with atomic resolution has been achieved using the intrinsic field gradient emerging from the magnetic dipole field of a color center [10; 11]. However, only the closest few nanometers around a defect can be imaged by this approach so that it is limited to intrinsic spins in the diamond so far. One-dimensional and two-dimensional [12] imaging in static gradients has been demonstrated, including resolving two adjacent centers by the gradient field of a hard-drive write head [13]. Three-dimensional images of intrinsic electron spins in a diamond have been obtained using a static gradient positioned by a scanning probe [14]. Fourier acceleration has remained out of reach of these approaches where gradients cannot be switched within a spectroscopy sequence.
Fourier acceleration of imaging has been demonstrated [7; 9] using quickly switchable conductors as gradient sources, but experiments with nanoscale resolution have remained limited to one- and two-dimensional proofs of the concept. One-dimensional imaging in MRFM by current-driven gradients has very recently even achieved sub-Angstrom resolution [15].
Here we demonstrate Fourier-accelerated three-dimensional imaging with nanometer-scale resolution. The key is a device to produce three linearly independent magnetic field gradients from a two-dimensional layout of conductors (Fig. 1a). Three microfabricated wires, arranged in a U-shape structure, create linearly
independent gradient fields in a plane a few microns beneath the structure. This device is fabricated via lift-off photolithography on a diamond substrate hosting a dense (\([NV]\approx 0.13\) ppb) ensemble of NV centers (Fig. 1(a)). The U-microstructure consists of a 200 nm gold film on top of a 10 nm thick titanium layer. Each of its three arms is 5 \(\mathrm{\mu m}\) long and 500 nm wide. The top arm also serves as a microwave antenna to implement single-qubit gates. We generate switchable magnetic field gradients by sending currents, labeled \(\mathrm{I}_{1}\), \(\mathrm{I}_{2}\), and \(\mathrm{I}_{3}\) in Fig. 1(a), into the three arms of the U microstructure. All currents are terminated with a 50 \(\Omega\) resistance at the same vertex of the U structure. In all measurements a homogeneous bias magnetic field of \(B_{0}\approx 76\) G is applied along one of the four NV axes. This device is used to implement gradient echo pulse sequences, like the one-dimensional example shown in Fig. 1(b-d). A Hahn echo sequence decouples the NV centers from static and slowly fluctuating background fields, to enable \(T_{2}\)-limited sensing. A magnetic field gradient pulse, created by the current \(I_{1}\), is applied during one half of the echo sequence. This phase-encodes the position, because an NV center at point \(\vec{x}\) acquires a position-dependent phase shift
\[\phi(\vec{x},t)=\int_{0}^{t}\omega(\vec{B}_{I}(\vec{x},\tau))d\tau\approx \omega(\vec{B}_{I}(\vec{x}))t \tag{1}\]
where \(\omega(\vec{B}_{I}(\vec{x},\tau))\) denotes the shift in the Larmor frequency induced by the current \(I\) and the approximation holds for pulses close to a rectangular shape. At the end of the Hahn echo sequence, this phase shift translates into an oscillatory spin signal
\[\langle\hat{S}_{z}\rangle(t)=\begin{cases}(1+\cos(\omega(\vec{x})t))/2&\text{trailing $\pi_{x}/2$ pulse}\\ (1+\sin(\omega(\vec{x})t))/2&\text{trailing $\pi_{y}/2$ pulse}\end{cases}\]
For a distribution of NV centers, the oscillatory signals of all centers will linearly superpose to a characteristic beating pattern
\[\langle\hat{S}_{z}\rangle(t)=\Bigg{(}1+\sum_{\vec{x}_{NV}}\cos(\omega(\vec{x} _{NV})t)\Bigg{)}/2\]
(assuming a trailing \(\pi_{x}/2\) pulse). The various \(\omega(\vec{x}_{NV})\) can be recovered from this signal by an inverse Fourier transform, creating a one-dimensional image of the NV centers (Fig. 1(d)).
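A numerical illustration of this one-dimensional imaging principle: the beating signal of a few NV centers is generated and the individual Larmor shifts are recovered by a Fourier transform. The three shift frequencies and the 64 µs record length (close to the 60 µs of Fig. 1(d)) are illustrative assumptions, chosen to fall exactly on FFT bins.

```python
import numpy as np

f_nv = np.array([0.75e6, 1.25e6, 2.0e6])   # assumed Larmor shifts (Hz)
N, T = 4096, 64e-6
t = np.arange(N) * (T / N)

# Normalized beating signals for the two trailing-pulse phases
sx = (1 + np.cos(2 * np.pi * np.outer(f_nv, t)).sum(0) / len(f_nv)) / 2
sy = (1 + np.sin(2 * np.pi * np.outer(f_nv, t)).sum(0) / len(f_nv)) / 2

# The two quadratures combine into a complex signal; its spectrum has one
# peak per NV center at the current-induced Larmor shift.
spectrum = np.abs(np.fft.fft((sx - 0.5) + 1j * (sy - 0.5)))
freqs = np.fft.fftfreq(N, T / N)
print(np.sort(freqs[np.argsort(spectrum)[-3:]]) / 1e6)  # 0.75, 1.25, 2.0 MHz
```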
One major challenge of this experiment consists in creating sufficiently rectangular pulses to satisfy the approximation of eq. (1). This requires a stable current supply that moreover has to be controlled with a fast (100 MHz) bandwidth to ensure that the rising and falling edges are quasi-instantaneous, i.e. much shorter than one period of the current-induced Larmor frequency \(\omega(\vec{B}_{I})\). Stability within every current pulse is required, because any variation of \(\omega(\vec{B}(\vec{x},\tau))\) over the pulse will introduce a chirp in the time domain signal (Fig. 1(c)), which will blur the image in the frequency domain (Fig. 1(d)). Stability between successive experimental repetitions is required, because shot-to-shot fluctuations of the magnetic field induce decoherence (see below).
We experimentally address these constraints by two means (Fig. 1(e)). First, the current pulses are generated by switching a stable voltage source (Keithley 2230G-30-6) using fast switches (ic-Haus HGP), ensuring nearly rectangular pulses. Second, we correct for residual nonlinearities and fluctuations by measurement and online post-processing. We acquire the current integral \(\int_{0}^{t}I(\tau)d\tau\) for every pulse of every experimental repetition by hardware-integration on a fast A/D converter (Spectrum M4i:4451-x8), and use this value to define a new
Figure 1: Experimental setup and one-dimensional magnetic resonance tomography of NV centers. (a) Electron micrograph of a device as used in the present study. Currents in the three gold wires of a microfabricated U-Structure create three linearly independent magnetic field gradients in the densely doped diamond below the structure. (b) Pulse sequence for one-dimensional imaging. A magnetic gradient pulse (length \(t_{\textit{eff}}\)) inserted into a Hahn echo sequence phase-encodes position. The NV spin state is initialized and the spin projection \(\langle\hat{S}_{z}\rangle\) is read out optically. (c) Measurement result of (b). \(\pi/2_{x}\) and \(\pi/2_{y}\) denote the phase of the trailing \(\pi/2\) pulse in (b). (d) Fourier transform of a dataset like (c) extending to \(t=60\) μs. Every NV center gives rise to one peak at the Larmor frequency set by the magnetic field \(B_{I_{1}}(d)\) of the wire. \(d\) denotes the distance from wire \(I_{1}\). (e) Experimental setup. A single microwave generator, a \(90^{\circ}\) splitter and two microwave switches are used to implement the Hahn Echo sequence. A confocal microscope with an avalanche photodiode (APD) as a detector is used for NV center polarization and readout. The gradient currents \(I_{1},I_{2},I_{3}\) are created from a constant voltage source and can be pulsed by a fast switch. The voltage drop across the resistor is recorded by an A/D-converter and the pulse integral \(\int Idt\) is saved for every single current pulse.
time axis for all photonic measurements that removes chirps. Specifically, we make the following approximation (valid for nearly rectangular pulses and a nearly linear Zeeman shift)
\[\int_{0}^{t}\omega(\vec{B}_{I}(\vec{x},\tau))d\tau\approx\omega_{I,\textit{ref}} \frac{\int_{0}^{t}I(\tau)d\tau}{I_{0}}=\omega_{I,\textit{ref}}t_{\textit{eff}}\]
Here, \(\omega_{I,\textit{ref}}(\vec{x})\) is the shift in Larmor frequency induced by some reference current \(I_{0}\) and \(t_{\textit{eff}}\) denotes an "effective pulse duration". We thus absorb minor fluctuations of the current over the pulse into this redefined time coordinate \(t_{\textit{eff}}\). All time-domain plots in this paper will use \(t_{\textit{eff}}\) as time axis unless noted otherwise. This correction also suppresses shot-to-shot fluctuations, improving coherence [16].
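A sketch of this online correction; the synthetic current trace, the reference current \(I_{0}\) and the nominal 10 µs pulse are illustrative, while the 100 MHz digitizer rate matches the bandwidth quoted above.

```python
import numpy as np

def effective_time(current_trace, dt, I0):
    """t_eff = (integral of I dt) / I0 for one gradient pulse."""
    return np.sum(current_trace) * dt / I0

dt = 1e-8                                        # 100 MHz digitizer
I0 = 0.1                                         # reference current (A, assumed)
pulse = I0 * (1 + 0.01 * np.random.randn(1000))  # noisy ~10 us pulse
print(effective_time(pulse, dt, I0))             # close to the nominal 1e-5 s
```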
We now extend this one-dimensional magnetic resonance tomography to three dimensions, employing the three magnetic field gradients provided by the currents \(I_{1},I_{2},I_{3}\) of our device. Note that these gradients are linearly independent if the focal spot of the microscope is placed a few micrometers below the plane of the U-structure.
During the Hahn echo sequence, the pulses of these three currents are applied consecutively (see Fig. 2(a)). Since the accumulated phases simply add up linearly, the resulting spin signal is given by:
\[\langle\hat{S}_{z}\rangle(t)=\Bigg{(}1+\sum_{\vec{x}_{NV}}\cos( \omega_{I_{1}}(\vec{x}_{NV})t_{1}+\\ \omega_{I_{2}}(\vec{x}_{NV})t_{2}+\omega_{I_{3}}(\vec{x}_{NV})t_{ 3})\Bigg{)}/2\]
(assuming a trailing \(\pi_{x}/2\) pulse).
In analogy to one-dimensional tomography, the set \((\omega_{I_{1}}(\vec{x}_{NV}),\omega_{I_{2}}(\vec{x}_{NV}),\omega_{I_{3}}(\vec{x}_{NV}))\) can be recovered from the three-dimensional time domain data (Fig. 2(b)) by a 3D inverse Fourier transform, forming a three-dimensional image. We note that this resulting image is distorted, because the gradients, while linearly independent, are not fully orthogonal. While this distortion can in principle be corrected by computing and inverting the exact spatial distribution of the frequency shift \(\omega_{I}(\vec{x})\), the raw result of the Fourier transform is still a true three-dimensional image. This resulting image reveals individual NV centers in the diamond. Only NV centers within the confocal volume of the microscope can be imaged. For the given NV density, \(5-15\) centers are expected to be visible.
One challenge of such multidimensional measurements is the large number of required data points in Fourier space, e.g. \(10^{6}\) points in Fig. 2. This number can be reduced by compressed sensing, where only a subset of points is acquired, and the image is reconstructed by numerical techniques like \(L_{1}\) minimization, exploiting the _a priori_ knowledge that the signal is a sparse set of discrete points [9]. Interestingly, our experimental setting allows for another compressed sensing approach. It does not require elaborate numerical reconstruction and exploits a different kind of _a priori_ knowledge: that the signal is restricted to a narrow region of interest, i.e. a narrow band in frequency space. In this special case, we can implement an effective "zoom" into this region of interest by undersampling the signal in the time domain, lowering the number of data points. Undersampling leads to aliasing of the signal in frequency space. For suitable parameters, this will shift the signal frequency band to a contiguous low frequency window where it can still be recovered by the inverse Fourier transform, effectively implementing a zoom. We demonstrate a proof of concept of this idea in Fig. 3. The simulated one-dimensional time and frequency domain plots (left part Fig. 3 (a)) display a limited frequency band. When the time-domain signal is undersampled, i.e. sampled at a rate such that the Nyquist frequency \(f_{Nyq}\) is smaller than the highest signal frequency, any signal at \(f>f_{Nyq}\) will be aliased to a frequency
\[f_{obs}=|f-2N\cdot f_{Nyq}|\, \tag{2}\]
where \(N\) is the integer minimizing \(|f-2Nf_{Nyq}|\). In (Fig. 3 (a), middle plot), the signal band (around \(f\approx 30\) MHz) is
Figure 2: Three-dimensional magnetic resonance tomography. (a) Pulse sequence. The sequence of Fig. 1 is extended to contain three magnetic gradient pulses from different wires. (b) Time domain data recorded from the sequence of (a) ending with \(\pi/2_{x}\). (c) Three-dimensional Fourier transform of the data in (b). The plot shows the square of the absolute value (spectral power) of the Fourier transform. \(d_{1},d_{2},d_{3}\) denote the distance to the respective wire (see supplementary). The bottom, left, and back faces show projections of the 3D data.
close to twice the Nyquist frequency (\(2\cdot f_{Nyq}=35\) MHz) and hence aliased to a region close to \(f=0\) MHz. Note that the aliasing involves mirroring of the signal band, because the signal is at a lower frequency than the closest even multiple of \(f_{Nyq}\).
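Equation (2) is easy to evaluate numerically; a minimal sketch using the band and Nyquist values quoted above:

```python
import numpy as np

def aliased_frequency(f, f_nyq):
    """Eq. (2): f_obs = |f - 2*N*f_Nyq| with N the integer minimizing it."""
    N = np.round(f / (2.0 * f_nyq))
    return abs(f - 2.0 * N * f_nyq)

# band around 30 MHz, undersampled such that 2*f_Nyq = 35 MHz
for f in (28.0, 30.0, 32.0):
    print(f"{f} MHz -> {aliased_frequency(f, 17.5)} MHz")
```

The output (7, 5, 3 MHz) shows the mirroring of the band: increasing signal frequencies map to decreasing aliased frequencies, as described above.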
We demonstrate this concept by acquiring three separate two-dimensional measurements (Fig. 3 (b-g)), which display NV centers in a limited region of interest (upper right corner in Fig. 3 (b,c)), defined by the confocal volume of the microscope. This process of undersampling and aliasing implements a zoom into the region of interest (Fig. 3 (d-e)). Note that this process requires the signal to be confined to a limited window of frequencies. Since frequencies differing by an integer multiple of \(2f_{Nyq}\) will appear at the same \(f_{obs}\) (Eq. 2), signals outside the zoom window will fold back into the signal of interest. To prevent contamination of the resulting image, the signal should be bandpass limited, i.e. values outside the frequency range of the signal should be zero and the Nyquist frequency should not fall below the bandwidth of the NV signal spectrum. For a suitable parameter choice of the undersampling, a zoom can be achieved that exactly covers the region of interest (Fig. 3 (f,g)), allowing for the acquisition of a full image with a greatly reduced number of data points. In the specific example (Fig. 3 (f,g)), the two dimensions are undersampled by factors of 6 and 3, reducing the number of data points by a factor of 18, i.e. more than an order of magnitude. Note that reconstruction and visualization are still feasible by an inverse Fourier transform (Fig. 3 (f)). \(L_{1}\) minimization is not required for reconstruction, but can still be implemented to improve the quality of the image and/or further reduce the number of data points required (Fig. 3 (c,e,g)). We finally note a constraint of the technique. The undersampled data points have to be placed equidistantly in time, since any jitter or chirp will lead to a spectral broadening in the frequency-space image. Since a variation of the gradient current over the duration of a pulse is indistinguishable from a variation in timing, this also places higher demands on the constancy of currents, i.e.
Figure 3: Aliasing magnification and speed-up of Fourier magnetic imaging. The plots in (a) show a simulated signal and its Fourier transform. Going through the columns from left to right, the sampling rate is reduced, resulting in a slower oscillation (red trace) and a lower Nyquist frequency (black-dashed line). Aliasing around the closest even multiple of the undersampled Nyquist frequency shifts the signal to a window close to \(f=0\), but does not change its shape. The Nyquist frequency of the undersampled signal has to be chosen large enough to cover the entire signal bandwidth. The signal is shifted to negative frequencies, and hence appears flipped in a frequency axis using \(|f|\), if it sits on the left of the closest even multiple of the Nyquist frequency. Panels b-g display measured 2D images of NV centers acquired by taking the FFT of the time domain signal (b, d, f) or by performing an \(L_{1}\) minimization of the time domain signal (c, e, g). (b,c) The Nyquist frequencies for each gradient direction (x and y axes) were set to be larger than the highest frequency in that direction (no aliasing). For (d,e) and (f,g) the measurement was done using an aliased grid; the aliasing factor for each direction is shown in orange above the FFT panel of each measurement. In (d,e) the undersampling parameters are such that the signal is flipped in both axes. For (f,g) the signal is flipped in the horizontal axis, but remains unflipped in the vertical axis. The aliased measurement (f,g) reduces the acquisition time by a factor of 10.
the requirement of rectangular current pulses discussed above.
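The zoom itself can be sketched in a few lines; the band centre, number of tones, and sampling rates below are illustrative choices rather than the experimental parameters:

```python
import numpy as np

tones = 30.0 + 2.0 * (np.random.default_rng(1).random(5) - 0.5)  # band ~30 MHz

def spectrum(fs_MHz, duration_us):
    n = int(fs_MHz * duration_us)            # equidistant samples
    t = np.arange(n) / fs_MHz                # time in us
    x = sum(np.cos(2 * np.pi * f * t) for f in tones)
    return np.fft.rfftfreq(n, d=1 / fs_MHz), np.abs(np.fft.rfft(x))

f_full, s_full = spectrum(fs_MHz=80.0, duration_us=12.8)  # no aliasing
f_sub, s_sub = spectrum(fs_MHz=35.0, duration_us=12.8)    # 2*f_Nyq = 35 MHz
print("full sampling : peak near", round(f_full[np.argmax(s_full)], 2), "MHz")
print("undersampled  : peak near", round(f_sub[np.argmax(s_sub)], 2), "MHz")
```

Both traces share the same frequency resolution, set by the total duration, while the undersampled one needs fewer than half the points; the band reappears mirrored near \(|f-35\,\mathrm{MHz}|\).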
We finally analyze the spatial resolution that is achieved in our measurement. This is defined by the magnitude of the magnetic field gradient, and the spectral resolution of the spectroscopy. The frequency resolution of Fourier-transformed data is given as the inverse of the length of the time domain signal. Analogous to that, the frequency (and thus spatial) resolution of our magnetic resonance tomography depends on how long we can make the gradient pulse and still observe an oscillatory spin signal. The longest usable pulse is limited by the fact that the spin signal decays over time on a timescale of \(\approx 10\) μs (see e.g. Fig. 4 (a-b)) because of shot-to-shot fluctuations of the gradient currents, which result in the decoherence of the NV centers. Denoting the timescale of this decay by \(T_{2,I}\) (i.e. the coherence time in the presence of the gradient current), the frequency resolution is
\[\Delta f=\frac{\sqrt{2}}{\pi T_{2,I}}\]
where \(\Delta f\) denotes the full width at half-maximum (FWHM) of the peak in frequency space. See the supplementary information for an explanation of the \(\frac{\sqrt{2}}{\pi}\) factor. We extract \(T_{2,I}\) from a long 1D tomography data set, extending to several multiples of \(T_{2,I}\). We calculate the Fourier transform for a short time window and "slide" this window over the whole range of the time domain signal. The resulting spectrogram (Fig. 4 (b)) shows the evolution of the NV spectrum with increasing gradient pulse lengths. The signal from a single spin appears as a horizontal line in a specific frequency band. The decaying power of the signal with increasing time in this band defines the SNR over the measurement (see supplementary). We fit the SNR curve (Fig. 4 (c)) with a Gaussian \(e^{-t_{\textit{eff}}^{2}/2T_{2,I}^{2}}\) to obtain \(T_{2,I}\). For the data of Fig. 4 we thus arrive at a coherence time of \(T_{2,I_{2}}=8.64\pm 0.1\)\ \(\mu\)s. Combined with a gradient of \(||\nabla\omega(\vec{x})||/2\pi=6.34\) kHz/nm, obtained from a numerical simulation of the gradient field (see supplementary), this corresponds to a spatial resolution of \(\sigma_{x,I_{2}}=8.22\pm 0.10\) nm. Similarly, we obtain \(\sigma_{x,I_{1}}=5.99\pm 0.07\) nm and \(\sigma_{x,I_{3}}=14.47\pm 0.50\) nm for the other two gradient currents. This resolution could be limited by several effects. First, shot-to-shot fluctuations of the current could shorten \(T_{2,I}\). We try to suppress this by hardware-integrating every single current pulse (see above) and applying post-processing corrections, but this process is equally limited by electronic noise at a lower level. Second, a spatial drift of the current path between successive experimental repetitions can equally lead to a decrease of \(T_{2}\). A spatial drift could arise from heat expansion of the diamond and the conductors, but an expansion on the level of \(10^{-3}\) would require a temperature difference of \(\approx 1000\) K, which seems unlikely. A drift of the current path within the conductor, due to local heating, appears more reasonable. Intriguingly, the product \(\omega_{NV}T_{2}\), i.e. the relative stability of the gradient field, differs between the three wires. This tentatively suggests that spatial drifts of the current in the wires are the limiting factor rather than electrical fluctuations, which would be expected to be the same in all wires.
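The quoted resolution follows directly from the fitted coherence time and the simulated gradient; a short numerical check with the values given above for \(I_{2}\):

```python
import numpy as np

T2_I = 8.64e-6                      # s, fitted coherence time for I_2
grad = 6.34e3 / 1e-9                # Hz/m  (6.34 kHz/nm from the simulation)
df = np.sqrt(2) / (np.pi * T2_I)    # FWHM frequency resolution, Hz
sigma = df / grad                   # spatial resolution, m
print(f"df = {df / 1e3:.1f} kHz, sigma = {sigma * 1e9:.2f} nm")
# -> df = 52.1 kHz, sigma = 8.22 nm
```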
In summary, we have demonstrated Fourier-accelerated 3D imaging of single spins with nanoscale resolution. We have also presented a compressed sensing scheme, which exploits a limited field of view, rather than sparseness of the data. Our experiments demonstrate that resolution in the sub-10 nm range can be achieved by switchable magnetic field gradients.
While our experiment has been performed on NV centers inside the diamond, the device and measurement technique could equally be applied to dark spins outside of the diamond. Here a single NV center would merely serve as a detector, enabling electron/nuclear spin spectroscopy on external spins, while the entire process of imaging could be performed by the device presented here. Our compressed sensing technique of "Fourier zooming" will be especially advantageous in this setting, where all the spins are confined to the nanoscale detection volume of a shallow NV center. Such a direct
Figure 4: Benchmarking of the spatial resolution for I\({}_{2}\). (a) Time-domain signal of a one-dimensional tomography (sequence of Fig. 1(b)). Excerpts at different time windows are shown. (b) Spectrogram (windowed Fourier transform) of (a). The signal produced by the NV centers decays over a timescale of \(\approx 10\) μs. (c) Signal-to-noise ratio of (b), computed by integrating the power in the signal window marked in (b) and referencing it to the noise observed outside this window (see supplementary). A Gaussian fit to the data yields a decay timescale \(T_{2,I_{2}}=8.64\pm 0.1\) μs. (d) Fourier transform (absolute value) of the time domain signal in (a).
3D imaging technique could image an arbitrary number of spins and constrain inter-spin distances larger than 80 Å, which is not possible by current electron spin resonance spectroscopy. Shrinking the structure by one order of magnitude would even push the resolution into the range of Å. Notably, the \(T_{2}\) of established spin labels is sufficiently long for the spectroscopy presented here [17].
This work has been supported by the Deutsche Forschungsgemeinschaft (DFG, grants RE3606/1-2, RE3606/3-1 and excellence cluster MCQST EXC-2111-390814868, SFB 1477 "Light-Matter Interactions at Interfaces" (Project No. 441234705)) and the European Union (ASTERIQS, Grant Agreement No. 820394). Y.H. acknowledges financial support from the China Scholarship Council. The authors acknowledge the help of Regina Lange and Anja Clasen with taking the SEM picture and helpful technical discussions with John Marohn.
|
2309.06708 | Predicting Fatigue Crack Growth via Path Slicing and Re-Weighting | Predicting potential risks associated with the fatigue of key structural
components is crucial in engineering design. However, fatigue often involves
entangled complexities of material microstructures and service conditions,
making diagnosis and prognosis of fatigue damage challenging. We report a
statistical learning framework to predict the growth of fatigue cracks and the
life-to-failure of the components under loading conditions with uncertainties.
Digital libraries of fatigue crack patterns and the remaining life are
constructed by high-fidelity physical simulations. Dimensionality reduction and
neural network architectures are then used to learn the history dependence and
nonlinearity of fatigue crack growth. Path-slicing and re-weighting techniques
are introduced to handle the statistical noises and rare events. The predicted
fatigue crack patterns are self-updated and self-corrected by the evolving
crack patterns. The end-to-end approach is validated by representative examples
with fatigue cracks in plates, which showcase the digital-twin scenario in
real-time structural health monitoring and fatigue life prediction for
maintenance management decision-making. | Yingjie Zhao, Yong Liu, Zhiping Xu | 2023-09-13T04:13:11Z | http://arxiv.org/abs/2309.06708v1 | # Predicting Fatigue Crack Growth via Path Slicing and Re-Weighting
###### Abstract
Predicting potential risks associated with the fatigue of key structural components is crucial in engineering design. However, fatigue often involves entangled complexities of material microstructures and service conditions, making diagnosis and prognosis of fatigue damage challenging. We report a statistical learning framework to predict the growth of fatigue cracks and the life-to-failure of the components under loading conditions with uncertainties. Digital libraries of fatigue crack patterns and the remaining life are constructed by high-fidelity physical simulations. Dimensionality reduction and neural network architectures are then used to learn the history dependence and nonlinearity of fatigue crack growth. Path-slicing and re-weighting techniques are introduced to handle the statistical noises and rare events. The predicted fatigue crack patterns are self-updated and self-corrected by the evolving crack patterns. The end-to-end approach is validated by representative examples with fatigue cracks in plates, which showcase the digital-twin scenario in real-time structural health monitoring and fatigue life prediction for maintenance management decision-making.
## Introduction
Fatigue life prediction (FLP) is of critical importance for structural integrity design in, for example, aerospace and nuclear engineering [1]. After fatigue initiation with accumulated damage, fatigue cracks grow and can be monitored after the size reaches a detection threshold. In practice, periodic inspection is commonly arranged to identify flaws in the structural components. The information is fed into fracture mechanics analysis (FMA) where the remaining life can be calculated from empirical rules of fatigue crack growth (FCG). Predictive maintenance schemes could further reduce the life-cycle cost and increase system safety, which have been actively explored in recent studies [2].
However, the physics governing fatigue is entangled with the microstructural evolution of materials and the profiles of loading conditions [3, 4]. The microscopic processes of fatigue may involve plasticity, fracture, and phase transitions, which are defined by chemical compositions, and atomic-level and microscopic structures [5]. As a result, material fatigue, like fluid turbulence, becomes a complex system process of the microscopic components with their interaction spanning across multiple space and time scales [6]. The nonlinearity and heterogeneity embedded in the mathematical or data-driven models make predicting behaviors of these systems challenging [7]. Fingerprint features of materials and structural components as well as the history or path dependence of FCG make their responses susceptible to statistical noises and rare events resulting from intrinsic or extrinsic sources [8, 9]. Uncertainties thus exist in the microstructure-sensitive constitutive relations of materials and the loading conditions in experimental tests or under specific service conditions, respectively, which can alter the processes of material damage
and FCG [10].
Material responses during fatigue can be characterized by statistical data of the fatigue life or FCG rates obtained from experimental tests [11, 12]. With this knowledge, recent developments in data science and machine learning techniques allow engineers to take a data-driven approach to structural health monitoring (SHM) and FLP [13]. However, a practical solution that tackles the history dependence, statistical noises, and rare events has not been established yet [14, 15]. With digital libraries constructed from high-fidelity physical simulations, neural networks with specially designed architectures can extract the characteristic features and make predictions [16, 17]. The history dependence can be handled by the long short-term memory (LSTM) network by introducing gating and memory functions [18]. Surrogate models trained by machine learning techniques can reduce the computational costs significantly from physical modeling. Prediction in a real-time or digital-twin paradigm can thus be achieved, where the interaction with physical sensor networks (PSNs) can be included to update the model parameters, quantify the uncertainties, and evaluate the models based on the Bayesian theory [19, 20].
In this work, we develop a statistical learning framework to predict the growth of fatigue cracks and the life-to-failure of structural components. Digital libraries are constructed by finite element analysis (FEA). Variational autoencoder (VAE) is used to learn the latent representation of the fatigue cracks, followed by LSTM and feedforward neural network (FNN) for the history dependence and the life prediction of FCG, respectively. Path slicing and re-weighting techniques are introduced to address uncertainties in the service conditions. Fully resolved fatigue crack paths and accurate prediction of the remaining life are demonstrated by examples.
## Results
### The statistical learning framework.
In engineering, FLP can be achieved through techniques that can be organized into four levels (Figure 1). Empirical models for the relations between the FCG rate (\(\mathrm{d}a/\mathrm{d}N\)) and the amplitude of stress intensity factors (SIFs, \(\Delta K\)) can be fed into analytical solutions of SIF under specific loading conditions and sample geometries [21]. FEA calculates the evolution of SIFs as fatigue cracks grow with high accuracy. The surrogate models constructed using, for example, statistical learning methods can reduce the computational costs. Digital twins with accurate and fast algorithms of FLP and interaction with the physical systems can be implemented for real-time monitoring and prognosis. Our FLP framework includes modules of data generation, model training, and applications (Figure 2, see Experimental Procedures for details). The extended finite element method (XFEM) is utilized for modeling and constructing digital libraries. Statistical noises and rare events are considered through the loading amplitudes with Gaussian distributions in FCG modeling using XFEM. The produced datasets contain fatigue cracks and their corresponding residual life. We combine VAE, LSTM, and FNN in model training, which is used for SHM and FLP for the structural components in the application module. A path-slicing technique is used based on the datasets produced by FEA for the history dependence and statistical noises, and a re-weighting technique is introduced in the training process to signify the impact of rare events on the model parameters (Figure 3A).
### Dimensionality reduction of the crack patterns.
Crack patterns or fractured surfaces modeled by XFEM are stored as voxel data. These data characterize the competition between the material resistance to fracture and the driving force of FCG. For simple structures, the mapping between the far-field loading conditions and the crack-tip driving forces can be captured by FMA. As a result, dimensionality reduction of the crack patterns is performed to extract crack surfaces to improve the learning efficiency in downstream tasks. We use VAE to learn the latent representations of fatigue cracks in reduced dimensions. The density distributions of fatigue crack patterns spanned in the space of the two primary features are summarized in Figure 3B. The latent representations of fatigue cracks tend to follow the Gaussian distribution owing to the competition between the reconstruction loss and the Kullback-Leibler (KL) divergence in VAE (see Experimental Procedures for details)[22].
### Path slicing and re-weighting for history dependence, statistical noises, and rare events.
We introduce a path-slicing technique to address the complexity of the loading profiles and the effects of statistical noises on FCG (Figure 3A). The domain of interest is discretized to \(N_{\mathrm{s}}\) regions. As FCG proceeds across their boundaries and reaches the next region, the tension and shear loads are re-sampled from the Gaussian distribution (Figure 3A). Structural components in service may also experience rare events such as catastrophic failure of the components and unexpected external impact. Rare events are identified from the fatigue crack patterns in the pre-constructed digital libraries by unsupervised clustering in the latent space (shown as red data points in the space spanned by two major features in Figure 3C)[23]. Their weights are boosted in downstream training to improve the performance of prediction, where an enrichment factor
\(\lambda\) is defined for the updated loss function
\[\mathrm{Loss}=\mathrm{MSE}(\mathbf{z},\widehat{\mathbf{z}})+\lambda\mathrm{MSE}( \mathbf{z}_{\mathrm{rare}},\widehat{\mathbf{z}}_{\mathrm{rare}}), \tag{1}\]
where \(\mathbf{z}\) and \(\widehat{\mathbf{z}}\) are the ground truth and predictions of the latent vectors, respectively. The subscript indicates the rare records of fatigue cracks.
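A minimal sketch of Eq. (1) in Python/NumPy (the enrichment factor of \(\lambda=500\) anticipates the value used in the training described in the Experimental Procedures):

```python
import numpy as np

def reweighted_loss(z, z_hat, rare_mask, lam=500.0):
    """MSE over all latent vectors plus a lambda-boosted MSE over the
    records flagged as rare events, Eq. (1)."""
    mse = np.mean((z - z_hat) ** 2)
    mse_rare = np.mean((z[rare_mask] - z_hat[rare_mask]) ** 2) if rare_mask.any() else 0.0
    return mse + lam * mse_rare

# usage: z, z_hat are (batch, latent_dim); rare_mask flags the rare records
z = np.random.randn(64, 8)
z_hat = z + 0.1 * np.random.randn(64, 8)
rare_mask = np.zeros(64, dtype=bool)
rare_mask[:3] = True
print(reweighted_loss(z, z_hat, rare_mask))
```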
### Prediction and correction in digital twins.
To demonstrate the capability of our framework, we consider a flat plate subjected to loads with statistical noises and rare events as a representative example (Figure 4). Figures 4A-4C show the representative fatigue crack paths from FLP. The results show that without path slicing or re-weighting, the deflection of fatigue cracks cannot be correctly predicted. FCG is predicted to continue along the direction of the embryo crack, missing the deflections induced by the uncertainties in the loading conditions (Figure 4A). In contrast, path slicing solves the problem by constantly updating the model prediction based on observation in experiments or physical modeling (in our work) at time \(t_{\mathrm{obs}}\), correctly predicting crack deflection (Figure 4B). With re-weighting further implemented, the process of FCG and the remaining life are accurately predicted (Figure 4C).
For quantitative assessment of the framework, the root mean square error (RMSE) and structural similarity (SSIM) are evaluated between the predicted crack path and the ground truth as a function of the observation time, \(t_{\mathrm{obs}}\) (Figure 4D, see Experimental Procedures for details). RMSE offers a local measure for the deviation of fatigue crack paths, showing that combining path slicing and re-weighting effectively minimizes the errors by addressing the history dependence and noisy features in the loading conditions as well as the presence of rare events.
SSIM measures the global similarity from the mean and variance of the voxel values, showing that the predictions of fatigue crack paths are improved as \(t_{\mathrm{obs}}\) increases. Figures 5C and 5D show the error in life prediction as well as the accuracy of representative paths and all the path samples in the test set, respectively, which comprises \(20\%\) of data in the digital libraries. As \(t_{\mathrm{obs}}\) increases, the performance of models equipped with path slicing and re-weighting is continuously improved.
In practice, the loading conditions can change continuously in service, the complexity of which is quantified using symbolic aggregate approximation (SAX) (Figures 6A-6C, see Experimental Procedures for details) [24]. We find that path slicing and re-weighting can significantly reduce the data complexity actually needed in the statistical learning framework. Specifically for the plate example, with only \(200\) loading-profile samples in the training set, path slicing with \(N_{\mathrm{s}}=5\) can accurately predict FCG in the test set with \(200\) samples, even for the loading conditions with statistical noises and rare events (Figures 6B and 6C). It should be noted that the performance of prediction depends on the size of training sets and the choice of \(N_{\mathrm{s}}\). Their values are determined for structural components for specific geometries, sizes, and loading conditions, and can be guided by the analysis of the loading-profile complexity.
## Discussion and Conclusion
To predict FCG and the life-to-failure in real time, our digital framework needs only the information on fatigue crack morphologies, which can be integrated with PSNs and FMA to assess the model performance [25]. In addition to crack patterns captured by high-speed cameras, PSN
data such as local strain measured by strain gauges, elastic waves detected by acoustic emission sensors, and electrical impedance measured by piezoelectric sensors can be used to identify or infer the features of fatigue cracks and loading conditions [26]. These data can be fed into FMA to calculate the stress fields at the crack tips based on analytical models or FEA [25]. FCG is then predicted by using, for example, the Paris-Erdogan equation, and can be used to evaluate the accuracy of our statistical learning framework. In practice, implementing digital twins for FLP still faces challenges in addition to real-time monitoring of fatigue crack patterns. Firstly, the use of sensors on critical structural components such as turbine blades in the engines may not be feasible in a harsh environment. Even for structural components where sensors can be installed, the driving force of FCG at the crack tip needs to be calculated from limited data from the PSNs. Data-driven approaches were recently proposed to address this issue. Strain fields are predicted from data collected from digital image correlation (DIC) and a limited number of strain gauge sensors using deep neural networks and experimental libraries [26]. Secondly, FMA using FEA, although offering higher accuracy compared to analytical models, demands a considerable amount of time for numerical calculations, especially for structural components with complex geometries, crack-tip morphologies, and load conditions (Figure 6D). The applications of our model in FLP can be extended straightforwardly to engineering components such as the turbine blades with a 3D geometry, where the surface cracks are represented by using the point-cloud representation [27, 28]. However, the digital models need to be updated with the observed physical states, typically on computing timescales of milliseconds. Pre-calculated digital libraries and surrogate models mitigate this burden, although including extremely rare events that
deviate far from the Gaussian distribution (e.g., bird strikes on aircraft) remains difficult. Our end-to-end, self-updated model thus offers FLP from the evolution of fatigue crack patterns and can interact with PSNs for performance evaluation. The demonstrated accuracy and efficiency, achieved by accounting for the history dependence, statistical noises, and rare events, make the approach suitable for integration into digital twins of aerospace and nuclear power applications.
## Experimental procedures
### Resource availability
#### Lead contact
Further information and requests for resources should be directed to and will be fulfilled by the lead contact, Zhiping Xu ([email protected]).
#### Materials availability
This study did not generate new unique materials.
#### Data and code availability
All data needed to evaluate the conclusions are present in the paper. Additional data related to this paper may be requested from the lead contact. The code used for this study is available at [https://github.com/zhaoyj21/FCG](https://github.com/zhaoyj21/FCG).
### Modeling fatigue crack growth
To construct the digital libraries of fatigue crack patterns, we consider a linear elastic material for the sake of simplicity. Young's modulus \(Y=200\) GPa and Poisson's ratio \(\nu=0.31\) (e.g.,
for a typical nickel alloy) are chosen as a representative example [29]. XFEM is used to model FCG under uniaxial tension and additional shear components. In XFEM, special functions are added to the continuous displacement fields to model discontinuity problems such as cracks and interfaces, which resolves difficulties in meshing high-stress areas near the crack tip in FEA [30]. The step of crack advancement is set to \(0.3\ \mathrm{mm}\), which is small enough in comparison to the size of the structural components (\(10\ \mathrm{mm}\)) to ensure convergence. The angle of crack deflection (\(\theta\)) is determined by the maximum tangential stress criterion [31],
\[\theta=\arccos\frac{3K_{\mathrm{II}}^{2}+\sqrt{K_{\mathrm{I}}^{4}+8K_{\mathrm{ I}}^{2}K_{\mathrm{II}}^{2}}}{K_{\mathrm{I}}^{2}+9K_{\mathrm{II}}^{2}}, \tag{2}\]
where \(K_{\mathrm{I}}\) and \(K_{\mathrm{II}}\) are the mode-I and mode-II stress intensity factors (SIFs), respectively. The remaining life of structures with an existing crack is evaluated by integration using the Paris-Erdogan equation [32],
\[\mathrm{d}a/\mathrm{d}N=C(\Delta K)^{m}, \tag{3}\]
where \(\Delta K\) is the difference between the maximum and minimum SIFs in a load cycle, \(C=9.7\times 10^{-12}\) and \(m=3.0\) are material coefficients [33].
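For illustration, the remaining life follows from integrating Eq. (3) over the crack size; the sketch below assumes the textbook centre-crack relation \(\Delta K=\Delta\sigma\sqrt{\pi a}\) and a hypothetical stress range, neither of which is specified in the text:

```python
import numpy as np

C, m = 9.7e-12, 3.0               # material coefficients from the text
d_sigma = 100.0                   # MPa, hypothetical stress range
a0, af = 0.3e-3, 5.0e-3           # assumed initial/final crack sizes, m

a = np.linspace(a0, af, 100_000)
dK = d_sigma * np.sqrt(np.pi * a)        # MPa*sqrt(m), geometry factor 1
N = np.trapz(1.0 / (C * dK ** m), a)     # N = integral of da / (C * dK^m)
print(f"remaining life ~ {N:.2e} cycles")
```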
A path-slicing technique is proposed for the uncertainties in service conditions, where the structure is discretized into \(N_{\mathrm{s}}\) segments. The loading conditions change across their boundaries, which follow the distribution of statistical noises (assumed to be Gaussian, Figure 3A). Tails of the Gaussian distribution with a relative probability below \(0.05\) are considered as rare events. The produced digital libraries contain \(1,000\) voxel datasets of the crack patterns and their corresponding life to failure.
### Model reduction
Fatigue cracks can be represented as curves or surfaces for 2D or 3D models. In this work, we use VAE for nonlinear dimensionality reduction into the latent representations, where the fatigue crack patterns in the digital libraries follow Gaussian distributions [22]. The encoder and decoder in VAE approximate the posterior and likelihood distributions, respectively,
\[p(\mathbf{z}\mid\mathbf{x})=\frac{p(\mathbf{x}\mid\mathbf{z})p(\mathbf{z})}{p( \mathbf{x})}, \tag{4}\]
where \(p(\mathbf{z}\mid\mathbf{x})\) is the posterior, \(p(\mathbf{x}\mid\mathbf{z})\) is the likelihood, and \(p(\mathbf{z})\) is the prior distribution, \(\mathbf{x}\) is the fatigue crack and \(\mathbf{z}\) is the latent representation in reduced dimension. The high-dimensional data is fed into the encoder to reduce the dimensionality. The decoder then recovers the high-dimensional data from the latent reduced representation. The loss function of the training process is defined by the reconstruction errors and the KL divergence [22],
\[\mathrm{Loss}=\|\mathbf{x}-\widehat{\mathbf{x}}\|+\mathrm{KL}(\mathcal{N}( \mu,\mathbf{\Sigma})\|\mathcal{N}(\mathbf{0},\mathbf{I})), \tag{5}\]
\[\mathrm{KL}(\mathcal{N}(\mu,\mathbf{\Sigma})\|\mathcal{N}(\mathbf{0},\mathbf{ I}))=\int\mathcal{N}(\mu,\mathbf{\Sigma})\ln\frac{\mathcal{N}(\mu,\mathbf{ \Sigma})}{\mathcal{N}(\mathbf{0},\mathbf{I})}\mathrm{d}\mathbf{z}, \tag{6}\]
where \(\widehat{\mathbf{x}}\) is the reconstruction of the fatigue crack, \(\mu\) and \(\mathbf{\Sigma}\) are the mean and covariance matrix of latent representations learned by neural networks. The latent representations in reduced dimension are used to learn the history or path dependence and the nonlinear mapping in downstream tasks.
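For a diagonal-Gaussian posterior, the KL term of Eq. (6) has a closed form, so the loss of Eq. (5) can be sketched as follows (a NumPy illustration, not the actual training code):

```python
import numpy as np

def vae_loss(x, x_hat, mu, log_var):
    """Reconstruction error plus KL(N(mu, diag(exp(log_var))) || N(0, I))."""
    recon = np.linalg.norm(x - x_hat)                       # ||x - x_hat||
    kl = 0.5 * np.sum(mu ** 2 + np.exp(log_var) - 1.0 - log_var)
    return recon + kl
```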
### Statistical learning
FCG depends on the loading conditions and the instantaneous crack configuration with loading-history or crack-path dependence. LSTM and FNN are integrated to achieve efficient prediction
of the path and life of FCG at the structural level. LSTM is used to train and predict FCG from the observed fatigue crack patterns. The encoder and decoder both have two LSTM layers followed by a fully connected layer. There are \(100\) neural units in each layer unless otherwise noted. The Adam optimizer is adopted to update the model parameters in training with hyperparameters, \(\eta=10^{-3}\) (the learning rate), \(\beta_{1}=0.9\), \(\beta_{2}=0.99\), and \(\epsilon=10^{-7}\)[34]. A re-weighting technique is proposed here for the rare events, where the weight of fatigue cracks with low probability is boosted by \(500\) times. The predicted growing crack patterns are fed to FNN to forecast the remaining life.
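For reference, a single Adam update with the hyperparameters quoted above looks as follows (a sketch; the actual networks are trained with a deep-learning framework):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, eta=1e-3, b1=0.9, b2=0.99, eps=1e-7):
    """One Adam update with the hyperparameters given in the text."""
    m = b1 * m + (1 - b1) * grad            # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2       # second-moment estimate
    m_hat = m / (1 - b1 ** t)               # bias corrections
    v_hat = v / (1 - b2 ** t)
    theta = theta - eta * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```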
### Verification and validation
The root mean square error (RMSE) between the predicted crack paths (\(\mathbf{r}^{\rm pred}\)) and the ground truth (\(\mathbf{r}^{\rm truth}\)) obtained from FEA is defined as
\[\mathrm{RMSE}=\sqrt{\sum_{i=k}^{n}\frac{(\mathbf{r}_{i}^{\rm pred}-\mathbf{r}_{ i}^{\rm truth})^{2}}{n-k+1}}, \tag{7}\]
where the full path of a crack is discretized into \(n\) points and points \(1\) to \(k\) are known from observation. The structural similarity (SSIM) is calculated as
\[\mathrm{SSIM}=\frac{(2\mu_{\rm p}\mu_{\rm t}+c_{1})(2\sigma_{\rm pt}+c_{2})}{( \mu_{\rm p}^{2}+\mu_{\rm t}^{2}+c_{1})(\sigma_{\rm p}^{2}+\sigma_{\rm t}^{2}+c _{2})}, \tag{8}\]
where \(\mu_{\rm p}\) and \(\mu_{\rm t}\) are the mean values of the prediction and ground-truth voxel data, respectively. \(\sigma_{\rm p}\) and \(\sigma_{\rm t}\) are their variances, and \(\sigma_{\rm pt}\) is their covariance. \(c_{1}=(0.01R)^{2}\) and \(c_{2}=(0.03R)^{2}\) are parameters defined by the range of the voxel values, \(R\)[35, 36]. A low RMSE value or a high SSIM score indicates a high accuracy of prediction.
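Both metrics are straightforward to evaluate; a sketch assuming crack paths stored as \((n,2)\) coordinate arrays and images as voxel arrays:

```python
import numpy as np

def rmse(pred, truth, k):
    """Eq. (7): error over the unobserved points k..n of a crack path."""
    d2 = np.sum((pred[k - 1:] - truth[k - 1:]) ** 2, axis=-1)
    return np.sqrt(np.mean(d2))

def ssim(p, t, R=1.0):
    """Eq. (8), evaluated globally over two voxel arrays with value range R."""
    c1, c2 = (0.01 * R) ** 2, (0.03 * R) ** 2
    mu_p, mu_t = p.mean(), t.mean()
    cov = ((p - mu_p) * (t - mu_t)).mean()
    return ((2 * mu_p * mu_t + c1) * (2 * cov + c2)) / (
        (mu_p ** 2 + mu_t ** 2 + c1) * (p.var() + t.var() + c2))
```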
### The complexity of loading profiles
The complexity of loading profiles with various levels of statistical noises and rare events is measured by employing the symbolic aggregate approximation (SAX) [24]. SAX transforms time series data such as the loading profiles into symbolic representations (e.g., words) using piecewise aggregate approximation (PAA) and discretization [24]. The loading profile within a time span of \(m\) (\(t_{1},t_{2},\cdots,t_{m}\)) can be represented by a vector \(\mathbf{A}\) in a reduced \(w\)-dimensional space according to PAA, that is
\[a_{i} =\frac{w}{m}\sum_{j=\frac{m}{w}(i-1)+1}^{\frac{m}{w}i}t_{j}, \tag{9}\] \[\mathbf{A} =[a_{1},a_{2},\cdots,a_{w}]. \tag{10}\]
\(a_{i}\) (\(i=1,2,\cdots,w\)) are then discretized to letters (\(\widehat{a}_{i}\)) into \(l\) (\(=10\)) equally sized regions according to the value of \(a_{i}\), that is
\[\widehat{a}_{i}=\mathrm{letter}_{j},\;\mathrm{if}\;\beta_{j-1}\leq a_{i}<\beta _{j}, \tag{11}\]
where \(\beta_{j}\) (\(j=0,1,\cdots,l\)) are the breakpoints delimiting the \(l\) equally sized regions. The loading profile is then represented as a word \(\widehat{A}\) with \(w\) letters.
\[\widehat{A}=\widehat{a}_{1},\widehat{a}_{2},\cdots,\widehat{a}_{w}, \tag{12}\]
and data complexity of the loading profile can be estimated by the number of possible words as
\[\mathrm{data\ complexity}=l^{w}, \tag{13}\]
where \(l\) is the size of the alphabet. To ensure the accuracy of PAA, we choose larger values of \(w\) as the complexity of the time series increases, namely \(1,\;4,\;8,\;9\) for constant loads, weak statistical noises, strong statistical noises, and rare events, respectively.
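A compact sketch of the SAX transform of Eqs. (9)-(12), assuming equal-width value bins for the discretization step:

```python
import numpy as np

def sax(series, w, l=10):
    """PAA to w segments (Eqs. 9-10), then discretization into l letters."""
    m = len(series) - len(series) % w           # truncate so w divides m
    a = series[:m].reshape(w, -1).mean(axis=1)  # piecewise aggregate approx.
    edges = np.linspace(series.min(), series.max(), l + 1)[1:-1]
    idx = np.digitize(a, edges)                 # Eq. (11)
    return "".join(chr(ord("a") + j) for j in idx)

load = 100 + 5 * np.random.default_rng(0).standard_normal(64)  # noisy profile
print(sax(load, w=8), "-> data complexity l**w =", 10 ** 8)    # Eq. (13)
```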
## Acknowledgments
This study was supported by the National Natural Science Foundation of China through grants 11825203, 11832010, 11921002, 52090032, 12122204, and 11872150. The computation was performed on the Explorer 100 cluster system of the Tsinghua National Laboratory for Information Science and Technology.
## Author contributions
Z.X. conceived and supervised the research. Y.Z. and Y.L. performed the finite element simulations and analysis. Y.Z. developed the statistical learning codes. All authors wrote the manuscript.
## Declaration of interests
The authors declare that they have no competing financial interests. |
2309.12755 | Low Scale Leptogenesis in Singlet-Triplet Scotogenic Model | The scotogenic model presents an elegant and succinct framework for
elucidating the origin of tiny neutrino masses within the framework of the
Standard Model, employing radiative corrections within the domain of the dark
sector. We investigate the possibility of achieving low-scale leptogenesis in
the singlet-triplet scotogenic model (STSM), where dark matter mediates
neutrino mass generation. We initially considered a scenario involving two
moderately hierarchical heavy fermions, N and $\Sigma$, wherein the lepton
asymmetry is generated by the out-of-equilibrium decay of both particles. Our
analysis indicates that the scale of leptogenesis in this scenario is similar
to that of standard thermal leptogenesis and is approximately $M_{N,\Sigma}\sim
10^{9}$ GeV, which is comparable to the Type-I seesaw case. Further, we
consider the case with three heavy fermions ($N_1$, $N_2$, and $\Sigma$) with
the hierarchy $M_{N_{1}} < M_{\Sigma} \ll M_{N_{2}}$, which yields the lower
bound on heavy fermions up to 3.1 TeV, therefore significantly reduce the scale
of the leptogenesis up to TeV scale. The only prerequisite is suppression in
the $N_{1}$ and $\Sigma$ Yukawa couplings, which causes suppressed washout
effects and a small active neutrino mass of about $10^{-5}$ eV. This brings
about the fascinating insight that experiments aiming to measure the absolute
neutrino mass scale can test low-scale leptogenesis in the scotogenic model.
Further, the hyperchargeless scalar triplet $\Omega$ provides an additional
contribution to mass of the $W$-boson explaining CDF-II result. | Labh Singh, Devabrat Mahanta, Surender Verma | 2023-09-22T09:55:23Z | http://arxiv.org/abs/2309.12755v2 | # Low Scale Leptogenesis in Singlet-Triplet Scotogenic Model
###### Abstract
The scotogenic model presents an elegant and succinct framework for elucidating the origin of tiny neutrino masses within the framework of the Standard Model, employing radiative corrections within the domain of the dark sector. We investigate the possibility of achieving low-scale leptogenesis in the singlet-triplet scotogenic model (STSM), where dark matter mediates neutrino mass generation. We initially considered a scenario involving two moderately hierarchical heavy fermions, N and \(\Sigma\), wherein the lepton asymmetry is generated by the out-of-equilibrium decay of both particles. Our analysis indicates that the scale of leptogenesis in this scenario is similar to that of standard thermal leptogenesis and is approximately \(M_{N,\Sigma}\sim 10^{9}\) GeV, which is comparable to the Type-I seesaw case. Further, we consider the case with three heavy fermions (\(N_{1}\), \(N_{2}\), and \(\Sigma\)) with the hierarchy \(M_{N_{1}}<M_{\Sigma}\ll M_{N_{2}}\), which brings the lower bound on the heavy fermion masses down to 3.1 TeV, thereby significantly reducing the scale of leptogenesis to the TeV range. The only prerequisite is suppression in the \(N_{1}\) and \(\Sigma\) Yukawa couplings, which causes suppressed washout effects and a small active neutrino mass of about \(10^{-5}\) eV. This brings about the fascinating insight that experiments aiming to measure the absolute neutrino mass scale can test low-scale leptogenesis in the scotogenic model. Further, the hyperchargeless scalar triplet \(\Omega\) provides an additional contribution to the mass of the \(W\)-boson, explaining the CDF-II result.
**Keywords:** Leptogenesis; Baryogenesis; Phenomenology; Dark Matter; W-Boson Mass.
## 1 Introduction
Although the early Universe started with equal amounts of matter and antimatter, the present Universe contains an excess of matter over antimatter. To account for this asymmetry, a dynamical mechanism must be identified, one that can create an imbalance so that after matter and antimatter annihilate, a surplus of matter remains, giving rise to the visible matter we observe. This surplus of baryons over antibaryons is often expressed using the baryon-to-photon ratio, denoted as \(\eta_{B}\) [1, 2]:
\[\eta_{B}=\frac{n_{B}-n_{\bar{B}}}{n_{\gamma}}=6.1\times 10^{-10},\]
where the variables \(n_{B}\), \(n_{\bar{B}}\), and \(n_{\gamma}\) correspond to the number densities of baryons, antibaryons, and photons, respectively. There is substantial evidence suggesting the existence of more matter than antimatter in the Universe. This evidence comes primarily from two sources: the abundance of light elements present in the Universe and observations of cosmic microwave background radiation anisotropies [3, 4, 5]. In order to create this asymmetry dynamically, certain established prerequisites needed to exist during the early Universe. These prerequisites are commonly known as Sakharov's conditions [6]. These conditions encompass (i) the violation of baryon number conservation (B), (ii) the occurrence of C and CP violation, and (iii) a departure from thermal equilibrium. Although the SM falls short in satisfactorily fulfilling all these criteria, different theories beyond the Standard Model (BSM) have been proposed to offer a dynamic explanation for the observed baryon abundance in the Universe.
One of the most straightforward mechanisms among these hypotheses entails the addition of extra massive particles that can annihilate or decay into the SM particles in a way that meets all the prerequisites for successful baryogenesis [7, 8]. Another intriguing approach, which also connects to the physics of the lepton sector, is known as leptogenesis [9]. In its standard conceptualization, leptogenesis is intricately associated with the Type-I seesaw mechanism, designed to elucidate the origin of the small neutrino masses within the SM framework. This mechanism introduces two or more heavy right-handed fermions with significant Majorana masses. Under the standard thermal leptogenesis paradigm, these heavy fermions emerge through scattering interactions within the thermal bath; their CP-violating, out-of-equilibrium decays then catalyze the creation of an initial lepton asymmetry. Consequently, the \(B-L\) conserving and \(B+L\) violating electroweak sphaleron [10] processes translate this initial lepton asymmetry into the baryon asymmetry. Leptogenesis relies on CP violation within the lepton sector, which may be significant, as suggested by some neutrino oscillation experiments [11], since \(CP\) violation in the quark sector is not enough to account for the requisite baryon imbalance. Also,
the fact that lepton asymmetry can be produced by CP-violating, out-of-equilibrium decays of some heavy BSM fields involved in well-known seesaw processes [12, 13, 14] is an intriguing feature of this scenario.
Thermal leptogenesis has a limitation in that it requires an extremely high scale of right-handed heavy fermions. In other words, for thermal leptogenesis to be successful, the lightest Majorana fermion must have a minimum mass of around \(10^{9}\) GeV, which is known as the Davidson-Ibarra bound [15]. By incorporating flavor effects, this lower bound can be brought down to \(10^{8}\) GeV. It is, however, difficult to probe such a high mass scale in future collider experiments. This serves as motivation to seek an alternative to standard thermal leptogenesis that can produce the BAU at a much lower scale.
The purpose of this paper is to propose a low-scale alternative to standard thermal leptogenesis without considering degenerate heavy fermions [16]. Keeping this motivation in mind, we revisit the singlet-triplet scotogenic model [17, 18, 19, 20, 21, 22, 23, 24], which extends the original idea presented in [25] to make its phenomenology more viable and diverse. A scalar triplet that is \(Z_{2}\)-invariant facilitates the mixing between singlet and triplet fermions. At the tree level, this scalar triplet introduces additional contributions [26, 27, 28] to the W-mass as observed in CDF-II [29]. The extended scotogenic model has several important advantages over the minimal scotogenic model, including the fact that it prevents the unintended spontaneous breaking of the \(Z_{2}\) parity symmetry. This avoidance is possible due to the influence of new couplings that emerge in extensions of the minimal scenario [30]. The lightest \(Z_{2}\)-odd particle is inherently stable and, if electromagnetically neutral, emerges as a plausible candidate for dark matter. In our investigation, we have considered the real component of the scalar doublet to be our dark matter candidate. To achieve successful leptogenesis in the STSM, we take into account the out-of-equilibrium decay of both the heavy singlet (N) and triplet (\(\Sigma\)) fermions to produce the necessary lepton asymmetry by considering them to be moderately hierarchical (\(M_{N}\lesssim M_{\Sigma}\)) and, ultimately, the electroweak sphaleron transitions that can account for the observed BAU. Leptogenesis has been examined in previous studies and is a topic that has been covered extensively. In particular, the decay of singlet fermions has been discussed in Refs. [31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41] while Refs. [42, 43, 44, 45, 28] focus on the investigation of leptogenesis concerning the decay of triplet fermions. In the scotogenic model, leptogenesis occurs through various mechanisms that are determined by the mass spectrum in the \(Z_{2}\)-odd sector. In our model, an interesting question remains about the nature of the DM candidate. In the model under consideration, there can be two DM candidates: fermion DM, a mixed state of the singlet and triplet fermions, and the lightest component of the inert scalar doublet (\(\eta\)). In the fermionic DM case, the relic density requires larger Yukawa couplings, which results in huge washout effects
in the process of leptogenesis. However, in the scalar dark matter scenario, the relic density depends upon the gauge interactions of the \(\eta\). Therefore, realizing thermal leptogenesis with a specific mass spectrum is impossible in the former scenario. Furthermore, it's important to note that when dealing with fermionic dark matter, the significant Yukawa couplings can readily result in a breach of restrictions concerning lepton flavor violations. As a result, we decided to make the lightest part of \(\eta\) our DM candidate.
Various studies of thermal leptogenesis in the minimal scotogenic model are already present in the literature [46, 47, 48, 49]. We investigate the possibility of achieving low-scale leptogenesis in the phenomenologically rich STSM, where dark matter mediates neutrino mass generation. We initially considered a scenario involving two moderately hierarchical heavy fermions, which leads to a lightest neutrino mass eigenvalue of \(m_{1}=0\). In this scenario, lepton asymmetry is generated by the out-of-equilibrium decay of both particles denoted as N and \(\Sigma\). Our analysis shows that the scale of leptogenesis in this scenario is similar to that of the standard thermal leptogenesis in the Type-I seesaw mechanism. In order to reduce the scale of leptogenesis, we propose the addition of two singlet right-handed fermions (\(N_{1}\) and \(N_{2}\)) along with a triplet fermion \(\Sigma\) with a specific mass hierarchy: \(M_{N_{1}}<M_{\Sigma}\ll M_{N_{2}}\). Here, \(M_{N_{1}}\), \(M_{N_{2}}\), and \(M_{\Sigma}\) represent the respective masses of \(N_{1}\), \(N_{2}\), and \(\Sigma\). STSM is more phenomenologically rich than the minimal scotogenic model due to its additional contribution to the \(W\)-boson mass. In the context of the minimal scotogenic model, the possibility of TeV scale leptogenesis has been investigated in Refs. [32, 33]. Within the STSM scenario, if low-scale leptogenesis is feasible, then the gauge interactions of a TeV scale triplet fermion could be probed in future collider experiments. Lowering the leptogenesis scale to the sub-TeV range enhances the testability of the model in future collider experiments. Further, we study the phenomenology of scalar DM, where we choose the real component (\(\eta^{R}\)) of the neutral part of \(\eta\) as the candidate for scalar DM.
The structure of the paper is as follows: In Section 2, we begin by introducing the STSM and providing an overview of its essential characteristics. Section 3 delves into the \(W\)-boson mass anomaly observed in the CDF-II results. Section 4 is devoted to the phenomenology of the scalar DM. In Section 5, we formulate all necessary expressions for studying leptogenesis in the STSM. Considering the scenario with two heavy fermions, the analysis shows that this scenario offers no advantage over the standard thermal leptogenesis in Type-I seesaw framework (Section 5.1). We extend our model in Section 5.2 by introducing another right-handed singlet fermion to lower down the scale of leptogenesis. Section 5.3 will briefly present numerical analysis and discussion of leptogenesis scenario. In Section 6, we present conclusions of the work reported in this paper.
## 2 Model and Formalism
The assignments of fields within the STSM, subject to the \(SU(2)_{L}\times U(1)_{Y}\times Z_{2}\) symmetry, are presented in Table 1. The relevant invariant Yukawa Lagrangian is given by
\[{\cal L}=Y_{L}^{\alpha\beta}\overline{L_{\alpha}}\phi\ell_{\beta}+Y_{N}^{\alpha }\overline{L_{\alpha}}i\sigma_{2}\eta N+Y_{\Sigma}^{\alpha}\overline{L_{\alpha }}C\Sigma^{\dagger}i\sigma_{2}\eta+Y_{\Omega}Tr(\overline{\Sigma}\Omega)N+ \frac{1}{2}M_{N}\overline{N^{c}}N+\frac{1}{2}M_{\Sigma}Tr(\overline{\Sigma^{c} }\Sigma)+h.c., \tag{1}\]
where \(\alpha\), \(\beta\)=\(1,2,3\) denotes the three flavors. The \(SU(2)_{L}\) doublet fields \((\phi,\eta)\) and \(SU(2)_{L}\) triplet fields \((\Sigma,\Omega)\) can be expressed as
\[\phi=\left(\begin{array}{c}\phi^{+}\\ \phi^{0}\end{array}\right),\eta=\left(\begin{array}{c}\eta^{+}\\ \eta^{0}\end{array}\right),\Sigma=\left(\begin{array}{cc}\frac{\Sigma^{0}}{ \sqrt{2}}&\Sigma^{+}\\ \Sigma^{-}&-\frac{\Sigma^{0}}{\sqrt{2}}\end{array}\right)\ \mbox{and}\ \ \ \ \Omega=\left( \begin{array}{cc}\frac{\Omega^{0}}{\sqrt{2}}&\Omega^{+}\\ \Omega^{-}&-\frac{\Omega^{0}}{\sqrt{2}}\end{array}\right).\]
The invariant scalar potential of the model is given by
\[V = -m_{\phi}^{2}\phi^{\dagger}\phi+m_{\eta}^{2}\eta^{\dagger}\eta+ \frac{\lambda_{1}}{2}\left(\phi^{\dagger}\phi\right)^{2}+\frac{\lambda_{2}}{2} \left(\eta^{\dagger}\eta\right)^{2}+\frac{m_{\Omega}^{2}}{2}\operatorname{Tr }\left(\Omega^{\dagger}\Omega\right)+\lambda_{3}\left(\phi^{\dagger}\phi \right)\left(\eta^{\dagger}\eta\right)+\lambda_{4}\left(\phi^{\dagger}\eta \right)\left(\eta^{\dagger}\phi\right) \tag{2}\] \[+ \frac{\lambda_{5}}{2}\left[\left(\phi^{\dagger}\eta\right)^{2}+h.c.\right]+\frac{\lambda^{\eta}}{2}\left(\eta^{\dagger}\eta\right) \operatorname{Tr}\left(\Omega^{\dagger}\Omega\right)+\frac{\lambda_{1}^{ \Omega}}{2}\left(\phi^{\dagger}\phi\right)\operatorname{Tr}\left(\Omega^{ \dagger}\Omega\right)+\frac{\lambda_{2}^{\Omega}}{4}\operatorname{Tr}\left( \Omega^{\dagger}\Omega\right)^{2}\] \[+ \mu_{1}\phi^{\dagger}\Omega\phi+\mu_{2}\eta^{\dagger}\Omega\eta.\]
In order to maintain the perturbativity of the theory, it is required that all couplings \((\lambda_{i})\) remain less than or equal to one [30, 50]. Due to the conservation of \(\mathbb{Z}_{2}\) symmetry, the \(\eta\) field does not acquire a vacuum expectation value (VEV). The spontaneous electroweak symmetry breaking is triggered primarily by the neutral components of the \(\phi\) and \(\Omega\) fields,
\[\left\langle\phi^{0}\right\rangle=\frac{v_{\phi}}{\sqrt{2}},\ \ \ \left\langle\Omega^{0}\right\rangle=v_{\Omega}.\]
The scalar spectrum is categorized into two distinct sets: the \(\mathbb{Z}_{2}\)-even scalars, namely \(\phi^{0}\), \(\Omega^{0}\), \(\Omega^{\pm}\), and \(\phi^{\pm}\), and the \(\mathbb{Z}_{2}\)-odd scalars, namely \(\eta^{0}\) and \(\eta^{\pm}\). Among these, \(\eta^{0}\) has garnered significant attention as a promising dark matter candidate, with extensive studies documented in the
| Particle Content | Generations | Symmetry \((SU(2)_{L}\times U(1)_{Y}\times Z_{2})\) |
| :---: | :---: | :---: |
| \(L\) | 3 | \((2,-\frac{1}{2},+)\) |
| \(\phi\) | 1 | \((2,\frac{1}{2},+)\) |
| \(N\) | 1 | \((1,0,-)\) |
| \(\Sigma\) | 1 | \((3,0,-)\) |
| \(\Omega\) | 1 | \((3,0,+)\) |
| \(\eta\) | 1 | \((2,\frac{1}{2},-)\) |

Table 1: Particle content, generations, and symmetry assignments in the STSM under \(SU(2)_{L}\times U(1)_{Y}\times Z_{2}\) gauge symmetry.
literature [51, 52, 53, 54, 55, 56, 57]. Within this framework, the neutral scalars \(\phi^{0}\) and \(\Omega^{0}\) undergo mixing through a \(2\times 2\) matrix that can be parameterized by the angle \(\Theta\), such that
\[\left(\begin{array}{c}H_{1}\\ H_{2}\end{array}\right)=\left(\begin{array}{cc}\cos\Theta&\sin\Theta\\ -\sin\Theta&\cos\Theta\end{array}\right)\left(\begin{array}{c}\phi^{0}\\ \Omega^{0}\end{array}\right), \tag{3}\]
where
\[\tan(2\Theta)=\frac{4v_{\Omega}v_{\phi}\left(\sqrt{2}\mu_{1}-2\lambda_{1}^{\Omega} v_{\Omega}\right)}{8\lambda_{2}^{\Omega}v_{\Omega}^{3}-4\lambda_{1}v_{\Omega}v_{\phi}^{2}+ \sqrt{2}\mu_{1}v_{\phi}^{2}}. \tag{4}\]
In this theory, the lightest \(Z_{2}\)-even scalar, \(H_{1}\), is identified as the SM Higgs boson, while the heavier one, \(H_{2}\), remains a proposed new scalar Higgs boson. Similarly, the charged scalars \(\phi^{\pm}\) and \(\Omega^{\pm}\) will undergo mixing described by a \(2\times 2\) matrix given by
\[\left(\begin{array}{c}H_{1}^{\pm}\\ H_{2}^{\pm}\end{array}\right)=\left(\begin{array}{cc}\cos\delta&\sin\delta\\ -\sin\delta&\cos\delta\end{array}\right)\left(\begin{array}{c}\phi^{\pm}\\ \Omega^{\pm}\end{array}\right), \tag{5}\]
with
\[\tan(2\delta)=-\frac{4v_{\Omega}v_{\phi}}{v_{\phi}^{2}-4v_{\Omega}^{2}}. \tag{6}\]
It is to be noted that the lightest charged scalar, which is represented by \(H_{1}^{\pm}\), is the Goldstone boson. Also, the theory introduces a novel charged scalar (\(H_{2}^{\pm}\)) field. Further, the masses of the \(Z_{2}\)-odd scalars \(\eta^{0}\) and \(\eta^{\pm}\) are given by
\[m_{\eta^{\pm}}^{2}=m_{\eta}^{2}+\tfrac{1}{2}\lambda_{3}v_{\phi}^ {2}+\tfrac{1}{2}\lambda^{\eta}v_{\Omega}^{2}+\tfrac{1}{\sqrt{2}}v_{\Omega}\mu _{2}, \tag{7}\] \[m_{\eta^{R}}^{2}=m_{\eta}^{2}+\tfrac{1}{2}\lambda_{3}v_{\phi}^{2 }+\tfrac{1}{2}\left(\lambda_{4}+\lambda_{5}\right)v_{\phi}^{2}-\tfrac{1}{ \sqrt{2}}v_{\Omega}\mu_{2}+\tfrac{1}{2}\lambda^{\eta}v_{\Omega}^{2},\] (8) \[m_{\eta^{I}}^{2}=m_{\eta}^{2}+\tfrac{1}{2}\lambda_{3}v_{\phi}^{2 }+\tfrac{1}{2}\left(\lambda_{4}-\lambda_{5}\right)v_{\phi}^{2}-\tfrac{1}{ \sqrt{2}}v_{\Omega}\mu_{2}+\tfrac{1}{2}\lambda^{\eta}v_{\Omega}^{2}. \tag{9}\]
Within the fermionic sector, the fields \(\Sigma^{0}\) and \(N\), which possess \(\mathbb{Z}_{2}\)-odd properties, undergo mixing governed by the Yukawa coupling \(Y_{\Omega}\), which occurs in the presence of the non-zero vacuum expectation value \(v_{\Omega}\). In the basis of \((\Sigma^{0},N)\), the Majorana mass matrix is given by
\[M_{\chi}=\left(\begin{array}{cc}M_{\Sigma}&v_{\Omega}Y_{\Omega}\\ v_{\Omega}Y_{\Omega}&M_{N}\end{array}\right), \tag{10}\]
which is diagonalized by \(2\times 2\) matrix \(V(\alpha)\) as
\[\left(\begin{array}{c}\chi_{1}^{0}\\ \chi_{2}^{0}\end{array}\right)=V(\alpha)\left(\begin{array}{c}\Sigma^{0}\\ N\end{array}\right), \tag{11}\]
where
\[V(\alpha)=\left(\begin{array}{cc}\cos\alpha&\sin\alpha\\ -\sin\alpha&\cos\alpha\end{array}\right).\]
Consequently, masses of \(\chi^{\pm}\) and \(\chi^{0}_{i}\) eigenstates at tree-level are
\[m_{\chi^{\pm}}=M_{\Sigma}, \tag{12}\] \[m_{\chi^{0}_{1,2}}=\frac{1}{2}\left(M_{\Sigma}+M_{N}\mp\sqrt{\left( M_{\Sigma}-M_{N}\right)^{2}+4\left(v_{\Omega}Y_{\Omega}\right)^{2}}\right), \tag{13}\]
where the mixing angle \(\alpha\) satisfies the relation
\[\tan\left(2\alpha\right)=\frac{2v_{\Omega}Y_{\Omega}}{M_{\Sigma}-M_{N}}. \tag{14}\]
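As a quick numerical cross-check of Eqs. (12)-(14), with purely illustrative parameter values:

```python
import numpy as np

M_Sigma, M_N, Y_Omega, v_Omega = 2000.0, 1000.0, 0.5, 5.0  # GeV (illustrative)
M_chi = np.array([[M_Sigma, v_Omega * Y_Omega],
                  [v_Omega * Y_Omega, M_N]])

m1, m2 = np.linalg.eigvalsh(M_chi)                         # Eq. (13), ascending
alpha = 0.5 * np.arctan2(2 * v_Omega * Y_Omega, M_Sigma - M_N)  # Eq. (14)
print(f"m_chi1 = {m1:.4f} GeV, m_chi2 = {m2:.4f} GeV, alpha = {alpha:.2e} rad")
```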
In the model under investigation, the small neutrino masses are generated radiatively at the one-loop level as shown in Fig. 1. The real triplet scalar \(\Omega\), whose VEV accounts for an additional contribution to the \(W\)-boson mass, plays a vital role in neutrino mass generation, as the coupling \(Y_{\Omega}\) generates a mixing between the \(N\) and \(\Sigma\) fermions. At one-loop level, the neutrino mass matrix can be written as
\[\left(\mathcal{M}_{\nu}\right)_{\alpha\beta} =\sum_{i=1}^{2}\frac{h_{\alpha i}h_{\beta i}m_{\chi^{0}_{i}}}{2( 4\pi)^{2}}\left[\frac{m_{\eta^{R}}^{2}\ln\left(\frac{m_{\chi^{0}_{i}}^{2}}{m_{\eta^{R}}^{2}}\right)}{m_{\chi^{0}_{i}}^{2}-m_{\eta^{R}}^{2}}-\frac{m_{\eta^{I}}^{2}\ln \left(\frac{m_{\chi^{0}_{i}}^{2}}{m_{\eta^{I}}^{2}}\right)}{m_{\chi^{0}_{i}}^{2}-m_{\eta^{I}}^{2}}\right], \tag{15}\] \[=\sum_{i=1}^{2}\frac{h_{\alpha i}h_{\beta i}m_{\chi^{0}_{i}}}{2( 4\pi)^{2}}[L_{i}(m_{\eta^{R}}^{2})-L_{i}(m_{\eta^{I}}^{2})], \tag{16}\]
which can be further written as
\[=\sum_{i=1}^{2}h_{\alpha i}\Lambda_{i}\left(h^{T}\right)_{i\beta}=\left(h \Lambda h^{T}\right)_{\alpha\beta} \tag{17}\]
where \(h\) and \(\Lambda\) matrices are given by
\[h=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}Y_{\Sigma}^{1}&\sqrt{2}Y_{N}^{1} \\ Y_{\Sigma}^{2}&\sqrt{2}Y_{N}^{2}\\ Y_{\Sigma}^{3}&\sqrt{2}Y_{N}^{3}\end{array}\right)V^{T}(\alpha),\quad\Lambda= \left(\begin{array}{cc}\Lambda_{1}&0\\ 0&\Lambda_{2}\end{array}\right),\]
Figure 1: Feynman diagram used to generate neutrino masses at one-loop in STSM.
with
\[\Lambda_{i}=\frac{m_{\chi_{i}^{0}}}{2(4\pi)^{2}}\left[\frac{m_{\eta^{ R}}^{2}\ln\left(\frac{m_{\chi_{i}^{0}}^{2}}{m_{\eta^{ R}}^{2}}\right)}{m_{\chi_{i}^{0}}^{2}-m_{\eta^{R}}^{2}}-\frac{m_{\eta^{I}}^{2}\ln \left(\frac{m_{\chi_{i}^{0}}^{2}}{m_{\eta^{I}}^{2}}\right)}{m_{\chi_{i}^{0}}^{ 2}-m_{\eta^{I}}^{2}}\right]. \tag{18}\]
In the scenario where the masses of \(\eta^{R}\) and \(\eta^{I}\) are equal, _i.e._, \(m_{\eta^{R}}=m_{\eta^{I}}\), which corresponds to \(\lambda_{5}=0\), the resulting neutrino masses vanish and the lepton number is conserved. One convenient way to represent the Yukawa couplings, \(h_{\alpha i}\), is by employing the Casas-Ibarra parametrization [58, 59], which turns out to be
\[h=U^{*}\sqrt{\widetilde{M}}R\sqrt{\Lambda}^{-1}, \tag{19}\]
where \(U\) is the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix, \(\widetilde{M}=\text{diag}\left(m_{1},m_{2},m_{3}\right)\), where \(m_{i}\) are the physical neutrino masses, and \(\Lambda\) is given by Eqn. (18). Additionally, the matrix \(R\), which is a \(3\times 2\) orthogonal matrix satisfying \(R^{T}R=\mathbb{I}_{2\times 2}\), can be expressed as,
\[R=\left(\begin{array}{cc}0&0\\ \cos\theta&\sin\theta\\ -\sin\theta&\cos\theta\end{array}\right),\]
where \(\theta\) is a complex angle, and this description is applicable in the case of normal hierarchy only [17, 18, 19].
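In code, the Casas-Ibarra construction of Eq. (19) for the normal hierarchy reads as follows (a sketch; `U` is the PMNS matrix, `m_nu` holds \((m_{1}=0,m_{2},m_{3})\) in GeV, `Lam` the two loop factors of Eq. (18), and `theta` may be complex):

```python
import numpy as np

def yukawa_CI(U, m_nu, Lam, theta):
    """Yukawa matrix h of Eq. (19), with R as in the text (normal hierarchy)."""
    R = np.array([[0.0, 0.0],
                  [np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
    return U.conj() @ np.diag(np.sqrt(m_nu)) @ R @ np.diag(1.0 / np.sqrt(Lam))
```

A quick consistency check of this construction is that \(h\Lambda h^{T}\) reproduces \(U^{*}\,\text{diag}(0,m_{2},m_{3})\,U^{\dagger}\), i.e., the neutrino mass matrix of Eq. (17), since \(RR^{T}=\text{diag}(0,1,1)\).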
## 3 W-Mass Anomaly
It is important to note that the pseudo-scalar \(Z_{2}\)-even portion solely comprises the imaginary component of the neutral part of \(\phi\) and receives no contributions from \(\eta\) or \(\Omega\). Therefore, the \(Z\)-boson gets no contribution from \(\Omega\). The inclusion of the real scalar triplet \(\Omega\) with zero hypercharge contributes to the mass of the \(W\)-boson (\(M_{W}\)): when the field \(\Omega\) undergoes spontaneous symmetry breaking by acquiring a non-zero VEV, it imparts an extra contribution to \(M_{W}\). This supplementary contribution serves as a potential explanation for the experimental findings of the CDF-II collaboration [29]. The components of the Lagrangian pertinent to the mass of the \(W\)-boson are
\[\mathcal{L}^{\prime}=\left(D_{\mu}\phi\right)^{\dagger}\left(D^{\mu}\phi \right)+Tr[\left(D_{\mu}\Omega\right)^{\dagger}\left(D^{\mu}\Omega\right)],\]
where the covariant derivative is defined as
\[D_{\mu}=\partial_{\mu}+ig\frac{\sigma_{a}}{2}W_{\mu}^{a}+ig^{ \prime}\frac{Y}{2}B_{\mu}. \tag{20}\]
The coupling constants \(g\) and \(g^{\prime}\) correspond to the gauge symmetries \(SU(2)_{L}\) and \(U(1)_{Y}\), respectively.
Considering the non-zero value of \(v_{\Omega}\) allows us to deduce the masses of the gauge bosons resulting from the contribution of \(\Omega\). The expressions for the masses of the \(W\) and \(Z\) bosons are
\[M_{W}^{2}=(v_{\phi}^{2}+4v_{\Omega}^{2})\frac{g^{2}}{4}\ \ \mbox{and}\ \ M_{Z}^{2}=(g^{2}+g^{\prime 2})\frac{v_{\phi}^{2}}{4}. \tag{21}\]
Since \(\Omega\) lacks hypercharge, it remains uncoupled to the \(Z\)-boson, thus exerting no impact on its mass. The mass of the \(W\)-boson is now influenced by the scalar triplet's VEV \(v_{\Omega}\), prompting the utilization of the latest CDF-II measurement to establish an upper limit on \(v_{\Omega}\). The measured value of \(M_{W}\) obtained by the CDF-II experiment can be expressed as follows
\[(M_{\rm W}^{2})^{\rm CDF}-(M_{\rm W}^{2})^{\rm SM}=g^{2}v_{\Omega}^{2}, \tag{22}\]
where \((M_{\rm W}^{2})^{\rm SM}=\frac{g^{2}v_{\phi}^{2}}{4}\) is the squared mass of the \(W\)-boson predicted by the SM, and \(g=0.6517\) [60].
Based on the latest CDF-II outcome and the prediction of the SM, it becomes imperative to ensure, within the framework, that the impact on the observed value of \(M_{W}\) arises exclusively from the VEV of the real scalar triplet \(\Omega\). Hence, the constraint delineated in Eqn. (22) can be expressed as a bound on \(v_{\Omega}\), which is given by
\[4.8\:{\rm GeV}\leq v_{\Omega}\leq 6\:{\rm GeV}, \tag{23}\]
Figure 2: The plot illustrating the correlation between \(M_{W}\) and \(v_{\Omega}\). The horizontal dashed lines represent the SM predictions, the horizontal dotted lines indicate the 1-\(\sigma\) limit based on the CDF-II measurement, and the vertical solid lines signify the range of \(v_{\Omega}\) values that match with the CDF-II outcome.
and represented by the vertical band in Fig. 2. The CDF-II outcome and the prediction by the SM are indicated by dotted and dashed horizontal lines, respectively. It is to be noted that the range of \(v_{\Omega}\) obtained in Eqn. (23) is in consonance with that reported in Ref. [19].
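As a quick numerical cross-check, Eq. (22) can be inverted for \(v_{\Omega}\); taking as inputs the central values of the CDF-II measurement and of the SM prediction (here \(M_{W}^{\rm CDF}=80.4335\) GeV and \(M_{W}^{\rm SM}=80.357\) GeV, inserted by us for illustration), the result lands inside the band of Eq. (23):

```python
import numpy as np

MW_CDF, MW_SM, g = 80.4335, 80.357, 0.6517   # central values, GeV

# Invert Eq. (22): g^2 v_Omega^2 = (M_W^2)^CDF - (M_W^2)^SM.
v_Omega = np.sqrt(MW_CDF**2 - MW_SM**2) / g
print(f"v_Omega = {v_Omega:.2f} GeV")  # ~ 5.4 GeV, within [4.8, 6] GeV
```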
## 4 Phenomenology of Scalar Dark Matter
The model allows for the possibilities of both fermionic and bosonic DM candidates. Insofar as fermionic DM is concerned, the lighter of the neutral eigenstates \(\chi_{1}^{0}\) and \(\chi_{2}^{0}\) is a good candidate, while for bosonic DM the lightest particle amongst the components of \(\eta\) may serve as the potential DM candidate. Within the premise that the masses of \(N\) and \(\Sigma\) are sufficiently large, the preservation of \(Z_{2}\) symmetry designates the lighter of \(\eta^{R}\) and \(\eta^{I}\) as the promising candidate for DM. In our investigation, we contemplate \(\eta^{R}\) as the preferred DM candidate, given the condition \(\lambda_{5}<0\). Alternatively, in the scenario of \(\lambda_{5}>0\), \(\eta^{I}\) would be considered the potential DM candidate. We implement the model in SARAH [61] and subsequently employ SPheno [62] to effectively compute all relevant mass matrices and vertices. Further, micrOMEGAs-5.0.8 [63] served as a critical tool in solving the Boltzmann equations, evaluating the thermal relic density of WIMP DM and calculating the cross-section for direct-detection (DD) of the DM particle. In the left panel of Fig. 3, we depict the projected DM relic density (\(\Omega h^{2}\)) as a function of the DM mass (\(M_{DM}\)). The solid grey line represents the \(3\sigma\) range of relic density (0.1126\(\leq\Omega h^{2}\leq\) 0.1246) inferred from data obtained by the Planck satellite [65, 2]. We
\begin{table}
\begin{tabular}{c c}
\hline\hline
Parameter & Range \\
\hline\hline
\(M_{\Sigma},M_{N}\) & \([5\times 10^{3},10^{12}]\,\)GeV \\
\(m_{\eta}^{2}\) & \([10^{2},10^{10}]\,\)GeV \\
\(\mu_{i}\) & \([10,10^{5}]\,\)GeV \\
\(v_{\Omega}\) & \([2,6]\,\)GeV \\
\(\lambda_{5}\) & \([-1,-10^{-5}]\) \\
\(\lambda_{4}\) & \([-1,-10^{-5}]\) \\
\(\lambda_{2,3}\) & \([10^{-5},10^{-1}]\) \\
\(Y_{\Omega}\) & \([10^{-5},10^{-1}]\) \\
\(\lambda_{i}^{\Omega}\) & \([10^{-5},10^{-1}]\) \\
\(\lambda^{\eta}\) & \([10^{-5},10^{-1}]\) \\
\hline\hline
\end{tabular}
\end{table}
Table 2: Input parameters' ranges used for the numerical analysis.
performed a numerical scan of the input parameters within the defined intervals outlined in Table 2. In order to ensure a mass of 125 GeV for the lighter neutral scalar, \(H_{1}\), the parameter \(\lambda_{1}\) is adjusted accordingly.
The distinct dips observed can be understood by examining the annihilation and coannihilation processes, which are detailed in Appendix A. The initial trough observed at \(M_{\rm DM}\sim\frac{M_{Z}}{2}\) arises due to annihilation and coannihilation processes facilitated by the exchange of the \(Z\)-boson in the \(s\)-channel. The subsequent dip, located at \(M_{\rm DM}\sim 60\) GeV, is linked to annihilation processes mediated through the \(s\)-channel exchange of \(H_{1}\). In comparison, annihilation mediated by the exchange of \(H_{1}\) is more efficient than that involving the \(Z\)-boson exchange, primarily due to the momentum suppression that hampers the latter. As the mass of \(\eta^{R}\) increases, the quartic interactions with gauge bosons begin to play a significant role. For instance, the third dip in \(\Omega h^{2}\) is brought on by the annihilation of the DM candidate, \(\eta^{R}\eta^{R}\to W^{+}W^{-},ZZ\), _via_ quartic couplings for \(M_{\rm DM}\geq 80\) GeV. Moreover, for \(M_{\rm DM}\geq 120\) GeV and \(m_{\eta^{R}}\geq m_{t}\), \(\eta^{R}\) has the capability to undergo annihilation processes yielding two Higgs bosons (\(H_{1}H_{1}\)) and top-antitop quark pairs (\(t\bar{t}\)), respectively. Also, it is noted that, beyond 120 GeV, as \(M_{\rm DM}\) increases, the relic density rises. This phenomenon is attributed to the suppression of the annihilation cross-section, which decreases as \(\sim\frac{1}{M_{\rm DM}}\). Additionally, it is crucial to highlight that in parameter space regions characterized by a small \(\lambda_{5}\), coannihilation processes involving both \(\eta^{I}\) and \(\eta^{\pm}\) can take place.
Let us now explore the possibilities for DD of \(\eta^{R}\). The interaction cross-section of DM and
Figure 3: **Left Panel:** Correlation between the DM relic density and the DM (\(\eta^{R}\)) mass, \(M_{DM}\), for input values as shown in Table 2. The grey horizontal band represents the \(3\sigma\) experimental range of \(\Omega h^{2}\) coming from the Planck observation [2, 65]. The LEP ruled-out region, with \(m_{\eta^{R}}<\frac{m_{Z}}{2}\) and \(m_{\eta^{\pm}}\lesssim 70\) GeV, is represented by the shaded region [64]. **Right Panel:** Correlation between the spin-independent DM-nucleon cross section and \(M_{DM}\). The green points signify values within the \(3\sigma\) range of \(\Omega h^{2}\), while grey and cyan points represent over- and under-abundance, respectively.
nucleons at the tree-level is governed by the \(\phi\) and \(Z\) portals. Due to the non-zero hypercharge of the \(\eta\) doublet, the DM-nucleon interaction through the \(Z\)-boson can, in principle, surpass the latest constraints set by DD experiments. However, \(\lambda_{5}\) introduces a mass splitting between \(\eta^{R}\) and its CP-odd counterpart \(\eta^{I}\). Consequently, the interaction involving the \(Z\)-boson is either prevented due to kinematic constraints or leads to inelastic scattering. Thus, the dominant contribution to the DM-nucleon interaction occurs _via_ \(\phi\) in the whole region of the parameter space. Fig. 3 (right panel) illustrates the spin-independent elastic scattering cross-section of DM with nucleons as a function of the DM mass. The red dashed and dashed-dotted lines in the plot represent the upper limits set by the XENON1T [66, 67] and XENONnT [68] collaborations, respectively. Also, the blue line represents the lower limit, which corresponds to the neutrino floor. In Fig. 4, we have illustrated the relationship between \(\lambda_{5}\) and \(M_{DM}\). The DM spans the entire spectrum of \(\lambda_{5}\) values, ranging from \(-1\) to \(-10^{-5}\), as indicated by the green points.
## 5 Leptogenesis in STSM
In this section, we delve into baryogenesis _via_ leptogenesis in STSM, considering two cases: one with two heavy fermions and the other with three, aimed at lowering the scale of leptogenesis.
Figure 4: Variation of \(-\lambda_{5}\) as a function of \(M_{DM}\). The color scheme is the same as in the right panel of Fig. 3.
### Two heavy fermion case
As previously mentioned, the present model allows for the emergence of a net lepton asymmetry through the out-of-equilibrium decay of both the \(N\) and \(\Sigma\). The decay channels of the singlet (\(N\)) and triplet (\(\Sigma\)) at tree and one-loop level are shown in Figs. 5 and 6. Here we consider the masses of \(N\) and \(\Sigma\) to be of similar order such that the decays of both contribute to leptogenesis. Much like the previously mentioned Davidson-Ibarra bound utilized in the context of Type-I seesaw leptogenesis, a similar lower limit can also be deduced in this scenario. The expressions for the \(CP\) asymmetries resulting from the \(N\) and \(\Sigma\) decays, as shown in Figs. 5 and 6, can be expressed as
\[\epsilon^{N}=\frac{1}{8\pi\left(Y_{N}^{\dagger}Y_{N}\right)}\sum_{j\neq i} \mathrm{Im}\left[\left(Y_{N}^{\dagger}Y_{\Sigma}\right)_{ij}^{2}\right] \mathcal{F}\left(\frac{M_{\Sigma}^{2}}{M_{N}^{2}}\right), \tag{24}\]
and
\[\epsilon^{\Sigma}=\frac{1}{8\pi\left(Y_{\Sigma}^{\dagger}Y_{\Sigma}\right)} \sum_{j\neq i}\mathrm{Im}\left[\left(Y_{\Sigma}^{\dagger}Y_{N}\right)_{ij}^{2 }\right]\mathcal{F}\left(\frac{M_{N}^{2}}{M_{\Sigma}^{2}}\right), \tag{25}\]
where
\[Y_{N}=h_{\alpha 1}=\begin{pmatrix}Y_{N}^{1}\\ Y_{N}^{2}\\ Y_{N}^{3}\end{pmatrix},\qquad Y_{\Sigma}=h_{\alpha 2}=\frac{1}{\sqrt{2}} \begin{pmatrix}Y_{\Sigma}^{1}\\ Y_{\Sigma}^{2}\\ Y_{\Sigma}^{3}\end{pmatrix}, \tag{26}\]
Figure 5: Tree-level Feynman diagrams for the decays (\(N\to L_{\alpha}+\eta\)) and (\(\Sigma\to L_{\alpha}+\eta\)).
Figure 6: Feynman diagrams illustrating the vertex corrections at the one-loop level in the decays of heavy fermions (\(N\) and \(\Sigma\)).
\[{\cal F}(x)=\sqrt{x}\left[1+\frac{1}{1-x}+(1+x)\ln\left(\frac{x}{1+x}\right) \right]. \tag{27}\]
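With only two heavy fermions the sums in Eqs. (24)-(25) contain a single term, so the loop function and the asymmetries can be coded compactly; a minimal sketch (with \(Y_{N}\), \(Y_{\Sigma}\) the complex 3-vectors of Eq. (26); note that \(\mathcal{F}\) is singular at \(x=1\), i.e., for degenerate masses):

```python
import numpy as np

def F(x):
    """Loop function of Eq. (27); singular at x = 1 (degenerate masses)."""
    return np.sqrt(x) * (1 + 1 / (1 - x) + (1 + x) * np.log(x / (1 + x)))

def eps(Y_a, Y_b, ratio):
    """CP asymmetry of Eqs. (24)-(25) for a single heavy-fermion pair;
    Y_a, Y_b are complex 3-vectors and ratio = M_b^2 / M_a^2."""
    num = np.imag(np.vdot(Y_a, Y_b) ** 2)       # Im[(Y_a^dagger Y_b)^2]
    return num / (8 * np.pi * np.vdot(Y_a, Y_a).real) * F(ratio)
```

With these conventions, \(\epsilon^{N}=\texttt{eps}(Y_{N},Y_{\Sigma},M_{\Sigma}^{2}/M_{N}^{2})\) and \(\epsilon^{\Sigma}=\texttt{eps}(Y_{\Sigma},Y_{N},M_{N}^{2}/M_{\Sigma}^{2})\).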
The relevant Boltzmann equations for leptogenesis can be written as
\[\frac{dn_{\Sigma}}{dz} = -D_{\Sigma}(n_{\Sigma}-n_{\Sigma}^{eq})-S_{A}(n_{\Sigma}^{2}-(n_{ \Sigma}^{eq})^{2}), \tag{28}\] \[\frac{dn_{N}}{dz} = -D_{N}(n_{N}-n_{N}^{eq}),\] (29) \[\frac{dn_{B-L}}{dz} = -\epsilon^{\Sigma}D_{\Sigma}(n_{\Sigma}-n_{\Sigma}^{eq})-\epsilon ^{N}D_{N}(n_{N}-n_{N}^{eq})-W_{\rm ID}^{\Sigma}n_{B-L}-W_{\rm ID}^{N}n_{B-L}\] (30) \[-W_{\Delta L}n_{B-L}.\]
Here, \(n_{N}^{eq}\) and \(n_{\Sigma}^{eq}\) are the equilibrium number densities of \(N\) and \(\Sigma\) respectively. Also, \(n_{N}\), \(n_{\Sigma}\) and \(n_{B-L}\) are the comoving number densities of \(N\), \(\Sigma\) and of the \(B-L\) asymmetry, respectively. \(D_{N}\) and \(D_{\Sigma}\) are the decay terms for \(N\) and \(\Sigma\) decays which are given by
\[D_{N} = K_{N}z\frac{\kappa_{1}(z)}{\kappa_{2}(z)}, \tag{31}\] \[D_{\Sigma} = K_{\Sigma}\Bigg{(}\frac{M_{\Sigma}}{M_{N}}z\Bigg{)}\frac{\kappa _{1}\Bigg{(}\frac{M_{\Sigma}}{M_{N}}z\Bigg{)}}{\kappa_{2}\Bigg{(}\frac{M_{ \Sigma}}{M_{N}}z\Bigg{)}}, \tag{32}\]
where \(z=\frac{M_{N}}{T}\), with \(T\) the temperature of the thermal bath, and
\[K_{N} = \frac{\Gamma_{N}}{{\bf H}(T=M_{N})}, \tag{33}\] \[K_{\Sigma} = \frac{\Gamma_{\Sigma}}{{\bf H}(T=M_{\Sigma})}, \tag{34}\]
are the decay parameters for the \(N\) and \(\Sigma\) decay, respectively. \(\Gamma_{N,\Sigma}\) are the decay widths given by
\[\Gamma_{N} = \frac{M_{N}}{8\pi}(Y_{N}^{\dagger}Y_{N})\Bigg{(}1-\frac{m_{\eta} ^{2}}{M_{N}^{2}}\Bigg{)}, \tag{35}\] \[\Gamma_{\Sigma} = \frac{M_{\Sigma}}{8\pi}(Y_{\Sigma}^{\dagger}Y_{\Sigma})\Bigg{(}1 -\frac{m_{\eta}^{2}}{M_{\Sigma}^{2}}\Bigg{)}. \tag{36}\]
In the early Universe, dominated by radiation, the Hubble parameter can be expressed in terms of temperature as \(H=2\sqrt{\frac{\pi^{3}g_{*}}{45}}\frac{T^{2}}{M_{Pl}}\), where \(g_{*}=124\) is the effective number of relativistic degrees of freedom and \(M_{Pl}=1.22\times 10^{19}\) GeV is the Planck mass. The washout term arises from the combined effect of inverse decays and \(L\)-violating washout processes. The total washout term is given by \(W_{\rm Total}=W_{\rm ID}^{N,\Sigma}+W_{\Delta L}\), where \(W_{\rm ID}^{N}\) and \(W_{\rm ID}^{\Sigma}\) are the inverse decay terms for \(N\) and \(\Sigma\) decay, respectively, and are given by
\[W_{\rm ID}^{N} = \frac{1}{4}K_{N}z^{3}\kappa_{1}(z), \tag{37}\] \[W_{\rm ID}^{\Sigma} = \frac{1}{4}K_{\Sigma}\Bigg{(}\frac{M_{\Sigma}}{M_{N}}z\Bigg{)}^{ 3}\kappa_{1}\Bigg{(}\frac{M_{\Sigma}}{M_{N}}z\Bigg{)}. \tag{38}\]
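Reading \(\kappa_{1},\kappa_{2}\) as the modified Bessel functions of the second kind \(K_{1},K_{2}\) (the standard convention in leptogenesis computations, assumed here), the decay and inverse-decay terms of Eqs. (31) and (37) become one-liners; for \(\Sigma\), the same expressions are evaluated at \((M_{\Sigma}/M_{N})z\), as in Eqs. (32) and (38):

```python
import numpy as np
from scipy.special import kn  # modified Bessel functions K_n of the second kind

def D(z, K):
    """Decay term, Eq. (31): D = K * z * kappa_1(z) / kappa_2(z)."""
    return K * z * kn(1, z) / kn(2, z)

def W_ID(z, K):
    """Inverse-decay washout, Eq. (37): W = (1/4) * K * z^3 * kappa_1(z)."""
    return 0.25 * K * z**3 * kn(1, z)
```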
Apart from the inverse decays, the other contribution to the washout, \(W_{\Delta L}\), originates from scatterings that violate the lepton number. The scattering washout due to the \(l\eta\longrightarrow\bar{l}\eta^{*}\) and \(ll\longrightarrow\eta^{*}\eta^{*}\) processes is given by
\[W_{\Delta L}=\frac{0.585\times M_{pl}}{g_{l}\times g_{*}z^{2}v^{4}}\left(\frac{2\pi^{2}}{\lambda_{5}}\right)^{2}M_{N}\overline{m}_{\varkappa_{N}}^{2}+\frac{0.585\times M_{pl}}{g_{l}\times g_{*}z^{2}v^{4}}\left(\frac{2\pi^{2}}{\lambda_{5}}\right)^{2}M_{\Sigma}\overline{m}_{\varkappa_{\Sigma}}^{2}. \tag{39}\]
Here, \(g_{l}\) represents the intrinsic degrees of freedom of SM leptons, while \(\overline{m}_{\varkappa}\) is an effective neutrino mass parameter which, in the limit \(m_{1}=0\), is given by
\[\overline{m}_{\varkappa_{N,\Sigma}}^{2}=\varkappa_{2}^{2}m_{2}^{ 2}+\varkappa_{3}^{2}m_{3}^{2} \tag{40}\]
with \(m_{i}\) being the light neutrino mass eigenvalues and \(\varkappa_{2,3}\) defined as
\[\varkappa_{2,3}=\frac{M_{N,\Sigma}^{2}}{8\left(m_{\eta^{R}}^{2}- m_{\eta^{I}}^{2}\right)}[L_{i}(m_{\eta^{R}}^{2})-L_{i}(m_{\eta^{I}}^{2})]. \tag{41}\]
The gauge boson mediated scattering term, \(S_{A}\), for \(\Sigma\) can be identified as
\[S_{A}=\left(\frac{0.032\sqrt{g_{*}}M_{pl}}{g_{\Sigma}^{2}M_{ \Sigma}}\right)\left(\frac{I(z)}{z\times\kappa_{2}\!\left(\frac{M_{\Sigma}}{M _{N}}z\right)^{2}}\right), \tag{42}\]
where
\[I(z)=\int_{4}^{\infty}\sqrt{x}\kappa_{1}\!\left(z\sqrt{x} \right)\!\hat{\sigma}_{A}(x)\,dx. \tag{43}\]
Here, \(\hat{\sigma}_{A}(x)\) is the cross-section for the gauge-boson-mediated processes, given by [45]
\[\hat{\sigma}_{A}=\frac{6g^{4}}{72\pi}\left[\frac{45}{2}r(x)-\frac {27}{2}r(x)^{3}-\left\{9\left(r(x)^{2}-2\right)+18\left(r(x)^{2}-1\right)^{2} \right\}\ln\left(\frac{1+r(x)}{1-r(x)}\right)\right], \tag{44}\]
where \(r(x)=\sqrt{1-4/x}\). Eqn. (44) includes \(\Sigma\Sigma\to\) all possible fermion doublets, Higgs and gauge bosons. After solving the Boltzmann equations mentioned in Eqns. (28), (29), and (30), the relation
\[\eta_{B}=\frac{3g_{*}^{0}}{4g_{*}}a_{sphl}n_{B-L}\simeq 9.2\times 1 0^{-3}n_{B-L}, \tag{45}\]
converts \(B-L\) asymmetry, \(n_{B-L}\), into the observed baryon-to-photon ratio, prior to electroweak sphaleron freeze-out. Here \(a_{sphl}=\frac{8}{23}\) is the sphaleron conversion factor and \(g_{*}^{0}\) represents the effective count of relativistic degrees of freedom during the epoch of recombination.
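The gauge-scattering input of Eqs. (43)-(44) is also straightforward to evaluate numerically; a minimal sketch (again reading \(\kappa_{1}\) as the modified Bessel function \(K_{1}\); the reduced cross-section vanishes at the threshold \(x=4\)):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kn

g = 0.6517  # SU(2)_L gauge coupling, value quoted below Eq. (22)

def sigma_hat(x):
    """Reduced cross-section of Eq. (44); vanishes at threshold x = 4."""
    r = np.sqrt(1.0 - 4.0 / x)
    return (6 * g**4 / (72 * np.pi)) * (
        22.5 * r - 13.5 * r**3
        - (9 * (r**2 - 2) + 18 * (r**2 - 1) ** 2) * np.log((1 + r) / (1 - r)))

def I(z):
    """Integral of Eq. (43); the Bessel factor makes the integrand decay
    exponentially, so quad converges for moderate z."""
    val, _ = quad(lambda x: np.sqrt(x) * kn(1, z * np.sqrt(x)) * sigma_hat(x),
                  4.0, np.inf)
    return val
```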
### Three heavy fermions case
Here we consider two singlet heavy fermions (\(N_{1}\) and \(N_{2}\)) with one triplet fermion \(\Sigma\). \(M_{N_{1}}\), \(M_{N_{2}}\) and \(M_{\Sigma}\) are the masses of \(N_{1}\), \(N_{2}\) and \(\Sigma\), respectively. Due to the presence of an additional singlet fermion, the Yukawa matrix \(h\) takes the form
\[h = \frac{1}{\sqrt{2}}\begin{pmatrix}\sqrt{2}Y_{N_{1}}^{1}&Y_{\Sigma}^{ 1}&\sqrt{2}Y_{N_{2}}^{1}\\ \sqrt{2}Y_{N_{1}}^{2}&Y_{\Sigma}^{2}&\sqrt{2}Y_{N_{2}}^{2}\\ \sqrt{2}Y_{N_{1}}^{3}&Y_{\Sigma}^{3}&\sqrt{2}Y_{N_{2}}^{3}\end{pmatrix}V^{T}( \alpha). \tag{46}\]
Here, we perform the analysis assuming the hierarchy \(M_{N_{1}}\lesssim M_{\Sigma}\ll M_{N_{2}}\). Since \(N_{2}\) is assumed to be much heavier than the other two fermions, the lepton asymmetry generated from the decay of \(N_{2}\) is washed out by the inverse decays of \(N_{1}\) and \(\Sigma\). Effectively, only \(N_{1}\) and \(\Sigma\) generate the lepton asymmetry. The relevant CP asymmetry parameters due to the decays of \(N_{1}\) and \(\Sigma\) are found to be
\[\epsilon_{N_{1}} = \frac{1}{8\pi\left(Y_{N_{1}}^{\dagger}Y_{N_{1}}\right)}\left[\mathrm{Im}\left[\left(Y_{N_{1}}^{\dagger}Y_{\Sigma}\right)^{2}\right]{\cal F}\left(\frac{M_{\Sigma}^{2}}{M_{N_{1}}^{2}}\right)+\mathrm{Im}\left[\left(Y_{N_{1}}^{\dagger}Y_{N_{2}}\right)^{2}\right]{\cal F}\left(\frac{M_{N_{2}}^{2}}{M_{N_{1}}^{2}}\right)\right], \tag{47}\] \[\epsilon_{\Sigma} = \frac{1}{8\pi\left(Y_{\Sigma}^{\dagger}Y_{\Sigma}\right)}\left[\mathrm{Im}\left[\left(Y_{\Sigma}^{\dagger}Y_{N_{1}}\right)^{2}\right]{\cal F}\left(\frac{M_{N_{1}}^{2}}{M_{\Sigma}^{2}}\right)+\mathrm{Im}\left[\left(Y_{\Sigma}^{\dagger}Y_{N_{2}}\right)^{2}\right]{\cal F}\left(\frac{M_{N_{2}}^{2}}{M_{\Sigma}^{2}}\right)\right]. \tag{48}\]
The relevant Boltzmann equations in this scenario are
\[\frac{dn_{N_{1}}}{dz} = -D_{N}\left(n_{N_{1}}-n_{N_{1}}^{eq}\right), \tag{49}\] \[\frac{dn_{\Sigma}}{dz} = -D_{\Sigma}(n_{\Sigma}-n_{\Sigma}^{eq})-S_{A}(n_{\Sigma}^{2}-(n_ {\Sigma}^{eq})^{2}),\] (50) \[\frac{dn_{B-L}}{dz} = -\epsilon_{N_{1}}D_{N_{1}}(n_{N_{1}}-n_{N_{1}}^{eq})-\epsilon_{ \Sigma}D_{\Sigma}(n_{\Sigma}-n_{\Sigma}^{eq})-W_{\rm ID}^{N_{1}}n_{B-L}-W_{\rm ID }^{\Sigma}n_{B-L}\] (51) \[-W_{\Delta L}n_{B-L}.\]
Here, the decay terms for \(N_{1}\) and \(\Sigma\) are given by
\[D_{N_{1}} = K_{N_{1}}z\frac{\kappa_{1}(z)}{\kappa_{2}(z)}, \tag{52}\] \[D_{\Sigma} = K_{\Sigma}\left(\frac{M_{\Sigma}}{M_{N_{1}}}z\right)\frac{\kappa_{1}\left(\frac{M_{\Sigma}}{M_{N_{1}}}z\right)}{\kappa_{2}\left(\frac{M_{\Sigma}}{M_{N_{1}}}z\right)}. \tag{53}\]
and inverse decay terms for \(N_{1}\) and \(\Sigma\) are defined as
\[W_{\rm ID}^{N_{1}} = \frac{1}{4}K_{N_{1}}z^{3}\kappa_{1}(z), \tag{54}\] \[W_{ID}^{\Sigma} = \frac{1}{4}K_{\Sigma}\left(\frac{M_{\Sigma}}{M_{N_{1}}}z\right)^{ 3}\kappa_{1}\left(\frac{M_{\Sigma}}{M_{N_{1}}}z\right). \tag{55}\]
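For orientation, the system (49)-(51) can be integrated numerically; the following is a minimal sketch only, omitting the gauge-scattering term \(S_{A}\) and the \(\Delta L=2\) washout for brevity, with purely illustrative values of \(K_{N_{1}}\), \(K_{\Sigma}\), \(\epsilon_{N_{1}}\), \(\epsilon_{\Sigma}\) and the mass ratio, and with one common (not necessarily the paper's) normalisation of the equilibrium density:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kn

# Illustrative (hypothetical) inputs.
K_N1, K_Sig, r = 0.1, 5.0, 10**0.5      # decay parameters; r = M_Sigma / M_N1
eps_N1, eps_Sig = 1e-6, 1e-6            # CP asymmetries

def n_eq(z):
    # Maxwell-Boltzmann equilibrium density, normalised so n_eq(z -> 0) = 1.
    return 0.5 * z**2 * kn(2, z)

def rhs(z, y):
    nN, nS, nBL = y
    zS = r * z
    D_N = K_N1 * z * kn(1, z) / kn(2, z)                 # Eq. (52)
    D_S = K_Sig * zS * kn(1, zS) / kn(2, zS)             # Eq. (53)
    W = 0.25 * (K_N1 * z**3 * kn(1, z)
                + K_Sig * zS**3 * kn(1, zS))             # Eqs. (54)-(55)
    dN = -D_N * (nN - n_eq(z))                           # Eq. (49)
    dS = -D_S * (nS - n_eq(zS))                          # Eq. (50), without S_A
    # Eq. (51), without W_{Delta L}; note -eps*D*(n - n_eq) = eps * dn/dz.
    dBL = eps_N1 * dN + eps_Sig * dS - W * nBL
    return [dN, dS, dBL]

z0 = 0.1
sol = solve_ivp(rhs, (z0, 50.0), [n_eq(z0), n_eq(r * z0), 0.0],
                method="LSODA", rtol=1e-8, atol=1e-14)
print("final n_B-L ~", sol.y[2, -1])
```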
### Numerical Analysis and Discussion
We study the leptogenesis in the STSM where we first consider the minimal scenario with two moderately hierarchical (\(M_{N}\lesssim M_{\Sigma}\)) heavy fermions, \(N\) and \(\Sigma\). This results in the lightest neutrino remaining massless. In order to satisfy Sakharov's third condition, the lepton asymmetry must be generated by the out-of-equilibrium decay of heavy fermions. This asymmetry is then converted into the baryon asymmetry _via_ (\(B+L\)) violating sphaleron processes. As the Universe evolves, characterized by a decrease in temperature or increase in \(z\), \(N\) and \(\Sigma\) begin decaying into SM particles, reducing their respective comoving number densities (\(n_{N}\) and \(n_{\Sigma}\)). Consequently, this process occurs concomitantly with an escalation in the lepton asymmetry \(n_{B-L}\), as depicted in Figs. 7 and 8. At fixed \(\lambda_{5}=-1\) and \(M_{\Sigma}=10^{0.5}M_{N}\), \(n_{N}\) and \(n_{\Sigma}\) are very close to \(n_{N}^{eq}\) and \(n_{\Sigma}^{eq}\), as shown in the upper panel of Fig. 7. Due to the presence of only two heavy fermions generating the masses of the neutrinos, the Yukawa couplings for the heavy fermions are large enough such that the decays and inverse decays keep their abundances very close to their equilibrium abundances. This is explained in detail in Appendix B. For the same reason, we are necessarily in a strong washout region (\(K_{N,\Sigma}\gg 1\)) with two heavy fermions. This leads to a high-scale leptogenesis scenario where \(M_{N,\Sigma}\geq 10^{9}\) GeV. In Fig. 7, one can see that \(n_{\Sigma}\) deviates
Figure 7: Comoving number density of \(N\) (upper left panel), \(\Sigma\) (upper right panel) and \(B-L\) (lower panel) with different values of \(M_{N}\). In the lower panel, the grey horizontal line represents the experimentally required value for the \(B-L\) asymmetry.
more from its equilibrium abundance with the increase in \(M_{\Sigma}\) near \(z\simeq 1\). With the increase in \(M_{\Sigma}\), the rate of gauge boson-mediated annihilations for \(\Sigma\) decreases. This makes \(\Sigma\) deviate more from its equilibrium abundance. It can also be seen that with the increase in \(M_{N,\Sigma}\) the \(B-L\) asymmetry increases. This is due to the suppression of inverse decay washouts with the increase in \(M_{N,\Sigma}\). In Fig. 8 we show the evolution of \(n_{N}\), \(n_{\Sigma}\) and \(n_{B-L}\) with \(z=M_{N}/T\) for different values of \(\lambda_{5}\). With the decrease in \(\lambda_{5}\), the Yukawa couplings for \(N,\Sigma\) increase through the CI parameterisation of Eqn. (19). Due to the increase in the Yukawa couplings, the decay and inverse decay rates increase, resulting in \(n_{N,\Sigma}\) remaining closer to their equilibrium abundances. Similarly, from the lower panel of Fig. 8 it can be seen that with the decrease in \(\lambda_{5}\) the \(B-L\) asymmetry first increases; however, after a certain small value of \(\lambda_{5}\) it starts decreasing with further decrease in \(\lambda_{5}\). For small values of \(\lambda_{5}\) the Yukawa couplings become large enough to increase the CP asymmetry, resulting in an increase in the \(B-L\) asymmetry. However, beyond a certain large value of the Yukawa couplings the washouts increase and dominate over the increase in CP asymmetry. As a result, the \(B-L\) asymmetry decreases with the decrease in \(\lambda_{5}\).
In the presence of three fermions (\(N_{1}\), \(N_{2}\), \(\Sigma\)), the Yukawa structure changes and the lightest active neutrino mass (\(m_{1}\)) becomes an important parameter, as discussed in Appendix C.
Figure 8: Comoving number density of \(N\) (upper left panel), \(\Sigma\) (upper right panel) and \(B-L\) (lower panel) with different values of \(\lambda_{5}\). In the lower panel, the grey horizontal line represents the experimentally required value for the \(B-L\) asymmetry.
Here we choose the hierarchy \(M_{N_{1}}\lesssim M_{\Sigma}<M_{N_{2}}\) among the heavy fermions. With a vanishingly small lightest active neutrino mass, the Yukawa coupling for the lightest fermion \(N_{1}\) becomes small enough to realise a weak washout leptogenesis scenario (\(K_{N_{1}}<1\)). This results in a decrease of the leptogenesis scale from \(10^{10}\) GeV to the sub-TeV scale. In Fig. 9 we show the evolution of \(n_{N_{1}},n_{\Sigma}\) and \(n_{B-L}\) with \(z=M_{N_{1}}/T\) for different values of \(\lambda_{5}\). A similar behaviour of \(n_{N_{1}}\), \(n_{\Sigma}\) and \(n_{B-L}\) with the change in \(\lambda_{5}\) can be seen as in the case of two heavy fermions. In Fig. 10 we show the evolution of \(n_{N_{1}}\), \(n_{\Sigma}\) and \(n_{B-L}\) with \(z=M_{N_{1}}/T\) for different values of \(M_{N_{1}}\) with \(M_{\Sigma}=10^{0.5}M_{N_{1}}\) for fixed benchmark values of the other relevant parameters. Similar to the two-fermion case, we can see the increase in asymmetry with the increase in \(M_{N_{1}}\) and \(M_{\Sigma}\). Furthermore, previous investigations indicate that the lightest heavy fermion is the sole contributor to the \(B-L\) asymmetry, while heavier fermions are disregarded due to their substantial washout effects. However, our study suggests that when we choose moderately hierarchical heavy fermion masses, the heavier fermions can significantly contribute to the \(B-L\) asymmetry. The evolution of the \(B-L\) asymmetry for \(N_{1}+\Sigma\) (red dashed line) and \(N_{1}\) only (blue line) is depicted in Fig. 11. In the latter case, the \(B-L\) asymmetry remains very low compared to the required value of \(n_{B-L}\). However, considering the hierarchy of the heavy fermions (\(M_{N_{1}}\lesssim M_{\Sigma}<M_{N_{2}}\)), the triplet fermion can contribute significantly to the final \(B-L\) asymmetry, which is
Figure 9: Comoving number density of \(N_{1}\) (top left panel), \(\Sigma\) (top right panel) and \(B-L\) (lower panel) with \(z=M_{N_{1}}/T\) for different values of \(\lambda_{5}\). In the lower panel, the grey horizontal line represents the experimentally required value for the \(B-L\) asymmetry.
also evident from Fig. 11.
Despite the lack of signatures of \(\Sigma\) beyond 1 TeV at the LHC, it is still feasible to explore triplet fermions in final states that involve multiple leptons and fat-jets. These final states are more refined than typical LHC searches and their reach can be extended up to 10 TeV [69, 70]. At 95% confidence level, the ATLAS experiment at the LHC has excluded triplet fermions with mass below 790 GeV [71]. We achieve successful leptogenesis around the sub-TeV scale. Collider experiments already constrain light triplet fermions, but future experiments may test the viable parameter space.
**Benchmark points yielding correct \(n_{B-L}\):** For the two heavy fermion case:
\(M_{N}=7.97\times 10^{9}\) GeV, \(\lambda_{5}\)=-0.1 and \(M_{N}=6.30\times 10^{9}\) GeV, \(\lambda_{5}\)=-0.3.
For the three heavy fermion case:
\(M_{N_{1}}=1513.56\) GeV, \(\lambda_{5}\)=-0.1 and \(M_{N_{1}}=1995.26\) GeV, \(\lambda_{5}\)=-0.2.
Figure 10: Comoving number density of \(N_{1}\) (upper left panel), \(\Sigma\) (upper right panel) and \(B-L\) (lower panel) with \(z=M_{N_{1}}/T\) for different values of \(M_{N_{1}}\). In the lower panel, the grey horizontal line represents the experimentally required value for the \(B-L\) asymmetry.
## 6 Summary
We examined the singlet-triplet scotogenic model (STSM) in which dark sector particles run in the loop and generate tiny neutrino masses at the one-loop level. The particle content of the model includes the heavy \(SU(2)_{L}\) singlet (\(N\)) and triplet (\(\Sigma\)) fermions. The scalar sector is extended by an \(SU(2)_{L}\) doublet scalar \(\eta\) and a hyperchargeless \(SU(2)_{L}\) triplet \(\Omega\). We examined the prospects of the model meeting the recent \(W\)-boson mass measurement of the CDF-II collaboration at tree-level through the hyperchargeless scalar triplet \(\Omega\). Also, we studied the phenomenology of DM, where the real component of \(\eta\) acts as a viable WIMP-type DM candidate.
In our study, we explored the possibility of producing baryogenesis at the TeV scale through leptogenesis in the STSM. The correct lepton asymmetry is generated through the out-of-equilibrium decay of both the singlet (\(N\)) and triplet fermions. This lepton asymmetry is then converted into the baryon asymmetry via sphaleron processes that accidentally conserve \(B-L\) but violate \(B+L\) symmetry. In our analysis, we examine two scenarios, one involving two heavy fermions and the other involving three, and we explicitly distinguish between the two cases. First, we deal with two heavy fermions, namely \(N\) and \(\Sigma\), which results in a vanishing lightest neutrino mass eigenvalue (\(m_{1}=0\)). Both heavy fermions have moderately hierarchical masses, and their out-of-equilibrium decay contributes to the final \(B-L\) asymmetry. As light neutrino masses require a large value of the Yukawa couplings through the Casas-Ibarra parameterization, the scale of leptogenesis is estimated to be around \(10^{10}\) GeV, which is similar to the thermal leptogenesis in the Type-I seesaw scenario. In order to lower the scale of leptogenesis
Figure 11: Evolution of \(B-L\) asymmetry for \(N_{1}+\Sigma\) (red dashed line) and \(N_{1}\) (blue line) as a function of \(z=M_{N_{1}}/T\). The grey horizontal line represents the experimentally required value for the \(B-L\) asymmetry.
to around the sub-TeV scale, we introduce an additional heavy fermion and consider the three-fermion case (\(N_{1}\), \(N_{2}\), \(\Sigma\)). As a result, the lightest neutrino mass now has a non-zero value (\(m_{1}\neq 0\)). In the scotogenic model with three heavy Majorana fermions, the scale of leptogenesis depends heavily on the value of the lightest neutrino mass (\(m_{1}\)). For vanishingly small values of \(m_{1}\) (\(\lesssim 10^{-5}\) eV), the impact of washouts is negligible; therefore, reducing \(m_{1}\) below this threshold results in lowering the leptogenesis scale. This mechanism requires leptogenesis to end before the electroweak sphalerons drop out of equilibrium. The analysis yields a minimum mass estimate of around 3 TeV for the lightest right-handed fermion, which is quite small compared to the mass bound of \(\sim 10^{9}\) GeV in standard thermal leptogenesis. Moreover, the quartic coupling \(\lambda_{5}\) plays a significant role in bridging DM and leptogenesis. Future collider experiments may offer insights into the gauge interactions of a TeV-scale triplet fermion.
## Acknowledgments
L. S. acknowledges the financial support provided by Central University of Himachal Pradesh. The authors, also, thank Sushant Yadav for the helpful discussions.
## Appendix A Relevant annihilation and coannihilation diagrams contributing to the relic abundance of \(\eta^{R}\)
## Appendix B Two heavy fermion case
When we consider two heavy fermions \(N\) and \(\Sigma\), the lightest active neutrino becomes massless and the CI parametrisation gives the Yukawa couplings for the heavy fermions
\[h=U^{*}\sqrt{\widetilde{M}}R\sqrt{\Lambda}^{-1}. \tag{56}\]
where \(\widetilde{M}=\text{diag}\left(m_{1},m_{2},m_{3}\right)\) is the diagonal active neutrino mass matrix. To generate non-zero CP asymmetries \(\epsilon^{N}\) and \(\epsilon^{\Sigma}\), the following \(R\) matrix is chosen
\[R=\begin{pmatrix}0&0\\ \cos\theta&\sin\theta\\ -\sin\theta&\cos\theta\end{pmatrix}. \tag{57}\]
The explicit form of the Yukawa couplings is then found to be
\[h=\begin{pmatrix}\Lambda_{1}^{-1/2}(\sqrt{m_{2}}\cos\theta u_{21}^{*}+\sqrt{m _{3}}\sin\theta u_{31}^{*})&\Lambda_{2}^{-1/2}(-\sqrt{m_{2}}\sin\theta u_{21}^ {*}+\sqrt{m_{3}}\cos\theta u_{31}^{*})\\ \Lambda_{1}^{-1/2}(\sqrt{m_{2}}\cos\theta u_{22}^{*}+\sqrt{m_{3}}\sin\theta u _{32}^{*})&\Lambda_{2}^{-1/2}(-\sqrt{m_{2}}\sin\theta u_{22}^{*}+\sqrt{m_{3}} \cos\theta u_{32}^{*})\\ \Lambda_{1}^{-1/2}(\sqrt{m_{2}}\cos\theta u_{23}^{*}+\sqrt{m_{3}}\sin\theta u _{33}^{*})&\Lambda_{2}^{-1/2}(-\sqrt{m_{2}}\sin\theta u_{23}^{*}+\sqrt{m_{3}} \cos\theta u_{33}^{*})\end{pmatrix}. \tag{58}\]
From Eqn. (58) it can be seen that the Yukawa couplings depend on \(m_{2}\) and \(m_{3}\). With the normal ordering of the light neutrino masses, we do not have freedom over \(m_{2}\) and \(m_{3}\) to make the Yukawa couplings smaller, irrespective of the value of \(\theta\). Here, \(u_{ij}\) are the elements of the PMNS matrix. The only way to make the Yukawa couplings smaller is to increase the scalar coupling \(\lambda_{5}\). Since \(\lambda_{5}\) has a perturbative limit of \(4\pi\), the Yukawa couplings cannot be made arbitrarily small by increasing the value of \(\lambda_{5}\). Therefore, we are always in a strong washout region with two heavy fermions.
## Appendix C Three heavy fermion case
In the presence of an additional heavy fermion (\(N_{1}\), \(\Sigma\), \(N_{2}\)), the minimal \(R\) matrix needed to generate a non-zero CP asymmetry becomes
\[R=\begin{pmatrix}1&0&0\\ 0&\cos\theta&\sin\theta\\ 0&-\sin\theta&\cos\theta\end{pmatrix}, \tag{59}\]
and the Yukawa matrix can be found to be
\[h=\begin{pmatrix}\sqrt{m_{1}}\Lambda_{1}^{-1/2}u_{11}^{*}&\Lambda_{1}^{-1/2}( \sqrt{m_{2}}\cos\theta u_{21}^{*}+\sqrt{m_{3}}\sin\theta u_{31}^{*})&\Lambda_ {2}^{-1/2}(-\sqrt{m_{2}}\sin\theta u_{21}^{*}+\sqrt{m_{3}}\cos\theta u_{31}^{* })\\ \sqrt{m_{1}}\Lambda_{1}^{-1/2}u_{12}^{*}&\Lambda_{1}^{-1/2}(\sqrt{m_{2}}\cos \theta u_{22}^{*}+\sqrt{m_{3}}\sin\theta u_{32}^{*})&\Lambda_{2}^{-1/2}(-\sqrt {m_{2}}\sin\theta u_{22}^{*}+\sqrt{m_{3}}\cos\theta u_{32}^{*})\\ \sqrt{m_{1}}\Lambda_{1}^{-1/2}u_{13}^{*}&\Lambda_{1}^{-1/2}(\sqrt{m_{2}}\cos \theta u_{23}^{*}+\sqrt{m_{3}}\sin\theta u_{33}^{*})&\Lambda_{2}^{-1/2}(-\sqrt {m_{2}}\sin\theta u_{23}^{*}+\sqrt{m_{3}}\cos\theta u_{33}^{*})\end{pmatrix}. \tag{60}\]
From Eqn. (60) it can be seen that the Yukawa couplings for the lightest of the three heavy fermions (\(N_{1}\)) can be made small by choosing a vanishingly small lightest active neutrino mass \(m_{1}\). This makes the decay and inverse decay widths smaller, resulting in a weak washout leptogenesis scenario. Therefore, the leptogenesis scale can be significantly lower compared to the case with two generations of heavy fermions.
2309.16533 | Further results on the Hunters and Rabbit game through monotonicity | Hunters and Rabbit game is played on a graph $G$ where the Hunter player shoots at $k$ vertices in every round while the Rabbit player occupies an unknown vertex and, if not shot, must move to a neighbouring vertex after each round. The Rabbit player wins if it can ensure that its position is never shot. The Hunter player wins otherwise. The hunter number $h(G)$ of a graph $G$ is the minimum integer $k$ such that the Hunter player has a winning strategy (i.e., allowing him to win whatever be the strategy of the Rabbit player). This game has been studied in several graph classes, in particular in bipartite graphs (grids, trees, hypercubes...), but the computational complexity of computing $h(G)$ remains open in general graphs and even in trees. To progress further, we propose a notion of monotonicity for the Hunters and Rabbit game imposing that, roughly, a vertex that has already been shot ``must not host the rabbit anymore''. This allows us to obtain new results in various graph classes. Let the monotone hunter number be denoted by $mh(G)$. We show that $pw(G) \leq mh(G) \leq pw(G)+1$ for any graph $G$ with pathwidth $pw(G)$, implying that computing $mh(G)$, or even approximating $mh(G)$ up to an additive constant, is NP-hard. Then, we show that $mh(G)$ can be computed in polynomial time in split graphs, interval graphs, cographs and trees. These results go through structural characterisations which allow us to relate the monotone hunter number with the pathwidth in some of these graph classes. In all cases, this allows us to specify the hunter number or to show that there may be an arbitrary gap between $h$ and $mh$, i.e., that monotonicity does not help. In particular, we show that, for every $k\geq 3$, there exists a tree $T$ with $h(T)=2$ and $mh(T)=k$. We conclude by proving that computing $h$ (resp., $mh$) is FPT parameterised by the minimum size of a vertex cover. | Thomas Dissaux, Foivos Fioravantes, Harmender Gahlawat, Nicolas Nisse | 2023-09-28T15:45:48Z | http://arxiv.org/abs/2309.16533v1 |

# Further results on the Hunters and Rabbit game through monotonicity+
###### Abstract
The Hunters and Rabbit game is played on a graph \(G\) where the Hunter player shoots at \(k\) vertices in every round while the Rabbit player occupies an unknown vertex and, if it is not shot, must move to a neighbouring vertex after each round. The Rabbit player wins if it can ensure that its position is never shot. The Hunter player wins otherwise. The hunter number \(h(G)\) of a graph \(G\) is the minimum integer \(k\) such that the Hunter player has a winning strategy (i.e., allowing him to win whatever be the strategy of the Rabbit player). This game has been studied in several graph classes, in particular in bipartite graphs (grids, trees, hypercubes...), but the computational complexity of computing \(h(G)\) remains open in general graphs and even in more restricted graph classes such as trees. To progress further in this study, we propose a notion of monotonicity (a well-studied and useful property in classical pursuit-evasion games such as graph searching games) for the Hunters and Rabbit game imposing that, roughly, a vertex that has already been shot "must not host the rabbit anymore". This allows us to obtain new results in various graph classes.
More precisely, let the monotone hunter number \(mh(G)\) of a graph \(G\) be the minimum integer \(k\) such that the Hunter player has a monotone winning strategy. We show that \(pw(G)\leq mh(G)\leq pw(G)+1\) for any graph \(G\) with pathwidth \(pw(G)\), which implies that computing \(mh(G)\), or even approximating \(mh(G)\) up to an additive constant, is NP-hard. Then, we show that \(mh(G)\) can be computed in polynomial time in split graphs, interval graphs, cographs and trees. These results go through structural characterisations which allow us to relate the monotone hunter number with the pathwidth in some of these graph classes. In all cases, this allows us to specify the hunter number or to show that there may be an arbitrary gap between \(h\) and \(mh\), i.e., that monotonicity does not help. In particular, we show that, for every \(k\geq 3\), there exists a tree \(T\) with \(h(T)=2\) and \(mh(T)=k\). We conclude by proving that computing \(h\) (resp., \(mh\)) is FPT parameterised by the minimum size of a vertex cover.
## 1 Introduction
The Hunters and Rabbit game is played on a graph \(G\) and with a fixed integer \(k\) (the number of hunters), where the Hunter player shoots at \(k\) vertices in every round while the Rabbit player
occupies an unknown vertex and, if it is not shot, must move to a neighbouring vertex after each round. The Rabbit player wins if it can ensure that its position is never shot. The Hunter player wins otherwise. The Hunters and Rabbit game was first introduced in [8], in the case \(k=1\), where it was shown that the Hunter player wins in a tree \(T\) if and only if \(T\) does not contain as subgraph any tree obtained from a star with \(3\) leaves by subdividing each edge twice. This result was also observed in [19], where the authors also consider the minimum number of rounds needed for the Hunter player to win. The version where \(k>1\) was first considered in [1]. Observe that, if \(k=|V(G)|-1\), the Hunter player can win in any connected graph \(G\) (in two rounds) by shooting twice a subset of \(k\) vertices of \(G\). Hence, let the _hunter number_ of \(G\), denoted by \(h(G)\), be the minimum integer \(k\) such that \(k\) hunters can win in \(G\) whatever be the rabbit strategy. The exact value of \(h(G)\) has been determined for several specific families of graphs \(G\). For any \(n\geq 2\), \(h(P_{n})=1\) where \(P_{n}\) is the path with \(n\) vertices [1] (because the rabbit is forced to move at every round, \(h(P_{1})=0\)). For any \(n\geq 3\), \(h(C_{n})=2\) and \(h(K_{n})=n-1\), where \(C_{n}\) and \(K_{n}\) are the cycle and complete graph on \(n\) vertices respectively [1]. Moreover, \(h(G_{n\times m})=\lfloor\frac{\min\{n,m\}}{2}\rfloor+1\)[1] and \(h(Q^{n})=1+\Sigma_{i=0}^{n-2}\binom{i}{\lfloor i/2\rfloor}\)[6], where \(G_{n\times m}\) is the \(n\times m\) grid and \(Q^{n}\) is the hypercube with dimension \(n\). By taking advantage of the bipartiteness of trees, it was proven that, for any tree \(T\), \(h(T)\leq\lceil\frac{1}{2}\log_{2}(|V(T)|)\rceil\)[16]. Surprisingly, the computational complexity of the problem that takes a graph \(G\) and an integer \(k\) as inputs and aims at deciding whether \(h(G)\leq k\) is still open, even if \(G\) is restricted to be a tree.
In this paper, we progress further in this research direction by exhibiting new classes of graphs \(G\) where \(h(G)\) can be determined in polynomial time. We also define some _monotone_ variants of the game which allow us to get new results on the initial game.
**Graph searching games.** The Hunters and Rabbit game takes place in the larger class of Graph Searching games initially introduced in [7, 26]. In these pursuit-evasion games, one player plays with a team of searchers (also called cops, hunters, etc.) that must track a fugitive (or robber, rabbit, etc.) moving in a graph. There are many games that can fall under this framework, each one specifying its own rules on, for example, the available moves of the searchers, the speed of the fugitive, whether the fugitive is visible or not, and so on. Several variations of graph searching games have been studied in the literature due to their numerous applications in artificial intelligence [21], robot motion planning [10], constraint satisfaction problems and database theory [15], and distributed computing [25]. Graph Searching games have mostly been studied for their significant implications in graph theory and algorithms. In particular, many variants of these games provide algorithmic interpretations of several width measures of graphs like treewidth [27], pathwidth [26], tree-depth [14], hypertree-width [2], cycle-rank [14], and directed tree-width [22]. The connection between Graph Searching games and structural parameters, such as the treewidth or the pathwidth, is based on the notion of _monotonicity_[4, 27, 24, 20]. In short, a searchers' strategy is _monotone_ if it ensures that the fugitive can never "recontaminate" a vertex, i.e., it can never access a vertex that has already been "visited" (or "searched") by a searcher. The main question is then, given a game, whether "recontamination does not help in this game" [23], i.e., whether there always exists, in this game, an optimal (in terms of number of searchers) monotone winning strategy for the searchers. In particular, the monotonicity played a central role in the proof that the minimum number of searchers to capture an invisible (resp., visible) fugitive in the node-searching game played in a graph \(G\) equals its pathwidth plus one [4] (resp., treewidth plus one [27]).
Not surprisingly, the Hunters and Rabbit game has also a close relationship with the pathwidth of graphs. Precisely, the hunter number of any graph is at most its pathwidth plus one [1]. In this paper, we investigate further this relationship and, for this purpose, we define a notion of monotonicity adapted to the Hunters and Rabbit game and study the monotone variant of the game.
**Our contribution.** In Section 2, we first give the main notation and definitions used throughout this paper, and we prove (or recall from previous works) several basic properties of the hunter number of graphs. In Section 3, we introduce the notion of monotonicity for the Hunters and Rabbit game. As discussed in Section 3, some peculiar behaviours of the Hunters and Rabbit game makes the definition of monotone hunter strategies not as straightforward as in classical Graph Searching games. We then prove, in Section 3.1, some technical properties (used later) of the monotone hunter number \(mh(G)\) of a graph \(G\), i.e., the minimum number of hunters needed by a monotone strategy to win against the rabbit whatever it does in \(G\). In Section 3.2, we prove that \(mh(G)\in\{pw(G),pw(G)+1\}\) in any graph \(G\). This result has interesting implications. Along with implying that it is \(\mathsf{NP}\)-hard to compute \(mh(G)\) for a graph \(G\), it also implies that it is \(\mathsf{NP}\)-hard to approximate \(mh(G)\) up to an additive error of \(|V(G)|^{\varepsilon}\), for \(0<\varepsilon<1\). On the positive side, we give polynomial-time algorithms to determine \(h(G)\) and/or \(mh(G)\) in particular graph classes \(G\) in Section 4. Precisely, in Section 4.1, we show that \(\omega(G)\leq h(G)\leq mh(G)\leq\omega(G)+1\) in any split graph \(G\) with maximum clique of size \(\omega(G)\) and precisely characterise when each bound is reached. We also precisely characterise \(mh(G)\) for any interval graph \(G\). In Section 4.2, we design a linear-time algorithm that computes \(mh(G)\) for any cograph \(G\) and give bounds for \(h(G)\) in that case. In Section 4.3, we adapt the Parsons' Lemma [26] to the case of the monotone Hunters and Rabbit game which leads to a polynomial-time algorithm that computes \(mh(T)\) for any tree \(T\). In Section 5, we investigate the monotonicity property in the case of the "bipartite" variant of the Hunters and Rabbit game (see [1, 16]). In particular, this allows us to show that, for any \(k\in\mathbb{N}\), there exist trees \(T\) such that \(h(T)=2\) and \(mh(T)\geq k\). That is, "recontamination helps a lot" in the Hunters and Rabbit game. Finally, in Section 6, we show as a general positive result that the problem of deciding if \(h(G)\leq k\), for some given integer \(k\), is in \(\mathsf{FPT}\) when parameterised by the vertex cover number of \(G\). This is done through kernelisation. We close our study by providing directions for further research in Section 7.
## 2 Preliminaries
Unless mentioned otherwise, in this paper we will always deal with graphs \(G=(V,E)\) that are non empty, finite, undirected, connected and simple. For any two adjacent vertices \(x,y\in V\), let \(xy\in E\) denote the edge between \(x\) and \(y\). Given a set \(S\subseteq V\), let \(G[S]\) denote the subgraph of \(G\) induced by (the vertices in) \(S\) and let \(G\setminus S\) denote the subgraph \(G[V\setminus S]\). For any \(v\in V\) and \(X\subseteq V\), let \(N_{X}(v)=\{u\in X\mid uv\in E\}\) be the _open neighbourhood_ of \(v\) in \(X\) and let the _closed neighbourhood_ of \(v\) in \(X\) be \(N_{X}[v]=(N_{X}(v)\cup\{v\})\cap X\). If \(X=V\), we simply write \(N(v)\) and \(N[v]\) respectively. For any \(S\subseteq V\), let \(N(S)=\bigcup_{v\in S}N(v)\setminus S\) and \(N[S]=N(S)\cup S\). The degree \(d(v)=|N(v)|\) is the number of neighbours of \(v\) and let \(\delta(G)=\min_{v\in V}d(v)\). An _independent set_ of a graph \(G=(V,E)\) is a subset \(I\) of \(V\) such that, for every \(u,v\in I\), \(uv\notin E\). A graph is _bipartite_ if its vertex-set can be partitioned into two independent sets.
Hunters and Rabbit game.The Hunters and Rabbit game is played between two players, Hunter and Rabbit, on a non empty, finite, undirected, connected and simple graph \(G=(V,E)\). Let \(k\in\mathbb{N}^{*}\). The Hunter player controls \(k\) hunters and the Rabbit player controls a single rabbit. First, the Rabbit player places the rabbit at a vertex \(r_{0}\in V\). The rabbit is _invisible_, that is, the position of the rabbit is not known to the hunters. Then, the game proceeds in _rounds_. In each round \(i\geq 1\), first, the Hunter player selects a non empty subset \(S_{i}\subseteq V\) of at most \(k\) vertices of \(G\) (we say that the vertices in \(S_{i}\) are _shot_ at round \(i\)). If the current position \(r_{i-1}\) of the rabbit is shot, i.e., if \(r_{i-1}\in S_{i}\) (we say that the rabbit is shot), then the Hunter player wins, and the game stops. Otherwise, the rabbit must move from its current
position \(r_{i-1}\) to a vertex \(r_{i}\in N(r_{i-1})\), and the next round starts. The Rabbit wins if it avoids being shot forever.
A _hunter strategy_ in \(G=(V,E)\) is a finite sequence \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) of non empty subsets of vertices of \(G\). Let \(h(\mathcal{S}):=\max_{1\leq i\leq\ell}|S_{i}|\) and let us say that \(\mathcal{S}\)_uses_\(h(\mathcal{S})\) hunters. A _rabbit trajectory in \(G\) starting from \(W\subseteq V\)_(\(W\) will always be assumed non empty) is any walk \((r_{0},\ldots,r_{\ell})\) starting from \(W\), _i.e._, \(r_{0}\in W\) and \(r_{i}\in N(r_{i-1})\) for every \(1\leq i\leq\ell\). A hunter strategy is _winning with respect to \(W\)_ if, for every rabbit trajectory \((r_{0},\ldots,r_{\ell})\) starting from \(W\), there exists \(0\leq j<\ell\) such that \(r_{j}\in S_{j+1}\), that is, the rabbit is eventually shot whatever be its trajectory starting from \(W\). Given a hunter strategy \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\), a rabbit trajectory \((r_{0},\ldots,r_{\ell})\) starting from \(W\) is _winning against \(\mathcal{S}\)_ if \(r_{i}\notin S_{i+1}\) for every \(0\leq i<\ell\). A _winning hunter strategy_ is any winning hunter strategy with respect to \(V\) and a _rabbit trajectory_ is any rabbit trajectory starting from \(V\).
The _hunter number of \(G=(V,E)\) with respect to \(W\subseteq V\)_, denoted by \(h_{W}(G)\), is the minimum integer \(k\) such that there exists a winning hunter strategy with respect to \(W\) and using \(k\) hunters. Let \(h(G)=h_{V}(G)\) be the _hunter number_ of \(G\). Note that, for technical reasons, for a single vertex graph \(G\), we set \(h(G)=0\). This goes in accordance with "the locating part" of the game since the rabbit is already located. The Rabbit player has a _strategy \(\mathcal{R}\) starting from \(W\subseteq V\) against \(k\geq 1\) hunters_ if, for every hunter strategy \(\mathcal{S}\) using \(k\) hunters, there exists a rabbit trajectory \(\mathcal{R}(\mathcal{S})\) that is winning against \(\mathcal{S}\). Note that, if such a strategy \(\mathcal{R}\) exists, then \(h_{W}(G)>k\).
The following lemmas will be used throughout this paper. In [1], it is shown that the hunter number is closed under taking subgraphs. We first show that this result trivially extends to the case when the starting positions of the rabbit are restricted.
**Lemma 1**.: _Let \(G=(V,E)\) be any graph and let \(H\) be a subgraph of \(G\), and let \(W\subseteq V\) with \(W\cap V(H)\neq\emptyset\). Then, \(h_{W\cap V(H)}(H)\leq h_{W}(G)\leq h(G)\)._
Proof.: By definition, \(h_{W}(G)\leq h(G)\). Let us show the other inequality.
Let \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) be a winning hunter strategy in \(G\) with respect to \(W\). Let \(\mathcal{S}^{\prime}=(S^{\prime}_{1},S^{\prime}_{2},\ldots,S^{\prime}_{\ell})\) be such that, for every \(1\leq i\leq\ell\), \(S^{\prime}_{i}=S_{i}\cap V(H)\) if \(S_{i}\cap V(H)\neq\emptyset\) and \(S^{\prime}_{i}\) consists of any vertex of \(V(H)\) otherwise. Then, \(\mathcal{S}^{\prime}\) is a winning hunter strategy in \(H\) with respect to \(W\cap V(H)\). Indeed, any rabbit trajectory \((r_{0}\in W\cap V(H),r_{1},\ldots,r_{\ell})\) in \(H\) is also a trajectory starting from \(W\) in \(G\). Since \(\mathcal{S}\) is winning w.r.t. \(W\), there exists \(i<\ell\) such that \(r_{i}\in S_{i+1}\cap V(H)\subseteq S^{\prime}_{i+1}\), and so \(\mathcal{S}^{\prime}\) is winning w.r.t. \(W\cap V(H)\). Moreover, \(h(\mathcal{S}^{\prime})\leq h(\mathcal{S})\).
For any hunter strategy \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\), it will be convenient to identify the potential positions of a rabbit (starting in \(W\subseteq V\)) after each round. Precisely, let \(\mathcal{Z}^{W}(\mathcal{S})=(Z^{W}_{0}(\mathcal{S}),\ldots,Z^{W}_{\ell}( \mathcal{S}))\) be defined as follows. Let \(Z^{W}_{0}(\mathcal{S})=W\) and, for every \(0<i\leq\ell\), let \(Z^{W}_{i}(\mathcal{S})\) be the set of vertices \(v\) such that there exists a rabbit trajectory \((r_{0},r_{1},\ldots,r_{i}=v)\) such that \(r_{0}\in W\) and, for every \(0\leq j<i\), \(r_{j}\notin S_{j+1}\). Formally, for any \(1\leq i\leq\ell\), let \(Z^{W}_{i}(\mathcal{S})=\{x\in V(G)\mid\exists y\in(Z^{W}_{i-1}(\mathcal{S}) \setminus S_{i})\wedge(xy\in E(G))\}\). Intuitively, \(Z^{W}_{i}(\mathcal{S})\) is the set of vertices that the rabbit (starting from some vertex in \(W\)) can have reached at the end of the \(i^{th}\) round without having been shot. We will refer to the vertices in \(Z^{W}_{i}(\mathcal{S})\) as the _contaminated_ vertices after round \(i\). Note that, if \(\mathcal{S}\) is winning, then \(Z^{W}_{\ell}(\mathcal{S})=\emptyset\). In what follows, we write \(Z_{i}\) (resp., \(Z_{i}(\mathcal{S})\)) instead of \(Z^{W}_{i}(\mathcal{S})\) when \(\mathcal{S}\) and \(W\) (resp., when \(W\)) are clear from the context.
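To illustrate the recurrence, consider the path \(P_{4}\) with vertices \(v_{1},v_{2},v_{3},v_{4}\) and the one-hunter strategy \(\mathcal{S}=(\{v_{2}\},\{v_{3}\},\{v_{3}\},\{v_{2}\})\). Starting from \(Z_{0}=V\), we get \(Z_{1}=N(\{v_{1},v_{3},v_{4}\})=\{v_{2},v_{3},v_{4}\}\), then \(Z_{2}=N(\{v_{2},v_{4}\})=\{v_{1},v_{3}\}\), then \(Z_{3}=N(\{v_{1}\})=\{v_{2}\}\) and finally \(Z_{4}=N(\emptyset)=\emptyset\), so \(\mathcal{S}\) is a winning hunter strategy and \(h(P_{4})=1\), in accordance with [1].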
We now show that we can only consider hunter strategies that consist only of "useful shots". A hunter strategy \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) is said to be _parsimonious_ if, for every \(1\leq i\leq\ell\), \(S_{i}\subseteq Z_{i-1}(\mathcal{S})\). Note that, if \(\mathcal{S}\) is parsimonious, then \(Z_{i}\neq\emptyset\) for every \(i<\ell\). Note that if \(\mathcal{S}\) is parsimonious, then it can be retrieved only from the sequence \(\mathcal{Z}(\mathcal{S})=(Z_{0},\ldots,Z_{\ell})\) of the contaminated sets. Indeed, for any \(1\leq i\leq\ell\), \(S_{i}=\{w\in Z_{i-1}\mid\exists x\in N(w)\backslash Z_{i}\}\).
In the following lemma, we establish that we can hunt the rabbit in a parsimonious manner without increasing the number of required hunters.
**Lemma 2**.: _For any graph \(G=(V,E)\) and any non empty subset \(W\subseteq V\), there is a parsimonious winning hunter strategy in \(G\) with respect to \(W\) and that uses \(h_{W}(G)\) hunters._
Proof.: Let \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) be a winning hunter strategy with respect to \(W\subseteq V\) using at most \(k\geq 1\) hunters. Let \(\mathcal{Z}(\mathcal{S})=(Z_{0}(\mathcal{S}),\ldots,Z_{\ell}(\mathcal{S}))\) be the set of contaminated vertices for each round of \(\mathcal{S}\). If there exists an integer \(\ell^{\prime}<\ell\) such that \(Z_{\ell^{\prime}}(\mathcal{S})=\emptyset\), then \(\mathcal{S}=(S_{1},\ldots,S_{\ell^{\prime}})\) is also a winning hunter strategy with respect to \(W\subseteq V\) using at most \(k\) hunters. Hence, we may assume that \(Z_{i}(\mathcal{S})\neq\emptyset\) for every \(0\leq i<\ell\).
Moreover, if there exists an integer \(1\leq i\leq\ell\) such that \(S_{i}\cap Z_{i-1}(\mathcal{S})=\emptyset\), let \(h\) be the smallest such integer and let \(v\in Z_{h-1}(\mathcal{S})\). Then, \(\mathcal{S}^{\prime}=(S_{1},\ldots,S_{h-1},\{v\},S_{h+1},\ldots,S_{\ell})\) is also a winning strategy with respect to \(W\subseteq V\) using at most \(k\geq 1\) hunters (since \(S_{h}\cap Z_{h-1}(\mathcal{S})=\emptyset\)). By repeating this process, we may assume that, for every \(1\leq i\leq\ell\), \(S_{i}\cap Z_{i-1}(\mathcal{S})\neq\emptyset\).
Let \(\mathcal{S}^{\prime}=(S^{\prime}_{1},S^{\prime}_{2},\ldots,S^{\prime}_{\ell})\) be such that, for every \(1\leq i\leq\ell\), \(S^{\prime}_{i}=S_{i}\cap Z_{i-1}(\mathcal{S})\). It is easy to see that, for every \(i\leq\ell\), \(Z_{i}(\mathcal{S})=Z_{i}(\mathcal{S}^{\prime})\), and then \(\mathcal{S}^{\prime}\) is parsimonious. Furthermore, \(\mathcal{S}^{\prime}\) is a winning hunter strategy with respect to \(W\). Indeed, since \(\mathcal{S}\) is winning w.r.t. \(W\), for any rabbit trajectory \((r_{0},r_{1},\ldots,r_{\ell})\), there exists an integer \(j<\ell\) such that \(r_{j}\in S_{j+1}\). Let \(i\) be the smallest such integer. By definition, \(r_{i}\in Z_{i}\cap S_{i+1}=S^{\prime}_{i+1}\) and so \(\mathcal{S}^{\prime}\) is winning w.r.t. \(W\). Moreover, \(h(\mathcal{S}^{\prime})\leq h(\mathcal{S})\).
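By Lemma 2, when searching for a winning strategy it suffices to explore parsimonious shots \(S\subseteq Z\) over the transitions \(Z\mapsto N(Z\setminus S)\) on contaminated sets, where reaching \(\emptyset\) corresponds to a winning strategy. The following brute-force sketch exploits this (the function name and graph representation are ours; the search is exponential in \(|V(G)|\), so it is only suitable for tiny instances):

```python
from itertools import combinations

def hunter_number(adj):
    """Brute-force h(G) for a tiny graph given as {vertex: set of neighbours}.
    By Lemma 2 it suffices to consider parsimonious shots S (subsets of Z),
    exploring the transitions Z -> N(Z \\ S) on contaminated sets."""
    V = frozenset(adj)
    nbrs = lambda X: frozenset(u for v in X for u in adj[v])
    for k in range(1, len(V)):
        seen, stack = {V}, [V]
        while stack:
            Z = stack.pop()
            for s in range(1, min(k, len(Z)) + 1):
                for S in combinations(sorted(Z), s):
                    Z2 = nbrs(Z - frozenset(S))
                    if not Z2:      # Z \ S is empty: the rabbit has been shot
                        return k
                    if Z2 not in seen:
                        seen.add(Z2)
                        stack.append(Z2)
    return max(len(V) - 1, 0)

# Sanity checks against Section 1: h(P_4) = 1 and h(C_4) = 2.
P4 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
C4 = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
assert hunter_number(P4) == 1 and hunter_number(C4) == 2
```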
It must be noticed that there exist graphs \(G=(V,E)\) and hunter strategies \((S_{1},\ldots,S_{\ell})\) that are winning in \(G\) without shooting to all vertices, i.e., such that \(V\setminus\bigcup_{1\leq i\leq\ell}S_{i}\neq\emptyset\). For instance, in the graph \(G\) that consists of a single edge \(uv\), the strategy \((\{u\},\{u\})\) is a winning hunter strategy using one hunter and without shooting at \(v\). Note that, in that example, there exists no winning parsimonious hunter strategy using one hunter and that shots to both \(u\) and \(v\). The next lemma, that characterises the set of such unshot vertices, will be used throughout the paper.
**Lemma 3**.: _Let \(H\) be any non-empty connected subgraph of any graph \(G=(V,E)\). Let \(W\subseteq V\) such that \(W\cap V(H)\neq\emptyset\). Let \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) be any winning hunter strategy in \(G\) with respect to \(W\). If \(S_{i}\cap V(H)=\emptyset\) for all \(1\leq i\leq\ell\), then \(|V(H)|=1\)._
Proof.: Let \(x\in V(H)\cap W\). Towards a contradiction, assume that \(|V(H)|\geq 2\). Let \(y\in N_{H}(x)\) (it exists since \(H\) is connected). Note that since \(S_{i}\cap V(H)=\emptyset\) for all \(1\leq i\leq\ell\), \(\{x,y\}\cap\bigcup_{1\leq i\leq\ell}S_{i}=\emptyset\). Thus, the rabbit can oscillate between \(x\) and \(y\) during the whole game without being shot. That is, \(\mathcal{R}=(r_{0}=x,r_{1}=y,r_{2}=x,\ldots,r_{\ell})\) is a winning rabbit trajectory against \(\mathcal{S}\) starting from \(W\cap V(H)\). This contradicts that \(\mathcal{S}\) is a winning hunter strategy in \(G\) with respect to \(W\).
In what follows, we will use the following result of [6]:
**Lemma 4**.: _[_6_]_ _For any graph \(G\), \(h(G)\geq\delta(G)\)._
The Hunters and Rabbit game has been particularly studied in bipartite graphs [1, 6, 16] and we continue this study in Section 5. In what follows, bipartite graphs are referred to as \(G=(V_{r}\cup V_{w},E)\) where \((V_{r},V_{w})\) is implicitly a bipartition of \(V(G)\) into two independent sets \(V_{r}\) and \(V_{w}\). We refer to the vertices in \(V_{r}\) (resp., in \(V_{w}\)) as the _red_ (resp., _white_) vertices.
In [1], it is shown that, in bipartite graphs, it is sufficient to consider winning hunter strategies with respect to one of the independent sets of the bipartition. For completeness and to further motivate some of our results, we briefly recall their result. Precisely:
**Lemma 5**.: _[_1_]_ _For any bipartite graph \(G=(V_{r}\cup V_{w},E)\), \(h(G)=h_{V_{r}}(G)=h_{V_{w}}(G)\)._
Proof.: By definition, \(\max\{h_{V_{r}}(G),h_{V_{w}}(G)\}\leq h(G)\). To show that \(h(G)\leq h_{V_{r}}(G)\) (resp., \(h(G)\leq h_{V_{w}}(G)\)), let \(\mathcal{S}_{r}=(S_{1},\ldots,S_{\ell})\) be a winning hunter strategy in \(G\) with respect to \(V_{r}\) (resp., w.r.t. \(V_{w}\)). If \(\ell\) is odd, then \((S_{1},\ldots,S_{\ell},S_{1},\ldots,S_{\ell})\) is a winning hunter strategy, and otherwise, \((S_{1},\ldots,S_{\ell},\{u\},S_{1},\ldots,S_{\ell})\), where \(u\) is an arbitrary vertex, is a winning hunter strategy. Indeed, if the rabbit starts in \(V_{r}\), the first copy of \(\mathcal{S}_{r}\) catches it; otherwise, by the bipartition, the rabbit occupies a vertex of \(V_{r}\) at the beginning of the second copy (the possible extra shot \(\{u\}\) adjusts the parity), and the second copy catches it.
Note that, in most of the paper, we will consider hunter strategies with respect to \(V\); the exception is Section 5, where we consider the Hunters and Rabbit game in bipartite graphs when the rabbit must start at some vertex of \(V_{r}\). We will refer to this variant as the _red variant_ of the game. The following remark will be widely used.
Remark.Let \(G=(V_{r}\cup V_{w},E)\) be a bipartite graph and \(\mathcal{S}_{r}=(S_{1},\ldots,S_{\ell})\) be a parsimonious hunter strategy in \(G\) with respect to \(V_{r}\). Then, for every \(1\leq i\leq\lceil\ell/2\rceil\), \(S_{2i-1}\subseteq Z_{2i-2}\subseteq V_{r}\) and (if \(2i\leq\ell\)) \(S_{2i}\subseteq Z_{2i-1}\subseteq V_{w}\). Indeed, in a bipartite graph, if the rabbit starts at a vertex in \(V_{r}\) (resp., \(V_{w}\)), it must occupy a vertex of \(V_{r}\) at the end of every even (resp., odd) round and a vertex of \(V_{w}\) at the end of every odd (resp., even) round.
## 3 Monotonicity
In classical graph pursuit-evasion games, an important notion is that of _monotonicity_. Without going into the details, in these games, a strategy is _monotone_ if the area reachable by the fugitive never increases. Said differently, in the particular case of graph searching games, a strategy is monotone if, once a searcher is removed from one vertex, it is never necessary to occupy this vertex during a subsequent round (note that, in some specific cases, for instance in directed graphs, these two definitions are not rigorously equivalent [3]). Monotone strategies have been widely studied [4, 29, 24] because, on the one hand, it is generally easier to design them and, on the other hand, monotone strategies have length polynomial in the size of the graph and, so, corresponding decision problems (is there a monotone strategy using \(k\) searchers?) can be proven to be in NP.
It is clear that such a definition is not suitable to the Hunters and Rabbit game. Indeed, consider the graph that consists of a single edge \(uv\): the hunter must shoot at some vertex, say \(u\), and, if the rabbit was at \(v\), it will move to \(u\), i.e., the vertex \(u\) is "recontaminated". Therefore, we propose to define monotonicity in the Hunters and Rabbit game as follows (see the formal definition below): once a vertex has been "cleared", if the rabbit can access it in a subsequent round, then the vertex must be shot immediately.
In classical graph searching games, a vertex being cleared at some round means that the searchers' strategy ensures that the fugitive cannot occupy this vertex at this round. Being recontaminated can then be intuitively defined by the fact that a vertex can be reached by the fugitive while having been cleared in a previous round. This intuitive definition does not make any sense in the Hunters and Rabbit game and, in particular, in its red variant in bipartite graphs. Indeed, in such case, every red vertex is cleared at every odd round and so, looking for a strategy without recontamination would be meaningless. To overcome this difficulty, we propose to define the clearing of a vertex at some round by the fact that the actions of the hunters ensure that this vertex cannot be occupied by the rabbit at this round.
A related difficulty comes from the fact that, contrary to classical graph searching games, a vertex may be "cleared" without having been shot during the game. Recall, for instance, our previous discussion for the graph consisting of a single edge. As a less trivial example,
consider a star with three leaves whose edges have been subdivided once each. Then, assuming that the leaves and the centre are red, in the red variant, it is possible for one hunter to win without shooting at any of the leaves (while any of the leaves may be occupied by the rabbit initially). Indeed, consider the strategy for one hunter where on every odd round it shoots at the centre and on every even round it shoots at an arbitrary neighbour of the centre that was not previously shot. Figure 1 illustrates the above strategy.
Figure 1: Example of a bipartite graph (where \(V_{r}=\{a,c,e,g\}\) corresponds to the red part of the bipartition, illustrated by the red vertices in the figures) and of a parsimonious winning strategy with respect to \(V_{r}\), such that no vertex in \(\{a,e,g\}\) is ever shot. Each subfigure depicts the situation at the end of the corresponding round. In round \(0\), the rabbit occupies any vertex in \(\{a,c,e,g\}\). Then, in round \(1\), the hunter shoots the vertex \(c\) (depicted as the cross over the corresponding vertex of subfigure (b)) and the rabbit moves to one of the vertices in \(\{b,d,f\}\). The game continues until the end of round \(6\) (subfigure (g)), at which point the hunter is sure to shoot the rabbit at vertex \(b\). Formally, we have \(\mathcal{S}=(\{c\},\{d\},\{c\},\{f\},\{c\},\{b\})\) and \(\mathcal{Z}(\mathcal{S})=(\{a,c,e,g\},\{b,d,f\},\{a,c,g\},\{b,f\},\{a,c\},\{b\},\emptyset)\).

Therefore, two actions of the hunters may clear a vertex: either a hunter shoots at a vertex \(v\) at round \(i\) and does not shoot the rabbit (i.e., there is no rabbit trajectory with \(r_{i-1}=v\) that is winning against a strategy shooting at \(v\) at round \(i\)), or the hunters shoot at every contaminated vertex in the neighbourhood of \(v\). In the latter case, either \(v\) was occupied and the rabbit has to leave \(v\), or it was not and cannot be occupied after the move of the rabbit. In both cases, \(v\notin Z_{i}\). This discussion motivates the following definition for the monotonicity of hunter strategies.
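The evolution of the contaminated sets is easy to simulate. The sketch below (same adjacency-dictionary encoding as before) replays the strategy of Figure 1 and checks it against the sequence \(\mathcal{Z}(\mathcal{S})\) given in the caption; vertex names follow the figure.

```python
def contaminated_sets(adj, W, strategy):
    """Compute [Z_0, ..., Z_l] for a hunter strategy, starting from Z_0 = W."""
    Z = [set(W)]
    for S in strategy:
        survivors = Z[-1] - S                 # rabbit positions not shot this round
        Z.append({x for y in survivors for x in adj[y]})   # the rabbit must move
    return Z

# Subdivided star of Figure 1: centre c, subdivision vertices b, d, f, leaves a, e, g.
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd', 'f'},
       'd': {'c', 'e'}, 'e': {'d'}, 'f': {'c', 'g'}, 'g': {'f'}}
S = [{'c'}, {'d'}, {'c'}, {'f'}, {'c'}, {'b'}]
Z = contaminated_sets(adj, {'a', 'c', 'e', 'g'}, S)
assert Z == [{'a', 'c', 'e', 'g'}, {'b', 'd', 'f'}, {'a', 'c', 'g'},
             {'b', 'f'}, {'a', 'c'}, {'b'}, set()]        # matches Z(S) above
```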
### Definition of monotone strategies and first properties
Given a graph \(G\), a winning hunter strategy \(\mathcal{S}\) in \(G\) with respect to \(W\subseteq V\) is _monotone_ if, for every vertex \(v\in V\), once \(v\) has been "cleared", it is shot again every time the rabbit can potentially reach \(v\). Formally, we say that a vertex \(v\) is _cleared_ at round \(i\) if either \(v\in S_{i}\), or \(N(v)\cap Z_{i-1}\neq\emptyset\) and \(N(v)\cap Z_{i-1}\subseteq S_{i}\). Note that, in the second condition, the requirement that \(N(v)\cap Z_{i-1}\neq\emptyset\) comes from technicalities when \(W\neq V\). A strategy \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) is _monotone_ if, for every vertex \(v\in V\), if there exists an \(i\) such that \(v\) is cleared at round \(i\), then for every \(j\geq i\) such that \(v\in Z_{j}\), we have \(v\in S_{j+1}\). A vertex \(v\) is _recontaminated_ at round \(j\) if there exists \(i\leq j\) such that \(v\) is cleared at round \(i\) and \(v\in Z_{j}\setminus S_{j+1}\).
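This definition can be tested mechanically. A minimal sketch, reusing `contaminated_sets` from above (the helper names are ours):

```python
def is_monotone(adj, W, strategy):
    """Check monotonicity of a hunter strategy, per the definition above."""
    Z = contaminated_sets(adj, W, strategy)
    l = len(strategy)

    def cleared(v, i):          # rounds are 1-indexed, so S_i = strategy[i - 1]
        if v in strategy[i - 1]:
            return True
        around = adj[v] & Z[i - 1]
        return bool(around) and around <= strategy[i - 1]

    for v in adj:
        for i in range(1, l + 1):
            if cleared(v, i):
                # once cleared, v must be shot whenever it is contaminated again:
                # for every j >= i with v in Z_j, require v in S_{j+1}.
                for j in range(i, l):
                    if v in Z[j] and v not in strategy[j]:
                        return False
                break
    return True
```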
The _monotone hunter number_ of a graph \(G\) with respect to \(W\subseteq V(G)\), denoted by \(mh_{W}(G)\), is the minimum number \(k\) such that \(k\) hunters have a monotone winning hunter strategy in \(G\) with respect to \(W\). Let us denote the _monotone hunter number_\(mh_{V}(G)\) of \(G\) by \(mh(G)\). Note that, by definition:
**Proposition 1**.: _For every graph \(G=(V,E)\) and \(W\subseteq V\), \(h_{W}(G)\leq mh_{W}(G)\leq mh(G)\)._
In this subsection, we prove some general properties of (non-)monotone strategies. Let us start with two technical claims that will be used in several proofs below.
**Proposition 2**.: _Let \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) be a hunter strategy in a graph \(G=(V,E)\). Let \(v\in V\) and \(1\leq i\leq\ell\). If there exists a vertex \(u\in N(v)\) and a vertex \(x\in N(u)\) (possibly \(x=v\)) such that \(u\notin\bigcup_{j\leq i}S_{j}\) and \(x\notin\bigcup_{j<i}S_{j}\), then \(v\in Z_{p}\) for each \(p\leq i\)._
Proof.: This clearly holds if \(p=0\) since \(Z_{0}=V\). If \(p=1\), there exists a rabbit trajectory \((r_{0}=u\in N(v)\setminus S_{1},r_{1}=v)\) and so \(v\in Z_{1}\). Hence, we assume that \(p>1\).
The rabbit can follow the following strategy depending on whether \(p\) is odd or even:
1. \(p\) is odd: The rabbit can follow the following trajectory: \((r_{0}=u,r_{1}=x,r_{2}=u,\ldots,r_{p-2}=x,r_{p-1}=u,r_{p}=v)\) where, for \(q<p\), \(r_{q}=u\) if \(q\) is even and \(r_{q}=x\) if \(q\) is odd.
2. \(p\) is even: The rabbit can follow the following trajectory: \((r_{0}=x,r_{1}=u,r_{2}=x,\ldots,r_{p-2}=x,r_{p-1}=u,r_{p}=v)\) where, for \(q<p\), \(r_{q}=x\) if \(q\) is even and \(r_{q}=u\) if \(q\) is odd.
In both cases, for every \(0\leq j<p\), \(r_{j}\notin S_{j+1}\) since \(p\leq i\), \(u\notin\bigcup_{j\leq i}S_{j}\) and \(x\notin\bigcup_{j<i}S_{j}\). Therefore, \(v\in Z_{p}\).
The next lemma shows that, as expected, if the hunters follow a monotone strategy, the set of potential positions for the rabbit cannot increase.
**Lemma 6**.: _Let \(G=(V,E)\) be a graph with at least two vertices. Let \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) be a monotone hunter strategy in \(G\). For any \(0\leq p\leq i\leq\ell\), \(Z_{i}\subseteq Z_{p}\)._
Proof.: This clearly holds if \(p=0\) or \(i=1\) since \(Z_{0}=V\). Hence, let us assume that \(p\geq 1\) and \(i>1\). Let \(v\in Z_{i}\). Since \(v\in Z_{i}\), there exists a rabbit trajectory \(R=(r_{0},\ldots,r_{i-2}=x,r_{i-1}=u,r_{i}=v)\) such that, for any \(0\leq j<i\), \(r_{j}\notin S_{j+1}\). By definition of a rabbit trajectory, \(u\in N(v)\) and \(x\in N(u)\). Moreover, by monotonicity of \(\mathcal{S}\), since \(u\in Z_{i-1}\setminus S_{i}\) (resp. \(x\in Z_{i-2}\setminus S_{i-1}\)), \(u\notin\bigcup_{q\leq i}S_{q}\) (resp. \(x\notin\bigcup_{q\leq i-1}S_{q}\)). By Proposition 2, \(v\in Z_{p}\) for each \(p\leq i\).
The next lemma states that, for any non-monotone strategy, there must exist a vertex that has been shot at some round and that is recontaminated later (recall that it is not trivial since a vertex may be recontaminated without being previously shot).
**Lemma 7**.: _Let \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) be a non-monotone winning hunter strategy in a graph \(G=(V,E)\). Then, there exist a vertex \(v\in V\) and \(1\leq i\leq\ell\) such that \(v\in Z_{i-1}\setminus S_{i}\) and \(v\in\bigcup_{p<i}S_{p}\)._
Proof.: Towards a contradiction, assume that the statement of the lemma is false, i.e., for every vertex \(v\in V\) and every \(1\leq i\leq\ell\), if \(v\in Z_{i-1}\setminus S_{i}\) then \(v\notin\bigcup_{p<i}S_{p}\).
Since \(\mathcal{S}\) is non-monotone and winning, there exists a vertex \(u\) such that \(u\) is cleared at a round \(1\leq q\leq\ell-2\), and then recontaminated at a round \(j>q\) (i.e., \(u\in Z_{j}\setminus S_{j+1}\)). Moreover, by our assumption, \(u\) is cleared by shooting each contaminated vertex in \(N(u)\) at round \(q\), i.e., \(Z_{q-1}\cap N(u)\subseteq S_{q}\) (indeed, if \(u\in S_{q}\), then, since \(u\in Z_{j}\setminus S_{j+1}\) and \(q<j+1\), the vertex \(u\) would satisfy the statement of the lemma).
Let us show that \(N(u)\subseteq\bigcup_{p\leq q}S_{p}\). Assume that there exists a vertex \(x\in N(u)\) such that \(x\notin\bigcup_{p<q}S_{p}\). Since \(u,x\notin\bigcup_{p<q}S_{p}\) and \(u\in N(x)\), by Proposition 2, we get that \(x\in Z_{q-1}\). Therefore, \(x\in Z_{q-1}\cap N(u)\subseteq S_{q}\). Hence, \(N(u)\subseteq\bigcup_{p\leq q}S_{p}\).
Since \(u\in Z_{j}\), there exists \(w\in(N(u)\cap Z_{j-1})\setminus S_{j}\); moreover, \(w\in N(u)\subseteq\bigcup_{p\leq q}S_{p}\subseteq\bigcup_{p<j}S_{p}\), i.e., \(w\) satisfies the statement of the lemma, a contradiction.
Now, let us generalise Lemmas 1 and 2 to monotone strategies.
**Lemma 8**.: _For any non-empty connected subgraph \(H\) of a graph \(G=(V,E)\), \(mh(H)\leq mh(G)\). More precisely, if there exists a monotone winning hunter strategy \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) in \(G\), then there exists a monotone winning hunter strategy \(\mathcal{S}^{\prime}\) in \(H\) using at most \(\max_{1\leq i\leq\ell}|S_{i}\cap V(H)|\) hunters._
Proof.: Let \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) be a monotone winning hunter strategy for \(G\).
If \(|V(H)|=1\), the result clearly holds since \(mh(H)=0\). Hence, let us assume that \(|V(H)|>1\). Let \(m\) be the minimum integer such that \(S_{m}\cap V(H)\neq\emptyset\) and let \(u\in S_{m}\cap V(H)\) (by Lemma 3, such an integer \(m\) exists because \(|V(H)|>1\)). Let \(\mathcal{S}^{\prime}=(S^{\prime}_{1},\ldots,S^{\prime}_{\ell})\) be the hunter strategy, such that for every \(1\leq i\leq\ell\),
\[S^{\prime}_{i}=\begin{cases}S_{i}\cap V(H),&\quad\text{if }S_{i}\cap V(H)\neq\emptyset\\ \{u\},&\quad\text{otherwise}\end{cases}\]
First, we have the following claim.
**Claim 1**.: _For every \(0\leq i\leq\ell\) and for any vertex \(v\in V(H)\), if \(v\in Z_{i}(\mathcal{S}^{\prime})\), then \(v\in Z_{i}(\mathcal{S})\)._
Proof of Claim.: Let \(\mathcal{R}=(r_{0},\ldots,r_{i}=v)\) be a rabbit trajectory in \(H\) such that for any \(0\leq j<i\), \(r_{j}r_{j+1}\in E(H)\), \(r_{j}\notin S^{\prime}_{j+1}\) and \(r_{i}=v\) (such a trajectory exists since \(v\in Z_{i}(\mathcal{S}^{\prime})\)). By construction of \(\mathcal{S}^{\prime}\), for any \(1\leq j\leq\ell\), \(S_{j}\cap V(H)\subseteq S^{\prime}_{j}\). Therefore, \(\mathcal{R}\) is also a rabbit trajectory in \(G\) with \(r_{j}\notin S_{j+1}\), for all \(0\leq j<i\). Thus, \(v\in Z_{i}(\mathcal{S})\).
Let us show that \(\mathcal{S}^{\prime}\) is a monotone winning hunter strategy in \(H\). First, we show that \(\mathcal{S}^{\prime}\) is indeed a winning hunter strategy in \(H\). Towards a contradiction, assume that \(\mathcal{S}^{\prime}\) is not a winning strategy in \(H\). This implies that \(Z_{\ell}(\mathcal{S}^{\prime})\neq\emptyset\). Hence, Claim 1 implies that \(Z_{\ell}(\mathcal{S})\neq\emptyset\), contradicting the fact that \(\mathcal{S}\) is a winning hunter strategy in \(G\).
Thus, \(\mathcal{S}^{\prime}\) is a winning strategy in \(H\). Next, we establish that \(\mathcal{S}^{\prime}\) is indeed monotone. Towards a contradiction, let us assume that \(\mathcal{S}^{\prime}\) is non-monotone. Hence, by Lemma 7, there exist \(v\in V(H)\) and \(1\leq q<i\leq\ell\) such that \(v\in S^{\prime}_{q}\) and \(v\in Z_{i}(\mathcal{S}^{\prime})\setminus S^{\prime}_{i+1}\). By Claim 1 and because \(v\in Z_{i}(\mathcal{S}^{\prime})\), \(v\in Z_{i}(\mathcal{S})\). Since \(S_{p}\cap V(H)\subseteq S^{\prime}_{p}\) for any \(1\leq p\leq\ell\) and because \(v\notin S^{\prime}_{i+1}\), \(v\notin S_{i+1}\).
If \(v=u\), \(i+1>m\) (since \(u\in S^{\prime}_{p}\) for all \(1\leq p\leq m\)) and so \(v\in S_{m}\) and in \(Z_{i}(\mathcal{S})\setminus S_{i+1}\), contradicting the monotonicity of \(\mathcal{S}\).
Otherwise, \(v\neq u\). By construction of \(\mathcal{S}^{\prime}\), \(S^{\prime}_{p}\setminus\{u\}\subseteq S_{p}\) for all \(1\leq p\leq\ell\). Hence, \(v\in S_{q}\) and \(v\in Z_{i}(\mathcal{S})\setminus S_{i+1}\), contradicting the monotonicity of \(\mathcal{S}\).
Finally, the fact that \(h(\mathcal{S}^{\prime})\leq\max_{1\leq i\leq\ell}|S_{i}\cap V(H)|\leq h( \mathcal{S})\) completes the proof.
**Lemma 9**.: _For a graph \(G=(V,E)\) and any \(k\geq mh(G)\), there exists a parsimonious monotone winning hunter strategy in \(G\) using at most \(k\) hunters._
Proof.: Let \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) be a monotone winning hunter strategy in \(G\) using at most \(k\geq mh(G)\) hunters such that \(\ell\) is minimum. If \(\mathcal{S}\) is parsimonious, we are done. Otherwise, among such strategies, let us consider \(\mathcal{S}\) that maximizes the first round \(1\leq j\leq\ell\) that makes \(\mathcal{S}\) not parsimonious. There are several cases to be considered.
* Let \(\mathcal{Z}(\mathcal{S})=(Z_{0}(\mathcal{S}),\ldots,Z_{\ell}(\mathcal{S}))\) be the sequence of sets of contaminated vertices for the rounds of \(\mathcal{S}\). If there exists an integer \(\ell^{\prime}<\ell\) such that \(Z_{\ell^{\prime}}(\mathcal{S})=\emptyset\), then the truncated strategy \((S_{1},\ldots,S_{\ell^{\prime}})\) is also a monotone winning hunter strategy in \(G\) using at most \(k\) hunters, contradicting the minimality of \(\ell\). Hence, we may assume that \(Z_{i}(\mathcal{S})\neq\emptyset\) for every \(0\leq i<\ell\).
* Let \(1\leq j\leq\ell\) be the smallest integer such that \(S_{j}\setminus Z_{j-1}(\mathcal{S})\neq\emptyset\) (if no such integer exists, then \(\mathcal{S}\) is parsimonious and we are done). If \(S_{j}\cap Z_{j-1}(\mathcal{S})\neq\emptyset\), replace \(S_{j}\) by \(S_{j}\cap Z_{j-1}(\mathcal{S})\). This leads to a winning monotone hunter strategy \(\mathcal{S}^{\prime}\) (indeed, \(Z_{h}(\mathcal{S})=Z_{h}(\mathcal{S}^{\prime})\) for all \(1\leq h\leq\ell\)) contradicting the maximality of \(j\). Hence, we may assume that \(S_{j}\cap Z_{j-1}(\mathcal{S})=\emptyset\). Note that this implies that \(j<\ell\) (since otherwise, \(\mathcal{S}\) would not be winning).
* If any, let \(0<i\) be the minimum integer such that \(S_{j+i}\cap Z_{j+i-1}(\mathcal{S})\neq\emptyset\). Let \(v\in S_{j+i}\cap Z_{j+i-1}(\mathcal{S})\). Since \(v\in Z_{j+i-1}(\mathcal{S})\), by Lemma 6, \(v\in Z_{j-1+i^{\prime}}(\mathcal{S})\) for every \(0\leq i^{\prime}<i\). Then, for every \(0\leq i^{\prime}<i\), replace \(S_{j+i^{\prime}}\) by \(\{v\}\). Let us prove that this leads to a monotone hunter strategy contradicting the maximality of \(j\). Let \(\mathcal{S}^{\prime}\) be the strategy obtained by the above modifications. First, note that, for any \(0\leq h<j\), \(S_{h}=S^{\prime}_{h}\) and so \(Z_{h}(\mathcal{S})=Z_{h}(\mathcal{S}^{\prime})\). By definition, \(Z_{j}(\mathcal{S})=\{x\in V\mid\exists y\in Z_{j-1}(\mathcal{S})\setminus S_{j }\wedge(xy\in E)\}\) and, since \(S_{j}\cap Z_{j-1}=\emptyset\), we get \(Z_{j}(\mathcal{S})=\{x\in V\mid\exists y\in Z_{j-1}(\mathcal{S})\wedge(xy\in E)\}\). On the other hand, \(Z_{j}(\mathcal{S}^{\prime})=\{x\in V\mid\exists y\in Z_{j-1}(\mathcal{S}^{ \prime})\setminus S^{\prime}_{j}\wedge(xy\in E)\}=\{x\in V\mid\exists y\in Z_{j- 1}(\mathcal{S})\setminus\{v\}\wedge(xy\in E)\}\) since \(Z_{j-1}(\mathcal{S}^{\prime})=Z_{j-1}(\mathcal{S})\) and \(S^{\prime}_{j}=\{v\}\). Hence, \(Z_{j}(\mathcal{S}^{\prime})\subseteq Z_{j}(\mathcal{S})\). By induction on \(j\leq i^{\prime}\leq\ell\) and using the same arguments, we get that \(Z_{i^{\prime}}(\mathcal{S}^{\prime})\subseteq Z_{i^{\prime}}(\mathcal{S})\) for every \(j\leq i^{\prime}\leq\ell\). Thus, \(\mathcal{S}^{\prime}\) is a winning hunter strategy in \(G\) using at most \(k\geq mh(G)\) hunters (because \(Z_{\ell}(\mathcal{S}^{\prime})\subseteq Z_{\ell}(\mathcal{S})=\emptyset\)). It remains to show that \(\mathcal{S}^{\prime}\) is monotone. For purpose of contradiction, let us assume that \(\mathcal{S}^{\prime}\) is non-monotone. By Lemma 7, there exists a vertex \(x\) and \(1<m\leq\ell\) such that \(x\in Z_{m-1}(\mathcal{S}^{\prime})\setminus S^{\prime}_{m}\) and \(x\in\bigcup_{h<m}S^{\prime}_{h}\). If \(x\neq v\), then by definition of \(\mathcal{S}^{\prime}\) (for every \(1\leq r\leq\ell\), either \(S^{\prime}_{r}=S_{r}\) or \(S^{\prime}_{r}=\{v\}\)) and because \(Z_{r}(\mathcal{S}^{\prime})\subseteq Z_{r}(\mathcal{S})\) for all \(1\leq r\leq\ell\), we get that \(x\in\bigcup_{h<m}S_{h}\) and \(x\in Z_{m-1}(\mathcal{S})\setminus S_{m}\) which contradicts the monotonicity of \(\mathcal{S}\). Hence, let us assume that \(x=v\). Recall that
we proved that \(v\in Z_{j-1}(\mathcal{S})\setminus S_{j}\). Therefore, by monotonicity of \(\mathcal{S}\), \(v\notin\bigcup_{r<j}S_{r}\) which implies that \(v\notin\bigcup_{r<j}S_{r}^{\prime}\). Since \(v\in S_{j+i^{\prime}}^{\prime}\) for all \(0\leq i^{\prime}\leq i\), we get that \(m>j+i\). This means that \(v\in Z_{m-1}(\mathcal{S})\setminus S_{m}\) (because \(Z_{r}(\mathcal{S}^{\prime})\subseteq Z_{r}(\mathcal{S})\) for all \(1\leq r\leq\ell\) and \(S_{r}^{\prime}=S_{r}\) for all \(r\geq m>j+i\)) and \(v\in S_{j+i}\), which contradicts the monotonicity of \(\mathcal{S}\). Hence, we may assume that \(S_{j+i^{\prime}}\cap Z_{j+i^{\prime}-1}(\mathcal{S})=\emptyset\) for all \(0\leq i^{\prime}\) such that \(j+i^{\prime}\leq\ell\).
* Let us recall that \(j<\ell\) and that \(Z_{j-1}(\mathcal{S})\neq\emptyset\). Thus, let \(v\in Z_{j-1}(\mathcal{S})\). So, there exists \(w\in N(v)\cap Z_{j-2}(\mathcal{S})\setminus S_{j-1}\). Moreover, since \(w\in Z_{j-2}(\mathcal{S})\setminus S_{j-1}\) (resp. \(v\in Z_{j-1}(\mathcal{S})\setminus S_{j}\)), \(w\notin S_{r}\) (resp. \(v\notin S_{r}\)) for any \(r\leq j\) (otherwise, it would contradict the monotonicity of \(\mathcal{S}\)). Let us recall that \(S_{j+i^{\prime}}\cap Z_{j+i^{\prime}-1}(\mathcal{S})=\emptyset\) for any \(0\leq i^{\prime}\) such that \(j+i^{\prime}\leq\ell\). Therefore, by induction on \(r>j\): if \(v,w\notin S_{p}\) for all \(p<r\), then Proposition 2 yields \(v,w\in Z_{r-1}(\mathcal{S})\), and since \(S_{r}\cap Z_{r-1}(\mathcal{S})=\emptyset\), we get \(v,w\notin S_{r}\). Hence, \(w\notin S_{r}\) and \(v\notin S_{r}\) for any \(r\leq\ell\). By Proposition 2, \(v\in Z_{\ell}(\mathcal{S})\), which contradicts that \(\mathcal{S}\) is winning.
This completes the proof.
To conclude this subsection, we give an alternative point of view on Lemma 6. In particular, we show that, if the hunters follow a monotone parsimonious strategy, then, after having shot at a vertex, they must keep shooting at it until the rabbit can no longer reach it.
**Lemma 10**.: _Let \(G=(V,E)\) be a graph and \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) be a parsimonious monotone winning hunter strategy in \(G\)._
* _If there exist_ \(1\leq i<j\leq\ell\) _such that_ \(v\in S_{i}\cap S_{j}\)_, then_ \(v\in S_{i+1}\)_._
* _If there exists an integer_ \(1\leq i<\ell\) _such that_ \(v\notin Z_{i-1}\)_, then_ \(v\notin S_{j}\) _for every_ \(j\geq i\)_._
Proof.: First assume that \(v\in S_{i}\cap S_{j}\). Since \(\mathcal{S}\) is parsimonious, it implies that \(v\in Z_{i-1}\cap Z_{j-1}\). By Lemma 6 and since \(v\in Z_{j-1}\), \(v\in Z_{i^{\prime}}\) for all \(i^{\prime}<j\). Since \(v\in S_{i}\) and \(v\in Z_{p-1}\) for all \(i\leq p\leq j\), by monotonicity of \(\mathcal{S}\), \(v\in S_{p}\) for all \(i\leq p\leq j\).
For the second statement, the fact that \(v\notin Z_{i-1}\) and Lemma 6 imply that \(v\notin Z_{j}\) for all \(j\geq i\). Since \(\mathcal{S}\) is parsimonious, \(v\notin S_{j}\) for every \(j\geq i\).
Surprisingly, the above lemma is not a characterisation of monotone strategies. Indeed, consider the path \((a,b,c,d)\) on four vertices. It can be checked that the hunter strategy \((\{a\},\{b,c\},\{b,c\})\) is parsimonious and winning (with respect to \(V\)) and satisfies the condition of the previous lemma, but this strategy is non-monotone (since \(a\in S_{1}\) and \(a\in Z_{1}\setminus S_{2}\)).
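This example can be replayed with the sketches above:

```python
# Path (a, b, c, d): the strategy is winning (Z ends empty) yet non-monotone,
# since a is cleared at round 1 but lies in Z_1 \ S_2.
path = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
S = [{'a'}, {'b', 'c'}, {'b', 'c'}]
assert contaminated_sets(path, set(path), S)[-1] == set()
assert not is_monotone(path, set(path), S)
```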
### Monotone Hunter Number and Pathwidth
In this subsection, we relate the monotone hunter number of a graph to its pathwidth. Our result might be surprising since the pathwidth of a graph \(G\) is equivalent to the number of searchers required to (monotonously) capture an arbitrarily fast invisible fugitive [4] while, in our case, the invisible rabbit seems much weaker than the fugitive: the rabbit is "slow" (it moves only to neighbours) and constrained to move at every round. In this view, we might guess that the monotone hunter number of a graph could be arbitrarily smaller than its pathwidth. On the contrary, we show that both parameters differ by at most one.
A _path-decomposition_ of a graph \(G=(V,E)\) is a sequence \(P=(X_{1},\ldots,X_{p})\) of subsets of vertices, called _bags_, such that (1) \(\bigcup_{i\leq p}X_{i}=V\); (2) for every \(uv\in E\), there exists \(i\leq p\) with \(\{u,v\}\subseteq X_{i}\); and (3): for every \(1\leq i\leq j\leq q\leq p\), \(X_{i}\cap X_{q}\subseteq X_{j}\). The _width_\(w(P)\) of \(P\) is the size of a largest bag of \(P\) minus one, i.e., \(w(P)=\max_{i\leq p}|X_{i}|-1\). The _pathwidth_\(pw(G)\) of \(G\) is the minimum width of its path-decompositions. A path-decomposition of \(G\) of width \(pw(G)\)
is said to be _optimal_. A path-decomposition is _reduced_ if no bag is contained in another one. It is well known that any graph admits an optimal reduced path-decomposition.
**Theorem 1**.: _For any graph \(G=(V,E)\), \(pw(G)\leq mh(G)\leq pw(G)+1\)._
Proof.: First, let \(P=(X_{1},\ldots,X_{\ell})\) be a reduced path-decomposition of \(G\) with width \(k\). Then, the sequence of bags \((X_{1},\ldots,X_{\ell})\), viewed as a hunter strategy, is a monotone winning hunter strategy in \(G\) using \(k+1\) hunters. This directly comes from the well-known fact that, for every \(1\leq i<\ell\), \(X_{i}\cap X_{i+1}\) separates \(\bigcup_{1\leq j\leq i}X_{j}\setminus X_{i+1}\) from \(\bigcup_{i<j\leq\ell}X_{j}\setminus X_{i}\), and so \(Z_{i}\subseteq\bigcup_{i<j\leq\ell}X_{j}\) for every \(1\leq i\leq\ell\). In particular, \(mh(G)\leq pw(G)+1\).
To show the other inequality, let \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) be a parsimonious winning monotone hunter strategy in \(G\) using at most \(k\geq mh(G)\) hunters (it exists by Lemma 9).
**Claim 2**.: _For every \(v\in V\setminus\bigcup_{1\leq i\leq\ell}S_{i}\), there exists \(1\leq j\leq\ell\) such that \(N(v)\subseteq S_{j}\)._
Proof of Claim.: For the purpose of contradiction, let \(v\notin\bigcup_{1\leq i\leq\ell}S_{i}\) be such that, for every \(1\leq i\leq\ell\), there exists \(u_{i}\in N(v)\setminus S_{i}\). Then, \(\mathcal{R}=(r_{0}=v,u_{2},v,u_{4},\ldots,v,u_{2i},v,\ldots)\), where \(r_{2t}=v\) and \(r_{2t+1}=u_{2t+2}\) for every \(t\geq 0\), is a winning rabbit trajectory against \(\mathcal{S}\), contradicting the fact that \(\mathcal{S}\) is a winning hunter strategy. \(\diamond\)
Let us build a path-decomposition \(\mathcal{P}\) of \(G\) as follows. Start with \(\mathcal{P}_{0}=\emptyset\) and let \(Y_{0}=V\setminus\bigcup_{1\leq i\leq\ell}S_{i}\) (\(Y_{0}\) is the set of vertices that are never shot by \(\mathcal{S}\)). Assume, by induction, that the sequence \(\mathcal{P}_{i}\) and the set \(Y_{i}\) have been built for some \(0\leq i<\ell\). Let us define \(\mathcal{P}_{i+1}\) and \(Y_{i+1}\) as follows. Let \(H_{i+1}=\{u_{1}^{i+1},\ldots,u_{r_{i+1}}^{i+1}\}=\{v\in Y_{i}\mid N(v)\subseteq S _{i+1}\}\). Let \(\odot\) denote the concatenation of two sequences. Let \(\mathcal{P}_{i+1}=\mathcal{P}_{i}\odot(S_{i+1}\cup\{u_{1}^{i+1}\},\ldots,S_{i+ 1}\cup\{u_{r_{i+1}}^{i+1}\})\) and let \(Y_{i+1}=Y_{i}\setminus H_{i+1}\). Finally, let \(\mathcal{P}=(X_{1},\ldots,X_{r})=\mathcal{P}_{\ell}\).
Note that, by construction, for every \(1\leq i\leq r\), \(|X_{i}|\leq k+1\), and so \(w(\mathcal{P})\leq k\). Let us show that \(\mathcal{P}\) satisfies the three properties of a path-decomposition.
By construction, \(\bigcup_{1\leq i\leq\ell}S_{i}\subseteq\bigcup_{1\leq i\leq r}X_{i}\). Moreover, by Claim 2, \(Y_{0}\subseteq\bigcup_{1\leq i\leq r}X_{i}\). Hence, \(\bigcup_{1\leq i\leq r}X_{i}=V\) and the Property (1) of path-decomposition is satisfied.
By construction, for every \(v\in Y_{0}\), there exists a unique \(1\leq i\leq r\) such that \(v\in X_{i}\). Now, for any \(v\in V\setminus Y_{0}\), let \(1\leq i\leq j\leq r\) such that \(v\in X_{i}\cap X_{j}\). Let \(1\leq i^{\prime}\leq\ell\) (resp., \(i^{\prime}\leq j^{\prime}\leq\ell\)) such that \(X_{i}\) has been built from \(S_{i^{\prime}}\) (resp., \(X_{j}\) has been built from \(S_{j^{\prime}}\)). By Lemma 10, \(v\in S_{p^{\prime}}\) for all \(i^{\prime}\leq p^{\prime}\leq j^{\prime}\). Hence, by construction, \(v\in X_{p}\) for all \(i\leq p\leq j\). Therefore, Property (3) of the path-decomposition is satisfied for every \(v\in V\).
Let \(uv\in E\). First, let us assume that \(u\in Y_{0}\). By Claim 2, \(v\notin Y_{0}\). Let \(1\leq j\leq r\) be such that \(u\in X_{j}\); then, by construction, \(N(u)\subseteq X_{j}\) and so \(u,v\in X_{j}\). Second, assume that \(u,v\in V\setminus Y_{0}\). For the purpose of contradiction, let us assume that, for every \(1\leq i\leq r\), \(|\{u,v\}\cap X_{i}|\leq 1\). W.l.o.g., \(M=\max\{1\leq j\leq r\mid u\in X_{j}\}<m=\min\{1\leq j\leq r\mid v\in X_{j}\}\) (both integers \(m\) and \(M\) are well defined since Properties (1) and (3) are satisfied). Let \(1\leq M^{\prime}\leq\ell\) (resp., \(M^{\prime}\leq m^{\prime}\leq\ell\)) be such that \(X_{M}\) has been built from \(S_{M^{\prime}}\) (resp., \(X_{m}\) has been built from \(S_{m^{\prime}}\)). By definition of \(\mathcal{P}\), \(u\notin\bigcup_{M^{\prime}<i\leq\ell}S_{i}\) and \(v\in S_{m^{\prime}}\setminus\bigcup_{1\leq i<m^{\prime}}S_{i}\). Because \(\mathcal{S}\) is parsimonious, \(v\in Z_{m^{\prime}-1}(\mathcal{S})\) and so, there exists a \(w\in N(v)\cap Z_{m^{\prime}-2}(\mathcal{S})\setminus S_{m^{\prime}-1}\). By monotonicity of \(\mathcal{S}\), \(w\notin\bigcup_{1\leq i<m^{\prime}}S_{i}\). By Proposition 2, \(u\in Z_{m^{\prime}-1}(\mathcal{S})\). Since \(u\in(Z_{m^{\prime}-1}(\mathcal{S})\cap\bigcup_{1\leq i\leq M^{\prime}}S_{i})\setminus S_{m^{\prime}}\), the vertex \(u\) is recontaminated, contradicting the monotonicity of \(\mathcal{S}\). Therefore, in all cases, for every \(uv\in E\), there exists \(1\leq j\leq r\) such that \(u,v\in X_{j}\). Hence, Property (2) of path-decompositions is satisfied.
Hence, \(\mathcal{P}\) is a path-decomposition of width at most \(k\). In particular, \(pw(G)\leq mh(G)\).
Theorem 1 has important consequences.
**Corollary 1**.: _Given an \(n\)-node graph \(G\) and \(k\in\mathbb{N}\), it is NP-hard to decide whether \(mh(G)\leq k\). Moreover, it is NP-hard to approximate \(mh(G)\) up to an additive error of \(n^{\varepsilon}\), for \(0<\varepsilon<1\)._
Proof.: This comes from Theorem 1 and the fact that it is NP-hard to approximate the pathwidth of a graph up to an additive error of \(n^{\varepsilon}\), for \(0<\varepsilon<1\).
Moreover, Theorem 1 implies that recontamination may help in the Hunters and Rabbit game.
**Corollary 2**.: _There exists \(\varepsilon>0\) such that, for any \(k\in\mathbb{N}\), there exists a tree \(T\) with \(h(T)\geq k\) and \(mh(T)\geq(1+\varepsilon)h(T)\)._
Proof.: For any \(n\in\mathbb{N}\), let \(T_{n}\) be the rooted tree defined as follows: \(T_{0}\) is a single node, and, for any \(n>0\), \(T_{n}\) is obtained from three copies of \(T_{n-1}\) and a new node \(r\) (the root of \(T_{n}\)) such that \(r\) is made adjacent to each of the three roots of the copies of \(T_{n-1}\). We have that \(|V(T_{n})|=\frac{3^{n+1}-1}{2}\) and, by Parsons' Lemma [26], \(pw(T_{n})=n=\Theta(\log_{3}|V(T_{n})|)\). On the other hand, it is shown in [16] that, for any tree \(T\), \(h(T)\leq\lceil\frac{\log_{2}|V(T)|}{2}\rceil\). The result follows for \((1+\varepsilon)=2\frac{\ln(2)}{\ln(3)}\).
## 4 (Monotone) hunter number of some graph classes
In this section, we characterise the monotone hunter number of several graph classes such as split graphs, interval graphs, cographs and trees. In particular, in all these cases, our results lead to a polynomial time algorithm to compute the monotone hunter number.
### Split and interval graphs
A graph \(G=(V,E)\) is a _split graph_ if \(V=C\cup I\) can be partitioned into a set \(C\) inducing an inclusion-maximal clique and a set \(I\) inducing an independent set. Note that given a split graph \(G\), a partition \((C,I)\) of \(V(G)\) can be computed in linear time [18]. In what follows, we denote a split graph by \(G=(C\cup I,E)\) where \(C\) induces an inclusion-maximal clique and \(I\) induces an independent set. Let us recall the following result on the pathwidth of split graphs:
**Lemma 11**.: _[_17_]_ _Let \(G=(C\cup I,E)\) be a split graph. Then, \(|C|-1\leq pw(G)\leq|C|\)._
First, we have the following easy observation.
**Proposition 3**.: _Let \(G=(C\cup I,E)\) be a split graph. Then, \(|C|-1\leq h(G)\leq mh(G)\leq|C|\)._
Proof.: By Lemma 1, \(h(G)\geq h(G[C])\), and by Lemma 4, \(h(G[C])\geq\delta(G[C])=|C|-1\). Therefore, \(h(G)\geq|C|-1\). Moreover, the hunter strategy that consists in shooting at all the vertices of \(C\) twice is clearly a monotone winning hunter strategy in \(G\). Hence, \(h(G)\leq mh(G)\leq|C|\).
The following theorem fully characterises the hunter number of split graphs.
**Theorem 2**.: _Let \(G=(C\cup I,E)\) be a split graph. Then, \(h(G)=|C|\) if and only if for every two distinct vertices \(x,y\in C\), there exists a vertex \(z\in N_{I}(x)\cap N_{I}(y)\). Otherwise, \(h(G)=|C|-1\)._
Proof.: First we show that if, for every two distinct vertices \(x,y\in C\), there exists a vertex \(z\in I\) such that \(xz\in E\) and \(yz\in E\), then \(h(G)=|C|\). We prove this by showing that there exists a winning rabbit strategy against \(|C|-1\) hunters. That is, for any (fixed) hunter strategy \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) such that \(|S_{i}|\leq|C|-1\) for every \(i\geq 1\), we design a rabbit trajectory \(\mathcal{R}=(r_{0},r_{1},\ldots,r_{\ell-1})\) such that for every \(i\geq 0\), \(r_{i}\notin S_{i+1}\). Since \(|S_{1}|\leq|C|-1\), there is at least one vertex, say, \(v\in C\), such that \(v\notin S_{1}\). Let \(r_{0}=v\). Hence the rabbit is safe for the first round (since \(r_{0}\notin S_{1}\)). Now, for \(i\geq 0\), let us assume that we have built \((r_{0},\ldots,r_{i})\) such that \(r_{j}\notin S_{j+1}\) for every \(0\leq j\leq i\) and \(r_{i}\in C\). If there is at least one vertex \(u\neq r_{i}\) in \(C\) such that \(u\) is not shot in round \(i+2\) (i.e., \(u\notin S_{i+2}\)), then let \(r_{i+1}=u\). Otherwise, \(S_{i+2}=C\setminus\{r_{i}\}\). Moreover, observe that there is at least one vertex \(w\in C\) such that \(w\notin S_{i+3}\) (since \(|S_{i+3}|<|C|\)). We
note here that \(w\) may be the same vertex as \(r_{i}\). Due to our assumptions, there exists a vertex \(z\in I\) such that \(wz,r_{i}z\in E\). Let us set \(r_{i+1}=z\) and \(r_{i+2}=w\). Observe that \(r_{i+1}\notin S_{i+2}\) (since \(S_{i+2}=C\setminus\{r_{i}\}\) and \(z\in I\)), \(r_{i+2}\notin S_{i+3}\), and \(r_{i+2}\in C\). Therefore, using the above strategy, we can design \(\mathcal{R}\) such that it is a winning trajectory against \(\mathcal{S}\). Therefore, \(h(G)\geq|C|\). Since \(h(G)\leq|C|\) (due to Proposition 3), we have that \(h(G)=|C|\).
To prove the reverse direction, we show that if there exist two distinct vertices \(x,y\in C\) such that \(N_{I}(x)\cap N_{I}(y)=\emptyset\) (i.e., there is no \(z\in I\) such that \(xz\in E\) and \(yz\in E\)), then \(h(G)\leq|C|-1\) (and so \(h(G)=|C|-1\) by Proposition 3). We prove this by giving a (simple) winning hunter strategy \(\mathcal{S}\) using \(|C|-1\) hunters. Let \(\mathcal{S}=(S_{1},S_{2},S_{3},S_{4},S_{5})\) where \(S_{1}=S_{2}=S_{5}=C\setminus\{y\}\) and \(S_{3}=S_{4}=C\setminus\{x\}\). Let \(\mathcal{R}=(r_{0},\ldots,r_{4})\) be any rabbit trajectory. If the rabbit is not shot at the first round, i.e., \(r_{0}\notin S_{1}\), then either \(r_{0}=y\) or \(r_{0}\in I\). Accordingly, we consider both these cases below to show that \(\mathcal{S}\) is a winning hunter strategy.
**Case 1. \(\mathbf{r_{0}=y:}\)** In this case, assume \(r_{1}\notin S_{2}\) (otherwise, the rabbit will be shot). Note that \(r_{1}\in N_{I}(y)\). Since \(N_{I}(x)\cap N_{I}(y)=\emptyset\), \(r_{2}\in C\setminus\{x\}\). As \(S_{3}=C\setminus\{x\}\), \(r_{2}\in S_{3}\), and therefore, \(\mathcal{S}\) is a winning hunter strategy.
**Case 2. \(\mathbf{r_{0}\in I:}\)** In this case, if \(r_{1}\notin S_{2}\), then \(r_{1}=y\). Now, assuming that \(r_{2}\notin S_{3}\), the rabbit can either move to \(x\) (i.e, \(r_{2}=x\)) or the rabbit can move to \(N_{I}(y)\) (i.e., \(r_{2}\in N_{I}(y)\)). We have the following two cases accordingly:
1. \(\mathbf{r_{2}\in N_{I}(y):}\) This case is similar to Case 1. Since \(N_{I}(x)\cap N_{I}(y)=\emptyset\), \(r_{3}\in C\setminus\{x\}\). As \(S_{4}=C\setminus\{x\}\), \(r_{3}\in S_{4}\), and therefore, \(\mathcal{S}\) is a winning hunter strategy.
2. \(\mathbf{r_{2}=x:}\) In this case, if \(r_{3}\notin S_{4}\), then \(r_{3}\in N_{I}(x)\). Therefore, similarly to previous arguments, \(r_{4}\in C\setminus\{y\}\). Since \(S_{5}=C\setminus\{y\}\), \(\mathcal{S}\) is a winning hunter strategy.
This completes the proof.
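Theorem 2 yields an immediate test; a minimal sketch under the adjacency-dictionary encoding used earlier, where `C` and `I` are the two sides of the split partition (with \(|C|\geq 2\)):

```python
from itertools import combinations

def hunter_number_split(adj, C, I):
    # Theorem 2: h(G) = |C| iff every two clique vertices share a neighbour in I.
    for x, y in combinations(C, 2):
        if not (adj[x] & adj[y] & I):        # no common neighbour in I
            return len(C) - 1
    return len(C)
```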
The above characterisation allows us to show that the hunter number and the pathwidth of split graphs coincide.
**Corollary 3**.: _For any split graph \(G=(C\cup I,E)\), \(h(G)=pw(G)\)._
Proof.: If \(|C|=1\) and \(I=\emptyset\), then \(pw(G)=h(G)=0\). If \(|C|=1\) and \(I\neq\emptyset\), then \(pw(G)=h(G)=1\). Let us now assume that \(|C|>1\).
Since \(pw(G),h(G)\in\{|C|-1,|C|\}\) by Lemma 11 and Proposition 3, let us assume first that \(h(G)=|C|\). By Theorem 2, for any two distinct vertices \(x,y\in C\), there exists \(z\in N_{I}(x)\cap N_{I}(y)\). For purpose of contradiction, let us assume that there exists a reduced optimal path-decomposition \(P=(X_{1},\ldots,X_{\ell})\) of width \(|C|-1\). It is well known that there exists \(1\leq i\leq\ell\) with \(C\subseteq X_{i}\). Moreover, since \(P\) has width \(|C|-1\), \(X_{i}=C\). Let us prove now that \(1<i<\ell\). Let us suppose by contradiction that \(i=1\), i.e., \(X_{1}=C\) (the case \(i=\ell\) is symmetric). Since \(P\) has width \(|C|-1\) and is reduced, let \(v\in X_{1}\setminus X_{2}\) and let \(z\in N_{I}(v)\) (\(z\) exists since \(|C|>1\) and any two distinct vertices of \(C\) have a common neighbour in \(I\)). Since \(v\in X_{1}=C\) and \(v\notin\bigcup_{1<j\leq\ell}X_{j}\), no bag contains both \(v\) and \(z\), contradicting the definition of a path-decomposition. Hence, \(1<i<\ell\). Now, let \(x\in C\setminus X_{i-1}\) and \(y\in C\setminus X_{i+1}\) (\(x\) and \(y\) exist since \(P\) is reduced and \(X_{i}=C\)). Let \(z\in N_{I}(x)\cap N_{I}(y)\). If \(x=y\), no bag contains both \(x\) and \(z\) (since \(x\) appears only in \(X_{i}=C\)). If \(x\neq y\), there must exist \(1\leq j\leq\ell\) such that \(\{y,z\}\subseteq X_{j}\). Since \(y\notin\bigcup_{i<h\leq\ell}X_{h}\), \(j<i\) and since \(z\in X_{j}\setminus X_{i}\), \(z\notin\bigcup_{i<h\leq\ell}X_{h}\). Finally, since \(x\notin\bigcup_{1\leq h<i}X_{h}\), there is no bag containing both \(x\) and \(z\), contradicting the definition of a path-decomposition.
Second, let us assume that \(h(G)=|C|-1\). By Theorem 2, there exist distinct vertices \(x,y\in C\) such that \(N_{I}(x)\cap N_{I}(y)=\emptyset\). Let us prove that, in that case, \(pw(G)=|C|-1\). Let \(N_{I}(x)=\{x_{1},\ldots,x_{m}\}\) and \(I\setminus N_{I}(x)=\{y_{1},\ldots,y_{t}\}\). Then, \((\{x_{1}\}\cup(C\setminus\{y\}),\ldots,\{x_{m}\}\cup(C\setminus\{y\}),C,\{y_{1}\}\cup(C\setminus\{x\}),\ldots,\{y_{t}\}\cup(C\setminus\{x\}))\) is a path-decomposition of \(G\) with width \(|C|-1\) and, since \(pw(G)\geq|C|-1\) by Lemma 11, we get that \(pw(G)=|C|-1\).
Next, let us characterise the monotone hunter number of split graphs. We start with the following general lemma.
**Lemma 12**.: _Let \(G\) be a graph that contains a complete subgraph \(C\) such that \(N(v)\setminus C\neq\emptyset\) for every \(v\in C\). Then, \(mh(G)\geq|C|\)._
Proof.: By Lemmas 1, 4 and Proposition 1, \(mh(G)\geq h(G)\geq h(C)\geq|C|-1\). Let \(H=G[N[C]]\). We will show that \(mh(H)\geq|C|\) and so, the result will follow from Lemma 8.
Let us assume by contradiction that \(mh(H)=|C|-1\). By Lemma 9, there exists a parsimonious monotone winning hunter strategy \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) in \(H\) using \(|C|-1\) hunters.
There must be an index \(1\leq i\leq\ell\) such that \(|S_{i}\cap C|=|C|-1\). Otherwise, \((r_{0},\ldots,r_{\ell})\) where \(r_{0}\in C\setminus S_{1}\) and \(r_{j}\in C\setminus(S_{j+1}\cup\{r_{j-1}\})\) for every \(1\leq j\leq\ell\) is a winning rabbit trajectory, contradicting the fact that \(\mathcal{S}\) is winning. Hence, let \(i\) be the smallest integer such that \(|S_{i}\cap C|=|C|-1\), let \(\{v\}=C\setminus S_{i}\) and let \(w\in N(v)\setminus C\) (which exists by hypothesis). Let us define a rabbit trajectory \(\mathcal{R}=(r_{0},\ldots,r_{i-1}=v)\) such that \(r_{j}\in C\setminus(S_{j+1}\cup\{r_{j-1}\})\) for every \(1\leq j<i-1\), which is possible since \(|S_{j}\cap C|<|C|-1\) for all \(j<i\). Thus \(v\in Z_{i-1}\setminus S_{i}\). Therefore, \(v\notin\bigcup_{1\leq j\leq i}S_{j}\) since otherwise, \(v\) would have been recontaminated.
Let us show now that \(w\notin\bigcup_{1\leq j<i}S_{j}\). Towards a contradiction, let us assume that \(w\in\bigcup_{1\leq j<i}S_{j}\). If \(w\notin S_{i+1}\), then let \(r_{i}=w\): this contradicts the monotonicity of \(\mathcal{S}\) (since \(w\in(Z_{i}\cap\bigcup_{1\leq j<i}S_{j})\setminus S_{i+1}\)). Hence, \(w\in S_{i+1}\) and so there exists \(z\in C\setminus(S_{i+1}\cup\{v\})\). In this latter case, let \(r_{i}=z\), contradicting the monotonicity of \(\mathcal{S}\) (since \(z\in(Z_{i}\cap S_{i})\setminus S_{i+1}\)).
Since \(w\notin\bigcup_{1\leq j<i}S_{j}\), let \(j>i\) be the smallest integer such that \(w\in S_{j}\), or, if \(w\) is never shot, let \(j>i\) be the smallest integer such that \(v\in S_{j}\) (such an integer must exist, since otherwise the rabbit could oscillate between \(v\) and \(w\) without ever being shot). In both cases, let \(z\in C\setminus(S_{j}\cup\{v\})\). Thus, by Proposition 2, \(z\in Z_{j-1}\setminus S_{j}\), contradicting the monotonicity of \(\mathcal{S}\).
Recall that a vertex in a graph \(G\) is _simplicial_ if its neighbourhood induces a clique. In particular, in a split graph \(G=(C\cup I,E)\), a vertex \(v\in C\) is simplicial if and only if \(N(v)\setminus C=\emptyset\) (recall that \(C\) is supposed to be an inclusion-maximal clique).
**Theorem 3**.: _Let \(G=(C\cup I,E)\) be a split graph. Then, \(mh(G)=|C|-1\) if and only if there exists a simplicial vertex in \(C\). Otherwise, \(mh(G)=|C|\)._
Proof.: Note first that if there is no simplicial vertex in \(C\), then by Lemma 12, \(mh(G)\geq|C|\) and so, by Proposition 3, \(mh(G)=|C|\). Otherwise, if there exists a simplicial vertex \(v\in C\), then \(\mathcal{S}=(C\setminus\{v\},C\setminus\{v\})\) is a monotone winning hunter strategy in \(G\), and so, by Proposition 3, \(mh(G)=|C|-1\).
Note that the above results imply that there exist split graphs \(G\) for which \(mh(G)\neq h(G)\), i.e., recontamination helps in split graphs. For instance, let \(G\) be the split graph obtained from a clique \(C\) and an independent set \(I\) with \(|I|=|C|\) by adding a perfect matching between \(C\) and \(I\). By Theorems 3 and 2, we get that \(mh(G)=|C|\) and \(h(G)=|C|-1\).
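Theorem 3 is just as direct to implement, and the perfect-matching example can be checked mechanically; the sketch below reuses `hunter_number_split` from above (vertex names `c0`, `i0`, ... are ours):

```python
def monotone_hunter_number_split(adj, C, I):
    # Theorem 3: mh(G) = |C| - 1 iff some vertex of C is simplicial,
    # i.e., has no neighbour in I.
    if any(not (adj[v] & I) for v in C):
        return len(C) - 1
    return len(C)

# Perfect-matching split graph: C a clique, and c_i -- i_i for each i.
n = 5
C = {f'c{i}' for i in range(n)}
I = {f'i{i}' for i in range(n)}
adj = {f'c{i}': (C - {f'c{i}'}) | {f'i{i}'} for i in range(n)}
adj.update({f'i{i}': {f'c{i}'} for i in range(n)})
assert hunter_number_split(adj, C, I) == n - 1          # no two clique vertices share an I-neighbour
assert monotone_hunter_number_split(adj, C, I) == n     # no simplicial vertex in C
```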
To conclude this section, let us show another application of Lemma 12. Recall that an _interval graph_ is the intersection graph of a set of intervals in the real line. It is well known that, for any interval graph \(G\), \(pw(G)=\omega(G)-1\) where \(\omega(G)\) is the maximum size of a clique in \(G\), and that \(G\) admits an optimal path-decomposition where each bag induces a complete graph.
**Theorem 4**.: _Let \(G\) be an interval graph. Then, \(h(G)=mh(G)=\omega(G)-1\) if every maximum clique has a simplicial vertex. Otherwise, \(mh(G)=\omega(G)\)._
Proof.: By Theorem 1, \(pw(G)\leq mh(G)\leq pw(G)+1=\omega(G)\). Moreover, by Lemma 4, \(h(G)\geq\omega(G)-1\). If there exists a clique of maximum size that does not contain any simplicial vertex, then by Lemma 12, \(mh(G)=\omega(G)\). Otherwise, let \((X_{1},\ldots,X_{\ell})\) be an optimal path-decomposition of \(G\) such that all bags induce a complete graph. For every \(1\leq i\leq\ell\), if \(X_{i}\) contains a simplicial vertex \(v_{i}\), let \(Y_{i}=\{v_{i}\}\) and let \(Y_{i}=\emptyset\) otherwise. Then, \((X_{1}\setminus Y_{1},X_{1}\setminus Y_{1},X_{2}\setminus Y_{2},X_{2}\setminus Y _{2},X_{3}\setminus Y_{3},\ldots,X_{\ell}\setminus Y_{\ell},X_{\ell}\setminus Y _{\ell})\) is a monotone winning hunter strategy using \(\omega(G)-1\) hunters.
It follows that \(\omega(G)-1\leq h(G)\leq\omega(G)\). However, the question of determining \(h(G)\) in interval graphs when some maximum clique has no simplicial vertex seems more challenging.
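A sketch of the computation of \(mh\) given by Theorem 4, on top of networkx (assumed available); since interval graphs are chordal, they have at most \(n\) maximal cliques, so the enumeration below is cheap:

```python
import networkx as nx

def monotone_hunter_number_interval(G):
    """mh of an interval graph G (a networkx Graph), per Theorem 4."""
    cliques = list(nx.find_cliques(G))            # maximal cliques
    w = max(len(c) for c in cliques)              # clique number omega(G)

    def simplicial(v):                            # N(v) induces a clique
        nb = list(G.neighbors(v))
        return all(G.has_edge(a, b) for a in nb for b in nb if a != b)

    # maximum cliques are exactly the maximal cliques of size omega(G)
    if all(any(simplicial(v) for v in c) for c in cliques if len(c) == w):
        return w - 1
    return w
```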
### Cographs
The class of _cographs_ can be defined recursively as follows [11]. One vertex is a cograph. Given two cographs \(A\) and \(B\), their disjoint union \(A\cup B\) is a cograph, and their join \(A\Join B\) (where all edges between \(A\) and \(B\) are added) is a cograph. Note that a decomposition of a cograph (_i.e._, a building sequence of unions and joins performed from single vertices) can be computed in linear time [11].
**Theorem 5**.: \(mh(G)\) _can be computed in linear time in the class of cographs._
Proof.: Let \(A\) and \(B\) be two cographs. We prove that:
* \(mh(A\cup B)=max(mh(A),mh(B))\), and
* \(mh(A\Join B)=min(mh(A)+|V(B)|,|V(A)|+mh(B))\).
The result then follows from the linear time algorithm to compute the recursive decomposition of cographs [11].
The first statement is obvious, so let us prove the second one. Let \(G=A\Join B\) and let \(\mathcal{S}^{A}=(S^{A}_{1},\ldots,S^{A}_{\ell})\) and \(\mathcal{S}^{B}\) be two monotone winning hunter strategies for \(A\) and \(B\) and using respectively \(mh(A)\) and \(mh(B)\) hunters. Note that \(\mathcal{S}^{A}\cup V(B)=(S^{A}_{1}\cup V(B),\ldots,S^{A}_{\ell}\cup V(B))\) and \(\mathcal{S}^{B}\cup V(A)\) are both monotone winning hunter strategies in \(G\). Therefore, \(mh(G)\leq min(mh(A)+|V(B)|,|V(A)|+mh(B))\).
Let \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) be a parsimonious monotone winning hunter strategy in \(G\) using at most \(k\geq mh(G)\) hunters (it exists by Lemma 9) and such that \(\ell\) is minimized among all such strategies. If \(\ell=1\), then \(k=|S_{1}|=|V(G)|\geq min(mh(A)+|V(B)|,|V(A)|+mh(B))\). Hence, let us assume that \(\ell>1\). Let \(\mathcal{Z}=(Z_{0},\ldots Z_{\ell})\) be the set of contaminated vertices with respect to \(\mathcal{S}\). Note first that since \(\ell\) is minimum, \((S_{2},\ldots,S_{\ell})\) is not a winning hunter strategy, and so \(Z_{1}\neq V\). Let \(v\in V\setminus Z_{1}\).
Let us assume that \(v\in V(A)\) (the case \(v\in V(B)\) is symmetric). Since \(v\notin Z_{1}\) and \(Z_{0}=V\), \(N(v)\subseteq S_{1}\). Since \(B\subseteq N(v)\), we have that \(V(B)\subseteq S_{1}\). Moreover, \(V(B)\subseteq Z_{1}\). Otherwise, there exists \(w\in B\) such that \(N(w)\subseteq S_{1}\) and since \(V(A)\subseteq N(w)\), we would have \(S_{1}=V(A)\cup V(B)=V(G)\), a contradiction to the fact that \(\ell>1\).
Let us prove by induction on \(i\) that, for every \(1\leq i<\ell\), \(V(B)\subseteq Z_{i}\) and \(V(B)\subseteq S_{i}\). This statement holds for \(i=1\) by the previous paragraph. By induction, let us assume that \(V(B)\subseteq Z_{i}\) and \(V(B)\subseteq S_{i}\) for some \(1\leq i<\ell-1\). Since \(\mathcal{S}\) is monotone, \(V(B)\subseteq S_{i+1}\). Let us assume that there exists \(b\in V(B)\) such that \(b\notin Z_{i+1}\). It implies that \(V(A)\cap Z_{i}\subseteq S_{i+1}\). Therefore, \(Z_{i}=(V(A)\cap Z_{i})\cup(V(B)\cap Z_{i})\subseteq S_{i+1}\), which implies that \(Z_{i+1}=\emptyset\), contradicting the minimality of \(\ell\). Thus \(V(B)\subseteq Z_{i+1}\) and the induction hypothesis holds for \(i+1\). In particular, \(V(B)\subseteq Z_{\ell-1}\) and so, by monotonicity, \(V(B)\subseteq S_{\ell}\). Therefore, \(V(B)\subseteq S_{i}\) for all \(1\leq i\leq\ell\). Since \(V(B)\subseteq S_{i}\) for all \(1\leq i\leq\ell\), the strategy \(\mathcal{S}\cap V(A)=(S_{1}\cap V(A),\ldots,S_{\ell}\cap V(A))\) is a
monotone winning hunter strategy in \(G[A]\) using \(k-|V(B)|\) hunters. Hence, \(k-|V(B)|\geq mh(A)\) which concludes the proof.
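The two equalities established in this proof translate directly into a recursion over a cotree. A minimal sketch, with an ad-hoc cotree encoding of our own (a leaf is a vertex name, an internal node a triple `(op, left, right)` with `op` in `{'union', 'join'}`):

```python
def size(t):
    return 1 if isinstance(t, str) else size(t[1]) + size(t[2])

def mh_cograph(t):
    if isinstance(t, str):                        # single vertex
        return 0
    op, a, b = t
    if op == 'union':
        return max(mh_cograph(a), mh_cograph(b))  # mh(A u B) = max(mh(A), mh(B))
    return min(mh_cograph(a) + size(b),           # mh(A join B) =
               size(a) + mh_cograph(b))           # min(mh(A)+|V(B)|, |V(A)|+mh(B))

# The cograph of Lemma 13 below with a = 2: A = B = K_2 plus two isolated
# vertices, G = A join B (only the shape matters here, so we reuse A for B);
# the recursion gives mh(G) = 3a - 1 = 5.
K2 = ('join', 'u', 'v')
A = ('union', ('union', K2, 'w'), 'x')
assert mh_cograph(('join', A, A)) == 5
```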
Once again, the case of the hunter number seems more challenging. In particular, the following lemma shows that recontamination may help in cographs.
**Lemma 13**.: _For every \(k\geq 2\), there exists a cograph \(G\) such that \(h(G)\geq k\) and \(mh(G)\geq\frac{3}{2}h(G)-1\)._
Proof.: Let \(a\geq 1\). Let \(A\) and \(B\) be two (isomorphic) cographs, each consisting of the disjoint union of a complete graph with \(a\) vertices (denoted by \(K_{A}\) and \(K_{B}\) respectively) and \(a\) isolated vertices (so \(|V(A)|=|V(B)|=2a\)). Let \(G=A\Join B\). Clearly, \(h(A)=mh(A)=h(B)=mh(B)=a-1\) and, by the proof of Theorem 5, \(mh(G)=3a-1\). Note also that \(h(G)\geq 2a\) by Lemma 4. Now, \((V(A),K_{A}\cup K_{B},K_{B},V(A))\) is a (non-monotone) winning hunter strategy in \(G\) using \(2a\) hunters and so \(h(G)=2a\).
### Trees
This section is devoted to showing that the monotone hunter number of trees can be computed in polynomial time. Roughly, we show that a Parsons' like lemma [26] holds for the monotone hunter number in trees and then the algorithm follows the one for computing the pathwidth of trees in [13].
Let us start with the easy case of paths.
**Proposition 4**.: _Let \(P\) be any path with at least \(4\) vertices. Then, \(1=h(P)<mh(P)=2\)._
Proof.: The fact that \(h(P)=1\) has been proven in [1], and the fact that \(mh(P)\leq 2\) is easy.
Towards a contradiction, let us assume that there exists a winning monotone hunter strategy in \(P\) using one hunter and let \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) be a shortest such strategy (i.e., minimising \(\ell\)). Let \(\mathcal{Z}=(Z_{0},\ldots,Z_{\ell})\) be the sequence of sets of contaminated vertices with respect to \(\mathcal{S}\). Let \(w\in V(P)\) be such that \(S_{1}=\{w\}\). Note that \(w\in Z_{1}\) and so, \(\ell>1\). Since \(P\) has at least \(4\) vertices, there exist \(x,y\in V(P)\) such that \(x\in N(w)\) and \(y\in N(x)\setminus\{w\}\). We will prove by induction on \(i\) that \(S_{i}=\{w\}\) for all \(1\leq i\leq\ell\). The base case (\(i=1\)) is already proven. Assume now that, for some \(1\leq q<\ell\), it holds that \(S_{j}=\{w\}\) for all \(1\leq j\leq q\). Thus, \(x,y\notin\bigcup_{1\leq j\leq q}S_{j}\) and so, by Proposition 2, \(w\in Z_{q}\). Hence, by the monotonicity of \(\mathcal{S}\), we have that \(w\in S_{q+1}\), completing the induction. Therefore, \(x,y\notin\bigcup_{1\leq j\leq\ell}S_{j}\) and so, by Proposition 2, \(w,x,y\in Z_{\ell}\), contradicting the fact that \(\mathcal{S}\) is a winning strategy in \(P\). Therefore, \(mh(P)\geq 2\).
We then need the following two technical results.
**Proposition 5**.: _Let \(G=(V,E)\) be any connected graph and \(H\) be a connected subgraph of \(G\). Let \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) be any parsimonious monotone winning hunter strategy in \(G\). Moreover, let \(1\leq i\leq\ell\) and let \(x,y\in V(H)\) with \(x\in\bigcup_{j<i}S_{j}\) and \(y\in Z_{i-1}\), chosen so as to minimise the distance between \(x\) and \(y\) in \(H\). If \(x,y\notin S_{i}\), then \(xy\in E(H)\)._
Proof.: Note first that \(x\neq y\): otherwise, \(\mathcal{S}\) would not be monotone, since we would have \(y=x\in(\bigcup_{j<i}S_{j}\cap Z_{i-1})\setminus S_{i}\). Let \(P\) be a shortest path from \(x\) to \(y\) in \(H\). Let \(a\) be the neighbour of \(x\) in \(P\). If \(a=y\), then \(\{x,y\}\in E(G)\), and the claim holds. Hence, we may assume that \(a\neq y\). By minimality of the distance between \(x\) and \(y\), \(a\notin Z_{i-1}\) and \(a\notin\bigcup_{j<i}S_{j}\). Let \(b\neq x\) be the other neighbour of \(a\) in \(P\). We show that \(b\notin\bigcup_{j<i}S_{j}\). If \(b\neq y\), then, by minimality of the distance between \(x\) and \(y\), \(b\notin\bigcup_{j<i}S_{j}\). If \(b=y\), since \(y\in Z_{i-1}\setminus S_{i}\), then \(y\notin\bigcup_{j<i}S_{j}\), because otherwise this would contradict the monotonicity of \(\mathcal{S}\). Therefore, by Proposition 2, \(a\in Z_{q}\) for every \(q<i\). In particular, \(a\in Z_{i-1}\), a contradiction.
**Lemma 14**.: _Let \(G=(V,E)\) be any graph and \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) be a parsimonious monotone winning hunter strategy in \(G\) that uses at most \(k\) hunters. Let \(H\) be a connected subgraph of \(G\) with \(|V(H)|>1\). If \(S_{i}\cap V(H)\neq\emptyset\) and \(S_{j}\cap V(H)\neq\emptyset\) for some \(1\leq i<j\leq\ell\), then \(S_{z}\cap V(H)\neq\emptyset\) for every \(i\leq z\leq j\)._
Proof.: Let \(2\leq i+1<j\leq\ell\) be such that \(V(H)\cap S_{i}\neq\emptyset\) and \(V(H)\cap S_{j}\neq\emptyset\). Towards a contradiction, let us assume that there exists \(i<z<j\) such that \(S_{z}\cap V(H)=\emptyset\). Let \(X=V(H)\cap\bigcup_{q<z}S_{q}\). Since \(V(H)\cap S_{i}\neq\emptyset\) and \(i<z\), we get that \(X\neq\emptyset\). Let \(Y=V(H)\cap Z_{z-1}\). Since \(S_{j}\cap V(H)\neq\emptyset\) and \(\mathcal{S}\) is parsimonious, we have that \(Z_{j-1}\cap V(H)\neq\emptyset\). Let \(u\in Z_{j-1}\cap S_{j}\cap V(H)\). By Lemma 6, \(u\in Z_{q}\) for every \(q<j\). In particular, \(u\in Z_{z-1}\) and so \(Y\neq\emptyset\). Let \(x\in X\) and \(y\in Y\) such that the distance between \(x\) and \(y\) in \(H\) is minimum. By Proposition 5, \(xy\in E\). Thus, since \(y\in Z_{z-1}\setminus S_{z}\), we get that \(x\in Z_{z}\). Therefore, since \(\mathcal{S}\) is monotone and \(x\in\bigcup_{q<z}S_{q}\), we must have \(x\in S_{z+1}\). By Lemma 10, since \(x\in\bigcup_{q<z}S_{q}\) and \(x\in S_{z+1}\), then \(x\in S_{z}\). Hence, \(V(H)\cap S_{z}\neq\emptyset\), a contradiction.
Let \(T\) be a tree and \(v\in V(T)\). A _branch_ at \(v\) is any connected component of \(T-v\). A _star_ is any tree with at least two vertices and at most one vertex with degree at least two. Roughly speaking, Parsons' Lemma [26] states that, for any tree \(T\) and \(k\in\mathbb{N}\), \(pw(T)\geq k+1\) if and only if there exists a vertex \(v\) such that at least three branches at \(v\) have pathwidth at least \(k\). Here, we adapt this lemma in the case of the monotone hunter number of trees.
**Lemma 15** (Parsons' like lemma).: _Let \(T=(V,E)\) be any tree._
* \(mh(T)=0\) _if and only if_ \(|V|=1\)_;_
* \(mh(T)=1\) _if and only if_ \(T\) _is a star;_
* \(mh(T)=2\) _if and only if_ \(T\) _is not a star and contains a path_ \(P\) _such that_ \(T\setminus P\) _is a forest of stars and isolated vertices;_
* _For every_ \(k\geq 3\)_,_ \(mh(T)\geq k\) _if and only if there exists a vertex_ \(v\in V\) _such that at least three branches at_ \(v\) _have monotone hunter number at least_ \(k-1\)_._
Proof.: The first item is trivial. Then, if \(T\) is a star, then shooting twice at the vertex of maximum degree is a monotone winning hunter strategy using one hunter, and so, (since \(|V(T)|\geq 2\)), \(mh(T)=1\). If \(T\) is not a star (and \(|V(T)|>1\)), then it contains a path with at least \(4\) vertices as a subgraph. By Proposition 4 and Lemma 8, it follows that \(mh(T)\geq 2\), which concludes the proof of the second item. If \(T\) is not reduced to a star and contains a path \(P\) such that \(T\setminus P\) is a forest of stars and isolated vertices, it is easy to show that \(mh(T)\leq 2\). Otherwise, \(T\) contains a vertex \(v\) such that at least three components of \(T-v\) contain a path with \(4\) vertices. The "if" statement of the fourth item then shows that \(mh(T)>2\) and concludes the proof of the third item.
Let us prove the fourth item. Let \(k\geq 3\).
**Proof of \(\Leftarrow\):** Let us first assume that there exists some vertex \(v\) and three branches \(B_{1}\), \(B_{2}\) and \(B_{3}\) at \(v\) such that \(mh(B_{1}),mh(B_{2}),mh(B_{3})\geq k-1\). We will show that \(mh(T)\geq k\). Towards a contradiction, let us assume that \(mh(T)<k\). By Lemma 9, there exists a parsimonious monotone winning hunter strategy \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) in \(T\) that uses at most \(k-1\) hunters. Let \(\mathcal{Z}=(Z_{0},\ldots,Z_{\ell})\) be the sequence of sets of contaminated vertices with respect to \(\mathcal{S}\).
For \(j\in\{1,2,3\}\), let \(1\leq i_{j}\leq\ell\) be the minimum integer such that \(V(B_{j})\cap Z_{i_{j}}=\emptyset\). Note that, by Lemma 6, \(V(B_{j})\cap Z_{q}=\emptyset\) for all \(i_{j}\leq q\leq\ell\), and since \(\mathcal{S}\) is parsimonious, \(V(B_{j})\cap S_{q}=\emptyset\) for all \(i_{j}<q\leq\ell\). Note also that, since \(mh(B_{j})\geq k-1\geq 2\), \(B_{j}\) has at least two vertices. Since \(Z_{i_{j}-1}\cap V(B_{j})\neq\emptyset\) and \(Z_{i_{j}}\cap V(B_{j})=\emptyset\), it implies that \(S_{i_{j}}\cap V(B_{j})\neq\emptyset\) (otherwise, let
\(w\in(Z_{i_{j}-1}\cap V(B_{j}))\) and \(u\in N(w)\cap V(B_{j})\), then \(u\in Z_{i_{j}}\cap V(B_{j})\), a contradiction with the definition of \(i_{j}\)). W.l.o.g., let us assume that \(i_{1}\leq i_{2}\leq i_{3}\).
We will show that there exists a round \(j_{2}\) during which all the \(k-1\) hunters will have to shoot on vertices of \(B_{2}\), and that \(v\in Z_{j_{2}-1}\), which will lead to a contradiction.
For any \(1\leq i\leq 3\), let \(j_{i}\) be an index such that \(|S_{j_{i}}\cap V(B_{i})|=k-1\). These indices exist as, otherwise, by Lemma 8, \(mh(B_{i})<k-1\). We will first show that \(j_{2}<i_{3}\), which will be used to prove that \(v\in Z_{j_{2}-1}\). Observe that \(j_{2}\leq i_{2}\leq i_{3}\). Moreover, \(j_{2}\neq i_{3}\) since \(S_{i_{3}}\cap V(B_{3})\neq\emptyset\), \(|S_{j_{2}}\cap V(B_{2})|=k-1\) and \(\mathcal{S}\) uses at most \(k-1\) hunters. Hence, \(j_{2}<i_{3}\). Therefore, by Proposition 2 and Lemma 14, \(V(B_{3})\cup\{v\}\subseteq Z_{q}\) for all \(q\leq j_{2}\). In particular, \(v\in Z_{j_{2}-1}\). Moreover, since \(S_{j_{2}}\subseteq V(B_{2})\), \(v\notin S_{j_{2}}\). Hence, \(x\in Z_{j_{2}}\) where \(x\) is the neighbour of \(v\) in \(B_{1}\), i.e., \(Z_{j_{2}}\cap V(B_{1})\neq\emptyset\).
Since \(Z_{i_{1}}\cap V(B_{1})=\emptyset\), if \(i_{1}<j_{2}\), then there is a contradiction to Lemma 6. Otherwise, \(j_{2}<i_{1}\leq i_{2}\) (because \(j_{2}\neq i_{1}\)) and so either \(j_{1}<j_{2}<i_{1}\) or \(j_{2}<j_{1}<i_{2}\) (because \(j_{1}\neq j_{2}\), \(j_{1}\leq i_{1}\leq i_{2}\) and \(j_{1}\neq i_{2}\)), both contradicting Lemma 14.
**Proof of \(\Rightarrow\):** Now let us assume that, for every \(v\in V\), at most two branches at \(v\) have monotone hunter number at least \(k-1\). Let us prove that there exists a parsimonious monotone winning hunter strategy \(\mathcal{S}\) in \(T\) using at most \(k-1\) hunters.
First, let us assume that there exists a path \(P=(v_{1},\ldots,v_{p})\) such that for any connected component \(C\) of \(T\setminus P\), \(mh(C)<k-1\). The following hunter strategy is parsimonious monotone winning and uses at most \(k-1\) hunters. The strategy consists of \(p\) phases executed sequentially from \(i=1\) to \(p\). Phase \(i\) consists in shooting \(v_{i}\) at each round, and in using the \(k-2\) remaining shots to clear sequentially each connected component of \(T\setminus P\) that is adjacent to \(v_{i}\) (this is possible since each of these components has monotone hunter number at most \(k-2\)). Finally, the last round of Phase \(i\) (except for \(i=p\)) consists in shooting at both \(v_{i}\) and \(v_{i+1}\) (recall that \(k-1\geq 2\)).
Let us now show that a path \(P\), defined as in the previous paragraph, exists. Let \(X\) be the set of vertices \(v\) such that exactly two branches at \(v\) have monotone hunter number at least \(k-1\). First, let us assume that \(X\neq\emptyset\) and let us show that it induces a path. Let \(x,y\in X\) and let \(z\) be any internal vertex of the path between \(x\) and \(y\). Let \(B\) (resp., \(B^{\prime}\)) be the branch at \(z\) that contains \(x\) (resp., that contains \(y\)). One branch \(B_{x}\) at \(x\) with \(mh(B_{x})\geq k-1\) is a subgraph of \(B\) and so, by Lemma 8, \(mh(B)\geq k-1\). Similarly, \(mh(B^{\prime})\geq k-1\) and so, \(z\in X\) and therefore \(X\) induces a subtree of \(T\). If there exists a node \(w\) of degree at least three in \(T[X]\), by similar arguments, there are at least three branches at \(w\) with monotone hunter number at least \(k-1\), a contradiction with the initial hypothesis. Hence, \(X\) induces a path \((v_{2},\ldots,v_{p-1})\). Let \(B_{1}\) be the branch at \(v_{2}\) not containing \(v_{3}\) such that \(mh(B_{1})=k-1\) (if \(v_{3}\) does not exist, \(B_{1}\) is any branch at \(v_{2}\) with \(mh(B_{1})\geq k-1\)) and let \(v_{1}\) be the neighbour of \(v_{2}\) in \(B_{1}\). Symmetrically, let \(B_{p}\) be the branch at \(v_{p-1}\) not containing \(v_{p-2}\) such that \(mh(B_{p})=k-1\) (if \(p-1=2\), let \(B_{p}\) be the branch at \(v_{2}\), distinct from \(B_{1}\), and with \(mh(B_{p})\geq k-1\)) and let \(v_{p}\) be the neighbour of \(v_{p-1}\) in \(B_{p}\). Then, \(P=(v_{1},v_{2},\ldots,v_{p-1},v_{p})\) satisfies the desired conditions.
Finally, if \(X=\emptyset\), let \(v_{1}\) be any vertex of \(T\). We build the path \(P\) starting from \(v_{1}\) as follows. Let us assume that a path \((v_{1},\ldots,v_{i})\) has already been built for some \(i\geq 1\). If there exists a branch \(B\) at \(v_{i}\), not containing \(v_{i-1}\) (if \(i>1\)), and with monotone hunter number at least \(k-1\) (if any, such a branch must be unique since \(X=\emptyset\)), then, let \(v_{i+1}\) be the neighbour of \(v_{i}\) in \(B\). The process ends when no such branch \(B\) exists and the obtained path satisfies the desired conditions.
This completes the proof.
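Before turning to the polynomial-time algorithm, let us note that Lemma 15 already yields a simple, though exponential-time, recursive procedure for computing \(mh(T)\), which is convenient as a reference implementation on small trees. The following Python sketch is purely illustrative and all names in it are ours; trees are assumed to be given as adjacency dictionaries mapping each vertex to the set of its neighbours.

```
# A naive recursive computation of mh(T) following Lemma 15 (illustrative
# sketch; exponential time, since branch values are recomputed everywhere).

def is_star(adj):
    # A star: at least two vertices, at most one vertex of degree >= 2.
    return len(adj) >= 2 and sum(1 for v in adj if len(adj[v]) >= 2) <= 1

def branches_at(adj, v):
    # Yield the connected components of T - v as adjacency sub-dictionaries.
    seen = {v}
    for start in adj[v]:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            comp.add(u)
            stack.extend(adj[u] - {v})
        yield {u: adj[u] - {v} for u in comp}

def mh(adj):
    if len(adj) == 1:
        return 0                                # item 1 of Lemma 15
    if is_star(adj):
        return 1                                # item 2
    best = 2                                    # items 1-3: otherwise mh(T) >= 2
    for v in adj:
        vals = sorted((mh(b) for b in branches_at(adj, v)), reverse=True)
        if len(vals) >= 3:
            best = max(best, vals[2] + 1)       # item 4, applied at vertex v
    return best
```

For instance, on the path with four vertices, `mh` returns \(2\), in accordance with the third item of Lemma 15.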
We design a dynamic programming algorithm to compute the monotone hunter number of a tree \(T\). Let us first root \(T\) at any vertex \(r\in V(T)\). Let \(T[u]\) denote the subtree of \(T\) induced
by \(u\in V(T)\) and all the descendants of \(u\). For any \(x_{1},\cdots,x_{p}\in V(T[u])\), let \(T[u,x_{1},\ldots,x_{p}]\) denote the subtree obtained from \(T[u]\) after the vertices of \(\bigcup_{i\leq p}V(T[x_{i}])\) have been removed. Finally, for \(k\geq 2\), a vertex \(x\in V(T)\) is _\(k\)-critical_ if and only if \(mh(T[x])=k\) and there exist two children \(v_{1}\) and \(v_{2}\) of \(x\) in \(T\) such that \(mh(T[v_{1}])=mh(T[v_{2}])=k\). In the case \(k=1\), we will say that a vertex \(x\) is _\(1\)-critical_ if and only if \(mh(T[x])=1\) and there exists a unique child \(v\) of \(x\) such that \(mh(T[v])=1\).
**Remark:** Let us recall that \(mh(T[w])\geq 1\) if \(T[w]\) contains at least one edge, i.e., \(w\) has at least one child. Therefore, if a vertex \(w\) is \(1\)-critical in \(T[w]\), then \(T[w]\) is a star centred in the child \(w^{\prime}\) of \(w\) such that \(mh(T[w^{\prime}])=1\), i.e., \(w^{\prime}\) is the only vertex of \(T[w]\) that has degree at least \(2\). Moreover, if a vertex \(w\) is not \(1\)-critical and \(mh(T[w])=1\), then \(T[w]\) is a star rooted in \(w\), i.e., \(w\) is the only vertex that may have degree greater than \(1\) in \(T[w]\) (\(w\) may also have degree \(1\) when \(|V(T[w])|=2\)).
The next lemma is used throughout the proofs of Corollary 4 and Lemma 17.
**Lemma 16**.: _Let \(T\) be any tree rooted in a vertex \(v\in V(T)\). Let \(k=\max_{1\leq i\leq d}\{mh(T[v_{i}])\}\), where \(v_{1},\ldots,v_{d}\) are the children of \(v\) in \(T\). For any vertex \(w\in V(T)\), there is at most \(1\) branch at \(w\) in \(T\) that has monotone hunter number at least \(k+1\). Moreover, \(mh(T)\leq k+1\)._
Proof.: Note first that, by definition of \(k\), there is no branch at \(v\) that has monotone hunter number at least \(k+1\). Hence, let \(w\) be any vertex of \(V(T)\setminus\{v\}\) and let us denote by \(x\) the child of \(v\) such that \(w\in V(T[x])\). By definition of \(k\), \(mh(T[x])\leq k\). For any child \(z\) of \(w\), \(T[z]\) is a subtree of \(T[x]\). Thus, by Lemma 8, \(mh(T[z])\leq k\) and so there is at most \(1\) branch at \(w\) that has monotone hunter number at least \(k+1\). To conclude, note that the hunter strategy consisting in applying sequentially all the monotone winning hunter strategies for the branches at \(v\), while shooting continuously on \(v\), is a monotone winning hunter strategy in \(T\) using at most \(k+1\) hunters, i.e., \(mh(T)\leq k+1\).
The upcoming corollary, obtained from Lemmas 15 and 16, describes how to compute \(mh(T)\) of a rooted tree \(T\), bottom-up, from its leaves to the root, such that, for any \(u\in V(T)\), \(mh(T[u])\) is computed from the values \((mh(T[u_{i}]))_{u_{i}\ child\ of\ u}\) and from the critical vertices the subtrees \(T[u_{i}]\) contain.
**Corollary 4**.: _Let \(T\) be a rooted tree, \(u\in V(T)\) and let \(u_{1},\ldots,u_{d}\) be the \(d\) children of \(u\) in \(T\). Let us order the children of \(u\) such that \(k=mh(T[u_{1}])\geq mh(T[u_{2}])\geq\cdots\geq mh(T[u_{d}])\)._
1. _If_ \(d=0\)_, then_ \(mh(T[u])=0\)_;_
2. _If_ \(k=0\) _and_ \(d>0\)_, then_ \(mh(T[u])=1\)_;_
3. _If_ \(k=1\)_,_ \(d=1\) _and the only child of_ \(u\) _is not_ \(1\)_-critical, then_ \(mh(T[u])=1\)_;_
4. _If_ \(k=1\) _and_ \(d=1\) _and the only child of_ \(u\) _is_ \(1\)_-critical, then_ \(mh(T[u])=2\)_;_
5. _If_ \(k=1\) _and_ \(d\geq 2\)_, then_ \(mh(T[u])=2\)_;_
6. _If_ \(k>1\) _and_ \(mh(T[u_{3}])=k\)_, then_ \(mh(T[u])=k+1\)_;_
7. _If_ \(k>1\)_,_ \(mh(T[u_{2}])=k\) _and (_\(mh(T[u_{3}])<k\) _or_ \(d=2\)_), and_ \(T[u_{1}]\) _or_ \(T[u_{2}]\) _contains a_ \(k\)_-critical vertex, then_ \(mh(T[u])=k+1\)_._
8. _If_ \(k>1\)_,_ \(mh(T[u_{2}])=k\) _and (_\(mh(T[u_{3}])<k\) _or_ \(d=2\)_), and neither_ \(T[u_{1}]\) _nor_ \(T[u_{2}]\) _contains a_ \(k\)_-critical vertex, then_ \(mh(T[u])=k\)_._
9. _If_ \(k>1\)_,_ \(mh(T[u_{1}])=k\) _and (_\(mh(T[u_{2}])<k\) _or_ \(d=1\)_),_ \(T[u_{1}]\) _contains a_ \(k\)_-critical vertex_ \(x\) _and_ \(mh(T[u,x])=k\)_, then_ \(mh(T[u])=k+1\)_;_
10. _If_ \(k>1\)_,_ \(mh(T[u_{1}])=k\) _and (_\(mh(T[u_{2}])<k\) _or_ \(d=1\)_) and_ \(T[u_{1}]\) _contains a_ \(k\)_-critical vertex_ \(x\) _and_ \(mh(T[u,x])<k\)_, then_ \(mh(T[u])=k\)_;_
11. _If_ \(k>1\)_,_ \(mh(T[u_{1}])=k\) _and (_\(mh(T[u_{2}])<k\) _or_ \(d=1\)_) and_ \(T[u_{1}]\) _does not contain any_ \(k\)_-critical vertex, then_ \(mh(T[u])=k\)_._
Proof.: Each statement can be proved using Lemma 15.
1. If \(d=0\), then \(T[u]\) is a single vertex and by Lemma 15, \(mh(T[u])=0\).
2. If \(k=0\) and \(d>0\), then there exists at least one child \(w\) of \(u\) such that \(mh(T[w])=0\) and there is no child \(w^{\prime}\) of \(u\) such that \(mh(T[w^{\prime}])>0\). Thus, \(T[u]\) contains at least one edge and is a star graph centred in \(u\). By Lemma 15, we get that \(mh(T[u])=1\).
3. If \(k=1\), \(d=1\) and the only child of \(u\) is not \(1\)-critical, i.e., \(u_{1}\) is the only child of \(u\) and \(mh(T[u_{1}])=1\). Thus, \(T[u_{1}]\) is a star centred at \(u_{1}\). Therefore, \(T[u]\) is also a star centred at \(u_{1}\), and so, by Lemma 15, \(mh(T[u])=1\).
4. If \(k=1\), \(d=1\) and the only child of \(u\) is \(1\)-critical, i.e., \(u_{1}\) is the only child of \(u\), \(mh(T[u_{1}])=1\) and there exists a child \(w\) of \(u_{1}\) such that \(mh(T[w])=1\). By Lemma 15, \(T[w]\) contains at least \(1\) edge. Let \(v\) be a child of \(w\) in \(T[w]\). Thus, \(T[u]\) contains the path \((v,w,u_{1},u)\), and so, is not a star. Moreover, \(T[u]\setminus u\) is a forest of stars, and so, by Lemma 15, \(mh(T[u])=2\).
5. If \(k=1\) and \(d\geq 2\), then \(u\) has at least two children. Thus, by Lemma 15, \(T[u_{1}]\) contains at least one edge. Let \(w\) be a child of \(u_{1}\) in \(T[u_{1}]\). Note that \(T[u]\) contains the path \(P=(u_{2},u,u_{1},w)\) as a subgraph and that \(T[u]\setminus u\) is a forest of stars. Therefore, by Lemma 15, \(mh(T[u])=2\).
6. If \(k>1\) and \(mh(T[u_{3}])=k\), then \(mh(T[u])\geq k+1\) by Lemma 15. Moreover, by Lemma 16, we get that \(mh(T[u])\leq k+1\), and so \(mh(T[u])=k+1\).
7. If \(k>1\), \(mh(T[u_{1}])=mh(T[u_{2}])=k\) and (\(mh(T[u_{3}])<k\) or \(d=2\)) and \(T[u_{1}]\) or \(T[u_{2}]\) contains a \(k\)-critical vertex, let \(x\) be a \(k\)-critical vertex in \(T[u]\setminus\{u\}\). W.l.o.g., let us assume that \(x\in V(T[u_{1}])\). Let us denote by \(y\) the parent of \(x\) in \(T[u]\). Note that \(y\) may be equal to \(u\). Let \(B_{1}\) and \(B_{2}\) denote the two branches at \(x\) in \(T[x]\) such that \(mh(B_{1})=mh(B_{2})=k\). Let \(B_{y}\) denote the branch at \(x\) in \(T[u]\) that contains \(y\). Note that \(B_{y}\) contains \(T[u_{2}]\) as a subgraph. Thus, by Lemma 8, \(mh(B_{y})\geq mh(T[u_{2}])=k\). Therefore, since there are \(3\) branches at \(x\) with monotone hunter number at least \(k\), by Lemma 15, we get that \(mh(T[u])\geq k+1\). Finally, by Lemma 16, \(mh(T[u])\leq k+1\), and so \(mh(T[u])=k+1\).
8. If \(k>1\), \(mh(T[u_{1}])=mh(T[u_{2}])=k\) and (\(mh(T[u_{3}])<k\) or \(d=2\)) and neither \(T[u_{1}]\) nor \(T[u_{2}]\) contains a \(k\)-critical vertex, then there do not exist three branches at \(u\) having monotone hunter number at least \(k\). Note that for any \(w\in V(T[u_{1}])\) (resp., \(w\in V(T[u_{2}])\)), \(w\) is not \(k\)-critical, and so, there is at most \(1\) branch at \(w\) in \(T[w]\) that has monotone hunter number at least \(k\). Therefore, since every branch at \(w\) in \(T[u]\) is also a branch at \(w\) in \(T[u_{1}]\), except the one containing the parent of \(w\), we get that there are at most \(2\) branches at \(w\) that have monotone hunter number at least \(k\). Finally, if \(d>2\), for any \(2<j\leq d\) and any vertex \(w\in V(T[u_{j}])\), let us denote by \(y\) the parent of \(w\) in \(T[u_{j}]\cup\{u\}\). Note that for any vertex \(z\in N(w)\setminus\{y\}\), \(T[z]\) is a subgraph of \(T[u_{j}]\) and
so, by Lemma 8, \(mh(T[z])\leq mh(T[u_{j}])<k\). Therefore, there is no vertex \(w\in V(T[u])\) such that \(w\) has 3 branches that have monotone hunter number at least \(k\). Hence, by Lemma 15, \(mh(T[u])\leq k\). By Lemma 8 and because \(mh(T[u_{1}])=k\), \(mh(T[u])=k\).
9. If \(k>1\), \(mh(T[u_{1}])=k\) and \((mh(T[u_{2}])<k\) or \(d=1)\) and \(T[u_{1}]\) contains a \(k\)-critical vertex \(x\) and \(mh(T[u,x])=k\), then, \(x\) has 3 branches with monotone hunter number at least \(k\) in \(T[u]\). Therefore, by Lemma 15, \(mh(T[u])\geq k+1\). Finally, by Lemma 16, \(mh(T[u])\leq k+1\) and so \(mh(T[u])=k+1\).
10. If \(k>1\), \(mh(T[u_{1}])=k\) and \((mh(T[u_{2}])<k\) or \(d=1)\) and \(T[u_{1}]\) contains a \(k\)-critical vertex \(x\) and \(mh(T[u,x])<k\), then there do not exist three branches at \(u\) with monotone hunter number at least \(k\). Let \(w\) be any vertex of \(T[u]\setminus\{u\}\). Let us assume first that \(w\) is not \(k\)-critical. Thus, there is at most one branch at \(w\) in \(T[w]\) with monotone hunter number \(k\). Since there is only one other branch left for \(w\) in \(T[u]\) (the one containing its parent), there do not exist three branches at \(w\) in \(T[u]\) with monotone hunter number at least \(k\). Let us assume now that \(w\) is \(k\)-critical. Thus, \(w\in V(T[u_{1}])\) since \(mh(T[u_{j}])<k\) for any \(2\leq j\leq d\). Note that \(w=x\). Indeed, towards a contradiction, let us assume that \(w\neq x\). Then, the branch at \(w\) containing its parent in \(T[u_{1}]\) contains \(T[x]\) as a subgraph and/or the branch at \(x\) containing its parent in \(T[u_{1}]\) contains \(T[w]\). Thus, in every case, due to Lemma 8, there exists a vertex in \(T[u_{1}]\) with three branches, each having monotone hunter number at least \(k\). Therefore, by Lemma 15, \(mh(T[u_{1}])\geq k+1\), which is a contradiction to the definition of \(k\). To sum up, for any vertex \(w\in V(T[u])\setminus\{x\}\), there exist at most two branches at \(w\) with monotone hunter number at least \(k\). Let us recall that, by hypothesis, \(mh(T[u,x])<k\). Hence, there are only two branches at \(x\) that have monotone hunter number at least \(k\) (otherwise, by Lemma 15 and Lemma 8, \(k<mh(T[x])\leq mh(T[u_{1}])\), a contradiction). Therefore, there is no vertex \(w\) in \(T[u]\) such that there exist at least 3 branches at \(w\), each having monotone hunter number at least \(k\). Thus, by Lemma 15, \(mh(T[u])\leq k\) and, by Lemma 8, \(mh(T[u])\geq mh(T[u_{1}])\geq k\), and so \(mh(T[u])=k\).
11. If \(k>1\), \(mh(T[u_{1}])=k\) and \((mh(T[u_{2}])<k\) or \(d=1)\), and \(T[u_{1}]\) does not contain any \(k\)-critical vertex, then there is no vertex in \(T[u]\) with three branches having monotone hunter number at least \(k\) (as in the previous case). Therefore, by Lemma 15, \(mh(T[u])\leq k\) and, by Lemma 8, \(mh(T[u])\geq mh(T[u_{1}])\geq k\), and so, \(mh(T[u])=k\).
We need the following technical definition to finally describe our algorithm, which will recursively build a "label" for each vertex from the labels of its children. Recall that, for any \(x_{1},\ldots,x_{p}\in V(T[u])\), the tree \(T[u,x_{1},\ldots,x_{p}]\) denotes the subtree obtained from \(T[u]\) after the vertices of \(\bigcup_{i\leq p}V(T[x_{i}])\) have been removed.
**Definition 1**.: _For any tree \(T[u]\), the label \(\lambda(u,T[u])\) of \(u\) is a list of integers \((a_{1},\ldots,a_{p})\), where \(a_{1}>a_{2}>\cdots>a_{p}\geq 0\) and \(a_{p}\) may be marked with a star, and there exists a set of vertices \(\{\ell_{1},\ldots,\ell_{p}\}\) such that:_
* \(mh(T[u])=a_{1}\)_._
* _For_ \(1\leq i<p\)_,_ \(mh(T[u,\ell_{1},\ldots,\ell_{i}])=a_{i+1}\) _and_ \(\ell_{i}\) _is an_ \(a_{i}\)_-critical vertex in_ \(T[u,\ell_{1},\ldots,\ell_{i-1}]\)_. We will say that_ \(\ell_{i}\) _is associated to_ \(a_{i}\)_._
* \(\ell_{p}=u\)_. If_ \(a_{p}\) _is not marked with a star (_\(*\)_), then there is no_ \(a_{p}\)_-critical vertex in_ \(T[u,\ell_{1},\ldots,\ell_{p-1}]\)_. If_ \(a_{p}\) _is marked (with a star), then_ \(\ell_{p}\) _is an_ \(a_{p}\)_-critical vertex. In both cases,_ \(T[u,\ell_{p}]=T[u,u]\) _is the empty tree._
Examples. Let us exemplify the above definition. In the following examples, we start from two trees \(T_{1}\) and \(T_{2}\) whose roots have "almost" the same labels and show that adding one vertex adjacent to their root may lead to two new trees whose roots have "very different" labels. In particular, this illustrates the importance of the presence of a star on the last integer of a label.
First, let \(T_{1}\) be a tree rooted in a vertex \(u_{1}\) with the label \(\lambda(u_{1},T_{1}[u_{1}])=(a_{1},a_{2},a_{3})=(3,2,1)\). In Figure 2 we provide an illustration of one such tree. Since \(a_{1}=3\), then \(mh(T_{1}[u_{1}])=3\) and there exists a vertex \(\ell_{1}^{1}\in T_{1}[u_{1}]\) that is \(3\)-critical. Moreover, \(mh(T_{1}[u_{1},\ell_{1}^{1}])=a_{2}=2\) and there exists a vertex \(\ell_{2}^{1}\in T_{1}[u_{1},\ell_{1}^{1}]\) that is \(2\)-critical. Finally, \(mh(T_{1}[u_{1},\ell_{1}^{1},\ell_{2}^{1}])=a_{3}=1\) and since \(a_{3}\) is not marked with a star, there is no \(1\)-critical vertex in \(T_{1}[u_{1},\ell_{1}^{1},\ell_{2}^{1}]\). By previous remarks, \(T_{1}[u_{1},\ell_{1}^{1},\ell_{2}^{1}]\) is a star centred in \(u_{1}\) and \(\ell_{3}^{1}=u_{1}\) (moreover, the star contains at least \(2\) vertices since \(mh(T_{1}[u_{1},\ell_{1}^{1},\ell_{2}^{1}])>0\)).
Let \(T_{1}^{\prime}\) be the tree obtained from \(T_{1}\) by adding a vertex \(u\) (the root of \(T_{1}^{\prime}\)) adjacent to \(u_{1}\). Note that, \(\ell_{1}^{1}\) (resp., \(\ell_{2}^{1}\)) is still \(3\)-critical (resp., \(2\)-critical) in \(T_{1}^{\prime}[u]\) (resp., in \(T_{1}^{\prime}[u,\ell_{1}^{1}]\)). Note also that \(T_{1}^{\prime}[u,\ell_{1}^{1},\ell_{2}^{1}]\) is actually equal to the tree obtained from \(T_{1}[u_{1},\ell_{1}^{1},\ell_{2}^{1}]\) by making \(u\) adjacent to \(u_{1}\). Hence, \(T_{1}^{\prime}[u,\ell_{1}^{1},\ell_{2}^{1}]\) is a star (containing at least \(3\) vertices) centred in \(u_{1}\), and so \(u\) is \(1\)-critical in \(T_{1}^{\prime}[u,\ell_{1}^{1},\ell_{2}^{1}]\). Therefore, the label of \(u\) in \(T_{1}^{\prime}\) is \((3,2,1^{*})\).
Second, let \(T_{2}\) be a tree rooted at \(u_{2}\) with the label \(\lambda(u_{2},T_{2}[u_{2}])=(a_{1},a_{2},a_{3})=(3,2,1^{*})\). Similarly to the previous example, there exist \(\ell_{1}^{2}\) and \(\ell_{2}^{2}\) that are the \(3\)-critical vertex of \(T_{2}[u_{2}]\) and the \(2\)-critical vertex of \(T_{2}[u_{2},\ell_{1}^{2}]\), respectively. Moreover, since \(a_{3}\) is marked with a star, there exists a \(1\)-critical vertex \(\ell_{3}^{2}\) in \(T_{2}[u_{2},\ell_{1}^{2},\ell_{2}^{2}]\). Let us recall that, by definition of a label, \(\ell_{3}^{2}=u_{2}\), and so \(T_{2}[u_{2},\ell_{1}^{2},\ell_{2}^{2}]\) is a star with at least \(3\) vertices centred in the only child of \(u_{2}\).
Let \(T_{2}^{\prime}\) be the tree obtained from \(T_{2}\) by adding a vertex \(u^{\prime}\) (the root of \(T_{2}^{\prime}\)) adjacent to \(u_{2}\). Note that \(T_{2}^{\prime}[u^{\prime},\ell_{1}^{2},\ell_{2}^{2}]\) is actually equal to the tree obtained from \(T_{2}[u_{2},\ell_{1}^{2},\ell_{2}^{2}]\) by making \(u^{\prime}\) adjacent to \(u_{2}\). Therefore, \(T_{2}^{\prime}[u^{\prime},\ell_{1}^{2},\ell_{2}^{2}]\) contains a path with \(4\) vertices and so \(mh(T_{2}^{\prime}[u^{\prime},\ell_{1}^{2},\ell_{2}^{2}])>1\). It follows that there exist three branches at \(\ell_{2}^{2}\) in \(T_{2}^{\prime}[u^{\prime},\ell_{1}^{2}]\) that have monotone hunter number at least \(2\), and so, \(mh(T_{2}^{\prime}[u^{\prime},\ell_{1}^{2}])\geq 3\). This implies that there exist three branches at \(\ell_{1}^{2}\) in \(T_{2}^{\prime}[u^{\prime}]\) that have monotone hunter number at least \(3\). Hence, we get that \(mh(T_{2}^{\prime}[u^{\prime}])\geq 4\) and \(\lambda(u^{\prime},T_{2}^{\prime}[u^{\prime}])=(4)\).
**Claim 3**.: _There exists no tree \(T\) rooted in \(v\in V(T)\) such that \(v\) has label \((1,0)\) in \(T[v]\)._
Proof.: Towards a contradiction, let us assume that there exists a tree \(T\) rooted in \(v\in V(T)\) such that \(v\) has label \(\lambda=(a_{1},a_{2})=(1,0)\) in \(T[v]\). Then, by the definition of the labelling, there exists a vertex \(\ell_{1}\) in \(T[v]\) such that \(\ell_{1}\) is \(1\)-critical (because \(a_{1}=1\)) and, moreover, \(\ell_{1}\neq v\) because \(\lambda\neq(a_{1})\). By the definition of a vertex being \(1\)-critical, we get that \(\ell_{1}\) has a child \(y\) such that \(mh(T[y])=1\). Since \(mh(T[y])=1\), we get that \(T[y]\) contains at least one edge, and so \(y\) has a child \(x\) in \(T[y]\). Since \(T[v]\) is connected, there exists a \((\ell_{1},v)\)-path \(P=(p_{1}=\ell_{1},\ldots,p_{q}=v)\). Thus, \(P^{\prime}=(x,y,p_{1}=\ell_{1},\ldots,p_{q}=v)\) is a path with at least \(4\) vertices. Hence, \(mh(T[v])>1\), which contradicts \(\lambda(v,T[v])=(1,0)\) since \(a_{1}=mh(T[v])\).

Figure 2: An example of the tree \(T_{1}\), described in the examples of Definition 1, such that \(\lambda(u_{1},T_{1}[u_{1}])=(3,2,1)\). Observe that \(mh(T_{1}[\ell_{1}^{1}])=3\) and that \(\ell_{1}^{1}\) is \(3\)-critical since there are two branches attached to \(\ell_{1}^{1}\), both with monotone hunter number equal to \(3\). Similarly, \(mh(T_{1}[\ell_{2}^{1}])=2\) and \(\ell_{2}^{1}\) is \(2\)-critical. Finally, the graph \(T_{1}[u_{1},\ell_{1}^{1},\ell_{2}^{1}]\) is the star on \(3\) vertices, centred in \(u_{1}\). Adding any number of leaves attached to \(u_{1}\) would result in a tree \(T_{1}^{\prime}\) such that \(\lambda(u_{1},T_{1}^{\prime}[u_{1}])=(3,2,1)\). Adding one leaf \(u_{2}\) attached to \(u_{1}\) would result in a tree \(T_{2}\) such that \(\lambda(u_{2},T_{2}[u_{2}])=(3,2,1^{*})\). Adding a leaf \(u^{\prime}\) attached to \(u_{2}\) would result in a tree \(T_{2}^{\prime}\) with \(\lambda(u^{\prime},T_{2}^{\prime}[u^{\prime}])=(4)\).
The following notation is used in Algorithm 1 and in its proof (Lemma 17). For any sequence \(\lambda=(a_{1},\ldots,a_{p})\) and any integer \(a>a_{1}\), let \(a\odot\lambda=(a^{\prime}_{1}=a,a^{\prime}_{2}=a_{1},\ldots,a^{\prime}_{p^{ \prime}-1}=a_{p-1},a^{\prime}_{p^{\prime}}=a_{p})\), i.e., \(\odot\) denotes the concatenation. Moreover, let \(pref(\lambda)=(a_{1},\cdots,a_{p-1})\) (the prefix of \(\lambda\)) and \(t(\lambda)=a_{p}\) (the tail of \(\lambda\)).
**Lemma 17**.: _Algorithm 1 takes a tree \(T\) rooted at some vertex \(v\in V(T)\) and the label of each child of \(v\) as inputs and it returns the label of \(v\) in \(T[v]\)._
Proof.: Our goal is to prove that the algorithm returns the label of \(v\) in \(T[v]\), denoted as \(\lambda(v,T[v])=(a^{v}_{1},\ldots,a^{v}_{p^{v}})\). Let \(d_{T[v]}(v)=d\). Let \(\lambda^{alg}\) be the value computed by Algorithm 1.
Observe first that if \(d=0\), by Corollary 4, \(mh(T[v])=0\), and so, by definition of a label, \(\lambda(v,T[v])=(0)\). Note that Algorithm 1 returns \((0)\) in line 2. Hence, \(\lambda^{alg}=\lambda(v,T[v])=(0)\).
Thus let us assume that \(d>0\). Let \(v_{1},\ldots,v_{d}\) be the \(d\) children of \(v\) in \(T[v]\). For any \(1\leq i\leq d\), let \(\lambda_{i}=\lambda(v_{i},T[v_{i}])=(a^{i}_{1},\ldots,a^{i}_{p_{i}})\) be the label of \(v_{i}\) in \(T[v_{i}]\). W.l.o.g., let us assume that \(a^{1}_{1}\geq a^{2}_{1}\geq\cdots\geq a^{d}_{1}\) and let \(k=a^{1}_{1}=\max_{i\in\{1,\ldots,d\}}\max_{q\in\{1,\ldots,|\lambda_{i}|\}}a^{i}_{q}\).
For each \(1\leq i\leq d\), let \(T^{i}_{k}=T[v_{i}]\). Also, for \(0\leq m\leq k\), if \(m=a^{i}_{j}\) for some \(1\leq j\leq p_{i}\), let \(T^{i}_{m-1}\) be obtained from \(T^{i}_{m}\) by removing \(T[\ell^{i}_{j}]\), where \(\ell^{i}_{j}\) is the vertex associated to \(a^{i}_{j}\). Otherwise, let \(T^{i}_{m-1}=T^{i}_{m}\), i.e., \(T^{i}_{m}=T[v_{i}]\setminus\bigcup_{a^{i}_{j}\in\lambda_{i},a^{i}_{j}>m}T[\ell^{i}_{j}]\). Finally, for any \(0\leq m\leq k\), let \(T_{m}\) be the subtree of \(T[v]\) induced by \(\{v\}\cup\bigcup_{1\leq i\leq d}V(T^{i}_{m})\). Intuitively, \(T_{m}\) is the subtree obtained from \(T[v]\) by removing the subtrees \(T[w]\) for every vertex \(w\neq v\) such that \(mh(T[w])\geq m+1\). Note that \(T_{k}=T[v]=T\) and that \(mh(T_{m})\leq m+1\) for every \(m\leq k\). Note also that \(T_{0}\) consists of \(v\) and, possibly, some of its children.
Let us prove by induction on \(0\leq m\leq k\) that after the \((m+1)\)-th iteration of the loop of the algorithm, the current value of \(\lambda\), the variable of Algorithm 1, denoted by \(\lambda^{m}\), is equal to the label \(\lambda(v,T_{m})\), i.e., the label of \(v\) in \(T_{m}\). If this induction holds, then, when \(m=k\), the algorithm returns \(\lambda^{alg}=\lambda^{k}=\lambda(v,T_{k})=\lambda(v,T[v])\), which concludes the proof.
Let \(n(m)\) be the number of children \(w\) of \(v\), such that \(m\) or \(m^{*}\) is in the label of \(w\).
The base case is for \(m=0\).
* Let us assume first that \(n(0)\geq 1\). Recall that, by the definition of \(T_{0}\), for any child \(w\) of \(v\) in \(T_{0}\), \(mh(T_{0}[w])\leq 0\), i.e., \(T_{0}[w]\) only contains \(w\). Therefore, \(v\) is not \(1\)-critical in \(T_{0}\). Thus, by Case 2 of Corollary 4 and by the definition of the labelling, \(mh(T_{0})=1\) and \(\lambda(v,T_{0})=(1)\), which corresponds to \(\lambda^{0}\) (line 9 of Algorithm 1).
* Let us assume now that \(n(0)=0\), i.e., \(T_{0}\) only contains \(v\). By Case 1 of Corollary 4, \(mh(T_{0})=0\). Thus, \(\lambda(v,T_{0})=(0)\). Moreover, since \(n(0)=0\), \(\lambda^{0}=(0)\) (line 11 of Algorithm 1).
Let us assume now that \(m=1\). It follows from the case \(m=0\) that \(\lambda^{0}=\lambda(v,T_{0})\). Before analysing the several subcases for \(m=1\), let us recall that \(\lambda(v,T_{m})\) cannot be equal to \((1,0)\) by Claim 3.
**Algorithm 1** Let \(T\) be a tree rooted in a vertex \(v\). Let \(v_{1},\ldots,v_{d}\) be the \(d\) children of \(v\) in \(T[v]\) and let \(\lambda_{1},\ldots,\lambda_{d}\) be their corresponding labels.
```
1:if\(d=0\)then
2:return\((0)\);
3:endif
4: Let \(\lambda=()\) and let \(k=\max_{i\in\{1,\ldots,d\}}\max_{q\in\{1,\ldots,|\lambda_{i}|\}}(\lambda_{i})_{q}\);
5:for\(m\) from \(0\) to \(k\)do
6: Let \(n(m)\) be the number of children with \(m\) or \(m^{*}\) in their label;
7:if\(m=0\)then
8:if\(n(m)\geq 1\)then
9:\(\lambda\leftarrow(1)\);
10:else
11:\(\lambda\leftarrow(0)\);
12:endif
13:elseif\(m=1\)then
14://Case 3: one child of \(v\) has \(1\) in its label, its subtree does not contain any \(1\)-critical vertex and no child of \(v\) has \(0\) in its label
15:if\(n(m)=1\), \(\forall 1\leq i\leq d\), \(m\notin pref(\lambda_{i})\) and \(t(\lambda_{i})\neq m^{*}\) and \(\lambda=(0)\)then
16:\(\lambda\leftarrow(1^{*})\);
17://Case 4: \(v\) has a unique child, and moreover, this child is a \(1\)-critical vertex, or
18://Case 5: \(v\) has at least \(2\) children with at least one having \(1\) in its label
19:elseif\(n(m)\geq 1\)then
20:\(\lambda\leftarrow(2)\);
21:endif
22:elseif\(m>1\)then
23://Invariant/intuition: at this step, \(\lambda=\lambda(v,T_{m-1})\) where \(T_{m-1}\) is the subtree obtained from \(T\) by removing the subtrees \(T[w]\) such that \(w\neq v\) and \(mh(T[w])\geq m\).
24://Case 6: at least three children of \(v\) have \(m\) or \(m^{*}\) in their label
25:if\(n(m)\geq 3\)then
26:\(\lambda\leftarrow(m+1)\);
27://Case 7: two children of \(v\) have \(m\) or \(m^{*}\) in their label and at least one of their subtrees contains a \(m\)-critical vertex
28:elseif\(n(m)=2\) and \(\exists 1\leq i\leq d\), \(m\in pref(\lambda_{i})\) or \(t(\lambda_{i})=m^{*}\)then
29:\(\lambda\leftarrow(m+1)\);
30://Case 8: two children of \(v\) have \(m\) or \(m^{*}\) in their label. Moreover, their subtrees do not contain any \(m\)-critical vertex
31:elseif\(n(m)=2\) and, \(\forall 1\leq i\leq d\), \(m\notin pref(\lambda_{i})\) and \(t(\lambda_{i})\neq m^{*}\)then
32:\(\lambda\leftarrow(m^{*})\);
33://Case 9: one child of \(v\) has \(m\) or \(m^{*}\) in its label. Moreover, its subtree contains an \(m\)-critical vertex and \(mh(T_{m-1})=m\)
34:elseif\(n(m)=1\), \(\exists 1\leq i\leq d\), \(m\in pref(\lambda_{i})\) or \(t(\lambda_{i})=m^{*}\), and \(\lambda\) contains \(m\) or \(m^{*}\)then
35:\(\lambda\leftarrow(m+1)\);
36://Case 10: one child of \(v\) has \(m\) or \(m^{*}\) in its label. Moreover, its subtree contains an \(m\)-critical vertex and \(mh(T_{m-1})<m\)
37:elseif\(n(m)=1\), \(\exists 1\leq i\leq d\), \(m\in pref(\lambda_{i})\) or \(t(\lambda_{i})=m^{*}\), and \(\lambda\) contains neither \(m\) nor \(m^{*}\)then
38:\(\lambda\leftarrow(m)\odot\lambda\);
39://Case 11: one child of \(v\) has \(m\) or \(m^{*}\) in its label. Moreover, its subtree does not contain an \(m\)-critical vertex
40:elseif\(n(m)=1\) and, \(\forall 1\leq i\leq d\), \(m\notin pref(\lambda_{i})\) and \(t(\lambda_{i})\neq m^{*}\)then
41:\(\lambda\leftarrow(m)\);
42:endif
43:endif
44:endif
45:return\(\lambda\);
```
* Assume first that \(n(1)=0\). This implies that \(T_{1}=T_{0}\), and so, we get that \(\lambda^{1}=\lambda^{0}=\lambda(v,T_{0})=\lambda(v,T_{1})\) (in this case, Algorithm 1 does nothing during the second iteration of the loop).
* Assume next that \(n(1)=1\), and let \(v_{i}\) be the child of \(v\) such that \(1\) or \(1^{*}\) is in the label of \(v_{i}\).
* Let us first assume that we are in Case 3 (line 16 of Algorithm 1), i.e., \(1\) (not \(1^{*}\)) is the last element of the label of \(v_{i}\), and \(\lambda^{0}=(0)\). Recall that, since \(\lambda^{0}=(0)\) and by the case \(m=0\), \(T_{0}\) is a single vertex. Thus, \(v\) has only one child \(w\) in \(T_{1}\), and \(mh(T_{1}[w])=1\) (because \(n(1)=1\)), i.e., \(T_{1}\) is a star centred in \(w\) (but rooted in \(v\)). So, by Case 3 of Corollary 4, \(mh(T_{1})=1\). Moreover, \(v\) is 1-critical in \(T_{1}\). Thus, \(\lambda(v,T_{1})=(1^{*})\), which corresponds to \(\lambda^{1}\) (line 16 of Algorithm 1).
* Let us then assume that we are in Case 5 (line 20 of Algorithm 1), i.e., \(1\) (not \(1^{*}\)) is the last element of the label of \(v_{i}\), and \(\lambda^{0}=(1)\). Observe that, since \(\lambda^{0}=(1)\) and by the case \(m=0\), \(T_{0}\) is a star centred (and rooted) in \(v\). Thus, \(v\) has at least 2 children in \(T_{1}\) (since \(v_{i}\in V(T_{1})\setminus V(T_{0})\)) but there is one child of \(v\), \(v_{i}\), that is not a leaf (i.e., \(v_{i}\) is the root of a subtree with monotone hunter number 1). So, by Case 5 of Corollary 4, \(mh(T_{1})=2\). By Lemma 16, for any vertex \(w\in T_{1}\), there is at most 1 branch at \(w\) that has monotone hunter number at least 2 (so there are no 2-critical vertices). Therefore, \(\lambda(v,T_{1})=(2)\), which corresponds to \(\lambda^{1}\) (line 20 of Algorithm 1).
* Let us consider the Case 4 (line 20 of Algorithm 1), i.e., the last element of the label of \(v_{i}\) is \(1^{*}\) and, moreover, \(v_{i}\) is the unique child of \(v\). By Case 4 of Corollary 4, \(mh(T_{1})=2\). By Lemma 16, for any vertex \(w\in T_{1}\), there is at most 1 branch at \(w\) that has monotone hunter number at least 2 (so there are no 2-critical vertices). Thus, \(\lambda(v,T_{1})=(2)\), which corresponds to \(\lambda^{1}\) (line 20 of Algorithm 1).
* Finally, assume that \(n(1)\geq 2\). Hence, in \(T_{1}\), \(v\) has at least two children which are the roots of subtrees with monotone hunter number 1. By Case 5 of Corollary 4, \(mh(T_{1})=2\) (recall that by definition of \(T_{1}\), \(mh(T_{1})\leq 2\)). Moreover, by Lemma 16, for any vertex \(w\in T_{1}\), there is at most 1 branch at \(w\) that has monotone hunter number at least 2 (so there are no 2-critical vertices). Therefore, \(\lambda(v,T_{1})=(2)\), which corresponds to \(\lambda^{1}\) (line 20 of Algorithm 1).
We are now ready to prove the induction step. Let \(m\geq 2\) and let us assume that \(\lambda(v,T_{m-1})=\lambda^{m-1}\). We will prove that \(\lambda(v,T_{m})=\lambda^{m}\).
* **Case 6.** We are in the case where \(n(m)\geq 3\) (line 26 of Algorithm 1). Since \(n(m)\geq 3\), in \(T_{m}\), \(v\) has at least 3 children \(v_{j}\), \(v_{j^{\prime}}\) and \(v_{j^{\prime\prime}}\) such that \(mh(T_{m}^{j})=mh(T_{m}^{j^{\prime}})=mh(T_{m}^{j^{\prime\prime}})=m\) (and \(mh(T_{m}^{i})\leq m\) for every \(1\leq i\leq d\)). Thus, we are in the Case 6 of Corollary 4, and so, \(mh(T_{m})=m+1\). Note also that, by Lemma 16 and for any vertex \(w\in T_{m}\), there exists at most 1 branch \(B\) at \(w\) with \(mh(B)\geq m+1\). Therefore, \(T_{m}\) has no \((m+1)\)-critical vertex, and so \(\lambda(v,T_{m})=(m+1)\). To conclude, line 26 of Algorithm 1 precisely returns \(\lambda^{m}=(m+1)\). Hence, \(\lambda^{m}=\lambda(v,T_{m})\).
* **Case 7.** We are in the case where \(n(m)=2\) and there exists \(1\leq i\leq d\) such that \(m\in pref(\lambda_{i})\) or \(t(\lambda_{i})=m^{*}\), i.e., \(T_{m}^{i}\) contains an \(m\)-critical vertex (line 29 of Algorithm 1). Let us denote by \(y_{i}\) the \(m\)-critical vertex in \(T_{m}^{i}\). Since \(n(m)=2\), in \(T_{m}\), \(v\) has exactly 2 children \(v_{j}\) and \(v_{j^{\prime}}\) (and \(i\in\{j,j^{\prime}\}\)) such that \(mh(T_{m}^{j})=mh(T_{m}^{j^{\prime}})=m\) and \(mh(T_{m}^{q})<m\)
for every other child \(v_{q}\) of \(v\) in \(T_{m}\). Since \(y_{i}\) is critical, we are in the Case 7 of Corollary 4. Thus, \(mh(T_{m})=m+1\). Note also that, by Lemma 16, for any vertex \(w\in T_{m}\), there exists at most \(1\) branch \(B\) at \(w\) with \(mh(B)\geq m+1\). Therefore, \(T_{m}\) has no \((m+1)\)-critical vertex, and so \(\lambda(v,T_{m})=(m+1)\). To conclude, line 29 of Algorithm 1 precisely returns \(\lambda^{m}=(m+1)\). Hence, \(\lambda^{m}=\lambda(v,T_{m})\).
* **Case 8.** We are in the case where \(n(m)=2\) and \(m\notin pref(\lambda_{i})\) and \(t(\lambda_{i})\neq m^{*}\) for all \(1\leq i\leq d\), i.e., \(T_{m}^{i}\) does not contain any \(m\)-critical vertex (line 32 of Algorithm 1). Since \(n(m)=2\), in \(T_{m}\), \(v\) has exactly \(2\) children \(v_{j}\) and \(v_{j^{\prime}}\) such that \(mh(T_{m}^{j})=mh(T_{m}^{j^{\prime}})=m\) and \(mh(T_{m}^{q})<m\) for every other child \(v_{q}\) of \(v\) in \(T_{m}\). Since \(v_{j}\) and \(v_{j^{\prime}}\) are not \(m\)-critical, we are in the Case 8 of Corollary 4. Thus, \(mh(T_{m})=m\). Since \(n(m)=2\), \(v\) is clearly an \(m\)-critical vertex, and so \(\lambda(v,T_{m})=(m^{*})\). To conclude, line 32 of Algorithm 1 precisely returns \(\lambda^{m}=(m^{*})\). Hence, \(\lambda^{m}=\lambda(v,T_{m})\).
* **Case 9.** We are in the case where \(n(m)=1\), there exists \(1\leq i\leq d\) such that \(m\in pref(\lambda_{i})\) or \(m^{*}=t(\lambda_{i})\) (i.e., there is an \(m\)-critical vertex in \(T_{m}^{i}\)) and \(\lambda^{m-1}\) contains \(m\) or \(m^{*}\), i.e., \(mh(T_{m-1})=m\) (line 35 of Algorithm 1). Let \(y_{i}\) denote the \(m\)-critical vertex in \(T_{m}^{i}\). Since \(n(m)=1\), in \(T_{m}\), \(v\) has exactly \(1\) child \(v_{j}\) (and so, \(i=j\)) such that \(mh(T_{m}^{j})=m\) and \(mh(T_{m}^{q})<m\) for every other child \(v_{q}\) of \(v\) in \(T_{m}\), and therefore \(T_{m-1}=T_{m}[v,y_{i}]\). Thus, by definition of the labelling and since \(m\in\lambda^{m-1}\), \(mh(T_{m-1})=mh(T_{m}[v,y_{i}])=m\). Therefore, we are in the Case 9 of Corollary 4. Thus, \(mh(T_{m})=m+1\). Note also that, by Lemma 16, for any vertex \(w\in T_{m}\), there exists at most \(1\) branch \(B\) at \(w\) with \(mh(B)\geq m+1\). Therefore, \(T_{m}\) has no \((m+1)\)-critical vertex, and so \(\lambda(v,T_{m})=(m+1)\). To conclude, line 35 of Algorithm 1 precisely returns \(\lambda^{m}=(m+1)\). Hence, \(\lambda^{m}=\lambda(v,T_{m})\).
* **Case 10.** We are in the case where \(n(m)=1\), there exists \(1\leq i\leq d\) such that \(m\in pref(\lambda_{i})\) or \(m^{*}=t(\lambda_{i})\) (i.e., there is an \(m\)-critical vertex in \(T_{m}^{i}\)), and \(m\notin\lambda^{m-1}\) (line 38 of Algorithm 1). Let us denote by \(y_{i}\) the \(m\)-critical vertex in \(T_{m}^{i}\). Since \(n(m)=1\), in \(T_{m}\), \(v_{i}\) is the single child of \(v\) such that \(mh(T_{m}^{i})=m\) and \(mh(T_{m}^{q})<m\) for every other child \(v_{q}\) of \(v\) in \(T_{m}\). Therefore, \(T_{m-1}[v]=T_{m}[v,y_{i}]\). Thus, by definition of the labelling and since \(m\notin\lambda^{m-1}\), \(mh(T_{m-1}[v])=mh(T_{m}[v,y_{i}])<m\). Therefore, we are in the Case 10 of Corollary 4. Thus, \(mh(T_{m})=m\). Let \(\lambda^{m-1}=\lambda(v,T_{m-1})=(a_{1},\ldots,a_{p})\). Recall that there exist vertices \(\ell_{1},\ell_{2},\ldots,\ell_{p-1}\) such that \(\ell_{h}\) is \(a_{h}\)-critical in \(T_{m-1}[v,\ell_{1},\cdots,\ell_{h-1}]\) for every \(1\leq h<p\). Since \(T_{m-1}[v]=T_{m}[v,y_{i}]\), \(\ell_{h}\) is \(a_{h}\)-critical in \(T_{m}[v,y_{i},\ell_{1},\cdots,\ell_{h-1}]\) for every \(1\leq h<p\). Moreover, for every \(1\leq h\leq p\), \(T_{m-1}[v,\ell_{1},\ldots,\ell_{h-1}]=T_{m}[v,y_{i},\ell_{1},\ldots,\ell_{h-1}]\), and so \(mh(T_{m}[v,y_{i},\ell_{1},\ldots,\ell_{h-1}])=mh(T_{m-1}[v,\ell_{1},\ldots,\ell_{h-1}])=a_{h}\). Therefore, \(\lambda(v,T_{m})=(m,a_{1},\ldots,a_{p})\). To conclude, line 38 of Algorithm 1 precisely returns \(\lambda^{m}=(m)\odot\lambda^{m-1}\). Hence, \(\lambda^{m}=\lambda(v,T_{m})\).
* **Case 11.** Finally, we are in the case where \(n(m)=1\) and \(m\notin pref(\lambda_{i})\) and \(t(\lambda_{i})\neq m^{*}\) for all \(1\leq i\leq d\), i.e., \(T_{m}^{i}\) does not contain any \(m\)-critical vertex (line 41 of Algorithm 1). Since \(n(m)=1\), in \(T_{m}\), \(v\) has exactly \(1\) child \(v_{i}\) such that \(mh(T_{m}^{i})=m\) and \(mh(T_{m}^{q})<m\) for every other child \(v_{q}\) of \(v\) in \(T_{m}\). Since there is no \(m\)-critical vertex in \(T_{m}\), we are in the Case 11 of Corollary 4. Thus, \(mh(T_{m})=m\). Note also that \(v\) is not \(m\)-critical since \(n(m)=1\). Therefore, \(T_{m}\) has no \(m\)-critical vertex, and so \(\lambda(v,T_{m})=(m)\). To conclude, line 41 of Algorithm 1 precisely returns \(\lambda^{m}=(m)\). Hence, \(\lambda^{m}=\lambda(v,T_{m})\).
The main result of this section follows:
**Theorem 6**.: _The monotone hunter number of any tree can be computed in polynomial time._
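To see how the labels of Definition 1 and the case analysis of Algorithm 1 fit together, here is a Python sketch of the whole dynamic programming; it is ours and purely illustrative, not part of the formal development. A label is encoded as a pair `(vals, star)`, where `vals` is the strictly decreasing list of integers and `star` records whether the last entry is marked with a \(*\); the function `combine` transcribes the cases of Algorithm 1, and a post-order traversal then yields \(mh(T)\).

```
# A sketch of Algorithm 1 (all names are ours). A label is (vals, star).

def combine(child_labels):
    # Compute the label of a vertex from the labels of its children.
    if not child_labels:
        return [0], False                       # a leaf (line 2)
    k = max(max(vals) for vals, _ in child_labels)
    lam, star = [], False
    for m in range(k + 1):
        n = sum(1 for vals, _ in child_labels if m in vals)
        # crit: is m in pref(lambda_i), or is t(lambda_i) = m*, for some i?
        crit = any(m in vals[:-1] or (vals[-1] == m and st)
                   for vals, st in child_labels)
        if m == 0:
            lam, star = ([1], False) if n >= 1 else ([0], False)
        elif m == 1:
            if n == 1 and not crit and lam == [0]:
                lam, star = [1], True           # Case 3
            elif n >= 1:
                lam, star = [2], False          # Cases 4 and 5
        elif n >= 3 or (n == 2 and crit):
            lam, star = [m + 1], False          # Cases 6 and 7
        elif n == 2:
            lam, star = [m], True               # Case 8
        elif n == 1 and crit and m in lam:
            lam, star = [m + 1], False          # Case 9
        elif n == 1 and crit:
            lam = [m] + lam                     # Case 10: (m) concatenated
        elif n == 1:
            lam, star = [m], False              # Case 11
    return lam, star

def mh(adj, root):
    # Monotone hunter number of the tree rooted at `root` (adjacency dict).
    def label(u, parent):
        return combine([label(w, u) for w in adj[u] if w != parent])
    return label(root, None)[0][0]
```

On a path with four vertices rooted at an endpoint, the successive labels along the path are \((0)\), \((1)\), \((1^{*})\) and \((2)\), so the sketch returns \(2\), as expected.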
## Monotone hunter number in the red variant in trees
So far, we have investigated the Hunters and Rabbit game with the additional monotonicity property since monotone strategies are often easier to deal with. Previous works on the Hunters and Rabbit game in bipartite graphs \(G=(V_{r}\cup V_{w},E)\) have shown that studying the _red variant_ of the Hunters and Rabbit game, i.e., when the rabbit is constrained to start in a vertex in \(V_{r}\), could be very fruitful. For instance, recall Lemma 5 which states that \(h(G)=h_{V_{r}}(G)\) for every bipartite graph \(G=(V_{r}\cup V_{w},E)\) and which helped to get many results on the Hunters and Rabbit game [1, 6, 16]. Therefore, it is interesting to consider the monotonicity constraint when restricted to the red variant of the Hunters and Rabbit game. This section is dedicated to this study in the case of trees. Recall that \(mh_{V_{r}}(G)\) denotes the minimum number of hunters required to win against a rabbit starting at \(V_{r}\) in a bipartite graph \(G=(V_{r}\cup V_{w},E)\) and in a monotone way. It can be checked that, in [16], it is actually shown that, for any tree \(T\), \(mh_{V_{r}}(T)\leq\lceil\frac{\log_{2}|V(T)|}{2}\rceil\) (a monotone strategy with respect to \(V_{r}\) is described in [16]). Therefore, the proof of Corollary 2 actually shows that:
**Corollary 5**.: _There exists \(\varepsilon>0\) such that, for any \(k\in\mathbb{N}\), there exists a tree \(T\) with \(mh_{V_{r}}(T)\geq k\) and \(mh(T)\geq(1+\varepsilon)mh_{V_{r}}(T)\)._
Therefore, Proposition 4 and Corollary 5 already show that there exist graphs \(G\) for which \(mh_{V_{r}}(G)<mh(G)\). The main result of this section is that there exists an infinite family of trees \(T\) such that the difference between \(mh_{V_{r}}(T)\) and \(h_{V_{r}}(T)\) is arbitrarily large. In particular, this improves the result of Corollary 2 since \(mh_{V_{r}}(G)\leq mh(G)\) and \(h_{V_{r}}(G)=h(G)\) for any graph \(G\).
More precisely, this section is devoted to proving:
**Theorem 7**.: _For every \(i\geq 3\), there exists a tree \(T\) such that \(mh_{V_{r}}(T)\geq i\) and \(h_{V_{r}}(T)=h(T)=2\)._
Proof.: In Section 5.1, we define a family \((T_{i,2i})_{i\geq 3}\) of trees such that \(h_{V_{r}}(T_{i,2i})=2\) for every \(i\geq 3\) (Lemma 24). Then, in Section 5.2, Lemma 27 proves that \(mh_{V_{r}}(T_{i,2i})\geq i\) for every \(i\geq 3\).
In order to prove Lemmas 24 and 27 below, we first need to adapt several technical lemmas and propositions above to the case of the red variant in bipartite graphs. Since the proofs of Proposition 6 and Lemma 18 (in the red variant) share many similarities with the previous, already proven, versions, we postpone their proofs to the appendix.
The next proposition is an adaptation of Proposition 2 to the red variant of the game.
**Proposition 6**.: _Let \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) be a hunter strategy in a bipartite graph \(G=(V_{r}\cup V_{w},E)\) with respect to \(V_{r}\). Let \(v\in V_{r}\) (resp. \(v\in V_{w}\)) and \(1\leq i\leq\ell\). If there exists a vertex \(u\in N(v)\) and a vertex \(x\in N(u)\) (possibly \(x=v\)) such that \(u\notin\bigcup_{j\leq i}S_{j}\) and \(x\notin\bigcup_{j<i}S_{j}\), then \(v\in Z_{2p}\) for every \(2p\leq i\) (resp. \(v\in Z_{2p+1}\) for every \(2p+1\leq i\))._
The next lemma adapts Lemma 6 to the red variant of the game.
**Lemma 18**.: _Let \(G=(V_{r}\cup V_{w},E)\) be a bipartite graph with at least two vertices. Let \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) be a monotone hunter strategy in \(G\) with respect to \(V_{r}\). For any \(0\leq p<i\leq\lceil\ell/2\rceil\), \(Z_{2i}\subseteq Z_{2p}\) and \(Z_{2i+1}\subseteq Z_{2p+1}\)._
Lemma 19 is a direct adaptation, in the red variant of the game, of Lemma 7. The only difference in their proofs is that, in the case of Lemma 19, Proposition 6 must be used instead of Proposition 2. Therefore, we present the proof of Lemma 19 only in the appendix.
**Lemma 19**.: _Let \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) be a non-monotone winning hunter strategy in a bipartite graph \(G=(V_{r}\cup V_{w},E)\) with respect to \(V_{r}\). Then, there exist a vertex \(v\in V\) and \(1\leq i\leq\ell\) such that \(v\in Z_{i-1}\setminus S_{i}\) and \(v\in\bigcup_{p<i}S_{p}\)._
Lemma 20 is a direct adaptation, in the red variant of the game, of Lemma 8. The only difference in their proofs is that, in the case of Lemma 20, Proposition 6 and Lemma 19 must be used instead of Proposition 2 and of Lemma 7. Therefore, we present the proof of Lemma 20 only in the appendix.
**Lemma 20**.: _For any non-empty connected subgraph \(H\) of a bipartite graph \(G=(V_{r}\cup V_{w},E)\), \(mh_{V_{r}\cap V(H)}(H)\leq mh_{V_{r}}(G)\). Moreover, if \(|V(H)|>1\), we get that, if there exists a monotone winning hunter strategy \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) in \(G\) with respect to \(V_{r}\), then there exists a monotone winning hunter strategy \(\mathcal{S}^{\prime}\) in \(H\) with respect to \(V_{r}\cap V(H)\) using at most \(\max_{1\leq i\leq\ell}|S_{i}\cap V(H)|\) hunters._
Since the proofs of the upcoming Lemmas 21 and 22 (in the red variant) share many similarities with the proofs of Lemmas 9 and 10, respectively, we postpone their proofs to the appendix.
**Lemma 21**.: _For any bipartite graph \(G=(V_{r}\cup V_{w},E)\) and any \(k\geq mh_{V_{r}}(G)\), there exists a parsimonious monotone winning hunter strategy in \(G\) with respect to \(V_{r}\) and that uses \(k\) hunters._
**Lemma 22**.: _Let \(G=(V_{r}\cup V_{w},E)\) be a bipartite graph and \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) be a parsimonious monotone winning hunter strategy with respect to \(V_{r}\)._
* _If there exist_ \(1\leq i<j\leq\ell\) _such that_ \(v\in S_{i}\cap S_{j}\)_, then_ \(v\in S_{i+2}\)_._
* _If_ \(v\in V_{r}\) _(resp.,_ \(v\in V_{w}\)_) and there exists an odd (resp., even) integer_ \(1\leq i<\ell\) _such that_ \(v\notin Z_{i-1}\)_, then_ \(v\notin S_{j}\) _for every_ \(j\geq i\)_._
### The family of trees \((T_{i,q})_{i\geq 3,q\geq 6}\): definition and hunter number
In this section, we will prove that the gap between the hunter number and the monotone hunter number in the red variant of the game may be arbitrarily large. More precisely, we will design an infinite family \((T_{i,q})_{i\geq 3,q\geq 6}\) of trees which exhibits this behaviour.
Let \(S_{k,q}\) be the rooted tree obtained from \(q\geq 6\) paths of length \(k\geq 3\) (with \(k\) edges) by identifying an endpoint of each path into a common vertex called the _root_ of \(S_{k,q}\) and denoted by \(c\). Equivalently, \(S_{k,q}\) can be obtained from a star with root \(c\) of degree \(q\) by subdividing each edge \(k-1\) times. From now on, let \((V_{r},V_{w})\) be the bipartition of \(V(S_{k,q})\) and let us assume that \(c\in V_{r}\).
**Lemma 23**.: _For any \(k,q\in\mathbb{N}\) such that \(k\geq 3\) and \(q\geq 6\), it holds that \(h(S_{k,q})=mh_{V_{r}}(S_{k,q})=2\)._
Proof.: The fact that \(h(S_{k,q})>1\) comes from the characterisation of trees with hunter number one in [8]. W.l.o.g., let us suppose that the centre \(c\) of \(S_{k,q}\) is in \(V_{r}\). We now prove that \(mh_{V_{r}}(S_{k,q})\leq 2\) and the result then follows from Lemma 5 and Proposition 1.
The strategy \(\mathcal{S}\) with respect to \(V_{r}\) and using two hunters proceeds as follows. At every odd round, the first hunter shoots at \(c\). The second hunter considers sequentially each path \(P=(v_{1},\ldots,v_{k})\) of \(S_{k,q}\setminus c\) by iteratively shooting at \(v_{1},v_{2},\ldots,v_{k}\) (starting by shooting \(v_{1}\) at an even round).
Formally, let \(P^{1},\ldots,P^{q}\) be the \(q\) branches of \(c\) in \(S_{k,q}\), and let \(P^{i}=(v_{1}^{i},\ldots,v_{k}^{i})\) for every \(1\leq i\leq q\), where \(v_{1}^{i}\) is the neighbour of \(c\) in \(P^{i}\). Then, if \(k\) is even, the strategy \(\mathcal{S}\) equals \((\{c\},\{v_{1}^{1}\},\{c,v_{2}^{1}\},\{v_{3}^{1}\},\{c,v_{4}^{1}\},\ldots,\{v_{k-1}^{1}\},\{c,v_{k}^{1}\},\{v_{1}^{2}\},\{c,v_{2}^{2}\},\ldots,\{v_{j-1}^{i}\},\{c,v_{j}^{i}\},\ldots,\{c,v_{k}^{q}\})\), and, if \(k\) is odd, \(\mathcal{S}\) equals \((\{c\},\{v_{1}^{1}\},\{c,v_{2}^{1}\},\{v_{3}^{1}\},\{c,v_{4}^{1}\},\ldots,\{c,v_{k-1}^{1}\},\{v_{k}^{1}\},\{c\},\{v_{1}^{2}\},\{c,v_{2}^{2}\},\ldots,\{v_{k}^{i-1}\},\{c\},\{v_{1}^{i}\},\{c,v_{2}^{i}\},\ldots,\{v_{j-1}^{i}\},\{c,v_{j}^{i}\},\ldots,\{v_{k}^{q}\})\).
Clearly, this is a monotone winning hunter strategy in \(S_{k,q}\) with respect to \(V_{r}\).
Let us denote the strategy described in the previous proof by \(\mathcal{S}_{1}\) and let \(\ell_{1}\) be the smallest even integer greater than or equal to the length of \(\mathcal{S}_{1}\) (this length equals \(1+qk\) if \(k\) is even and \(q(k+1)\) otherwise).
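The strategy \(\mathcal{S}_{1}\) is mechanical enough to be generated programmatically, which makes the two length formulas above easy to check. In the following illustrative sketch (all names are ours), the \(j\)-th vertex of the \(i\)-th branch of \(S_{k,q}\) is encoded as the pair \((i,j)\) and the root as `'c'`.

```
# A sketch (names ours) generating the strategy S_1 of Lemma 23 on S_{k,q}.

def spider_strategy(k, q):
    shots = [{'c'}]                        # round 1 (odd): shoot the root
    for i in range(1, q + 1):
        for j in range(1, k + 1):
            s = {(i, j)}
            if j % 2 == 0:                 # odd rounds: pair the shot with c
                s.add('c')
            shots.append(s)
        if k % 2 == 1 and i < q:           # odd k: one extra shot at c to
            shots.append({'c'})            # realign parity before the next branch
    return shots

# len(spider_strategy(k, q)) equals 1 + q*k for even k and q*(k + 1) for odd k,
# e.g., len(spider_strategy(4, 6)) == 25 and len(spider_strategy(3, 6)) == 24.
```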
The construction of the tree \(T_{i,q}\). For every \(i\geq 2\) and \(q\geq 6\), let \(T_{i,q}\) be the tree recursively built as follows. First, \(T_{1,q}=S_{3,q}\). Then, for \(i>1\), let us assume that \(T_{i-1,q}\) has been defined recursively and that there exists a winning hunter strategy, of length \(\ell_{i-1}\), using \(2\) hunters in the red variant in \(T_{i-1,q}\) (this holds for \(i-1=1\) from the previous lemma and it will be proven to hold for every \(i\geq 2\) in the next lemma). Let \(T_{i,q}\) be obtained from \(q\) vertex disjoint copies \(T^{1}_{i},\ldots,T^{q}_{i}\) of \(T_{i-1,q}\) and from a vertex \(c_{i}\), the root of \(T_{i,q}\). Then, for every \(1\leq j\leq q\), add a path \(P^{j}_{i}\) of length \(p^{j}_{i}\) (defined below) between the root \(c^{j}_{i}\) of \(T^{j}_{i}\) and \(c_{i}\) (that is, \(c_{i}\) and \(c^{j}_{i}\) are at distance \(p^{j}_{i}\) in \(T_{i,q}\)).
The lengths \(p^{j}_{i}\) are defined recursively as follows. Let \(p^{1}_{i}=2\) and, for every \(1<j\leq q\), let \(p^{j}_{i}\) be the minimum even integer greater than or equal to \(\ell_{i-1}+\sum_{1\leq k<j}p^{k}_{i}\) (it will be shown in the next lemma that \(\ell_{i}\) equals the smallest even integer greater than or equal to \(q\ell_{i-1}+\sum_{1\leq j\leq q}jp^{j}_{i}\)).
Finally, let us assume that \(c_{i}\in V_{r}\) and note that, since \(p^{j}_{i}\) is even, this implies that \(c_{i},c^{1}_{i},\ldots,c^{q}_{i}\) all belong to \(V_{r}\).
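The recursion defining the lengths \(p^{j}_{i}\) and \(\ell_{i}\) is easy to evaluate; the following illustrative sketch (names are ours) does so, taking \(\ell_{1}=4q\), the length of the strategy of Lemma 23 on \(T_{1,q}=S_{3,q}\). Since each \(p^{j}_{i}\) exceeds the sum of all the previous ones, the values roughly double with \(j\), which explains the very fast growth of \(|V(T_{i,q})|\) with \(i\).

```
# A sketch (names ours) of the recursion defining p_i^j and ell_i for T_{i,q}.

def even_ceil(x):
    # Minimum even integer greater than or equal to x.
    return x if x % 2 == 0 else x + 1

def construction_lengths(i, q):
    # Return (p, ell) for T_{i,q} with i >= 2: p[j-1] is p_i^j, ell is ell_i.
    ell = 4 * q                            # ell_1 for T_{1,q} = S_{3,q} (k = 3)
    p = []
    for _ in range(2, i + 1):
        p = [2]                            # p_i^1 = 2
        for _ in range(2, q + 1):
            p.append(even_ceil(ell + sum(p)))
        ell = even_ceil(q * ell + sum(j * pj for j, pj in enumerate(p, 1)))
    return p, ell
```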
**Lemma 24**.: _For any \(i\in\mathbb{N}^{*}\) and \(q\geq 6\), \(h_{V_{r}}(T_{i,q})=2\)._
Proof.: The fact that \(h_{V_{r}}(T_{i,q})\geq 2\) follows from Lemmas 23 and 1 and since \(T_{i,q}\) contains \(S_{3,q}\) as a subgraph.
We prove that \(h_{V_{r}}(T_{i,q})\leq 2\) by induction on \(i\). More precisely, we prove that there exists a winning hunter strategy \(\mathcal{S}_{i}=(S_{1},\ldots,S_{\ell_{i}})\) for \(T_{i,q}\), with respect to \(V_{r}\), using \(2\) hunters and such that, for any \(j\geq 1\), if the root \(c_{i}\) of \(T_{i,q}\) is in \(Z_{j}\), then \(c_{i}\in S_{j+1}\). This holds for \(i=1\) by Lemma 23. Let \(i>1\) and let us assume by induction that such a strategy \(\mathcal{S}_{i-1}\) has already been defined for \(T_{i-1,q}\).
Recall that, for all \(1\leq j\leq q\), \(c^{j}_{i}\) denotes the root of the copy \(T^{j}_{i}\) of \(T_{i-1,q}\), linked to the root \(c_{i}\) of \(T_{i,q}\) by a path \(P^{j}_{i}\) of length \(p^{j}_{i}\), which, for \(j>1\), is at least \(\ell_{i-1}+\sum_{1\leq k<j}p^{k}_{i}\). Moreover, recall that \(c_{i}\in V_{r}\).
Let us define the strategy \(\mathcal{S}_{i}\) as follows. It proceeds in \(q\) phases and ensures that, at every round \(h\), if \(c_{i}\in Z_{h}\), then \(c_{i}\in S_{h+1}\) and that, for every round \(h\) arising after the \(j^{th}\) phase, \((Z_{h}\setminus c_{i})\cap\bigcup_{k\leq j}(V(T^{k}_{i})\cup V(P^{k}_{i}))=\emptyset\). This implies that, at the round \(\ell_{i}\), if the rabbit is still alive, it has to be on \(c_{i}\). But during the last phase, we ensured that the rabbit cannot reach \(c_{i}\). Thus, \(\mathcal{S}_{i}\) is a winning strategy.
Let \(1\leq j\leq q\), let \(i^{j}_{0}\) be the last round of phase \(j\) (\(i^{0}_{0}=0\)) and assume by induction on \(j\) that \(i^{j-1}_{0}\) is even. Let us moreover assume by induction on \(j\) that \((Z_{i^{j-1}_{0}}\setminus c_{i})\cap\bigcup_{k\leq j-1}(V(T^{k}_{i})\cup V(P^{k}_{i}))=\emptyset\). The \(j^{th}\) phase proceeds in two sub-phases as follows.
Very informally, in the first sub-phase, the hunters "push" the rabbit toward the subtrees \(T^{q}_{i}\), then \(T^{q-1}_{i}\) until the subtree \(T^{j}_{i}\). Then, in the second sub-phase, the two hunters clear the subtree \(T^{j}_{i}\) (without the rabbit being able to leave \(T^{j}_{i}\) if it was there). The lengths of the paths linking the roots of the subtrees to \(c_{i}\) (illustrated in Figure 3) guarantee that the rabbit cannot reach \(c_{i}\) before \(T^{j}_{i}\) has been cleared.
Formally, during the first sub-phase, the first hunter shoots at \(c_{i}\) at every odd round. Hence, the rabbit cannot leave the component of \(T_{i,q}\setminus c_{i}\) that it was occupying at the end of the previous
phase. During the same first sub-phase, the second hunter sequentially shoots the vertices of \(P_{i}^{q},P_{i}^{q-1},\ldots,P_{i}^{j}\) in this order and from the neighbours of \(c_{i}\) to the vertices \(c_{i}^{q},c_{i}^{q-1},\ldots,c_{i}^{j}\). More precisely, for every \(j\leq k\leq q\), let \(P_{i}^{k}=(v_{0}^{k}=c_{i},v_{1}^{k},\ldots,v_{p_{i}^{k}}^{k}=c_{i}^{k})\). The second hunter starts at round \(i_{0}^{j}+2\) by shooting \(v_{1}^{q}\) and then sequentially shoots \(v_{2}^{q},v_{3}^{q},\ldots,v_{p_{i}^{q}}^{q}=c_{i}^{q},v_{1}^{q-1},v_{2}^{q-1},\ldots,v_{p_{i}^{q-1}}^{q-1}=c_{i}^{q-1},v_{1}^{q-2},\ldots,c_{i}^{j}\). Note that, after the round when the second hunter shoots at \(c_{i}^{q}\), if the rabbit was occupying a vertex in \(T_{i}^{q}\cup P_{i}^{q}\) at the beginning of the \(j^{th}\) phase, then it must occupy a vertex at distance at least \(p_{i}^{q}\) from \(c_{i}\). Similarly, after the round when the second hunter shoots at \(c_{i}^{q-1}\), if the rabbit was occupying a vertex in \(T_{i}^{q-1}\cup P_{i}^{q-1}\) at the beginning of the \(j^{th}\) phase, then it must occupy a vertex at distance at least \(p_{i}^{q-1}\) from \(c_{i}\). Moreover, if the rabbit was occupying a vertex in \(T_{i}^{q}\cup P_{i}^{q}\) at the beginning of the \(j^{th}\) phase, then it must occupy a vertex at distance at least \(p_{i}^{q}-p_{i}^{q-1}\) from \(c_{i}\) (since there have been \(p_{i}^{q-1}\) rounds between the shots of \(c_{i}^{q}\) and of \(c_{i}^{q-1}\)). With similar arguments, and by the definition of \(p_{i}^{k}\) for \(j<k\leq q\), after the round when the second hunter shoots at \(c_{i}^{j}\), the rabbit must be at distance at least \(\ell_{i-1}\) from \(c_{i}\) if it was occupying a vertex of \(\bigcup_{j<k\leq q}T_{i}^{k}\cup P_{i}^{k}\) at the end of the \((j-1)^{th}\) phase. Moreover, the rabbit cannot occupy any vertex in \(\bigcup_{k\leq j-1}(V(T_{i}^{k})\cup V(P_{i}^{k}))\) since the first hunter is always shooting \(c_{i}\) during the odd rounds. Finally, the rabbit cannot occupy a vertex in \(P_{i}^{j}\) since the second hunter has just shot sequentially the vertices \(v_{1}^{j},v_{2}^{j},\ldots,v_{p_{i}^{j}}^{j}=c_{i}^{j}\).
Then, the second sub-phase of phase \(j\) starts, during which both hunters execute the strategy \(\mathcal{S}_{i-1}\) in the subtree \(T_{i}^{j}\) (the shot of \(c_{i}^{j}\) by the second hunter during the first sub-phase, i.e., the last round of the first sub-phase, may be used as the first round of \(\mathcal{S}_{i-1}\)). By the induction hypothesis, the strategy \(\mathcal{S}_{i-1}\) ensures that the rabbit cannot reach the root \(c_{i}^{j}\) of \(T_{i}^{j}\) without being shot immediately (if \(c_{i}^{j}\) is in \(Z_{h}\) for some round \(h\), then \(c_{i}^{j}\in S_{h+1}\)). Thus, if the rabbit was occupying a vertex of \(T_{i}^{j}\) at the beginning of the second sub-phase, then the rabbit cannot leave this subtree and it is eventually shot. Otherwise, because \(\mathcal{S}_{i-1}\) has length at most \(\ell_{i-1}\), the rabbit cannot reach \(c_{i}\) before the last shots of the hunters in \(T_{i}^{j}\). Let \(i_{0}^{j}\) be the last round of the phase. Note that \(i_{0}^{j}\) is even since \(p_{i}^{h}\) is even for every \(1\leq h\leq q\), \(\ell_{i-1}\) is even, and \(i_{0}^{j-1}\) is even by induction. Then, the \(j^{th}\) phase ends after the (even) round \(i_{0}^{j}\) and with the desired property: the rabbit can only occupy a vertex in \(c_{i}\cup\bigcup_{j<k\leq q}(V(T_{i}^{k})\cup V(P_{i}^{k}))\), i.e., \((Z_{i_{0}^{j}}\setminus c_{i})\cap\bigcup_{k\leq j}(V(T_{i}^{k})\cup V(P_{i}^{k}))=\emptyset\).

Figure 3: The graph \(T_{i,6}\). The labels on the edges represent their respective lengths. In particular, for every \(2\leq j\leq 6\), we have that \(p_{i}^{j}\geq\sum_{1\leq k\leq j-1}p_{i}^{k}+\ell_{i-1}\), where \(p_{i}^{1}=2\) and \(\ell_{i-1}\) is equal to the number of rounds needed to clear any copy of the \(T_{i-1,6}\) graph.
To conclude, note that \(\mathcal{S}_{i}\) is winning in at most \(q\ell_{i-1}+\sum_{1\leq j\leq q}jp_{i}^{j}\) rounds since each phase \(j\) proceeds in \(\ell_{i-1}+\sum_{j\leq k\leq q}p_{i}^{k}\) rounds.
**Theorem 8**.: _For any tree \(T\), there exists a subdivision \(T^{\prime}\) of \(T\) such that \(h(T^{\prime})\leq 2\)._
Proof.: Let \(q\) be the maximum degree of \(T\). Let \(r\) be any vertex of \(T\), and let \(i\) be the eccentricity of \(r\) (i.e., the largest distance between \(r\) and some vertex of \(T\)). Then, there exists a subdivision \(T^{\prime}\) of \(T\), that is a subgraph of \(T_{i,\max\{6,q\}}\) (each vertex of \(T\) being "mapped" to a vertex of degree at least \(3\) of \(T_{i,\max\{6,q\}}\) and \(r\) being "mapped" to the root of \(T_{i,\max\{6,q\}}\)). By Lemmas 1, 5 and 24, \(h(T^{\prime})\leq 2\).
**Corollary 6**.: _For every \(\ell\geq 0\), there exists a tree \(T\) and a subdivision \(T^{\prime}\) of \(T\) such that \(h(T)-h(T^{\prime})\geq\ell\)._
### Non-monotonicity of the red variant in trees
Before proving Lemma 27, we need some additional results. Note that the next lemma is the adaptation of Proposition 5 for the red variant of the game.
**Lemma 25**.: _Let \(G=(V_{r}\cup V_{w},E)\) be any bipartite graph and \(H\) be a connected subgraph of \(G\). Let \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) be any parsimonious monotone winning hunter strategy in \(G\) with respect to \(V_{r}\). Let \(1\leq i\leq\ell\) and let \(x,y\in V(H)\) be such that \(x\in\bigcup_{j<i}S_{j}\), \(y\in Z_{i-1}\), and the distance between \(x\) and \(y\) in \(H\) is minimised. If \(x,y\notin S_{i}\), then \(xy\in E(H)\)._
Proof.: Note first that if \(x=y\), then \(\mathcal{S}\) is non-monotone since \(y=x\in(\bigcup_{j<i}S_{j}\cap Z_{i-1})\setminus S_{i}\). Hence, we may assume that \(x\neq y\). Let \(P\) be a shortest path from \(x\) to \(y\) in \(H\) (it exists since \(H\) is connected). Let us assume that \(S_{i}\subseteq V_{r}\) and so \(i\) is odd (the case when \(S_{i}\subseteq V_{w}\) and \(i\) is even is similar). Since \(y\in Z_{i-1}\) and \(S_{i}\subseteq Z_{i-1}\) (since \(\mathcal{S}\) is parsimonious), \(y\in V_{r}\). Let \(a\) be the neighbour of \(x\) in \(P\).
Towards a contradiction, let us assume that \(a\neq y\). By the minimality of the distance between \(x\) and \(y\), \(a\notin Z_{i-1}\) and \(a\notin\bigcup_{j<i}S_{j}\). Let \(b\neq x\) be the other neighbour of \(a\) in \(P\). If \(b\neq y\), then by the minimality of the distance between \(x\) and \(y\), \(b\notin Z_{i-1}\) and \(b\notin\bigcup_{j<i}S_{j}\). Therefore, by Proposition 6, if \(a\in V_{r}\), then \(a\in Z_{i}\) and if \(b\in V_{r}\) then \(b\in Z_{i}\). In both cases, there is a contradiction with the fact that \(P\) minimises the distance between \(x\) and \(y\).
Therefore, we may assume that \(b=y\). This implies that \(x\in V_{r}\). Note also that \(y\notin S_{j}\) for all \(j\leq i\). Indeed, assuming otherwise would contradict the fact that \(\mathcal{S}\) is monotone, since \(y\notin S_{i}\) and \(y\in Z_{i-1}\). Thus, by Proposition 6, and since \(a,y\notin\bigcup_{j\leq i-1}S_{j}\), we have that \(x\in Z_{i-1}\). This contradicts the monotonicity of \(\mathcal{S}\) since \(x\notin S_{i}\) and \(x\in\bigcup_{j<i}S_{j}\).
Before we prove the next lemma, we introduce an extra definition. Let \(G=(V,E)\) be any graph and \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) be any winning hunter strategy in \(G\) with respect to \(X\subseteq V\). We say that \(W\subseteq V\) is _definitively cleaned at the round \(i\)_ if \(W\cap Z_{j}(\mathcal{S})=\emptyset\) and \(W\cap S_{j+1}=\emptyset\) for every \(i\leq j\leq\ell\).
Informally, the following lemma says that if the degree of the root \(r\) of a tree \(T\) is large enough, compared to the number of hunters, then, when a first branch of \(r\) is definitively
cleaned according to any monotone hunter strategy, there must be some other branches of \(r\) whose vertices have never been shot.
**Lemma 26**.: _Let \(T=(V_{r}\cup V_{w},E)\) be a tree rooted in some vertex \(c\in V_{r}\) with neighbours \(N(c)=\{v_{1},\ldots,v_{d}\}\), \(d\geq 2k\). For every \(1\leq i\leq d\), let \(B_{i}\) be the branch at \(c\) containing \(v_{i}\) and assume that \(|V(B_{i})|\geq 2\). Let \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) be any parsimonious monotone winning hunter strategy in \(T\) with respect to \(V_{r}\) using at most \(k-1\) hunters. Let \(1\leq j\leq\ell\) be the smallest index such that there exists \(1\leq\alpha\leq d\) such that \(V(B_{\alpha})\) is definitively cleaned at the round \(j\). Then, there exist \(1\leq\beta<\gamma\leq d\), \(\alpha\notin\{\beta,\gamma\}\), such that \((\bigcup_{1\leq i\leq j}S_{i})\cap V(B_{\beta})=\emptyset\) and \((\bigcup_{1\leq i\leq j}S_{i})\cap V(B_{\gamma})=\emptyset\)._
Proof.: Let \(j\) and \(\alpha\) be defined as in the statement and, w.l.o.g., let us assume that \(\alpha=1\). That is, the branch \(B_{1}\) is definitively cleaned at round \(j\), and no other branch has been definitively cleaned before round \(j\).
**Claim 4**.: _For any vertex \(v\in V(T)\) such that there exist at least \(3\) branches at \(v\) with at least two vertices, let \(q\) be the minimum index such that such a branch \(B\) is definitively cleaned at round \(q\). Then, \(S_{q}\cap V(B)\neq\emptyset\)._
Proof of Claim.: By the minimality of \(q\), either \(Z_{q-1}\cap V(B)\neq\emptyset\) or \(S_{q}\cap V(B)\neq\emptyset\). If \(S_{q}\cap V(B)\neq\emptyset\), then the statement holds. Thus, let us assume that \(S_{q}\cap V(B)=\emptyset\). Then \(Z_{q-1}\cap V(B)\neq\emptyset\), as otherwise \(B\) would have been definitively cleaned at a round prior to \(q\). Let \(x\in Z_{q-1}\cap V(B)\). Since \(x\notin S_{q}\), \(N_{B}(x)\subseteq Z_{q}\). Since \(B\) is connected and contains at least two vertices, we have that \(N_{B}(x)\neq\emptyset\). Thus \(Z_{q}\cap V(B)\neq\emptyset\), which contradicts that \(B\) is definitively cleaned at round \(q\). \(\diamond\)
It follows by Claim 4 that \(V(B_{1})\cap S_{j}\neq\emptyset\). Thus, since \(|S_{j}|\leq k-1\), there are at most \(k-2\) branches at \(c\), other than \(B_{1}\), which can be shot during the round \(j\). W.l.o.g., let us assume that \(B_{2},\ldots,B_{k-1}\) are the branches at \(c\) that are also shot during round \(j\). That is, \(S_{j}\subseteq\{c\}\cup\bigcup_{1\leq h<k}V(B_{h})\).
For the purpose of contradiction, let us assume that there exists at most one branch, w.l.o.g., say \(B_{k}\), such that \((\bigcup_{1\leq i\leq j}S_{i})\cap V(B_{k})=\emptyset\). Hence, we may assume that for every \(k<h\leq d\), there exist \(j_{h}<j\) and \(x_{h}\in V(B_{h})\cap S_{j_{h}}\).
For any \(k<h\leq d\), let us denote by \(j_{h}^{*}\) the minimum index such that \(B_{h}\) is definitively cleaned at round \(j_{h}^{*}\). By Claim 4, for any \(k<h\leq d\), \(S_{j_{h}^{*}}\cap V(B_{h})\neq\emptyset\). Thus, since \(V(B_{h})\cap S_{j}=\emptyset\) for every \(k<h\leq d\), we have that \(j_{h}^{*}\neq j\) for every \(k<h\leq d\). In particular, it follows by the minimality of \(j\) that \(j_{h}^{*}>j\).
Let us prove that, for some \(k<h\leq d\), there exists a vertex \(y_{h}\in Z_{j-1}\cap V(B_{h})\). Towards a contradiction, let us assume that for every \(k<h\leq d\), we have \(Z_{j-1}\cap V(B_{h})=\emptyset\). Recall that \(S_{j_{h}^{*}}\cap V(B_{h})\neq\emptyset\), and let \(z\in S_{j_{h}^{*}}\cap V(B_{h})\) (for some \(k<h\leq d\)). Since \(\mathcal{S}\) is parsimonious, \(z\in Z_{j_{h}^{*}-1}\). Thus, there exists a rabbit trajectory \((r_{0},\ldots,r_{j_{h}^{*}-1}=z)\) such that \(r_{q}\notin S_{q+1}\) for every \(0\leq q<j_{h}^{*}-1\). Moreover, \(r_{j-1}\notin V(B_{h})\), since \(Z_{j-1}\cap V(B_{h})=\emptyset\). Since a rabbit trajectory is a walk, and any walk between a vertex from \(V\setminus V(B_{h})\) to a vertex of \(V(B_{h})\) contains \(c\), there exists \(j-1\leq m<j_{h}^{*}-1\) such that \(r_{m}=c\). Since \(r_{m}\notin S_{m+1}\), it follows that \(v_{1}\in Z_{m+1}\), where \(v_{1}\) is the neighbour of \(c\) in \(B_{1}\). This contradicts that \(B_{1}\) is definitively cleaned at round \(j\). Hence, there exists some vertex \(y_{h}\in Z_{j-1}\cap V(B_{h})\).
Finally, for every \(k<h\leq d\), let us choose the vertices \(x_{h}\) and \(y_{h}\), such that \(x_{h}\in V(B_{h})\cap\bigcup_{1\leq i<j}S_{i}\) and \(y_{h}\in Z_{j-1}\cap V(B_{h})\) and the distance between \(x_{h}\) and \(y_{h}\) is minimised. Note that \(y_{h}\notin\bigcup_{1\leq i<j}S_{i}\), since, otherwise, \(\mathcal{S}\) would not be monotone as \(y_{h}\in Z_{j-1}\setminus S_{j}\). Since \(S_{j}\cap V(B_{h})=\emptyset\) and by Lemma 25, we obtain that \(x_{h}y_{h}\in E(B_{h})\). Thus, \(x_{h}\in Z_{j}\) for every \(k<h\leq d\). Since \(\mathcal{S}\) is a monotone strategy and \(x_{h}\in S_{j_{h}}\cap Z_{j}\) (with \(j_{h}<j\)), it follows that \(x_{h}\in S_{j+1}\) for every \(k<h\leq d\). However, \(d\geq 2k\) and so \(|S_{j+1}|\geq k\), a contradiction.
Finally, we will need an extra definition: For any tree \(T=(V_{r}\cup V_{w},E)\), and any vertex \(v\in V(T)\), let \(B\) be any branch at \(v\) such that \(|V(B)|>1\). For any strategy \(\mathcal{S}=(S_{1},\ldots,S_{\ell})\) in \(T\) with respect to \(V_{r}\), let \(m\) be the minimum integer such that \(S_{m}\cap V(B)\neq\emptyset\) and let \(u\in S_{m}\cap V(B)\) (by Lemma 3, such an integer \(m\) exists because \(|V(B)|>1\) and \(B\) is connected, which implies that \(V_{r}\cap V(B)\neq\emptyset\)). Let the _restriction_\(\mathcal{S}_{B}\) of \(\mathcal{S}\) be the hunter strategy, such that for every \(1\leq i\leq\ell\),
\[S^{\prime}_{i}=\begin{cases}S_{i}\cap V(B),&\quad\text{if }S_{i}\cap V(B)\neq\emptyset \\ \{u\},&\quad\text{otherwise}\end{cases}\]
Recall that \(h(\mathcal{S}_{B})\leq\max_{1\leq i\leq\ell}|S_{i}\cap V(B)|\) by Lemma 20.
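For concreteness, the construction of the restriction can be sketched in a few lines of Python (a purely illustrative sketch, not part of the paper; we assume a strategy is given as a list of sets of vertices and a branch as a set of hashable vertices):

```python
def restrict_strategy(S, B):
    # S: hunter strategy as a list of sets of vertices (S[0] = S_1, ...).
    # B: vertex set of the branch. By Lemma 3, some round of S meets B;
    # u is a vertex of B shot at the earliest such round m.
    u = next(next(iter(Si & B)) for Si in S if Si & B)
    # Rounds that shoot inside B are kept (restricted to B);
    # all other rounds are replaced by the single shot {u}.
    return [(Si & B) if (Si & B) else {u} for Si in S]
```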
**Lemma 27**.: _For any \(i\in\mathbb{N}^{*}\), \(i\geq 3\) and \(d\geq 2i\), we have that \(mh_{V_{r}}(T_{2i,d})\geq i\)._
Proof.: Let \(\gamma_{2i}\) denote the root of \(T_{2i,d}\). For the purpose of contradiction, let us assume that \(mh_{V_{r}}(T_{2i,d})<i\). By Lemma 21, there exists a parsimonious monotone winning hunter strategy with respect to \(V_{r}\) using at most \(i-1\) hunters; let that strategy be \(\mathcal{S}_{2i}=(S_{1}^{2i},\ldots,S_{\ell}^{2i})\).
Figure 4: A representation of the tree \(T_{2i,d}\) illustrating the notation used throughout the proof of Lemma 27. Wiggly edges are used to represent paths whose internal vertices have degree 2. The branch \(B_{1}^{2i}\) is the first branch of \(T_{2i,d}\) at \(\gamma_{2i}\) that is definitively cleaned, and this happens during the round \(j_{2i}\). No vertex of the branches \(B_{2}^{2i}\) and \(B_{3}^{2i}\) has been shot until the round \(j_{2i}\). Among the branches \(B_{2}^{2i}\) and \(B_{3}^{2i}\), the first branch that is definitively cleaned is \(B_{2}^{2i}\), and this happens at the round \(j_{2i}^{\prime}>j_{2i}\). The colour grey is used on the small triangles to denote that we do not know the state of the corresponding branches at the same level as \(B_{1}^{2i}\) that are different from \(B_{2}^{2i}\) and \(B_{3}^{2i}\). Observe that \(B_{2}^{2i}\) contains a copy of \(T_{2i-1,d}\), rooted at \(\gamma_{2i-1}\). Since \(B_{2}^{2i}\) is definitively cleaned at round \(j_{2i}^{\prime}\), we can iterate the same arguments, and define \(B_{1}^{2i-1}\) to be the first branch of \(T_{2i-1,d}\) at \(\gamma_{2i-1}\) that is definitively cleaned, and this happens during the round \(j_{2i}<j_{2i-1}<j_{2i}^{\prime}\), and so on, until we have reached the leaves of \(B_{2}^{2i}\).
Let \(1\leq j_{2i}\leq\ell\) be the smallest index such that some branch at \(\gamma_{2i}\), w.l.o.g., \(B_{1}^{2i}\), is definitively cleaned at round \(j_{2i}\). By Lemma 26, there exist two branches at \(\gamma_{2i}\), w.l.o.g., \(B_{2}^{2i}\) and \(B_{3}^{2i}\), such that \((\bigcup_{1\leq q\leq j_{2i}}S_{q})\cap V(B_{2}^{2i})=\emptyset\) and \((\bigcup_{1\leq q\leq j_{2i}}S_{q})\cap V(B_{3}^{2i})=\emptyset\).
Let \(1\leq j_{2i}^{\prime}\leq\ell\) be the smallest index such that at least one branch \(B_{2}^{2i}\) or \(B_{3}^{2i}\) is definitively cleaned.
Note that \(B_{2}^{2i}\) (resp., \(B_{3}^{2i}\)) is connected and has at least two vertices with at least one in \(V_{r}\). Therefore, by Lemma 3, at least one vertex of \(B_{2}^{2i}\) (resp., \(B_{3}^{2i}\)) must be shot before the branch is definitively cleaned. Hence, \(j_{2i}<j_{2i}^{\prime}\). W.l.o.g., assume that \(B_{2}^{2i}\) is definitively cleaned at round \(j_{2i}^{\prime}\) (possibly, \(B_{3}^{2i}\) may also be definitively cleaned at round \(j_{2i}^{\prime}\)).
We now prove by induction on \(0\leq h<2i\), that there exist \(1\leq j_{2i}<j_{2i-1}<\cdots<j_{2i-h}<j_{2i-h}^{\prime}\leq j_{2i-h+1}^{\prime} \leq\cdots\leq j_{2i-1}^{\prime}\leq j_{2i}^{\prime}\leq\ell\) and \(\{B_{1}^{2i-s},B_{2}^{2i-s},B_{3}^{2i-s}\}_{0\leq s\leq h}\) such that, for every \(0\leq s\leq h\):
1. \((\bigcup_{1\leq q\leq j_{2i-s}}S_{q})\cap V(B_{2}^{2i-s})=\emptyset\) and \((\bigcup_{1\leq q\leq j_{2i-s}}S_{q})\cap V(B_{3}^{2i-s})=\emptyset\);
2. \(B_{1}^{2i-s},B_{2}^{2i-s}\) and \(B_{3}^{2i-s}\) are vertex disjoint branches at the root \(\gamma_{2i-s}\) of the copy of \(T_{2i-s,d}\) contained in \(B_{2}^{2i-(s-1)}\) (with \(B_{2}^{2i+1}=T_{2i,d}\) for \(s=0\)), each containing a copy of \(T_{2i-(s+1),d}\) (with \(T_{0,d}=\emptyset\) for \(s=h=2i-1\));
3. \(B_{1}^{2i-s}\) is definitively cleaned at round \(j_{2i-s}\) (not before, i.e., for every \(x<j_{2i-s}\), \(B_{1}^{2i-s}\) is not definitively cleaned at round \(x\)), \(B_{2}^{2i-s}\) is definitively cleaned at round \(j_{2i-s}^{\prime}\) (not before), \(B_{3}^{2i-s}\) is definitively cleaned at some round \(x\geq j_{2i-s}^{\prime}\) (not before).
See Figure 4 for an illustration of the above notation.
We have already proven that the induction hypothesis holds for \(h=0\). Let us assume that it holds for some \(0\leq h<2i-1\) and let us show it holds for \(h+1\). Let \(F\) be the copy of \(T_{2i-(h+1),d}\) (rooted at \(\gamma_{2i-(h+1)}\)) contained in \(B_{2}^{2i-h}\) and let \(1\leq j_{2i-(h+1)}\leq j_{2i-h}^{\prime}\) be the smallest integer such that some branch \(B\) of \(F\) at \(\gamma_{2i-(h+1)}\) is definitively cleaned. Note that each branch of \(F\) at \(\gamma_{2i-(h+1)}\) is connected and has at least two vertices with at least one in \(V_{r}\). Therefore, by Lemma 3, at least one vertex of \(B\) must be shot before it is definitively cleaned. Hence, \(j_{2i-h}<j_{2i-(h+1)}\). Let \(B=B_{1}^{2i-(h+1)}\). Let \(\mathcal{S}_{2i-(h+1)}\) denote the restriction of \(\mathcal{S}_{2i-h}\) on \(F\). By Lemma 26 considering the strategy \(\mathcal{S}_{2i-(h+1)}\) on \(F\), there exist at least two branches at \(\gamma_{2i-(h+1)}\), let us denote them by \(B_{2}^{2i-(h+1)}\) and \(B_{3}^{2i-(h+1)}\), such that \((\bigcup_{1\leq q\leq j_{2i-(h+1)}}S_{q})\cap V(B_{2}^{2i-(h+1)})=\emptyset\) and \((\bigcup_{1\leq q\leq j_{2i-(h+1)}}S_{q})\cap V(B_{3}^{2i-(h+1)})=\emptyset\). Also, it follows by Lemma 3 that \(j_{2i-(h+1)}<j_{2i-(h+1)}^{\prime}\). Finally, \(B_{1}^{2i-(h+1)}\), \(B_{2}^{2i-(h+1)}\) and \(B_{3}^{2i-(h+1)}\) are all contained in \(B_{2}^{2i-h}\) and, thus, \(\max(j_{2i-(h+1)},j_{2i-(h+1)}^{\prime})=j_{2i-(h+1)}^{\prime}\leq j_{2i-h}^{\prime}\). This finishes the proof of the induction step.
For every \(1\leq s\leq 2i\), let \(H_{s}\) be the subgraph induced by \(B_{1}^{s}\) and \(B_{3}^{s}\) and \(\gamma_{s}\) (so \(H_{s}\) is connected and the subgraphs \(H_{s}\) and \(H_{s^{\prime}}\) are vertex disjoint for any \(s\neq s^{\prime}\)). Since \(B_{1}^{s}\) has been definitively cleaned at round \(j_{s}\), has at least two vertices and by Lemma 3, there exists a vertex \(x_{s}^{\prime}\in V(B_{1}^{s})\cap\bigcup_{1\leq q\leq j_{s}}S_{q}\).
Note that \(B_{3}^{s}\) is connected and has at least two vertices with at least one in \(V_{r}\). Therefore, by Lemma 3, at least one vertex of \(B_{3}^{s}\) must be shot before the branch is definitively cleaned. Thus, since \(B_{3}^{s}\) is definitively cleaned at round \(z_{s}\geq j_{s}^{\prime}\), but not definitively cleaned at a previous round, \(S_{z_{s}}\cap V(B_{3}^{s})\neq\emptyset\). Moreover, since \(\mathcal{S}_{2i}\) is parsimonious, any vertex \(v\in S_{z_{s}}\cap V(B_{3}^{s})\) is such that \(v\in Z_{z_{s}-1}\). It follows that \((N(v)\cap Z_{z_{s}-2})\setminus S_{z_{s}-1}\neq\emptyset\). Let \(w_{s}\in(N(v)\cap Z_{z_{s}-2})\setminus S_{z_{s}-1}\) and note that \(w_{s}\in N[V(B_{3}^{s})]=V(B_{3}^{s})\cup\{\gamma_{s}\}\). Since \(\mathcal{S}_{2i}\) is monotone, we get that \(w_{s}\notin S_{j}\) for every \(j<z_{s}\) and that \(N(w_{s})\not\subseteq S_{j}\) for every \(j<z_{s}\), i.e. \(w_{s}\) has not been shot before round
\(z_{s}\). Note also that if \(w_{s}=\gamma_{s}\), then the neighbour of \(\gamma_{s}\) in \(B_{1}^{s}\) is in \(Z_{z_{s}-1}\), a contradiction since \(z_{s}>j_{1}\) and \(B_{1}^{s}\) is definitively cleaned at round \(j_{s}\leq j_{1}\). Hence, \(w_{s}\neq\gamma_{s}\) and so \(w_{s}\in V(B_{3}^{s})\).
Similarly, there exists a vertex \(w_{s}^{\prime}\in N(w_{s})\cap Z_{z_{s}-3}\setminus S_{z_{s}-2}\) such that \(w_{s}^{\prime}\) has not been cleaned before round \(z_{s}-1\). Hence, we have two adjacent vertices \(w_{s}\) and \(w_{s}^{\prime}\) in \(N[B_{3}^{s}]\) that have never been shot before the round \(z_{s}-1\). Thus, since \(\{w_{s},w_{s}^{\prime}\}\cap V_{r}\neq\emptyset\), there exists a rabbit trajectory \((\dots,w_{s},w_{s}^{\prime},w_{s},w_{s}^{\prime},\dots)\) consisting in oscillating between \(w_{s}\) and \(w_{s}^{\prime}\) which implies that \(\{w_{s},w_{s}^{\prime}\}\cap Z_{j}\neq\emptyset\) for all \(j<z_{s}\). In particular, since \(j_{1}-1<z_{s}\), there exists a vertex \(y_{s}^{\prime}\in N[B_{3}^{s}]\cap Z_{j_{1}-1}\).
For every \(1\leq s\leq 2i\), let \(x_{s}\) and \(y_{s}\) be two vertices in \(V(H_{s})\) such that \(x_{s}\in V(H_{s})\cap\bigcup_{1\leq q\leq j_{1}}S_{q}\), \(y_{s}\in V(H_{s})\cap Z_{j_{1}-1}\) and the distance between \(x_{s}\) and \(y_{s}\) is minimised. Let \(\mathcal{P}=\{s\mid 1\leq s\leq 2i,y_{s}\in S_{j_{1}}\cup S_{j_{1}+1}\) or \(x_{s}\in S_{j_{1}}\cup S_{j_{1}+1}\}\), i.e. \(\mathcal{P}\) is the set of indices \(s\) such that at least one of \(y_{s}\) or \(x_{s}\) is shot during the round \(j_{1}\) or \(j_{1}+1\). Since \(\mathcal{S}_{2i}\) uses at most \(i-1\) hunters, we have that \(|\mathcal{P}|\leq 2(i-1)\). Thus, let \(1\leq s^{*}\leq 2i\) be such that \(s^{*}\notin\mathcal{P}\), i.e., \(x_{s^{*}},y_{s^{*}}\notin S_{j_{1}}\cup S_{j_{1}+1}\). It follows from Lemma 25 that \(x_{s^{*}}y_{s^{*}}\in E(T_{2i,d})\). Since \(y_{s^{*}}\in Z_{j_{1}-1}\setminus S_{j_{1}}\), we have that \(x_{s^{*}}\in Z_{j_{1}}\). However, \(x_{s^{*}}\in\bigcup_{1\leq q\leq j_{1}}S_{q}\setminus S_{j_{1}+1}\) and so, \(x_{s^{*}}\) is recontaminated, contradicting the monotonicity of \(\mathcal{S}_{2i}\).
## 6 Kernelization by vertex cover
Let us first recall some of the basic definitions regarding parameterised complexity. An instance of a parameterised version \(\Pi_{p}\) of a decision problem \(\Pi\) is a pair \((I,t)\), where \(I\) is an instance of \(\Pi\) and \(t\) is a non-negative integer, called a _parameter_, associated with \(I\). We say that \(\Pi_{p}\) is _fixed-parameter tractable_ (\(\mathsf{FPT}\)) if there exists an algorithm (called an _FPT algorithm_) that, given an instance \((I,t)\) of \(\Pi_{p}\), solves it in time \(f(t)\cdot|I|^{\mathcal{O}(1)}\), where \(f\) is any computable function of \(t\).
**Definition 2** (Equivalent Instances).: _Let \(\Pi_{1}\) and \(\Pi_{2}\) be two parameterised problems. Two instances, \((I,t)\in\Pi_{1}\) and \((I^{\prime},t^{\prime})\in\Pi_{2}\), are equivalent when \((I,t)\) is a Yes-instance if and only if \((I^{\prime},t^{\prime})\) is a Yes-instance._
A parameterised (decision) problem \(\Pi_{p}\) admits a _kernel_ of size \(f(t)\), for some function \(f\) that depends only on \(t\), if the following is true: there exists an algorithm (called a _kernelization algorithm_) that, given as input an instance \((I,t)\) of \(\Pi_{p}\), runs in \((|I|+t)^{\mathcal{O}(1)}\) time and outputs an equivalent instance \((I^{\prime},t^{\prime})\) of \(\Pi_{p}\) such that \(|I^{\prime}|\leq f(t)\) and \(t^{\prime}\leq t\). If the function \(f\) is polynomial, then the problem is said to admit a _polynomial kernel_. It is well-known that a decidable parameterised problem is \(\mathsf{FPT}\) if and only if it admits a kernel [12].
Recall that a _vertex cover_ of a graph \(G\) is any set \(U\subseteq V(G)\) such that for every edge \(uv\in E(G)\), \(U\cap\{u,v\}\neq\emptyset\). The size of a minimum vertex cover of \(G\) is usually referred to as the _vertex cover number_ of \(G\) and denoted by \(vc(G)\). In what follows, we consider the Hunters and Rabbit Problem parameterised by the vertex cover number. That is, an instance \(((G,k),t)\) is defined by an input \((G,k)\), where the problem aims at deciding whether \(h(G)\leq k\), and the parameter \(t\) is any upper bound on \(vc(G)\).
First, we have the following observation.
**Proposition 7**.: _For any connected graph \(G\), \(h(G)\leq mh(G)\leq vc(G)\)._
Proof.: Let \(U\) be a vertex cover in \(G\) and \(I\) be the independent set \(V(G)\setminus U\). The hunter player can win simply by shooting all the vertices of \(U\) twice. If the rabbit starts at a vertex \(u\in U\), then it gets shot in the first round. Otherwise, the rabbit was on a vertex \(v\in I\), and then it has to move to a vertex in \(U\) (since \(I\) is an independent set) that is, \(Z_{1}=U\) and then, the rabbit is shot by a hunter in the next round. Finally, note that this strategy is also monotone.
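The strategy of Proposition 7 is simple enough to be checked mechanically. The sketch below (illustrative only, not from the paper; it assumes the graph is given as an adjacency dict and that hunters may shoot arbitrary vertex sets) simulates the set \(Z_{i}\) of positions an unshot rabbit can occupy; the strategy is winning exactly when this set becomes empty.

```python
def is_winning(G, S, X=None):
    # G: adjacency dict {v: set of neighbours}; S: list of shot sets;
    # X: allowed starting positions of the rabbit (default: all vertices).
    Z = set(X if X is not None else G)     # where an unshot rabbit may be
    for Si in S:
        # survivors of this round move to a neighbour of their position
        Z = set().union(*(G[v] for v in Z - Si))
    return not Z                           # empty Z: every trajectory was shot

def vertex_cover_strategy(U):
    # Proposition 7: shoot the whole cover U in two consecutive rounds.
    return [set(U), set(U)]

# Path a-b-c with vertex cover {b}: two rounds suffice.
G = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
assert is_winning(G, vertex_cover_strategy({'b'}))
```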
Let \(U\) be a vertex cover of size \(t\geq vc(G)\) of \(G\) and \(I\) be the independent set \(V(G)\setminus U\). For each subset \(S\subseteq U\), we define the following equivalence class: \(\mathcal{C}_{S}=\{v\mid v\in I\text{ and }N(v)=S\}\). Next, we have the following crucial lemma.
**Lemma 28**.: _Let \(G=(V,E)\) be a connected graph, \(U\subseteq V\) be a vertex cover of \(G\), \(k\geq 1\) and let \(S\subseteq U\) be such that \(|\mathcal{C}_{S}|>k+1\). Let \(\mathcal{C}_{S}=\{v_{1},\ldots,v_{q}\}\). Then, \(h(G)\leq k\) (resp., \(mh(G)\leq k\)) if and only if \(h(G[V\setminus\{v_{k+2},\ldots,v_{q}\}])\leq k\) (resp., \(mh(G[V\setminus\{v_{k+2},\ldots,v_{q}\}])\leq k\))._
Proof.: By Lemma 1, \(h(G[V\setminus\{v_{k+2},\ldots,v_{q}\}])\leq h(G)\). Similarly, due to Lemma 8, \(mh(G[V\setminus\{v_{k+2},\ldots,v_{q}\}])\leq mh(G)\). So, it only remains to prove that, if \(h(G)>k\) (resp., \(mh(G)>k\)), then \(h(G[V\setminus\{v_{k+2},\ldots,v_{q}\}])>k\) (resp., \(mh(G[V\setminus\{v_{k+2},\ldots,v_{q}\}])>k\)). Let \(H=G[V\setminus\{v_{k+2},\ldots,v_{q}\}]\) and let \(X=\{v_{1},\ldots,v_{k+1}\}\) (i.e., \(X=V(H)\cap\mathcal{C}_{S}\)).
In the following we show that if \(h(G)>k\) (resp., \(mh(G)>k\)), then \(h(H)>k\) (resp., \(mh(H)>k\)). To this end, we establish that if the rabbit has a winning strategy in \(G\) against \(k\) hunters, then the rabbit has a winning strategy in \(H\) against \(k\) hunters.
**(1) \(h(G)>k\implies h(H)>k\):** Let \(\mathcal{S}=(S_{1},S_{2},\ldots,S_{\ell})\) be any hunter strategy (not necessarily winning) in \(H\) using at most \(k\) hunters. Then, \(\mathcal{S}\) is a hunter strategy (not necessarily winning) in \(G\) using at most \(k\) hunters. Since \(h(G)>k\), there exists a rabbit-trajectory \(\mathcal{R}^{\prime}=(r_{0}^{\prime},r_{1}^{\prime},\ldots,r_{\ell-1}^{\prime})\) in \(G\) such that \(r_{i}^{\prime}\notin S_{i+1}\) for every \(0\leq i<\ell\). Let \(\mathcal{R}=(r_{0},\ldots,r_{\ell-1})\) be such that, for every \(0\leq i<\ell\), let \(r_{i}=r_{i}^{\prime}\) if \(r_{i}^{\prime}\in V(H)\) and, otherwise, let \(r_{i}\) be any vertex of \(X\setminus S_{i+1}\) (such a vertex exists since \(|S_{i+1}|\leq k\) and \(|X|>k\)). Note that \(r_{i}^{\prime}\neq r_{i}\) only if \(r_{i}^{\prime}\notin V(H)\) and therefore \(r_{i}^{\prime}\in\mathcal{C}_{S}\). This implies that, if \(r_{i}^{\prime}\notin V(H)\), then \(r_{i-1}^{\prime},r_{i+1}^{\prime}\in S\subset V(H)\) (since \(N(r_{i}^{\prime})=S\)). Therefore, \(r_{i-1}=r_{i-1}^{\prime}\) and \(r_{i+1}=r_{i+1}^{\prime}\) and \(r_{i-1},r_{i+1}\in N_{H}(r_{i})\) (since \(r_{i}\in X\) and so \(N_{G}(r_{i})=N_{H}(r_{i})=S\)). Therefore, \(\mathcal{R}\) is a rabbit trajectory in \(H\) and, by construction, \(r_{i}\notin S_{i+1}\) for every \(0\leq i<\ell\). Hence, \(\mathcal{S}\) is not a winning hunter strategy. Therefore, \(h(H)>k\).
**(2) \(mh(G)>k\implies mh(H)>k\):** Let \(\mathcal{S}=(S_{1},S_{2},\ldots,S_{\ell})\) be any hunter strategy (not necessarily winning) in \(H\) using at most \(k\) hunters. Then, \(\mathcal{S}\) is a hunter strategy (not necessarily winning) in \(G\) using at most \(k\) hunters. Since \(mh(G)>k\), for every hunter strategy \(\mathcal{S}\) in \(G\) using at most \(k\) hunters, there is a rabbit trajectory \(\mathcal{R}^{\prime}\) that either is a winning rabbit trajectory (the rabbit never gets shot) or recontaminates a vertex (rabbit may be shot at a later round). Now, let \(\mathcal{R}\) be built from \(\mathcal{R}^{\prime}\) similarly to the previous case. If the rabbit never gets shot in \(\mathcal{R}^{\prime}\), then due to the arguments presented in (1), the rabbit evades getting shot in \(\mathcal{R}\) as well. Hence, we assume that the rabbit gets shot in \(\mathcal{R}^{\prime}\), during, say, a round \(p\), but recontaminates a vertex, say \(x\), during a round \(p^{\prime}<p\). Since only vertices of \(H\) can be shot in the hunter strategy \(\mathcal{S}\), only the vertices of \(H\) can be recontaminated by \(\mathcal{R}^{\prime}\) (recall Lemma 7). Hence \(x\in V(H)\). Therefore, \(x\) gets recontaminated by \(\mathcal{R}\) in \(H\) as well. Thus, \(mh(H)>k\).
Finally, we present our kernelization result.
**Theorem 9**.: _The problem that takes an \(n\)-node connected graph \(G\) and an integer \(k\geq 1\) as inputs and decides whether \(h(G)\leq k\) (resp., \(mh(G)\leq k\)), parameterised by any upper bound \(t\) on \(vc(G)\), admits a kernel of size at most \(4^{t}(t+1)+2t\). Moreover, this problem can be solved in \(\mathsf{FPT}\) time \((4^{t}(t+1)+2t)^{t+1}\cdot n^{\mathcal{O}(1)}\)._
Proof.: The kernelization proceeds as follows. First, if \(k>t\), then answer that \(h(G)\leq mh(G)\leq k\) (this is correct by Proposition 7). Otherwise, let \(U\) be a vertex cover of size at most \(2t\) of \(G\) (which can be computed in time \(\mathcal{O}(tn)\) by classical \(2\)-approximation for vertex cover using maximal matching [28]). Let \(H\) be the graph obtained from \(G\) as follows. For every \(S\subseteq U\), if \(|\mathcal{C}_{S}|>k+1\), then remove \(|\mathcal{C}_{S}|-(k+1)\) vertices from \(\mathcal{C}_{S}\). By Lemma 28 (applied iteratively for each \(S\subseteq U\)), \(h(G)\leq k\) (resp., \(mh(G)\leq k\)) if and only if \(h(H)\leq k\) (resp., \(mh(H)\leq k\)).
Moreover, \(|V(H)|=|U|+\sum_{S\subseteq U}|\mathcal{C}_{S}\cap V(H)|\leq 2t+2^{2t}(k+1)\leq 4 ^{t}(t+1)+2t\) (the last inequality holds by Proposition 7). Hence, the above algorithm is the desired kernelization algorithm.
Finally, applying the XP-algorithm [1], it can be decided in time \(|V(H)|^{k+1}\) whether \(h(H)\leq k\). Since, by Proposition 7, \(k\leq t\), this gives the \(\mathsf{FPT}\) algorithm that decides whether \(h(G)\leq k\) (resp., \(mh(G)\leq k\)) in time \((4^{t}(t+1)+2t)^{t+1}\cdot n^{\mathcal{O}(1)}\).
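The reduction at the heart of this kernelization is short enough to sketch in code. The following is a minimal illustration (not the paper's implementation), assuming the graph is given as an adjacency dict and a vertex cover \(U\) has already been computed, e.g., by the maximal-matching 2-approximation; the case \(k>t\) is answered directly by Proposition 7 and is omitted here.

```python
def kernelize(G, k, U):
    # G: adjacency dict of a connected graph; U: a vertex cover; k >= 1.
    # Group the independent set I = V \ U into classes C_S by neighbourhood
    # S = N(v), then keep at most k+1 vertices per class (Lemma 28,
    # applied iteratively for each S).
    classes = {}
    for v in set(G) - set(U):
        classes.setdefault(frozenset(G[v]), []).append(v)
    removed = {v for members in classes.values() for v in members[k + 1:]}
    # Return the induced subgraph H = G[V \ removed].
    return {v: G[v] - removed for v in G if v not in removed}
```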
## 7 Some Future Directions
In this paper, we studied the Hunters and Rabbit game by defining the notion of monotonicity for this game. Using this notion of monotonicity, we characterised the monotone hunter number for various classes of graphs. Moreover, we established that, unlike in several graph searching games, monotonicity comes at a price in this game, i.e., \(h(G)\) can be arbitrarily smaller than \(mh(G)\).
There are still several challenging open questions in this area. The most important among them is the computational complexity of Hunters and Rabbit. Although our results establish that computing \(mh(G)\) is \(\mathsf{NP}\)-hard, the computational complexity of computing/deciding \(h(G)\) remains open, even when \(G\) is restricted to be a tree.
We also established that both Hunters and Rabbit, as well as its monotone variant, are \(\mathsf{FPT}\) parameterised by \(vc(G)\) by designing exponential kernels. It is not difficult to see that both of these variants admit AND Composition parameterised by the solution size (by taking the disjoint union of the instances). Thus, since computing \(mh(G)\) is \(\mathsf{NP}\)-hard and \(pw(G)\leq mh(G)\leq pw(G)+1\), it is unlikely for Monotone Hunters and Rabbit parameterised by \(k+pw(G)\) to admit a polynomial compression. Note that the same cannot be argued about Hunters and Rabbit since it is not yet proved to be \(\mathsf{NP}\)-hard. Moreover, since \(mh(G)\) is closely related to \(pw(G)\) and pathwidth admits a polynomial kernel with respect to \(vc(G)\)[9], it might be interesting to see if deciding \(mh(G)\leq k\) (resp., \(h(G)\leq k\)) also admits a polynomial kernel when parameterised by \(vc(G)\). Moreover, another interesting research direction is to study the parameterised complexity of both these games by considering parameters such as solution size, treewidth, and pathwidth.
Finally, we propose some open questions concerning the computation of \(h(G)\) for various graph classes including trees, cographs, and interval graphs. Specifically, it will be interesting to design a polynomial time algorithm, similar to Algorithm 1, to compute \(h(T)\) for a tree \(T\), a question that was already proposed in [1]. The natural way that one could tackle this question is through the notion of monotonicity, which we defined and studied in this paper. Unfortunately, Theorem 7 implies that such an approach will not work. This means that a positive answer to this question (if any) would require the introduction of new tools and techniques. Moreover, it would be interesting to know the monotone hunter number of grids.
## 8 Acknowledgements
This work is partially funded by the project UCA JEDI (ANR-15-IDEX-01) and the EUR DS4H Investments in the Future (ANR-17-EURE-004) and the ANR Digraphs and the ERC grant titled PARAPATH and the IFCAM project "Applications of graph homomorphisms" (MA/IFCAM/18/39). |
2309.09413 | Are Soft Prompts Good Zero-shot Learners for Speech Recognition? | Large self-supervised pre-trained speech models require computationally
expensive fine-tuning for downstream tasks. Soft prompt tuning offers a simple
parameter-efficient alternative by utilizing minimal soft prompt guidance,
enhancing portability while also maintaining competitive performance. However,
not many people understand how and why this is so. In this study, we aim to
deepen our understanding of this emerging method by investigating the role of
soft prompts in automatic speech recognition (ASR). Our findings highlight
their role as zero-shot learners in improving ASR performance but also make
them vulnerable to malicious modifications. Soft prompts aid generalization but
are not obligatory for inference. We also identify two primary roles of soft
prompts: content refinement and noise information enhancement, which enhances
robustness against background noise. Additionally, we propose an effective
modification on noise prompts to show that they are capable of zero-shot
learning on adapting to out-of-distribution noise environments. | Dianwen Ng, Chong Zhang, Ruixi Zhang, Yukun Ma, Fabian Ritter-Gutierrez, Trung Hieu Nguyen, Chongjia Ni, Shengkui Zhao, Eng Siong Chng, Bin Ma | 2023-09-18T01:00:40Z | http://arxiv.org/abs/2309.09413v1 | # Are Soft Prompts Good Zero-shot Learners for Speech Recognition?
###### Abstract
Large self-supervised pre-trained speech models require computationally expensive fine-tuning for downstream tasks. Soft prompt tuning offers a simple parameter-efficient alternative by utilizing minimal soft prompt guidance, enhancing portability while also maintaining competitive performance. However, not many people understand how and why this is so. In this study, we aim to deepen our understanding of this emerging method by investigating the role of soft prompts in automatic speech recognition (ASR). Our findings highlight their role as zero-shot learners in improving ASR performance but also make them vulnerable to malicious modifications. Soft prompts aid generalization but are not obligatory for inference. We also identify two primary roles of soft prompts: content refinement and noise information enhancement, which enhances robustness against background noise. Additionally, we propose an effective modification on noise prompts to show that they are capable of zero-shot learning on adapting to out-of-distribution noise environments.
Dianwen Ng\({}^{1,2}\), Chong Zhang\({}^{1}\), Ruixi Zhang, Yukun Ma\({}^{1}\), Fabian Ritter-Gutierrez\({}^{2}\)
Trung Hieu Nguyen\({}^{1}\), Chongjia Ni\({}^{1}\), Shengkui Zhao\({}^{1}\), Eng Siong Chng\({}^{2}\), Bin Ma\({}^{1}\)†
\({}^{1}\)Speech Lab of DAMO Academy, Alibaba Group
\({}^{2}\)School of Computer Science and Engineering, Nanyang Technological University, Singapore
Footnote †: This work was supported by Alibaba Group through Alibaba Innovative Research (AIR) Program and Alibaba-NTU Singapore Joint Research Institute (URI), Nanyang Technological University, Singapore
Prompt Tuning, Explainable Prompt, Speech Recognition
## 1 Introduction
While large pre-trained speech models [1, 2, 3] like HuBERT [4] and WavLM [5] have acquired an inherent understanding of speech and context, it is nevertheless essential to fine-tune these models for downstream tasks. Full fine-tuning of the model is computationally expensive and does not scale well, requiring training millions of parameters for each downstream application. Additionally, the heavy cost of storing these large models for every individual task compromises their deployment and accessibility.
Soft prompt tuning [6, 7, 8] is a parameter-efficient tuning (PET) method that mitigates these limitations by calibrating a small number of trainable parameters corresponding to a fixed number of latent tokens referred to as "_prompts_". The soft prompts are prepended to the input embeddings to provide guiding signals for the model to better understand the task. The pre-trained model is kept completely frozen during this process, which makes the approach far more portable than full fine-tuned parameter weights. Besides, [9, 10, 11] have demonstrated that such a method is competitive with or even performs better than the fully fine-tuned model. Although prompt tuning has become a popular method for fine-tuning large pre-trained models, the precise mechanisms through which prompts enhance model performance remain an open question.
In [12], it is suggested that soft prompts assist in accelerating convergence and enhance attention. [6] argues that they may resemble natural language tokens, similar to hand-engineered prompts. However, [13] found that soft prompts do not generally correspond to natural language prompts. Notably, many of these explorations were conducted in the context of NLP and might not directly apply to speech models, leaving their conclusions unverified for speech. This motivates us to investigate the role of soft prompts within the context of the automatic speech recognition (ASR) task. Specifically, _what do the soft prompts mean to the model and how can we effectively exploit them?_ Our key findings are summarized as follows:
* We show that soft prompts can yield better performance, but the model is susceptible to bad modifications on the prompts, which could lead to increase recognition errors. However, we find that they are not necessarily required for inference, as the model can still perform well without them.
* We identified two primary roles of soft prompt: content refinement and augmenting noise information in the latent speech representations to improve robustness against background noise distortion in speech signals.
* We demonstrate an effective modification to the noise prompts to achieve zero-shot learning on adapting to out-of-domain noise environments.
## 2 Empirical Characterization of the Role of Soft-Prompts
In this section, we illustrate the network computation involved in soft-prompt tuning and outline the goals of this paper, which include the experimental setups.
### Network Computation of Prompt Tuning
Let \(\mathbf{P}\in\mathbb{R}^{m\times d}\) represent the trainable prompts comprising \(m\) tokens, each of dimension \(d\), matching the acoustic features denoted by \(\mathbf{X}\in\mathbb{R}^{T\times d}\). To incorporate these prompts into our networks, we prepend them to \(\mathbf{X}\), resulting in the augmented matrix \(\mathbf{X_{P}}:=[\mathbf{X},\mathbf{P}]\in\mathbb{R}^{(T+m)\times d}\). This new matrix, \(\mathbf{X_{P}}\), serves as the latent input for the transformer blocks. Then, considering a single-head attention layer, the output of the attention with prompt tuning is given by
\[\mathcal{O}=\varphi\Big{(}\frac{1}{\sqrt{d}}\mathbf{X_{P}}\mathbf{W_{Q}} \mathbf{W_{K}^{T}}\mathbf{X_{P}^{T}}\Big{)}\mathbf{X_{P}}\mathbf{W_{V}} \tag{1}\]
where \(\mathbf{W_{Q}}\), \(\mathbf{W_{K}}\) and \(\mathbf{W_{V}}\) are the frozen pre-trained weights for _query_, _key_ and _value_. \(\varphi\) denotes the softmax nonlinearity function that acts row-wise on a \((T+m)\times(T+m)\) matrix. We generalize this to the multi-headed attention layer, where the outputs of all heads are combined with \(\mathbf{W_{O}}\) as in
\[\text{MultiHead}=\text{Concat}\big{(}\mathcal{O}_{1},\mathcal{O}_{2},\dots, \mathcal{O}_{h}\big{)}\mathbf{W_{O}} \tag{2}\]
where \(\mathcal{O}\) is split into \(h\) heads, each with a dimension of \(d/h\).
Here, _soft-prompt tuning_ exclusively occurs within the self-attention module, providing an additive effect on the output representations. The latent features would attend to the prompt vectors through the behavior of the pre-trained self-attention module to receive an extra guiding signal that refines the representations for the downstream task.
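A minimal NumPy sketch of Eq. (1) may make this concrete. Only \(\mathbf{P}\) would be trainable in practice; the projection matrices are frozen, and the shapes below are illustrative rather than the model's actual configuration.

```python
import numpy as np

def softmax_rows(Z):
    Z = Z - Z.max(axis=-1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=-1, keepdims=True)

def prompt_attention(X, P, WQ, WK, WV):
    # X: (T, d) acoustic features; P: (m, d) trainable prompts;
    # WQ, WK, WV: frozen (d, d) pre-trained projections.
    XP = np.concatenate([X, P], axis=0)                   # X_P = [X, P], (T+m, d)
    d = XP.shape[1]
    A = softmax_rows(XP @ WQ @ WK.T @ XP.T / np.sqrt(d))  # (T+m, T+m) attention
    return A @ XP @ WV                                    # output of Eq. (1)
```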
Our goal is to investigate the impact of this tuning approach and empirically characterize the role of these prompt vectors in the context of automatic speech recognition (ASR), with training and testing corpora that may contain some background noise, as often encountered in real-world scenarios. We begin by evaluating the effectiveness of prompt tuning compared to the baseline, where the entire network is frozen and uses the same pre-trained speech encoder. We then seek to identify the factors contributing to the performance difference. Following that, we dissect the individual prompt vectors to analyze their role in fine-tuning the latent representations, revealing surprising and intriguing findings that allow for a simple yet effective modification of prompts that demonstrates the zero-shot learning potential in ASR domain adaptation.
### Training Details
We employ the HuBERT [4] encoder, which is trained on clean audio speech, as the backbone of our self-supervised pre-training model. Rather than choosing other noise-robust candidates, we particularly select this model to examine how the soft prompts adapt their trainable prompt vectors to provide instructions for handling noisy speech. This approach allows us to explore the prompts' ability to manage the adverse influence of noise independently of the noise-robust capability of the pre-training backbone. In our work, we mainly scrutinize the behavior of prompt tuning with \(m=20\) tokens, as they are relatively easier to manage. However, we also assess the generalization performance with a bigger prompt size of \(m=50\) and determine the potential gain in accuracy with increased prompt complexity. All models, including the baseline, are fine-tuned with the SUPERB [14] benchmark decoder. The learning rate is chosen by grid search within the range [2e-5, 3e-4], with 1e-4 being the best on LibriSpeech's dev-clean set. The remaining fine-tuning configurations follow the 100h configs provided by FairSeq. Note that all table results are obtained without using an LM.
### Datasets
We trained our models using 100h of synthesized noisy LibriSpeech [15] data, where each utterance is corrupted, 80% of the time, with a sampled noise at an SNR ranging from 0 to 20 dB. The noise dataset comes from FreeSound [16], comprising 16 kHz noise data categorized into stationary (Type A) and non-stationary (Type B) noise. Type A includes Car, Metro, and Traffic noises, whereas Type B consists of Babble, Airport/Station, Cafe, and AC/Vacuum noises. Each type of noise had 10 and 8 different audio streams in the training and evaluation sets, respectively, resulting in around 2 hours of noise audio. We evaluate the performance of all models on the official test-clean and test-other LibriSpeech datasets without any additional noise augmentation. In addition, we assess the performance on noisy ASR (test-noisy), which includes pre-mixed noisy LibriSpeech data ranging from 0 to 20 dB. This test-noisy dataset comprises a total of 4,200 instances of noisy test data. The noise data and pre-mixed noisy test sets are open-sourced [17].
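The paper does not spell out the corruption pipeline beyond the SNR range and mixing probability, but a typical implementation consistent with this description might look as follows (a sketch; the function and variable names are ours, not from the released data):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db, rng=np.random):
    # Tile/crop the noise to the utterance length, then rescale it so that
    # 10 * log10(P_speech / P_noise) equals the target SNR in dB.
    if len(noise) < len(speech):
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    start = rng.randint(0, len(noise) - len(speech) + 1)
    noise = noise[start:start + len(speech)]
    p_s = np.mean(speech ** 2)
    p_n = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_s / (p_n * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Corrupt 80% of utterances at an SNR drawn from [0, 20] dB, e.g.:
# x = mix_at_snr(x, n, np.random.uniform(0, 20)) if np.random.rand() < 0.8 else x
```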
## 3 Analytical Results
### Performance: Frozen HuBERT vs. Prompt Tuning
Before analyzing the prompt vectors to understand the relationship of the soft prompts to refining the representations for ASR, we measure the performance gain achieved by introducing soft-prompt tuning over the frozen vanilla HuBERT. Additionally, we assess the model's sensitivity by replacing the learned soft prompts with random standard Gaussians.
In Table 1, we observe that prompt tuning demonstrates
| Methods | No. of Prompts | Clean | Other | Noisy |
| --- | --- | --- | --- | --- |
| HuBERT (frozen, Base) | N.A. | 7.09 | 17.61 | 29.04 |
| Prompt Tuning | 20 | 6.64 | 16.30 | 27.06 |
| Replace Random Prompts | 20 | 8.27 | 22.63 | 43.69 |
| Prompt Tuning | 50 | 6.37 | 15.99 | 26.13 |
| Replace Random Prompts | 50 | 9.83 | 28.81 | 54.80 |

Table 1: WER (%) on the LibriSpeech test sets for the listed models, where the noisy set contains a subset of clean speech corrupted with SNR noise of 0-20dB.
better generalization to the ASR task than frozen HuBERT, resulting in reduced WER scores across all clean, other, and noisy test sets. The performance improvement becomes more pronounced as we increase the prompt token size, suggesting that a larger number of prompt tokens might provide more information for fine-tuning contextual representations. Furthermore, we note that the model is strongly influenced by the prompts. This is evident from the substantial performance degradation observed when random prompts are swapped in. This observation underscores the model's susceptibility to adversarial attacks and highlights the importance of prompt integrity and design in maintaining robustness.
Next, following the discussion in [18], we asked a similar question: **Is the improved performance of prompt tuning in noise robustness the result of making the representations _noise-variant_ or _noise-invariant_?** Given that we have access to the labels of the corrupted noise types in test-noisy, we performed global average pooling on the transformer output (taking into account the influence of the prompts) and conducted noise classification using a random forest. The results, depicted in Fig. 1, illustrate the accuracy obtained from the random forest. Notably, we observed that prompt tuning appears to inject noise information, leading to significantly improved accuracy in noise classification. Thus, prompt tuning helps to refine the representations to be more noise-variant.
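The paper does not report the probe's hyper-parameters or evaluation split; a minimal version of this noise-variance probe could be written as follows (sketch only, with assumed settings such as 200 trees and 5-fold cross-validation):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def noise_probe_accuracy(H, y, seed=0):
    # H: (N, T, d) transformer outputs for N noisy utterances;
    # y: (N,) labels of the mixed-in noise type.
    Z = H.mean(axis=1)                      # global average pooling over time
    clf = RandomForestClassifier(n_estimators=200, random_state=seed)
    return cross_val_score(clf, Z, y, cv=5).mean()
```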
### Exemplify the Roles of the Prompts
We attempted to visualize the relationship of the prompt vectors on a t-SNE plot. We found two distinctive clusters for a prompt size of 20 and three for a prompt size of 50. The color clustering in Fig. 2 is obtained from K-Means of the original soft-prompts. The plot strongly suggests that the learned prompts likely correspond to 2-3 primary roles in fine-tuning the contextual representations.
Subsequently, we identified that the two primary roles of prompt tuning with 20 tokens are likely responsible for content tuning and noise information tuning. Specifically, we assessed the impact of deactivating each prompt on recognition performance, as depicted in Fig. 3. In particular, we observed that disabling certain prompts led to an increase in recognition errors on clean speech (i.e., a content-rich corpus), indicating their role in content refinement. Conversely, the other set of prompts, within the orange zone, exhibited a minor impact, suggesting a trivial role in content tuning. We later argue that this set is responsible for the noise. Note that the two prompt sets coincide with the K-Means clustering in Fig. 2. The identified prompt sets are as listed in Footnote 1, and a sketch of this ablation is given below.
Footnote 1: Set 1: {1, 2, 3, 4, 5, 6, 7, 8, 9, 12, 13, 15}; Set 2: {10, 11, 14, 16, 17, 18, 19, 20}, where the indices refer to the prompt ID in Fig. 3.
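The leave-one-out ablation behind Fig. 3 can be sketched as follows; `evaluate_wer` is a hypothetical helper (not from the paper) that runs inference with a given prompt matrix and returns the WER on a test set.

```python
import numpy as np

def prompt_ablation(evaluate_wer, P, test_set):
    # P: (m, d) learned prompt matrix. Dropping row i measures how much
    # prompt i contributes to recognition performance.
    base = evaluate_wer(P, test_set)
    deltas = {}
    for i in range(P.shape[0]):
        P_i = np.delete(P, i, axis=0)                       # disable prompt i
        deltas[i + 1] = evaluate_wer(P_i, test_set) - base  # keyed by prompt ID
    return deltas
```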
| Methods | Prompts (Set) | Clean | Other | Noisy |
| --- | --- | --- | --- | --- |
| Prompt Tuning (20) | Full | 6.64 | 16.30 | 27.06 |
| Prompt Tuning (20) | Set 1 | 6.65 | 16.34 | 27.21 |
| Prompt Tuning (20) | Set 2 | 6.78 | 16.61 | 27.85 |

Table 2: WER (%) of the prompt tuning (20) model with different prompt sets on the LibriSpeech test sets.
Figure 1: Boxplot of accuracy in classifying noisy speech into their mixed noise types using the model’s pooled latent representations that carry embedded noise information.
Figure 3: WER of prompt tuning (20) on the different test sets after removing a single prompt vector, indexed by prompt ID.
Figure 2: t-SNE plot of the prompt vectors, displaying the cluster relationship of each prompt in the downsampled 2D space.
We then validate the above claim about the two sets by measuring the sensitivity of the performance when inferring input speech with each partition set, as presented in Table 2. We observe that the WER is statistically unchanged when employing prompt set 1, whereas omitting it results in increasing error, plausibly caused by the lack of content refinement. Likewise, Fig. 4 shows that employing prompt set 2 yields more noise-variant features compared to prompt set 1, strongly suggesting its role in noise information tuning.
Finally, we ask: **How do the soft prompts contribute to the optimization of the downstream decoder and predictor head?** Table 3 presents the performance of the prompt tuning model when the learned soft prompts are not prepended during utterance inference, resulting in an architecture identical to frozen HuBERT. This table illustrates the optimization process and the generalization capabilities of the downstream decoder and predictor head under the influence of the prompts. While we anticipated a slight increase in recognition errors, it is surprising to observe that performance actually improves compared to the base frozen HuBERT, even without utilizing soft prompts during inference. Notably, the model with a prompt size of 50 exhibits an approximate 7% relative gain, indicating improved optimization where the soft prompts guide the latent representations to assist the downstream decoder and predictor head in converging to a better local minimum. As such, we can exploit this property by training a prompt tuning model and then removing the soft prompts, obtaining the improvement at zero additional inference cost.
### Zero-shot Learning on Out-of-Domain Noise
In this section, we ask: **Can we use the characterized role of the prompts to achieve zero-shot learning on out-of-domain noisy speech?** Previously, we observed that noise prompts improve noisy ASR by imbuing noise information for robust adaptation. We propose to perform a domain shift on the noise prompt set for zero-shot transfer to OOD noise environments. This process introduces new inductive biases to facilitate domain adaptation for the current model. In particular, given a sample set of OOD noise audio, we extract acoustic representations from the existing prompt tuning model, pool them globally, and average to obtain a single vector that represents the inductive bias of the OOD noise. We broadcast this vector to match the number of noise prompts, \(\mathbf{P_{N}}\), and apply a Hadamard product for the domain shift to obtain the new noise prompts, \(\mathbf{P_{N}^{*}}\) (Set 2). The new prompts become \(\mathbf{P^{*}}:=[\mathbf{P_{C}},\mathbf{P_{N}^{*}}]\), where \(\mathbf{P_{C}}\) refers to the content tuning prompts (Set 1).
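The proposed noise-prompt shift amounts to only a few array operations. A minimal NumPy sketch following the description above (the function and variable names are ours):

```python
import numpy as np

def shift_noise_prompts(P_C, P_N, ood_feats):
    # P_C: (m_c, d) content-tuning prompts (Set 1), kept unchanged.
    # P_N: (m_n, d) noise prompts (Set 2).
    # ood_feats: list of (T_i, d) acoustic representations extracted from
    # the prompt tuning model for sample OOD noise clips.
    bias = np.mean([f.mean(axis=0) for f in ood_feats], axis=0)  # (d,) inductive bias
    P_N_star = P_N * bias                  # broadcast, then Hadamard product
    return np.concatenate([P_C, P_N_star], axis=0)               # P* = [P_C, P_N*]
```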
Table 4 uses separate OOD noise from FSD50K [19], where we select office-related background noise to corrupt the official LibriSpeech Dev and Test sets and thereby simulate noisy (0-20dB) audio. We observe around 2.5 to 4.6% improvement on the evaluation sets, compared to the vanilla prompt tuning model, without utilizing additional computational resources to retrain the model for the new environmental domain. This has valuable real-world applicability and could potentially save costs on ASR adaptation.
## 4 Conclusion
In conclusion, our study has aided the understanding of prompt tuning for ASR. We have found that soft prompts enhance ASR performance; although they are susceptible to malicious modifications, they are not obligatory for inference. Our analysis identified two key roles for soft prompts: content refinement and noise information enhancement, which improve robustness against background noise. Lastly, we have proposed a modification of the noise prompts to allow zero-shot learning adaptation. We hope that our work adds valuable insights that help better appreciate the functions of soft prompts and develop more effective ways to use them.
| Methods | Dev-Clean | Dev-Other | Test-Clean | Test-Other |
| --- | --- | --- | --- | --- |
| HuBERT (frozen, Base) | 20.78 | 36.03 | 19.86 | 35.89 |
| Prompt Tuning (20) | 19.67 | 34.77 | 18.69 | 34.62 |
| + Noise-Shifted Prompts | **19.18** | **33.67** | **17.83** | **33.64** |

Table 4: WER (%) of different models on out-of-domain noise-augmented sets from LibriSpeech, where noisy speech is simulated with SNR of 0-20dB.
Figure 4: Boxplot of accuracy in classifying noisy speech into their mixed noise types using the model's pooled latent representations with the different soft prompt (20) subsets of Footnote 1.
| Methods | Prompts | Clean | Other | Noisy |
| --- | --- | --- | --- | --- |
| HuBERT (frozen, Base) | N.A. | 7.09 | 17.61 | 29.04 |
| Prompt Tuning | 20 | 6.64 | 16.30 | 27.06 |
| + Remove All Prompts | 20 | 6.78 | 16.63 | 28.25 |
| Prompt Tuning | 50 | 6.37 | 15.99 | 26.13 |
| + Remove All Prompts | 50 | 6.60 | 16.39 | 28.37 |

Table 3: WER (%) of different models, exhibiting the generalization power of the downstream decoder and predictor head when the use of all soft prompts is removed during the forward pass.
2309.06720 | Deep Attentive Time Warping | Similarity measures for time series are important problems for time series
classification. To handle the nonlinear time distortions, Dynamic Time Warping
(DTW) has been widely used. However, DTW is not learnable and suffers from a
trade-off between robustness against time distortion and discriminative power.
In this paper, we propose a neural network model for task-adaptive time
warping. Specifically, we use the attention model, called the bipartite
attention model, to develop an explicit time warping mechanism with greater
distortion invariance. Unlike other learnable models using DTW for warping, our
model predicts all local correspondences between two time series and is trained
based on metric learning, which enables it to learn the optimal data-dependent
warping for the target task. We also propose to induce pre-training of our
model by DTW to improve the discriminative power. Extensive experiments
demonstrate the superior effectiveness of our model over DTW and its
state-of-the-art performance in online signature verification. | Shinnosuke Matsuo, Xiaomeng Wu, Gantugs Atarsaikhan, Akisato Kimura, Kunio Kashino, Brian Kenji Iwana, Seiichi Uchida | 2023-09-13T04:49:49Z | http://arxiv.org/abs/2309.06720v1 | # Deep Attentive Time Warping
###### Abstract
Similarity measures for time series are important problems for time series classification. To handle the nonlinear time distortions, Dynamic Time Warping (DTW) has been widely used. However, DTW is not learnable and suffers from a trade-off between robustness against time distortion and discriminative power. In this paper, we propose a neural network model for task-adaptive time warping. Specifically, we use the attention model, called the bipartite attention model, to develop an explicit time warping mechanism with greater distortion invariance. Unlike other learnable models using DTW for warping, our model predicts all local correspondences between two time series and is trained based on metric learning, which enables it to learn the optimal data-dependent warping for the target task. We also propose to induce pre-training of our model by DTW to improve the discriminative power. Extensive experiments demonstrate the superior effectiveness of our model over DTW and its state-of-the-art performance in online signature verification.
keywords: Dynamic time warping, attention model, metric learning, time series classification, online signature verification
## 1 Introduction
Measuring similarity is one of the most important tasks for time series recognition. For example, similarity gives an essential criterion for classifying time series. Many applications, such as activity recognition, computational auditory scene analysis, computer security, electronic health records, and biometrics (e.g., handwritten signature verification) [1], use time series similarity for recognition. One difficulty in measuring similarity is due to nonlinear time distortions. The distortions can appear as temporal shifts, stretches and contractions, and other various nonlinear temporal fluctuations.
To be invariant to nonlinear time distortions, Dynamic Time Warping (DTW) [2] has been widely utilized. Let \(\mathbf{A}=\mathbf{a}_{1},\ldots,\mathbf{a}_{i},\ldots,\mathbf{a}_{I}\) and \(\mathbf{B}=\mathbf{b}_{1},\ldots,\mathbf{b}_{j},\ldots,\mathbf{b}_{J}\) denote two time series, where both \(\mathbf{a}_{i}\) and \(\mathbf{b}_{j}\) are \(D\)-dimensional feature vectors. As shown in Fig. 1(a), DTW establishes a "hard" correspondence between \(\mathbf{A}\) and \(\mathbf{B}\) as a path on a two-dimensional plane, or an \(I\times J\) binary matrix. Here the term "hard" implies "one or zero;" the \((i,j)\)-th element of the matrix becomes 1 if \(\mathbf{a}_{i}\) and \(\mathbf{b}_{j}\) are "matched (i.e., corresponding)," and zero otherwise. The correspondence is determined so as to minimize the distance between \(\mathbf{A}\) and \(\mathbf{B}\) by dynamic programming (DP).
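For reference, the DP computation of the classical DTW distance described above can be written compactly (a minimal sketch, assuming a Euclidean local cost between frames; the paper's exact local cost is not restated here):

```python
import numpy as np

def dtw_distance(A, B):
    # A: (I, D), B: (J, D). Classical DTW under the monotonicity,
    # continuity, and boundary constraints, solved by DP.
    I, J = len(A), len(B)
    cost = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # (I, J)
    D = np.full((I + 1, J + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, I + 1):
        for j in range(1, J + 1):
            D[i, j] = cost[i - 1, j - 1] + min(
                D[i - 1, j - 1],   # diagonal match
                D[i - 1, j],       # a_i matched again
                D[i, j - 1],       # b_j matched again
            )
    return D[I, J]
```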
In DTW, several hand-crafted constraints are often assumed for controlling the correspondence. Traditionally, monotonicity, continuity, and boundary constraints have been utilized.
Figure 1: (a) DTW conducts a hard correspondence between two sequences \(\mathbf{A}\) and \(\mathbf{B}\). (b) The proposed _deep attentive time warping_ is composed of a bipartite attention module, which generates an attention weight matrix \(\mathbf{P}_{s}\). (c) The attention weight matrix \(\mathbf{P}_{s}\) represents soft correspondence between two sequences \(\mathbf{A}\) and \(\mathbf{B}\).
These constraints are appropriate and acceptable in many applications but encounter trade-off problems in the warping flexibility. If the constraints are too loose, DTW causes "over-warping" that cancels important inter-class differences and loses its discriminative power. If we add more constraints heuristically (like [3; 4]) to avoid over-warping, DTW then cannot remove intra-class distortions sufficiently.
In recent years, _deep metric learning_ has been applied to various classification tasks. Deep metric learning is a machine learning technique to learn an adaptive feature space that takes into account the similarity (or dissimilarity) relationships among data [5; 6; 7; 8]. In the typical formulation, a Siamese or triplet neural network is trained to learn an embedding space, where closeness between embeddings (i.e., features extracted from the network) encodes the level of similarities between the data samples. It enforces the embeddings to lie close if the samples belong to the same class, and pushes them apart if different.
Deep metric learning has had many successful results for image classification tasks [9; 10; 11]. For time series, in contrast, there is still room for improvement. As detailed in Section 2, past attempts either suffer from the loss of useful temporal information [12; 13; 14] or are not explicitly invariant to nonlinear time distortions [15; 16; 17].
In this paper, we propose a novel neural network-based time warping model, called _deep attentive time warping1_. The proposed method is based on a novel learnable time warping mechanism with contrastive metric learning. Its key idea is a novel attention model, called the _bipartite attention module_. As shown in Fig. 1 (b), this module takes two time series as inputs and predicts an _attention weight matrix_. This matrix represents the "soft" correspondence between all time steps of the two inputs, as shown in Fig. 1 (c). By training the bipartite attention module appropriately for a specific task, we can realize time warping that can mitigate the trade-off between robustness against time distortion and
discriminative power. In other words, the learned soft correspondence will enhance important inter-class differences and, at the same time, remove intra-class distortions.
The proposed method has great versatility and can be used in two different scenarios: a stand-alone scenario and a plug-in scenario. As shown in Fig. 2 (a), the former takes two inputs \(\mathbf{A}\) and \(\mathbf{B}\) and determines their difference by utilizing their original feature representation. In the latter scenario, we use existing contrastive metric learning frameworks with the standard DTW. Then, as shown in Figs. 2 (b-1) and (b-2), we replace the DTW with the proposed method. Consequently, our deep attentive time warping is combined with contrastive representation learning and the entire framework becomes fully trainable for better (i.e., contrastive) time warping and feature representation.
We conduct extensive experiments to demonstrate the superior effectiveness of the proposed method. We first conduct two experiments in the stand-alone scenario to confirm how the proposed method provides reasonable time warping for classification. Through qualitative and quantitative evaluations on the well-known Unipen [18] and University of California Riverside (UCR) [19] datasets, we prove that the proposed method achieves better classification performance with effective warping than the other time warping techniques. We then conduct another experiment in the plug-in scenario. Specifically, through an online signature verification experiment, we prove that the proposed method achieves state-of-the-art performance by outperforming other learnable time warping methods.

Figure 2: The proposed deep attentive time warping can be used in two scenarios. One is a stand-alone scenario (a), and the other is a plug-in scenario (b-1) and (b-2). In (b-1) and (b-2), blue boxes represent a neural network for contrastive representation learning.
The main contributions of this paper are summarized as follows:
* A novel neural network-based time warping method, called deep attentive time warping, is proposed by introducing a bipartite attention module. It is learnable, task-adaptive, and improves the trade-off between robustness against time distortion and discriminative power. We also show that a two-step training process enhances the performance.
* We show the high versatility of the proposed deep attentive time warping by using it in two different scenarios, stand-alone and plug-in.
* Extensive experiments on more than 50 public datasets demonstrate the superior effectiveness of the proposed method over DTW as a stand-alone time warping model.
* We experimentally show that the proposed method in the plug-in scenario achieves better performance than state-of-the-art learnable time warping methods in an online signature verification task.
## 2 Related Work
### Dynamic time warping
DTW [2] (standard DTW) is a time warping method that has been used for a long time as a time series similarity measure invariant to nonlinear time distortion. As noted in Section 1, DTW can determine the hard correspondence between \(\mathbf{A}\) and \(\mathbf{B}\). While DTW exhibits great distortion invariance, it may cause over-warping that often results in incorrect classification.
There are many attempts to improve the performance of DTW. To suppress over-warping, early studies [2; 3] proposed to put a warping window as an
additional constraint to the standard monotonicity and continuity constraints. Roughly speaking, \(\mathbf{a}_{i}\) is able to match with one of \(\mathbf{b}_{i-w},\ldots,\mathbf{b}_{i},\ldots,\mathbf{b}_{i+w}\), where \(w\) is the window width. A smaller \(w\) yields fewer over-warping cases. In [20; 21], the warping path is penalized by the difference of \(i\) and \(j\) of the matched \(\mathbf{a}_{i}\) and \(\mathbf{b}_{j}\). Soft-DTW [22] is an interesting attempt to replace the min operation with a soft-min operation, which is realized by logarithmic and exponential functions. With this replacement, DTW becomes differentiable and can be built into various machine learning frameworks.
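The soft-min replacement can be sketched as follows (a minimal NumPy sketch of the soft-DTW forward recursion under our own naming; the smoothing parameter \(\gamma\) follows the usual convention that \(\gamma\to 0\) recovers standard DTW):

```python
import numpy as np

def soft_min(values, gamma):
    """Differentiable soft-min: -gamma * log(sum(exp(-v / gamma)))."""
    v = -np.asarray(values, dtype=float) / gamma
    m = v.max()  # log-sum-exp shift for numerical stability
    return -gamma * (m + np.log(np.exp(v - m).sum()))

def soft_dtw(A, B, gamma=1.0):
    """Soft-DTW value between A (I x D) and B (J x D)."""
    I, J = len(A), len(B)
    R = np.full((I + 1, J + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, I + 1):
        for j in range(1, J + 1):
            cost = np.sum((A[i - 1] - B[j - 1]) ** 2)
            R[i, j] = cost + soft_min(
                [R[i - 1, j], R[i, j - 1], R[i - 1, j - 1]], gamma)
    return R[I, J]
```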
### DTW with deep metric learning
In recent years, more efforts have been made on deep metric learning for time series [12; 13; 14; 15; 16; 17]. They are based on a feature extraction mechanism with a Siamese network, which is trained by a loss function evaluating the distance between the features. The extracted features from time series are either global or local. Compared to the standard DTW, these methods achieve better accuracy; however, they do not treat the temporal distortion explicitly. This means that they do not warp one time series to another and, thus, it is impossible to introduce an explicit control of warping flexibility.
Several metric learning methods introduce DTW for an explicit removal of temporal distortions. More specifically, they introduce the standard DTW before or after a Siamese network. Prewarping Siamese Network (PSN) [23] and Time Aligned Recurrent Neural Networks (TARNN) [24] perform DTW between two time series and then feed the warped result to a Siamese network for metric learning. In contrast, Deep DTW (DDTW) [25] first extracts a sequence of local features from each time series and then performs DTW. With the introduction of DTW, these methods could achieve better performance than simple metric learning methods. Note that they do not learn the warping characteristics; their temporal distortion removal ability relies on the standard DTW.
A few methods [26; 27] have been proposed for learning warping characteristics. They calculate a quasi-binary _matchability_ \(\Phi(i,j)\) between each \((i,j)\) pair. Then, all \(IJ\) point-wise distances between \(i\in[1,I]\) and \(j\in[1,J]\) are aggregated by using \(\Phi(i,j)\); if \(\Phi(i,j)\sim 1\), the distance between \(i\) and \(j\) is taken into account. Since the matchability is determined _independently_ for each \((i,j)\) pair by a neural network, it is time-consuming to obtain all \(IJ\) matchability results. More importantly, this independent determination process cannot control the global warping characteristics, which have been carefully treated even in the standard DTW.
Furthermore, this paper significantly extends our preliminary conference paper [28]. First, we newly propose a plug-in scenario, where our deep attentive time warping is utilized as a differentiable module in a large classification system. We further confirm that the plug-in usage of our technique achieves state-of-the-art performance in large-scale signature verification tasks. Moreover, for the stand-alone scenario, we conduct more extensive comparison experiments on over 50 classification tasks in the UCR dataset, whereas only four tasks were tackled in [28]. Technical details are also newly elaborated in this paper.
## 3 Deep Attentive Time Warping
### Overview
We propose _deep attentive time warping_, a novel neural network-based time warping method. As noted in Section 1, the proposed method can be used to evaluate the distance/dissimilarity between two time series (e.g., series of raw signals or deep features) \(\mathbf{A}\) and \(\mathbf{B}\) in its stand-alone scenario of Fig. 2 (a). It also can be used as an attention-based feature extractor in its plug-in scenario, as shown in Fig. 2 (b).
As shown in Figs. 1 (b) and (c), the _bipartite attention module_ generates the attention weight matrix \(\mathbf{P}_{s}\), which represents time warping between \(\mathbf{A}\) and \(\mathbf{B}\) as a soft temporal correspondence. The bipartite attention module is trained by metric learning with contrastive loss. The resulting matrix \(\mathbf{P}_{s}\) is expected to provide not only distortion invariance but also discriminative power, both of which are appropriate for the target task.
### Time warping with the bipartite attention module
The details of the bipartite attention module are shown in Fig. 3. In the bipartite attention module, two multivariate time series \(\mathbf{A}\) and \(\mathbf{B}\) are first combined by "outer concatenation" to form a two-dimensional array of the concatenated vectors of \(\mathbf{a}_{i}\) and \(\mathbf{b}_{j}\) (i.e., a third-order tensor). Specifically, by replicating \(\mathbf{A}\) horizontally \(J\) times and \(\mathbf{B}\) vertically \(I\) times, we obtain two \(I\times J\times D\) tensors and concatenate them into an \(I\times J\times 2D\) tensor. The tensor is then input to a Fully Convolutional Network (FCN) which functions as an attention model. In this paper, we utilize U-Net as the FCN.
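The outer concatenation can be sketched as follows (a minimal PyTorch sketch; the function name is ours):

```python
import torch

def outer_concat(A, B):
    """Outer concatenation: A (I, D) and B (J, D) -> an (I, J, 2D) tensor.

    A is replicated horizontally J times and B vertically I times; the two
    stacks are concatenated along the channel axis and fed to the FCN.
    """
    I, J = A.shape[0], B.shape[0]
    A_rep = A.unsqueeze(1).expand(I, J, A.shape[1])  # (I, J, D)
    B_rep = B.unsqueeze(0).expand(I, J, B.shape[1])  # (I, J, D)
    return torch.cat([A_rep, B_rep], dim=-1)
```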
Before outputting an attention weight matrix \(\mathbf{P}_{s}\), a row-wise softmax operation is applied to the output of the FCN, so that the values in each row sum to 1. This operation is important for using \(\mathbf{P}_{s}\) as the soft correspondence, as shown in Fig. 1 (c). Consequently, the attention weight matrix \(\mathbf{P}_{s}\) is used for warping \(\mathbf{B}\), as shown in Fig. 4 (a). The time warping of \(\mathbf{B}\) is simply given by the matrix product \(\mathbf{P}_{s}\mathbf{B}\) and is expected to be similar to \(\mathbf{A}\). In a similar way, we also have another matrix \(\mathbf{P}_{t}\), which warps \(\mathbf{A}\) to be \(\mathbf{P}_{t}\mathbf{A}\sim\mathbf{B}\), as shown in Fig. 4 (b). The matrix \(\mathbf{P}_{t}\) is given by first transposing the output of the FCN and then applying the row-wise softmax operation.

Figure 3: Overview of the bipartite attention module.

Figure 4: The time warping by the attention weight matrix.
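The row-wise softmax and the warping by matrix product can be sketched as follows (a minimal PyTorch sketch under our own naming, taking the raw FCN output as input):

```python
import torch
import torch.nn.functional as F

def soft_warp(scores, A, B):
    """scores: (I, J) raw FCN output; returns B warped toward A and vice versa.

    The row-wise softmax makes each row of P_s sum to 1, so P_s @ B is a
    convex combination of the frames of B, i.e., a soft time warping.
    """
    P_s = F.softmax(scores, dim=1)      # (I, J): soft correspondence, warps B
    P_t = F.softmax(scores.t(), dim=1)  # (J, I): transpose first, then softmax
    return P_s @ B, P_t @ A
```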
As clarified above, the matrix \(\mathbf{P}_{s}\) (and \(\mathbf{P}_{t}\)) is used as an attention for controlling the time series \(\mathbf{B}\) (\(\mathbf{A}\)) to be similar to \(\mathbf{A}\) (\(\mathbf{B}\)). The bipartite attention module derives the matrices \(\mathbf{P}_{s}\) and \(\mathbf{P}_{t}\) at the same time by utilizing the two-dimensional nature of the outer-concatenated representation of \(\mathbf{A}\) and \(\mathbf{B}\). In other words, \(\mathbf{A}\) is used to attend to individual elements of \(\mathbf{B}\) and vice versa. This mutual attention is analogous to the cost matrix of the so-called bipartite matching problem. We, therefore, call our special attention scheme bipartite attention and differentiate it from popular attention schemes such as additive attention [29] and dot-product attention [30; 31].
Since we use an FCN (U-Net) in the bipartite attention module, the proposed method, theoretically, can handle time series samples with variable lengths. Namely, the lengths \(I\) and \(J\) can be different among samples. In the later experiments, however, we use a fixed-length time series by following the traditional experimental setup of the comparative methods (such as DDTW and PSN). This fixed-length condition also allows efficient batch-based training.
### Learning attention model with contrastive loss
To achieve time warping with sufficient time distortion invariance and discriminative power for a specific task, we learn the bipartite attention model with the following _dual contrastive loss_:
\[\mathcal{L}(\mathbf{A},\mathbf{B})=\mathcal{L}_{s}(\mathbf{A},\mathbf{B})+ \mathcal{L}_{t}(\mathbf{A},\mathbf{B}). \tag{1}\]
Both \(\mathcal{L}_{s}\) and \(\mathcal{L}_{t}\) are formulated as a contrastive loss [32] specialized for the proposed method. More specifically, \(\mathcal{L}_{s}\) is formulated as

\[\mathcal{L}_{s}(\mathbf{A},\mathbf{B})=\begin{cases}\frac{1}{ID}\|\mathbf{A}-\mathbf{P}_{s}\mathbf{B}\|_{\mathrm{F}}^{2}&\text{if a same-class pair},\\ \max\big(0,\tau-\frac{1}{ID}\|\mathbf{A}-\mathbf{P}_{s}\mathbf{B}\|_{\mathrm{F}}^{2}\big)&\text{otherwise},\end{cases} \tag{2}\]

where \(\tau\) is the hyper-parameter for the margin and \(\|\cdot\|_{\mathrm{F}}\) denotes the Frobenius norm. If \(\mathbf{A}\) and \(\mathbf{B}\) are a same-class pair, the distance between the input \(\mathbf{A}\) and the warped input \(\mathbf{P}_{s}\mathbf{B}\) is minimized. If not, their distance is optimized to be larger than \(\tau\). The other loss \(\mathcal{L}_{t}\) is defined analogously, using \(\|\mathbf{B}-\mathbf{P}_{t}\mathbf{A}\|_{\mathrm{F}}\) and \(J\) (the length of \(\mathbf{B}\)) instead. Fig. 5 (b) summarizes the above process to train the bipartite attention module.

Figure 5: The bipartite attention module is optimized in a two-step manner. In the first step, the module is pre-trained to mimic DTW, and in the second step, the module is optimized by contrastive training.
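For one input pair, the dual contrastive loss can be sketched as follows (a minimal PyTorch sketch; function and argument names are ours):

```python
import torch

def dual_contrastive_loss(A, B, P_s, P_t, same_class, tau=1.0):
    """Eqs. (1)-(2): L = L_s + L_t for one pair.

    A: (I, D), B: (J, D); P_s: (I, J) and P_t: (J, I) come from the
    bipartite attention module; same_class marks a same-class pair.
    """
    I, D = A.shape
    J = B.shape[0]
    d_s = ((A - P_s @ B) ** 2).sum() / (I * D)  # (1/ID) * squared Frobenius norm
    d_t = ((B - P_t @ A) ** 2).sum() / (J * D)
    if same_class:
        return d_s + d_t  # pull the warped pair together
    # push different-class pairs apart, up to the margin tau
    return torch.clamp(tau - d_s, min=0.0) + torch.clamp(tau - d_t, min=0.0)
```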
It should be emphasized that the above contrastive learning has a clear advantage over the standard DTW. The objective of the standard DTW is to minimize the distance between two time series regardless of whether they belong to the same class or not. Therefore, the standard DTW often underestimates the distance for different-class pairs. In contrast, the proposed method considers their classes and therefore can have appropriate time distortion invariance and discriminability at the same time.
### Pre-training with the standard DTW
The proposed deep attentive time warping has much more warping flexibility than the standard DTW. As reviewed in Section 2.1, several constraints, such as monotonicity and continuity, are imposed to control the warping path in the standard DTW. Since the proposed method does not have such constraints, it has higher flexibility. However, too much flexibility is, of course, not appropriate for many applications.
We, therefore, introduce a pre-training phase with the standard DTW so that the proposed method can mimic the DTW before starting its main training phase. Specifically, as shown in Fig. 5 (a), we prepare a binary matrix \(\mathbf{P}_{\mathrm{DTW}}\) showing the DTW path between \(\mathbf{A}\) and \(\mathbf{B}\), then pre-train the FCN to minimize
the following loss function:
\[\mathcal{L}_{\text{pre}}(\mathbf{P}_{s},\mathbf{P}_{\text{DTW}})=\frac{1}{IJ}\| \mathbf{P}_{s}-\mathbf{P}_{\text{DTW}}\|_{\text{F}}^{2}. \tag{3}\]
After the above pre-training phase, the attention model is further trained using the contrastive loss, as described in Section 3.3. By this two-step training scheme, the proposed method can avoid excessive warping flexibility, while keeping more flexibility than the standard DTW. We confirm the positive effect of pre-training through ablation studies in later experiments.
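The pre-training target and loss of (3) can be sketched as follows (a minimal PyTorch sketch; helper names are ours, and the DTW path is assumed to be given as a list of index pairs):

```python
import torch

def dtw_path_matrix(path, I, J):
    """Binary I x J target P_DTW with ones on the DTW warping path [(i, j), ...]."""
    P = torch.zeros(I, J)
    for i, j in path:
        P[i, j] = 1.0
    return P

def pretraining_loss(P_s, P_dtw):
    """Eq. (3): mean squared deviation of the predicted attention from the DTW path."""
    I, J = P_dtw.shape
    return ((P_s - P_dtw) ** 2).sum() / (I * J)
```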
## 4 Preliminary Experiments in Stand-Alone Scenario
We conducted the experimental evaluation of the proposed deep attentive time warping in the stand-alone scenario. As shown in Fig. 2 (a), this scenario uses the proposed method for calculating a distance between two time series and then the distance can be used in, for example, a nearest-neighbor classifier. First, we conduct qualitative evaluations and show the behavior of the proposed method on the online handwritten character dataset, called Unipen, as a simple example. Next, we conduct quantitative evaluations and show the proposed method has a better trade-off between robustness against time distortion and discriminative power than DTW on 52 datasets of the famous UCR Archive.
Note that the experiments in this section mainly aim to confirm the time warping ability of the proposed method, and thus the comparative study will be made with rather traditional DTW methods. The comparisons with state-of-the-art learnable time warping methods will be shown in the next section.
### Qualitative evaluations using online handwritten samples
#### 4.1.1 Unipen Dataset
Unipen [18] is comprised of several subsets and we used the most popular ones, Unipen 1a (digits, 10 classes, 7,562 samples in total), Unipen 1b (uppercase alphabet, 26 classes, 6,039 samples), and Unipen 1c (lowercase alphabet, 26 classes, 10,712 samples), for the evaluation. Each sample is a sequence of
2D pen-tip coordinate vectors. For the detailed comparison with fixed-length methods (such as SVM, 1D-CNN, and Siamese), linear resampling was performed on each sample so that its temporal length became 50. The 2D coordinates were normalized to the range \([-1,1]\). For each class, 200 samples were randomly selected for validation and another 200 for testing. All the remaining samples were used for training.
#### 4.1.2 Implementation details
The network architecture in the bipartite attention module follows the original U-net [33], except for an additional batch normalization layer after each convolutional layer. The learning rate was set to 0.0001, and Adam [34] was used as the optimizer. Before pre-training, the network weights were initialized by He initialization [35]. The batch size was set to 512. During training, same-class and different-class pairs were loaded in a ratio of \(1:2\). The hyperparameter \(\tau\) in the contrastive loss was set to 1. The maximum iterations for pre-training of Fig. 5 (a) and the main contrastive training of (b) were set at \(1,000\) and \(10,000\), respectively; and the best model (i.e., the best iteration number) was chosen by the evaluation with the validation set.
For quantitative evaluation, we conducted a classification experiment using the distance by the proposed method. For each test sample, its distances to all training samples were calculated by the proposed method, and \(k\)-nearest neighbor classification with \(k=1\) was performed to determine its class label. As the distance between \(\mathbf{A}\) and \(\mathbf{B}\), we use the following "symmetric" distance:
\[d(\mathbf{A},\mathbf{B})=\frac{1}{ID}\|\mathbf{A}-\mathbf{P}_{s}\mathbf{B}\|_ {\mathrm{F}}^{2}+\frac{1}{JD}\|\mathbf{B}-\mathbf{P}_{t}\mathbf{A}\|_{ \mathrm{F}}^{2}. \tag{4}\]
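In code, this symmetric distance can be sketched as follows (a minimal sketch; the name is ours, and the arrays follow the shapes used above):

```python
def symmetric_distance(A, B, P_s, P_t):
    """Eq. (4): symmetric distance between A (I, D) and B (J, D)."""
    I, D = A.shape
    J = B.shape[0]
    return (((A - P_s @ B) ** 2).sum() / (I * D)
            + ((B - P_t @ A) ** 2).sum() / (J * D))
```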
The proposed method achieved 99.0, 98.0, and 95.5% classification accuracies for Unipen 1a, 1b, and 1c, respectively, whereas the standard DTW achieved 98.4, 96.0, and 94.1%. This shows that the proposed method achieves sufficient accuracy and, therefore, that the following qualitative evaluation results are reliable.
#### 4.1.3 Qualitative evaluation results
Figs. 6 (a) and (b) show the results on three test samples of Unipen 1b and 1c, respectively. Those test samples are correctly classified by the proposed method and not by DTW. For each test sample, the top three nearest neighbors by distance of the proposed method and those by the DTW distance are shown. The attention weight matrices and the DTW matching paths are also shown.
In Fig. 6 (a), the proposed method classifies the test sample as 'B' correctly by attention matrices that resemble the DTW path. In contrast, DTW classifies this 'B' as 'R' incorrectly. It should be emphasized that the proposed method provided an almost meaningless attention matrix between 'B' and 'R,' _intentionally_. This is because the proposed method tries to differentiate them as an expected effect of its contrastive learning. Similar attention matrices are found in other cases. Since DTW has no such function, it always gives smooth correspondence and gets a small distance that causes misclassification.

Figure 6: A visualization of the matching path of the improved samples compared with DTW. The character in the red box shows the ground truth.
The third nearest neighbor of 'J' in the middle column of Fig. 6 (a) shows another benefit of the proposed method. This 'J' shows a different stroke order. Since the proposed method does not have a strict monotonicity constraint, its attention map deals with the stroke order variation. From these results, we can observe that the proposed method has an appropriate time warping flexibility that realizes both sufficient time distortion invariance and discriminability.
Fig. 7 shows the distribution of test samples by the multi-dimensional scaling (MDS). Three distance metrics, Euclidean, DTW, and the proposed method, are used for these MDS visualizations. For the proposed method, the distance between a pair of test samples **A** and **B** is evaluated by (4).
These distributions prove that the distance by DTW is more discriminative than Euclidean, and the distance by the proposed method is far more discriminative than DTW. For example, the overlap between 'U' and 'V' in Unipen 1b by the DTW distance disappears in the proposed method. The contrastive metric learning in the proposed method realizes this discriminability, as expected.
Table 1 shows the error rates for binary classifications between ambiguous class pairs in Unipen 1b and 1c. Fig. 8 shows the normalized histograms of the distances by DTW and the proposed method for the ambiguous class pairs in Unipen 1b. The histogram of DTW shows a large overlap between the same- and different-class pairs, whereas the proposed method does not. These results also prove the sufficient discriminative power of the proposed method.

\begin{table}
\begin{tabular}{l r r r r r} \hline \hline & \multicolumn{2}{c}{Unipen 1b} & \multicolumn{3}{c}{Unipen 1c} \\ \cline{2-6} Method & ‘J’ vs. ‘T’ & ‘U’ vs. ‘V’ & ‘g’ vs. ‘y’ & ‘h’ vs. ‘k’ & ‘h’ vs. ‘n’ \\ \hline ours & 3.5 & 6.5 & 3.0 & 3.0 & 6.5 \\ DTW & 14.0 & 11.5 & 12.0 & 8.0 & 10.0 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Error rates (%) between confusing classes (Unipen). Error rates in red indicate the least rate of each case.
### Quantitative analysis on UCR dataset
#### 4.2.1 UCR Dataset
The University of California Riverside (UCR) Time Series Classification Archive (2015 edition) [19] is a famous benchmark comprised of 85 different univariate time series datasets. Among them, we selected 52 datasets satisfying the following two conditions. The first condition requires that more than 100 training samples are available. The second requires that the sample length (\(I\) and \(J\)) should be less than 1,000. The proposed method can, theoretically, deal with any sample length (even longer than 1,000); however, in practice, overly long samples cause memory issues (as in other trainable time warping methods). In each dataset, all the samples are already regulated to have the same length. UCR prepares a training sample set and a test sample set for each dataset. Among the training samples, 90% is used for training and 10% for validation. All time series were standardized for each channel to have a mean of zero and a variance of one.

Figure 7: The visualization of the test samples by MDS on (a) Unipen 1b and (b) 1c.
#### 4.2.2 Implementation details
The model architecture, learning rate, optimizer, hyper-parameter, number of iterations, and inference protocol are the same as in Section 4.1.2. The batch size was determined for the maximum memory utilization of the GPU (Tesla V100).
We compared the proposed method with the standard DTW (DTW) [2], window-DTW (w-DTW) [2] and soft-DTW (s-DTW) [22]. The optimal values of the hyperparameters in the comparative methods, as well as the proposed method, were chosen by the validation set2. As noted above, 10% of UCR training set were used as the validation set. For example, the hyperparameter \(\gamma\) in s-DTW was chosen from 0.01, 0.1, 1, 10, 100 using the validation set.
Figure 8: Distance histograms for two confusing class pairs, ‘J’ vs. ‘T’ and ‘U’ vs. ‘V,’ of Unipen 1b. For each pair, the left histogram is about DTW distance and the right is the proposed method.
We used the 1-Nearest Neighbor (1-NN) rule as the classifier. More specifically, we used the distance given by the proposed method and then compared each test sample with all training samples. The class of the training sample with the minimum distance was considered as the classification result. We used the same 1-NN classification approach for the comparative methods.
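The 1-NN rule over such a learned distance can be sketched as follows (a minimal sketch; the helper name is ours, and `distance_fn` stands for Eq. (4) or any comparative distance):

```python
import numpy as np

def one_nn_classify(test_seqs, train_seqs, train_labels, distance_fn):
    """1-NN rule: each test sample gets the label of its nearest training sample."""
    predictions = []
    for X in test_seqs:
        dists = [distance_fn(X, Y) for Y in train_seqs]
        predictions.append(train_labels[int(np.argmin(dists))])
    return predictions
```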
#### 4.2.3 Quantitative evaluation results
Table 2 shows classification error rates by the proposed method (ours) and the comparative methods, i.e., DTW, w-DTW, and s-DTW, on 52 UCR2015 datasets3. As an ablation study, the performance of the proposed method without the pre-training phase is also listed in this table. The error rates in red and blue indicate the least rate and the second least rate, respectively.
Footnote 3: In UCR2018 [36], we found six datasets that satisfy the same conditions as UCR2015. The experimental evaluation results on these datasets are given in Appendix A.
For many datasets, the proposed deep attentive time warping achieved lower error rates than the traditional DTW methods. This fact is confirmed by the number of wins; the proposed method shows the lowest error rates for 23 among 52 datasets.
Fig. 9(a) shows a pair-wise comparison between DTW and the proposed method. Each point corresponds to one of the 52 datasets. The 36 points below the diagonal line indicate that the proposed method achieved a lower error rate than DTW for those datasets. Consequently, this figure also demonstrates the higher effectiveness of the proposed method.

Figure 9: Error rate comparison. Each point corresponds to one of the 52 datasets.
As an ablation study, we observed the performance change by removing the pre-training phase. The accuracies of the proposed method without pre-training are shown in the rightmost column of Table 2 ("w/o pre-train.") and summarized in Fig. 9(b) as a pairwise comparison with the method with pre-training. These results show that the positive effect of pre-training is confirmed on 36 datasets among the 52.
In order to make our evaluation more reliable, we conducted McNemar's test between the proposed method and each comparative method. The test results are shown in Table 2. If an error rate by a comparative method is printed in **bold**, the proposed method was superior to the comparative method with statistical significance at the 5% level by McNemar's test. If printed in _italic_, the proposed method is inferior with significance at the 5% level.
From the results of McNemar's test, we can confirm the superiority of the proposed method over the comparative methods. More specifically, among the 52 datasets, the proposed method was superior to DTW, w-DTW, s-DTW, and the variant without pre-training with statistical significance at the 5% level on 20, 22, 18, and 11 datasets, respectively. We can also see that inferior cases (italic) are much rarer than superior cases (bold). The row "All" in Table 2 shows the error rates over all test samples in all 52 datasets. McNemar's test results in the "All" row also show that the proposed method was superior to all the comparative methods at the 5% level; precisely speaking, it was superior even at the 1% level.
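For reference, McNemar's test over paired classification outcomes can be sketched as follows (a minimal SciPy-based sketch with the usual continuity correction; names are ours):

```python
from scipy.stats import chi2

def mcnemar_p_value(b, c):
    """McNemar's test with continuity correction.

    b: samples the proposed method got right and the baseline got wrong;
    c: samples the proposed method got wrong and the baseline got right.
    Returns the p-value; p < 0.05 marks a significant difference.
    """
    stat = (abs(b - c) - 1.0) ** 2 / (b + c)  # chi-squared statistic, 1 dof
    return chi2.sf(stat, df=1)
```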
In order to understand the characteristics of the proposed method, we analyzed the relationship between several dataset features (e.g., dataset size, time length, etc.) and win-lost cases. Among these features, time length shows the most evident relationship, as shown in Fig. 10. This figure shows the histograms of win cases and lost cases with respect to sample time length. To emphasize
the difference between win cases and lost cases, we picked up the datasets that show statistical significance at the 5% level by McNemar's test in Table 2.
The histogram suggests that the lost cases are found for the datasets with very short or very long time lengths. A possible reason for this phenomenon is the fixed network architecture of the bipartite attention module. For example, for longer samples, the network is too shallow to exchange the information between their beginning and ending parts. In future work, we can try to use different network architectures according to the time length of samples.
We further compare the proposed method to results reported in the literature. We collected results that propose distance measures for a 1-NN classifier, similar to the proposed method. The comparative methods use a wide variety of distance measure mechanisms, including derivative based methods, Complexity-Invariant Distance (CID) [37], Derivative Transform Distance (DTD\({}_{C}\)) [38], and DTW Derivative Distance (DD\({}_{DTW}\)) [39], dictionary distance based methods, Bag of Patterns (BOP) [40] and Bag of Symbolic Fourier Approximation Symbols (BOSS) [41], and elastic distance based methods, Move-Split-Merge (MSM) [42], Time Warp Edit Distance (TWED) [21], and Weighted DTW (WDTW) [20].
Table 3 lists the methods and the number of times the proposed method has a higher accuracy than the comparison method (Wins), the number of times it had a lower accuracy (Losses), the number of ties, and the total number of datasets used in the comparisons. Note that each comparison method reports its results on different datasets within the 2015 UCR Time Series Archive. Therefore, we only count the datasets that are available. Also, since we limit the proposed method to datasets with more than 100 training patterns, we exclude the reported datasets with fewer than 100 training patterns.

Figure 10: Histogram of win cases and lost cases with respect to time length. Note that we only picked up the datasets that show statistical significance at the 5% level by McNemar’s test in Table 2 for emphasizing the difference between win cases and lost cases.
For most of the comparison methods, the proposed method performed much better. For BOSS and CID, the proposed method only had a small advantage, and for DTD\({}_{C}\), the proposed method had fewer wins. This demonstrates that the proposed method not only provides effective warping but also serves as a robust distance measure for classification.
## 5 Experiments in Plug-in Scenario
We further conducted the experimental evaluation of the proposed deep attentive time warping in the plug-in scenario of Fig. 2 (b). The aims of this experiment are twofold. First, we evaluate the proposed method in a more practical task that needs representation learning in addition to time warping. Second, we compare the performance of the proposed method with state-of-the-art learnable time warping methods for the task.

\begin{table}
\begin{tabular}{l r r r r} \hline \hline & \multicolumn{4}{c}{Proposed Method} \\ \cline{2-5} Method & Wins & Losses & Ties & Total \\ \hline BOP [40] & **8** & 2 & 0 & 10 \\ BOSS [41] & **12** & 11 & 0 & 23 \\ CID [37] & **13** & 8 & 0 & 21 \\ DD\({}_{DTW}\)[39] & **8** & 3 & 0 & 11 \\ DTD\({}_{C}\)[38] & 9 & **13** & 0 & 22 \\ MSM [42] & **8** & 3 & 0 & 11 \\ TWED [21] & **9** & 2 & 0 & 11 \\ WDTW [20] & **7** & 2 & 0 & 9 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison between the proposed method and comparative methods on the 2015 UCR Time Series Archive datasets. The total number of datasets for each method is determined by the intersection of the datasets used by the proposed method and reported by the comparison methods.
We focused on online signature verification, which is the task of deciding whether a test signature is a genuine signature or a skilled forgery (imitated by a forger). The reasons for using this task are as follows. The first and most important reason is that several learnable time warping methods have been applied to the same public dataset, MCYT-100. As far as the authors know, there are no other common tasks to which various learnable time warping methods are applied. Second, this task requires representation learning; recent performance improvements on online signature verification owe much to representation learning. Third, this task still needs further improvement; even though recent methods achieve a 1.0% equal error rate (EER), the verification error should be further minimized because of the reliability expected of the task.
Figure 11: Examples of online signatures in MCYT-100.
### Online Signature Dataset
As one of the most common datasets for online signature verification, we use MCYT-100 dataset [43]. This dataset contains 100 subjects, each having 25 genuine signatures and 25 skilled forgeries. Fig. 11 shows several examples from the dataset. Each signature has five channels: 2D coordinates, pressure, azimuth, and altitude angles of the pen tip. We resized the temporal length to 1,024 by following the experimental setup of the comparative methods (such as PSN and DDTW).
According to the tradition of the task, we conduct multiple experiments under different train-test ratios. Specifically, the first \(\eta\%\) of subjects (\(\eta\in\{50,60,70,80,90\}\)) were used for training and the remaining subjects for testing. For testing, the first five genuine signatures of each subject in the test set were used as reference signatures. The remaining genuine signatures and all the skilled forgeries were used as test signatures. For each test signature, its distances to the corresponding five reference signatures were averaged. Based on this averaged distance, all test signatures of all subjects were sorted to form a ranking list. EER was finally calculated as the traditional evaluation metric of the task.
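The EER computation over this ranking can be sketched as follows (a minimal NumPy sketch under our own naming; it sweeps a decision threshold over the averaged distances):

```python
import numpy as np

def equal_error_rate(distances, is_genuine):
    """Approximate EER from the averaged distances to the five references.

    distances: one averaged distance per test signature; is_genuine: boolean
    array, True for genuine signatures, False for skilled forgeries.
    """
    d = np.asarray(distances, dtype=float)
    g = np.asarray(is_genuine, dtype=bool)
    eer, best_gap = 1.0, np.inf
    for t in np.sort(d):  # sweep the decision threshold
        far = np.mean(d[~g] <= t)  # forgeries wrongly accepted
        frr = np.mean(d[g] > t)    # genuine signatures wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer
```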
### Comparative methods
In the plug-in scenario experiment, we used two elementary methods and three state-of-the-art methods. The former are DTW [2] and a simple Siamese network (Siamese). DTW takes either raw signatures or handcrafted features [44] as input. The Siamese network was trained with either a global contrastive loss [32] or a local embedding loss [23].
The state-of-the-art methods are Prewarping Siamese Network (PSN) [23], Time-Aligned Recurrent Neural Networks (TARNN) [24], and Deep DTW (DDTW) [25]. In these methods, DTW is embedded in a metric learning framework to realize learnable time warping. More specifically, the original PSN and TARNN use DTW before the Siamese networks, whereas the original DDTW uses it after. As the Siamese networks, PSN and DDTW employ a CNN, whereas TARNN employs an RNN.
For the comparison, we plugged the proposed method into the above methods and observed how the performance changed. Specifically, we replace the DTW module in PSN, DDTW, and TARNN with the proposed method and then train the entire network by the process described in Section 5.3.
### Implementation details
Fig. 12 shows the three-step training process for the deep attentive time warping plugged into PSN, TARNN, and DDTW. First, the original model with the standard DTW is trained with its original loss functions. (The details can be found in the original papers [23; 24; 25].) Second, the bipartite attention module is pre-trained with the loss (3) so that it provides an attention weight matrix similar to the DTW result. Finally, the bipartite attention module is trained with the entire network, while keeping the weights of the Siamese network fixed. It is theoretically possible to train the entire network in an end-to-end manner. However, our preliminary trials proved that this three-step process gives more stable results.
Figure 12: Three-step training process for the plug-in scenario (for PSN/TARNN). Each blue box is the Siamese network for contrastive representation learning. Note that for DDTW, the Siamese network is placed _before_ the bipartite attention module.
The network architecture of the bipartite attention module is the same as in the stand-alone scenario. The learning rate was chosen from \(0.1\), \(0.01\), and \(0.001\) in Steps 1 and 3, and set to \(0.001\) in Step 2. The hyper-parameter \(\tau\) in the contrastive loss was set to \(1.4\) by using the validation set. Adam was used as the optimizer. The training was conducted up to \(10,000\) iterations in Steps 1 and 3, and up to \(1,000\) iterations in Step 2. The batch size was set to \(30\) (\(10\) same-class pairs and \(20\) different-class pairs).
### Results
Table 4 shows the EERs on MCYT-100. The accuracies of the state-of-the-art methods, i.e., PSN, TARNN, and DDTW, have all been improved by replacing DTW with the proposed method. This proves that the proposed method is consistently effective as a plug-in to existing learnable time warping frameworks. In addition, from the comparison between PSN and DDTW, the plug-in location (i.e., before or after the representation learning module) is not very important.

\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \multicolumn{5}{c}{Percentage (\(\eta\)\%) of Training Data} \\ \cline{2-6} Method & 90 & 80 & 70 & 60 & 50 \\ \hline DTW [2; 44] & 4.00 & 3.00 & 4.17 & 4.37 & 4.60 \\ w/ raw signatures & 5.00 & 6.25 & 5.73 & 6.37 & 6.96 \\ Siamese & 5.50 & 6.80 & 6.27 & 7.33 & 8.40 \\ w/ local embedding loss [23] & 3.50 & 3.40 & 3.75 & 3.75 & 5.50 \\ \hline PSN [23] & 1.50 & 2.25 & 3.17 & 2.75 & 3.00 \\ + ours (plug-in) & 1.00 & 1.75 & 2.33 & 2.13 & 2.70 \\ w/o pre-training & 1.00 & 2.50 & 3.67 & 3.50 & 4.10 \\ \hline TARNN [24] & 1.00 & 3.00 & 3.50 & 4.25 & 4.50 \\ + ours (plug-in) & 0.50 & 2.25 & 2.67 & 2.88 & 2.80 \\ w/o pre-training & 1.50 & 2.50 & 3.17 & 3.25 & 5.00 \\ \hline DDTW [25] & 1.00 & 2.20 & 2.53 & 2.25 & 2.40 \\ + ours (plug-in) & 0.50 & 2.00 & 2.33 & 2.13 & 2.20 \\ w/o pre-training & 1.50 & 4.00 & 4.83 & 3.50 & 3.90 \\ \hline \hline \end{tabular}
\end{table}
Table 4: EERs (%) of online signature verification on MCYT-100. EERs in red and blue indicate the least and the second least rates, respectively.
Removing pre-training from the proposed method degrades the performance significantly. This fact confirms the necessity of the proposed pre-training method in improving discriminative power and stabilizing the inference of bipartite attention matrices.
Fig. 13 shows the distance histograms of the same-class and different-class pairs by the proposed method and DDTW. The proposed method shows a much smaller overlap than DDTW. DDTW focuses only on contrastive representation learning; in contrast, the proposed method trains the attention weight matrix (i.e., soft-correspondence) in contrastive learning, in addition to representation learning. This makes the proposed method more discriminative than DDTW.
Figure 13: Distance histograms by the proposed method and DDTW [25] on MCYT-100. The horizontal axis is the distance and the vertical axis is normalized.

## 6 Conclusion

In this paper, we proposed a novel neural network-based time warping method, called deep attentive time warping. The proposed method is based on a new attention module, called the bipartite attention module, between two time series inputs. The module is trained by contrastive metric learning to achieve a learnable and task-adaptive time warping and to improve the trade-off between robustness against time distortion and discriminative power. The effectiveness of the proposed method was confirmed through two scenarios. The first was a stand-alone scenario, where the proposed method was used as a learnable time warping method and compared with the standard DTW and other time warping methods. Through qualitative and quantitative evaluations with the Unipen and UCR datasets, the expected effectiveness was confirmed. The second was a plug-in scenario, where the proposed method is embedded in neural network-based metric learning frameworks with representation learning. Through a comparative study with state-of-the-art learnable time warping methods, the effectiveness of the proposed method was further confirmed.
The limitations of this paper are as follows. First, in the current framework, the regulation of the warping flexibility relies on the pre-training to mimic the standard DTW and the soft constraints implicitly imposed by the trained bipartite attention module; this means there is no explicit penalty for the violation of several reasonable regulations, such as monotonicity and continuity of warping. Although we confirmed the performance superiority over the standard DTW with those warping regulations, there is still a possibility that the introduction of some explicit regulations will further improve the performance. Second, we did not optimize the network architectures according to the characteristics of each dataset. In this paper, we used the same architecture for all UCR2015 datasets, and therefore the performance degrades for very long or very short time-series samples, as revealed by the analysis in Section 4.2.3. From a practical viewpoint, architecture optimization for better performance will be important future work. Third,
we have not directly utilized the soft-correspondence (represented by the attention weight matrix) in the final distance evaluation. In fact, the attention weight matrix can be seen as a novel feature showing the relationship between two time series, and therefore we can extract some useful features from it for final distance evaluation and/or final decision making.
## Acknowledgments
This work was partially supported by MEXT-Japan (Grant No. J17H06100 and J22H00540).
|
2309.10624 | 6G Underlayer Network Concepts for Ultra Reliable and Low Latency
Communication in Manufacturing | Underlayer networks in the context of 6G for manufacturing are crucial. They
address the evolving needs of highly interconnected and autonomous systems in
industry. The digitalization of manufacturing processes, driven by the Internet
of Things and increased data availability, enables more efficient and
demand-driven production. However, wireless connectivity, which offers
flexibility and easy integration of components, comes with challenges such as
signal interference or high latency. A new management system is needed to
coordinate and route traffic of multiple networks in a specific coverage area.
This paper proposes underlayer networks designed for manufacturing, providing
low latency, reliability, and security. These networks enable wireless
connectivity and integration of wireless technologies into the manufacturing
environment, enhancing flexibility and efficiency. The paper also discusses
network slicing, spectrum sharing, and the limitations of current wireless
networks in manufacturing. It introduces a network concept for underlayer
networks and evaluates its application in closed-loop communication for machine
tools. The study concludes with future research prospects in this area. | Daniel Lindenschmitt, Jan Mertes, Christian Schellenberger, Marius Schmitz, Bin Han, Jan C. Aurich, Hans D. Schotten | 2023-09-19T14:02:25Z | http://arxiv.org/abs/2309.10624v1 | # The _Hamilton-Jacobi Equations_
###### Abstract
We consider the _Hamilton-Jacobi Equations_
\[\begin{pmatrix}\dot{\phi}_{1}&\phi_{2}&\phi_{3}\\ \dot{\phi}_{4}&\phi_{5}&\phi_{6}\\ \dot{\phi}_{7}&\phi_{8}&\phi_{9}\\ \dot{\phi}_{10}&\phi_{11}&\phi_{12}\end{pmatrix} \tag{1.1}\]
where \(\phi_{i}\) is the _Hamilton-Jacobi Equation_
\[\begin{pmatrix}\dot{\phi}_{1}&\phi_{1}&\phi_{13}\\ \dot{\phi}_{14}&\phi_{15}&\phi_{16}\end{pmatrix} \tag{1.2}\]
\(\dot{\phi}_{10}\)\(\dot{\phi}_{11}\)\(\dot{\phi}_{12}\)\(\dot{\phi}_{13}\)\(\dot{\phi}_{14}\)\(\dot{\phi}_{15}\)\(\dot{\phi}_{16}\)\(\dot{\phi}_{17}\)\(\dot{\phi}_{18}\)\(\dot{\phi}_{19}\)\(\dot{\phi}_{19}\)\(\dot{\phi}_{19}\)\(\dot{\phi}_{12}\)\(\dot{\phi}_{13}\)\(\dot{\phi}_{14}\)\(\dot{\phi}_{15}\)\(\dot{\phi}_{16}\)\(\dot{\phi}_{17}\)\(\dot{\phi}_{18}\)\(\dot{\phi}_{
# 6G Underlayer Network Concepts for Ultra Reliable and Low Latency Communication in Manufacturing
Daniel Lindenschmitt
Institute for Wireless Communication and Navigation
RPTU Kaiserslautern-Landau
D-67663 Kaiserslautern
[email protected] Jan Mertes
Institute for Manufacturing Technology and Production Systems
RPTU Kaiserslautern-Landau
D-67663 Kaiserslautern
[email protected] Christian Schellenberger
Institute for Wireless Communication and Navigation
RPTU Kaiserslautern-Landau
D-67663 Kaiserslautern
[email protected] Marius Schmitz
Institute for Manufacturing Technology and Production Systems
RPTU Kaiserslautern-Landau
D-67663 Kaiserslautern
[email protected] Bin Han
Institute for Wireless Communication and Navigation
RPTU Kaiserslautern-Landau
D-67663 Kaiserslautern
[email protected] Jan C. Aurich
Institute for Manufacturing Technology and Production Systems
RPTU Kaiserslautern-Landau
D-67663 Kaiserslautern
[email protected] Hans D. Schotten
Institute for Wireless Communication and Navigation
RPTU Kaiserslautern-Landau
D-67663 Kaiserslautern
[email protected]
###### Abstract
Underlayer networks in the context of 6G for manufacturing are crucial. They address the evolving needs of highly interconnected and autonomous systems in industry. The digitalization of manufacturing processes, driven by the Internet of Things and increased data availability, enables more efficient and demand-driven production. However, wireless connectivity, which offers flexibility and easy integration of components, comes with challenges such as signal interference or high latency. A new management system is needed to coordinate and route traffic of multiple networks in a specific coverage area. This paper proposes underlayer networks designed for manufacturing, providing low latency, reliability, and security. These networks enable wireless connectivity and integration of wireless technologies into the manufacturing environment, enhancing flexibility and efficiency. The paper also discusses network slicing, spectrum sharing, and the limitations of current wireless networks in manufacturing. It introduces a network concept for underlayer networks and evaluates its application in closed-loop communication for machine tools. The study concludes with future research prospects in this area.
**Keywords** Underlayer networks, 6G, manufacturing, network slicing, spectrum sharing, uRLLC, closed-loop communication, network management
## 1 Introduction
Underlayer networks play a crucial role in the context of 6G for manufacturing as they address the evolving needs of highly interconnected and autonomous systems in the industry. The advancing digitalization of manufacturing processes has opened up new possibilities and flexible concepts, driven by the Internet of Things (IoT) and the availability of a greater volume of data in shorter time-frames. This data influx enables more efficient and demand-driven production, with the concept of batch size 1 emerging as a key driver for maximum flexibility and adaptability in manufacturing.
For a successful transition from traditional production facilities to data-driven and autonomous cyber-physical production systems (CPPS), the interconnection of components is essential. Historically, wired solutions were preferred for this purpose. However, the shift towards wireless connectivity of all manufacturing process components has become a critical aspect of the conversion process. Wireless data transmission between different parts of a production facility allows a more flexible system design and easier integration of new components, enabling manufacturers to rapidly adapt to changing market demands and production requirements.
While wireless connectivity brings numerous benefits, there are also related challenges that need to be considered. Compared to wired transmission, wireless transmission is generally less robust and can be subject to interference, signal attenuation, and environmental factors. Additionally, wireless transmission often incurs higher latencies, which can be a limitation for real-time and safety-critical applications that require near-instantaneous response time.
The implementation of the 5G standard has introduced a multitude of innovative features to radio networks, including the emergence of private networks. This progress has not only expanded the reach of 5G technology to conventional Mobile Network Operators (MNO) and telecommunication companies but has also made it accessible to any interested businesses. In accordance with the standard, national regulatory authorities are able to define a specific frequency range, which is exclusively reserved and allocated for private networks, strictly prohibiting its nationwide usage by telecommunication companies. Governments and regulatory authorities have the flexibility to govern private networks in a unique manner.
This is where underlayer networks for 6G in manufacturing come into play. These networks are designed to address the specific requirements of the industry, including ultra-low latency, high bandwidth, reliability and security. By providing a robust and efficient communication infrastructure, underlayer networks enable wireless connectivity between different components of a manufacturing process while ensuring that safety-critical applications can be supported. This allows for the seamless integration of wireless technologies into the manufacturing environment, enhancing flexibility, adaptability and efficiency. By bringing the idea of private networks and the concept of underlayer networks together, a new management system is needed which is able to coordinate and route the traffic of multiple networks in a specific coverage area.
In Section 2, we summarize current research topics in the areas of network slicing and spectrum sharing for 5G systems and a subsequent 6G standard. Further, we explain why a new underlayer network concept is not possible with current technologies. Additionally, we highlight the gap in current 5G systems with respect to robust and low-latency communication from the point of view of a CPPS. Afterwards, in Section 3, we introduce a network concept for manufacturing which is based on underlayer networks and their efficient management. We implement this concept for closed-loop communication for machine tools in Section 4 and evaluate new purposes with respect to Ultra Reliable and Low Latency Communications (uRLLC), followed by Section 5 with a conclusion and an outlook on future work in this area.
## 2 Related Work
### Network Slicing & Spectrum Sharing
Network slicing, a key technology in 5G networks, has gained significant attention due to its ability to create virtual networks with customized characteristics to meet specific application requirements.
In [1] the authors provide an overview of network slicing in 5G networks, specifically highlighting its application in underlayer networks and discuss the challenges, benefits and potential use cases. The focus of [2] is on network slicing in underlayer networks for 5G. An analysis of the architecture and implementation challenges of network slicing is presented. Furthermore, a framework for efficient and scalable network slicing in this context is proposed. As a comprehensive survey, the authors of [3] focus on the enabling technologies for network slicing in 5G networks, with a particular emphasis on their application in underlayer networks. They cover various aspects, including network slicing architectures, resource management, security and service orchestration. In [4] the authors investigate resource allocation techniques for network slicing in 5G underlayer IoT networks. They address the challenges of efficient resource allocation, propose algorithms and approaches to optimize resource utilization and enhance the performance of network slicing in this context.
While network slicing tries to fulfill all demands of an MNO in the virtual configuration domain on the same hardware resources, spectrum sharing is another way of shaping private mobile networks to the desired functionalities on the physical layer by using different hardware resources. Appropriate spectrum sharing mechanisms can increase spectral efficiency and thus the overall capacity of a wireless network. Additionally, each owner of a part of the spectrum in the private network domain is able to configure its underlayer networks according to its demands.
In [5] the authors discuss the significance of spectrum efficiency in 5G networks and highlight the use of advanced spectrum sharing techniques to improve it. The survey focuses on cognitive radio, device-to-device communication, in-band full-duplex communication, non-orthogonal multiple access and Long Term Evolution on unlicensed spectrum, providing an overview of their principles and research methodologies. The challenges of deploying these techniques in the context of evolving 5G networks are addressed, along with the integration of multiple spectrum sharing techniques and potential challenges. Regulatory aspects of spectrum sharing are also an important topic in this area. The authors of [6] and [7] highlight the increasing need for practical solutions to effectively share spectrum bands in next-generation wireless networks. They review various spectrum sharing methods and categorize them based on their operational frequency regime (licensed or unlicensed bands). They also explore potential implementation scenarios and necessary amendments for legacy cellular networks. The paper also discusses the applications of artificial intelligence and machine learning techniques in facilitating spectrum sharing and identifies open research challenges for future investigations.
Spectrum sharing in an upcoming 6G standard can be a promising method to optimize spectrum utilization and might be able to establish an efficient way of implementing underlayer networks. Developing effective management models and considering regulatory aspects are crucial for the success of this technology in a future 6G communication system. In Section 3, an underlayer network concept is introduced which establishes robust and low-latency communication via a network management system.
### Communication Technologies in Manufacturing
CPPS are characterized by high flexibility, scalability, and reconfigurability. Therefore, a high degree of interconnection between entities is needed to enable decentralized and distributed computing units [8].
Depending on the hierarchical level of the information exchange from the enterprise to the field device level, as defined in ISO IEC 62264-1:2013 [9], different requirements regarding the communication technology exist. Enterprise-level data transmission requires many connected devices as well as high data rates and data integrity. This is usually not subject to strict limitations in terms of latency and reliability. However, for low-level field device communication and real-time closed-loop process control, reliable and low-latency communication is required [10]. Especially for factory automation on the machine tool level, low latencies between 0.5 and 10 ms, high reliability with a packet loss rate of \(10^{-9}\) [11], and time determinism [12] are needed.
Due to that, currently mainly wired technologies (e.g. fieldbus and Ethernet-based systems) are deployed for use cases with real-time requirements [13]. However, to meet the required scalability and flexibility for CPPS, wireless communication networks are needed [14].
Next to the combination of different, heterogeneous solutions for industrial wireless networks to support different industrial use cases [15], mobile communication networks are especially important for automation in manufacturing due to their easy deployment and scalability. Moreover, unlike other wireless radio solutions, cellular networks can meet different communication requirements for different use cases simultaneously [16].
In particular, the development of the 5G mobile communications standard with a focus on industrial applications is intended to provide the wireless infrastructure for CPPS.
However, currently no industrial wireless network - including available private 5G networks with Release 15 [17] - can meet the requirements for closed-loop control of machine tools. Due to that, dedicated and often proprietary wireless networks have to be used to enable a range of use cases in manufacturing. In order to maintain the advantage of centralized, wireless administration, the network should be implemented as an underlayer network, which can be administered from a higher-level network and enables seamless data transmission across different networks. An architecture for underlayer networks that meets the requirements for uRLLC and enables wireless closed-loop control of a machine tool is presented in the following section.
## 3 6G underlayer network concepts for manufacturing
In order to increase productivity and reduce costs, it will be necessary to establish an efficient way of data communication in manufacturing systems of the future, which is tailored to the needs of the respective application and still ensures connectivity between all components. By establishing the concept of 6G underlayer networks, we provide a solution for interconnection and coordination of various devices and systems within a factory. They serve as the backbone that enables real-time data exchange, synchronization and collaboration between industrial robots, machine tools, sensors, control systems and other smart machines.
### General approach
Since the introduction of the 5G mobile communications standard and the associated possibility of licensing part of the frequency spectrum as private spectrum independently of the MNOs (e.g. in Germany in the range from 3.7 GHz to 3.8 GHz), a large number of new possibilities and applications have emerged. Their requirements differ from those of public cellular networks with regard to, e.g., network control or the integration of additional functionalities. Network slicing has created the first opportunities in 5G to adapt mobile networks more easily to different requirements, and spectrum sharing is already being used to divide public and private networks.
In a future 6G standard, the requirements for flexibility and simpler operation will increase further, especially with regard to private networks, which is why existing technologies will have to be adapted. By establishing so-called underlayer networks, it will be possible to operate a very fine-granular network structure that is also capable of ensuring communication with an overlaying network. Underlayer networks are independent cellular communication networks that can be adapted to the respective requirements of the application and, compared to the use of non-cellular standards such as WiFi, offer significant advantages in the area of robust and low-latency communication due to a significantly higher degree of determinism in the network. Independently, other communication standards can be used as underlayer networks if they meet the specifications of the application in terms of Quality of Service (QoS) parameters. This ensures parallel operation of cellular and non-cellular networks in different frequency ranges, which can exchange data via a corresponding gateway if required. Due to the need for networks with different configurations and QoS requirements, or for different private mobile networks operated by more than one operator in the same area, it is necessary to introduce central network configuration and control. Underlayer networks are connected to the overlay network via a gateway. This gateway is used to transmit configuration commands from the central network controller to the underlayer network and to exchange data between overlay and underlayer networks. Particularly when data is exchanged between different operators, trustworthy communication between all parties in the network must be ensured [18]. With the introduction of 6G underlayer networks, application-oriented communication can be established, which can adapt to changing requirements in an organic and agile way by means of network control [19].
### Network configuration
Currently, three distinct networks are planned. They share 100 MHz of spectrum between 3.7 and 3.8 GHz, which can be licensed from the Bundesnetzagentur (BNetzA)1 for local use. The spectrum is split into two 20 MHz blocks and one 60 MHz block. The two smaller blocks are used for the underlayer networks: one for the uRLLC of the controllers and the other for sensor data communication. The two underlayer networks share the same master node, which bridges the connection to the overlay 6G network. The overlay network uses the allocated 60 MHz for non-mission-critical communication with relaxed QoS parameters.
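A minimal sketch of this static spectrum plan is given below. The dataclass layout and the exact placement of the two 20 MHz blocks within the band are illustrative assumptions (the text only specifies the split), not part of the paper.

```python
from dataclasses import dataclass

@dataclass
class NetworkBlock:
    name: str
    f_low_mhz: float   # lower band edge
    f_high_mhz: float  # upper band edge

    @property
    def bandwidth_mhz(self) -> float:
        return self.f_high_mhz - self.f_low_mhz

plan = [
    NetworkBlock("underlayer uRLLC (controllers)", 3700.0, 3720.0),  # 20 MHz
    NetworkBlock("underlayer sensors", 3720.0, 3740.0),              # 20 MHz
    NetworkBlock("overlay 6G (relaxed QoS)", 3740.0, 3800.0),        # 60 MHz
]

# the three blocks must exactly exhaust the locally licensed 100 MHz
assert sum(b.bandwidth_mhz for b in plan) == 100.0
for b in plan:
    print(f"{b.name}: {b.f_low_mhz:.0f}-{b.f_high_mhz:.0f} MHz "
          f"({b.bandwidth_mhz:.0f} MHz)")
```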
Footnote 1: BNetzA: _Regionale und lokale Netze_, [https://www.bundesnetzagentur.de/DE/Fachthemen/Telekommunikation/Frequenzen/OeffentlicheNetze/LokaleNetze/lokalenetze-node.html](https://www.bundesnetzagentur.de/DE/Fachthemen/Telekommunikation/Frequenzen/OeffentlicheNetze/LokaleNetze/lokalenetze-node.html) (2020)
The current network configuration is static, but for larger systems with more dynamic applications, automated network management is required. The master node of the underlayer network, which is always connected to the overlayer network, can request spectrum at a specific place from the network management entity. The spectrum currently required at this place can be calculated, and the decision to grant or reject the request can be made. Since the underlayer network is location-specific with a small footprint, more than one underlayer network could conceivably be operated in the same cell of the overlayer network.
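The request/grant logic of such a management entity might look like the following sketch; the data model and the purely bandwidth-based admission policy are assumptions made for illustration, not the paper's implementation.

```python
from typing import List

TOTAL_MHZ = 100.0  # locally licensed spectrum at a given place

def grant_request(active_grants_mhz: List[float], requested_mhz: float,
                  total_mhz: float = TOTAL_MHZ) -> bool:
    """Grant a master node's spectrum request if enough spectrum is
    still free at that place; underlayer networks only compete locally."""
    return sum(active_grants_mhz) + requested_mhz <= total_mhz

# two underlayer networks already active inside the same overlay cell
active = [20.0, 20.0]
print(grant_request(active, 20.0))  # True: 60 MHz remain for the overlay
print(grant_request(active, 70.0))  # False: exceeds the local license
```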
## 4 Use Case: Closed-loop communication for machine tools
### Requirements for applications in manufacturing
As described in Section 2.2, different requirements have to be met to enable the utilization of wireless communication technologies for manufacturing. Moreover, closed-loop machine tool control is the use case with the highest requirements regarding communication performance in terms of latency, jitter, and reliability, according to 3GPP TR 22.804 [20].
The requirements for the overall system can be summarized as follows:
* Low-latency communication and time determinism: 0.5 to 10 ms end-to-end latency to enable closed-loop control
* Reliability and robustness: Low packet loss rate and network robustness to enable safety-critical applications (packet loss rate below \(10^{-9}\))
* Integration to overlay network: Simultaneous deployment of different wireless communication technologies that are interconnected to enable the required scalability and flexibility of CPPS
To meet the requirements listed above, the setup described in the following section has been developed and evaluated.
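As a minimal illustration, a measured trace could be checked against these targets as sketched below; the trace format and function names are assumptions.

```python
def meets_requirements(latencies_ms, packets_lost, packets_sent,
                       max_latency_ms=10.0, max_loss_rate=1e-9):
    """Check a trace against the latency and packet-loss targets above."""
    worst_latency = max(latencies_ms)
    loss_rate = packets_lost / packets_sent
    return worst_latency <= max_latency_ms and loss_rate <= max_loss_rate

# one hour of 1 kHz control traffic without a single lost packet
print(meets_requirements([0.8, 1.2, 2.9], packets_lost=0,
                         packets_sent=3_600_000))
```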
### Setup
As shown in Figure 1, the implemented setup consists of different hardware and software components as well as different communication systems. Regarding the utilized hardware, a 3-axis milling machine tool and a CNC unit for machine tool control are deployed on the shop floor. The machine tool consists of actuators and sensors such as stepper motors, the tool spindle, limit switches and vibration sensors. Moreover, an Ethernet-capable FPGA (MESA 7i76E) that handles motion control is connected to the CNC. In addition, an edge server located near the shop floor is used to offload information from non-latency-critical applications via a mobile communication network.
The communication infrastructure for the underlayer network consists of two wireless token ring systems - so-called EchoRing [21] - for latency-critical and reliable communication and sensor integration (latency \(<\) 2 ms, packet loss rate down to \(10^{-9}\))2. The wireless token ring systems enable communication between various components on the shop floor. Specifically, an uRLLC ring between the CNC and the FPGA is deployed to facilitate machine tool control, including PID control. Communication is based on UDP to enable low latencies. Another system with less rigorous requirements, supporting up to eight devices, is implemented for wireless sensor integration into the closed-loop system.
Footnote 2: R3 Solutions: _Data Sheet - Bridge E_, [https://cta-redirect.hubspot.com/cta/redirect/4230617/a4178ddb-0a7d-44d-b3ac-a4548f41ca2b](https://cta-redirect.hubspot.com/cta/redirect/4230617/a4178ddb-0a7d-44d-b3ac-a4548f41ca2b) (2013-06)
In addition, a 6G mobile communication network for non-time-critical communication is part of the communication system. The information from the two token ring systems is merged by a master device, which transfers it to the 6G mobile communication network.
On the software side, LinuxCNC is utilized as an open and adaptable CNC software. In addition, a digital twin of the machine tool was developed based on the Unity gaming engine, which enables monitoring, manual control, simulation and diagnosis of the process. The digital twin runs on the edge server and gathers information from sensors and the CNC unit via the 6G network. Furthermore, interfaces were developed to seamlessly integrate sensors into both the CNC system and the digital twin, enabling efficient data acquisition and utilization.
### Evaluation of functionality
To evaluate the functionality of the proposed architecture based on the EchoRing system, a wired test setup is implemented to estimate the required network performance characteristics regarding latency and jitter. The test setup utilizes network emulation (NetEm), which is integrated into the Linux kernel. The kernel clock rate has been set to 1 kHz to enable deterministic delay emulation with a resolution of 1 ms. The packet sizes transmitted via UDP range from about 80 to 159 bytes.
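Such a delay/jitter configuration can be applied with the standard tc/NetEm command line; the helper below, including the device name, is an illustrative assumption rather than the authors' tooling.

```python
import subprocess

def set_netem(dev: str, delay_ms: float, jitter_ms: float) -> None:
    """Apply a fixed delay plus jitter to a network device via tc/NetEm."""
    cmd = ["tc", "qdisc", "replace", "dev", dev, "root", "netem",
           "delay", f"{delay_ms}ms", f"{jitter_ms}ms"]
    subprocess.run(cmd, check=True)  # requires root privileges

# e.g. the 3 ms / 0.2 ms combination evaluated below:
# set_netem("eth0", 3.0, 0.2)
```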
The different configurations are shown in Table 1. The experiments started with the highest combination of the performance characteristics (latency = 5 ms, jitter = 0.3 ms). For each working configuration, the operation of the machine tool was tested for one hour. If the operation was not interrupted due to following errors of the feedback control, the functionality can be considered valid. In Table 1, the combinations that did not work are marked with an "x". For the combinations marked with "(✓)", an adaptation of the Linux driver for the initialization of the FPGA had to be made. The combinations that worked without any adaptations are marked with "✓".
The experiments show that the maximum latency at which the machine tool can be operated is 3 ms with a maximum jitter of 0.2 ms. However, driver adaptations, tuning of the PID controller, the watchdog, as well as the cycle times of the CNC of the machine tool are needed for stable operation of the system. Therefore, it can be concluded that the EchoRing solution meets the communication requirements, as it provides time-deterministic, low-latency communication below 2 ms.
### Benefits and challenges
The implemented architecture leads to different benefits. First, it enables flexible and scalable manufacturing systems by meeting various requirements regarding the communication performance simultaneously. Even wireless communication between machine tools and CNC units for machine tool control can be realized. Moreover, several EchoRing systems can be deployed and integrated simultaneously for different network requirements. Due to the integration of the underlayer networks into a mobile communication network, scalability - especially for data-intensive, non-latency-critical use cases - is enabled. Second, the wireless communication architecture facilitates retrofitting of numerous equipment (e.g. sensors, actuators, computing units) in existing manufacturing systems or for machine tools. Third, the system enables real-time communication and time determinism in a wireless communication architecture for manufacturing. This leads to new use cases in manufacturing systems, e.g. for real-time control, virtualized PLCs or real-time diagnosis with retrofit sensors.

Figure 1: Implemented Setup
The efficient network management introduced in Section 3, which takes over the coordination of the underlayer networks among each other as well as with the overlayer network, provides first approaches for a successful implementation of these new use cases. However, there are several challenges that need to be addressed. One challenge is the timing synchronization when integrating several EchoRing systems with different cycle times into the closed loop of a CNC system. Another challenge is the performance of the wireless machine tool control system in a real-world setting. Therefore, testing on the shop floor is necessary. Factors such as harsh environmental conditions, including massive machine housing and a covered line of sight, need to be investigated. Moreover, the deployment of multiple different wireless systems operating in the same manufacturing system could lead to interference and thus inhibit scalability.
## 5 Conclusion and future work
In this paper, we have presented a concept for 6G underlayer networks in the scope of robust and low-latency communication. While wireless transmission offers greater flexibility and ease of integration, the introduced concept of underlayer networks ensures the reliability, low latency and security required for seamless and efficient communication in highly interconnected and autonomous manufacturing systems. We introduced an underlayer network concept for 6G in manufacturing, which offers a robust and efficient communication infrastructure, enabling wireless connectivity, flexibility, and adaptability in a manufacturing environment. Integrating the ideas of private networks, network slicing and spectrum sharing requires a new management system to coordinate and route traffic across multiple networks. The underlayer network concept is put in the context of a manufacturing scenario. This architecture enables wireless closed-loop control of machine tools and seamless data transmission across different networks. It leverages the advantages of cellular communication networks, offering robust communication through a higher degree of determinism. The underlayer networks can be configured and controlled centrally, ensuring trustworthy, low-latency communication between all parties in the network. The concept of underlayer networks provides an agile and application-oriented communication solution that can adapt to changing requirements and support future 6G communication systems.
For future work, a more detailed examination of the introduced architecture in a dynamic manufacturing environment is planned, where the specific tasks of network control in non-static scenarios will be investigated and evaluated. In addition, the architecture is currently in an implementation phase. The setup will be the subject of ongoing and future research, e.g. for determining the quality of manufactured parts under wireless closed-loop control.
## 6 Acknowledgment
The authors acknowledge the financial support by the German _Federal Ministry for Education and Research (BMBF)_ within the project Open6GHub {16KISK004}.
|
2309.12033 | Face Identity-Aware Disentanglement in StyleGAN | Conditional GANs are frequently used for manipulating the attributes of face
images, such as expression, hairstyle, pose, or age. Even though the
state-of-the-art models successfully modify the requested attributes, they
simultaneously modify other important characteristics of the image, such as a
person's identity. In this paper, we focus on solving this problem by
introducing PluGeN4Faces, a plugin to StyleGAN, which explicitly disentangles
face attributes from a person's identity. Our key idea is to perform training
on images retrieved from movie frames, where a given person appears in various
poses and with different attributes. By applying a type of contrastive loss, we
encourage the model to group images of the same person in similar regions of
latent space. Our experiments demonstrate that the modifications of face
attributes performed by PluGeN4Faces are significantly less invasive on the
remaining characteristics of the image than in the existing state-of-the-art
models. | Adrian Suwała, Bartosz Wójcik, Magdalena Proszewska, Jacek Tabor, Przemysław Spurek, Marek Śmieja | 2023-09-21T12:54:09Z | http://arxiv.org/abs/2309.12033v1 | # Face Identity-Aware Disentanglement in StyleGAN
###### Abstract
Conditional GANs are frequently used for manipulating the attributes of face images, such as expression, hairstyle, pose, or age. Even though the state-of-the-art models successfully modify the requested attributes, they simultaneously modify other important characteristics of the image, such as a person's identity. In this paper, we focus on solving this problem by introducing PluGeN4Faces, a plugin to StyleGAN, which explicitly disentangles face attributes from a person's identity. Our key idea is to perform training on images retrieved from movie frames, where a given person appears in various poses and with different attributes. By applying a type of contrastive loss, we encourage the model to group images of the same person in similar regions of latent space. Our experiments demonstrate that the modifications of face attributes performed by PluGeN4Faces are significantly less invasive on the remaining characteristics of the image than in the existing state-of-the-art models.
## 1 Introduction
Modern generative models, such as StyleGAN [14, 15, 16], produce high-quality images, which are frequently indistinguishable from real ones. One of the current challenges is to introduce the functionality for manipulating the attributes of existing images. In the case of face images, we would like to modify the expression, the type of facial hair, or even the gender of the person in the photo.
Although the state-of-the-art conditional generative models, such as PluGeN [32] or StyleFlow [3], are capable of modifying selected face attributes, there is no guarantee that only requested attributes are changed. Experiments show that modifications of intended attributes often affect other attributes as well as the identity of a person. It means that the latent space used for modifications is so entangled that manipulating only selected attributes independently from other characteristics of the image is impossible.
There may be various reasons why existing models cannot create disentangled latent representation. In this paper, we argue that the conditional generative models are usually trained on generated (fake) images and they have never seen images representing the same person with different combinations of attributes. To introduce the information about the person's identity, we need to perform training on real images instead of generated ones only.
Working with real images is straightforward in autoencoder-based generative models, but notable problems arise in the case of GANs since there is no built-in method for encoding images into the GAN latent space. The problem is especially challenging for the StyleGAN architecture because of the structure of its style space. While generated images are identified by a single style code \(\mathbf{w}\in\mathcal{W}\subset\mathbb{R}^{512}\), not every image can be accurately mapped into \(\mathcal{W}\) [1]. To overcome this issue, most techniques (employing an encoder or gradient-based optimization) perform the search in the extended style space \(\mathcal{W}_{*}^{k}\), where a style code consists of \(k\) different 512-dimensional style vectors \(\mathbf{w}_{1},\dots,\mathbf{w}_{k}\in\mathbb{R}^{512}\) (typically \(k=18\)) - one for each layer of the StyleGAN architecture that can receive input via AdaIN [1, 30, 35]. Operating on the whole set of style codes significantly increases the dimensionality of the latent codes and theoretically makes the problem more challenging.

Figure 1: Sample effects of attribute manipulation performed by PluGeN4Faces.
In this paper, we introduce PluGeN4Faces (**P**lugin **G**enerative **N**etworks for **Faces**), a plugin model for disentangling the latent space of StyleGAN in the case of face images. PluGeN4Faces provides full control over manipulating face attributes so that the modification of the requested attributes has a minimal effect on the identity of a person and the remaining face attributes (including the background); see Figure 1 for sample results. PluGeN4Faces works as a plugin to a pre-trained StyleGAN, which means that it does not change the weights of StyleGAN but only transforms its style space into a disentangled one. In consequence, the training process is extremely simple and requires limited computational resources.
In contrast to competitive models, PluGeN4Faces is trained on face images retrieved from movie frames, which can present a given person in various poses and with different attributes. The information about a person's identity is used in PluGeN4Faces by employing a contrastive loss. Namely, we encourage the model to group images of the same person in similar regions of latent space, see Figure 2. To use real images in training, we implement PluGeN4Faces as a conditional invertible normalizing flow, where the condition represents the identifier of the style code. In other words, PluGeN4Faces transforms every style code \(\mathbf{w}_{i}\), for \(i=1,\dots,k\), by the flow conditioned on the index \(i\). In this way, we are able to implement a compact disentanglement module operating on real images.
We evaluate PluGeN4Faces on face images retrieved from the FFHQ database as well as movie frames. We show that PluGeN4Faces allows for effective manipulation of face attributes. Moreover, the applied modifications preserve the person's identity to a significantly greater extent than in competitive models. The presented sample results are supported by the quantitative analysis, which confirms the advantage of PluGeN4Faces over related models.
The contribution of the paper is summarized as follows:
* We introduce a plugin to StyleGAN for manipulating the attributes of real images. In contrast to existing models, it is trained on real images encoded into StyleGAN style space using the encoder network.
* We improve the representation disentanglement in conditional generative models by applying a type of contrastive loss, which explicitly encodes the person's identity. In consequence, the manipulation of the requested attributes is less invasive on the remaining image characteristics (including person's identity).
* The proposed solution is evaluated in a strict quantitative way, which allows for a fair comparison with related models. The proposed metrics together with our sample results clearly demonstrate the advantage of PluGeN4Faces over competitive methods.
## 2 Related work
Conditional VAE (cVAE) is one of the first methods of including additional label information in a generative model [17], which has been successfully applied in a variety of disciplines including image generation [18, 28, 33]. However, the independence of latent codes and labels is not assured, which has a negative impact on the generation quality. Conditional GAN (cGAN) is an alternative that is able to produce examples of significantly better quality [4, 12, 22, 24, 25], but the training of the model is more difficult [19]. Fader Networks [20] overcome this limitation by combining components of cVAE and cGAN, as they use both an encoder-decoder architecture and a discriminator, which predicts the image attributes from the corresponding latent vector obtained from the encoder. As with previous methods, Fader Networks do not preserve the disentanglement of attributes; moreover, the training is even more difficult than that of standard GANs.

Figure 2: Explicit disentanglement of attribute and identity features performed by PluGeN4Faces. While each labeled attribute is modeled as an individual latent dimension, the contrastive loss allows us to group latent codes representing images of the same person in similar regions of the space.
While the described approaches focus on creating conditional generative models from scratch, recent work frequently focuses on manipulating the latent codes of pre-trained networks. In this scenario, data complexity is not that big of a limitation, hence flow models can be easily applied. StyleFlow [3] and PluGeN [32] operate on the latent space of a GAN using a normalizing flow module: conditional CNF [9] and NICE [7], respectively. While StyleFlow is adapted to work only on StyleGAN [16], PluGeN demonstrates great results also with other models and in different domains. For StyleGAN, both are trained using latent codes sampled from the latent space \(\mathcal{W}\) and the attributes of the corresponding images. Competitive approaches include [8, 10, 23, 29]. InterFaceGAN [26] aims to manipulate various properties of the facial semantics via linear models applied to the latent space of GANs. Hijack-GAN [31] goes beyond linear models and designs a proxy model to traverse the latent space of GANs.
Along with latent code manipulation techniques, methods of embedding examples into the GAN latent space can be used to allow manipulation of existing examples. There are two main embedding approaches: (i) an encoder network that maps an image into the latent space [30], (ii) an optimization algorithm that iteratively improves a latent code so that it produces a desired image [1, 2, 36]. Moreover, combinations of these two approaches exist, in which the encoder outputs an approximate embedding that is then improved by the optimization algorithm [34]. These methods allow us to train our model using real images, which are encoded into the extended StyleGAN latent space \(\mathcal{W}_{*}^{k}\), enabling manipulation of existing images. As shown in [1], the use of the \(\mathcal{W}_{*}^{k}\) latent space instead of \(\mathcal{W}\) reduces the alteration of the original image.
## 3 Identity-aware disentanglement
**Overview.** PluGeN4Faces is a conditional invertible normalizing flow module (cINF), which is attached to the style space of StyleGAN. It transforms the style codes of pre-trained StyleGAN into a disentangled space so that:
* the labeled attributes are modeled by the individual latent coordinates,
* images of the same person are grouped in similar regions of the latent space.
While realizing the first of the above conditions allows us to edit the values of requested attributes, the second one prevents severe changes in the image during attribute manipulation.
In this section, we first review the StyleGAN architecture and recall the way of encoding real images into its style space. Next, we present a probabilistic structure of PluGeN4Faces, and cINF mapping function. We discuss the training procedure and the inference phase.
**StyleGAN architecture.** StyleGAN [15] consists of two main parts: (a) a mapping network that transforms latent codes \(\mathbf{z}\in\mathcal{Z}\) sampled from Gaussian noise \(\mathcal{N}(\mu,I)\) to the style vectors \(\mathbf{w}\in\mathcal{W}\), (b) a synthesis network that creates an image from the style code replicated several times. The replicated style codes represent the inputs to subsequent layers of the synthesis network.
Instead of manipulating latent codes \(\mathbf{z}\in\mathcal{Z}\), we usually operate on the style space \(\mathcal{W}\) to perform attribute modification, which was shown to be significantly more disentangled [3]. However, it is well-known that not all real images can be encoded into the StyleGAN's style space \(\mathcal{W}\)[35]. A typical approach for coping with this issue is to extend the search space and look for \(k\) different style codes \((\mathbf{w}_{1},\dots,\mathbf{w}_{k})\in\mathcal{W}_{*}^{k}\), which together could synthesize the original input [1, 30]. Each \(\mathbf{w}_{i}\) represents the input to the \(i\)-th layer of the synthesis network. Even though a sequence of style codes from the extended style space does not reflect any latent code \(\mathbf{z}\), it allows for the convenient reconstruction and manipulation of real images. One can design an encoder [30] or implement a gradient-based procedure for embedding real images into the extended style space. In this paper, we employ an encoder network.
**Probabilistic structure of PluGeN4Faces.** We assume that every image \(\mathbf{x}\) is described by the composition of the attribute and non-attribute vectors \((\mathbf{c},\mathbf{s})\), where \(\mathbf{c}\in\mathbf{C}=(C_{1},\dots,C_{M})\) and \(\mathbf{s}\in\mathbf{S}=(S_{1},\dots,S_{N-M})\). While each attribute variable \(c_{i}\in C_{i}\) contains information about the selected attribute, the non-attribute vector \(\mathbf{s}\) is used to describe the remaining characteristics of the data, including the background and personal identity in the case of face images. To control the value of every attribute independently of the others, a factorized form of the probability distribution of the random vector \((\mathbf{C},\mathbf{S})\) is assumed. Given a vector of true labels \(\mathbf{y}=(y_{1},\dots,y_{M})\), the conditional distribution of \((\mathbf{c},\mathbf{s})\) is defined by
\[p_{\mathbf{C},\mathbf{S}|\mathbf{Y}=\mathbf{y}}(\mathbf{c},\mathbf{s})=\prod _{i=1}^{M}p_{C_{i}|Y_{i}=y_{i}}(c_{i})\cdot p_{\mathbf{S}}(\mathbf{s})\ \text{, for }( \mathbf{c},\mathbf{s})\in\mathbb{R}^{N}.\]
In the above formula, the \(i\)-th label \(y_{i}\) affects only the \(i\)-th attribute variable \(C_{i}\). As a parametric form of \(p_{C_{i}|Y_{i}=y_{i}}\), we use a 1-dimensional Gaussian density \(\mathcal{N}(y_{i},\sigma)\). By changing the condition \(Y_{i}=y_{i}\), we modify the mean of the Gaussian. The distribution of the non-attribute vector is modeled as a multivariate standard Gaussian density \(\mathcal{N}(\mathbf{0},\mathbf{I}_{N-M})\). The non-attribute vector \(\mathbf{s}\) is responsible for covering information about a person's identity, image background, etc., so images presenting the same person should have similar values of \(\mathbf{s}\).
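For concreteness, the factorized conditional log-density can be evaluated as in the following numeric sketch; the value of \(\sigma\) and the dimensions used here are illustrative assumptions.

```python
import numpy as np

def log_p(c, s, y, sigma=0.5):
    """log p(c, s | y) = sum_i log N(c_i; y_i, sigma^2) + log N(s; 0, I)."""
    attr = -0.5 * ((c - y) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    non_attr = -0.5 * s ** 2 - 0.5 * np.log(2 * np.pi)
    return attr.sum() + non_attr.sum()

c = np.array([0.9, -1.1])   # M = 2 attribute variables
y = np.array([1.0, -1.0])   # their labels shift the Gaussian means
s = np.zeros(510)           # N - M non-attribute dimensions
print(log_p(c, s, y))
```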
**Invertible mapping.** To realize the above parameterization, a two-way mapping between the style space of the pre-trained StyleGAN and the disentangled space \((\mathbf{C},\mathbf{S})\) has to be established. Since we work with real images (not only generated ones), we employ the StyleGAN encoder [30], which produces a sequence of style codes \(\{\mathbf{w}_{1},\dots,\mathbf{w}_{k}\}\in\mathcal{W}_{*}^{k}\) representing a given image \(\mathbf{x}\) in the subsequent layers of the StyleGAN synthesis network. Thus we need to map a sequence of style codes \((\mathbf{w}_{i})_{i=1}^{k}\) (representing a single image \(\mathbf{x}\)) into a sequence of the attribute and non-attribute vectors \((\mathbf{c}_{i},\mathbf{s}_{i})_{i=1}^{k}\). To find such an invertible transformation, we use a conditional INF (cINF), which is parametrized by the identifier of the style code. More precisely, the cINF, \(\mathcal{F}:\mathbb{R}^{N}\rightarrow\mathbb{R}^{N}\), takes the style code \(\mathbf{w}_{i}\) and the index of the \(i\)-th layer as a condition and returns a disentangled representation of \(\mathbf{w}_{i}\) as:
\[(\mathbf{c}_{i},\mathbf{s}_{i})=\mathcal{F}(\mathbf{w}_{i}|\text{layer}=i).\]
Here, both \(\mathbf{c}_{i}=(c_{1}^{i},\dots,c_{M}^{i})\) and \(\mathbf{s}_{i}=(s_{1}^{i},\dots,s_{N-M}^{i})\) are vectors corresponding to a given \(\mathbf{w}_{i}\).
**Training.** The conditional INF is trained by minimizing the negative log-likelihood taken over all style codes. Given a sequence of style codes \((\mathbf{w}_{i})_{i=1}^{k}\) representing an image \(\mathbf{x}\) with labels \(\mathbf{y}\), we aim at minimizing:
\[-\sum_{i=1}^{k}\log p_{\mathbf{W}_{i}|\mathbf{Y}=\mathbf{y}}(\mathbf{w}_{i})=\\ -\sum_{i=1}^{k}\log\left(p_{\mathbf{C},\mathbf{S}|\mathbf{Y}=\mathbf{y}}(\mathbf{c}_{i},\mathbf{s}_{i})\cdot\left|\det\frac{\partial\mathcal{F}^{-1}(\mathbf{w}_{i}|\text{layer}=i)}{\partial\mathbf{w}_{i}}\right|\right)=\\ -\sum_{i=1}^{k}\left(\sum_{j=1}^{M}\log p_{C_{j}|Y_{j}=y_{j}}(c_{j}^{i})+\log p_{\mathbf{S}}(\mathbf{s}_{i})+\log\left|\det\frac{\partial\mathcal{F}^{-1}(\mathbf{w}_{i}|\text{layer}=i)}{\partial\mathbf{w}_{i}}\right|\right), \tag{1}\]
where \((\mathbf{c}_{i},\mathbf{s}_{i})=\mathcal{F}^{-1}(\mathbf{w}_{i}|\text{layer}=i)\) are the attribute and non-attribute vectors describing the \(i\)-th style code \(\mathbf{w}_{i}\) (in the \(i\)-th StyleGAN layer).
In addition to the negative log-likelihood minimization, which focuses on modeling labeled attributes, we introduce a contrastive loss responsible for the explicit encoding of the face identity. Thanks to the contrastive loss, manipulating the labeled attributes will have a minimal effect on changing other attributes (including identity) of the face image.
To construct our contrastive loss, we take \(n\) images \(\mathbf{x}_{1},\dots,\mathbf{x}_{n}\) of a given person and encode them into the style space of StyleGAN using the encoder network. Such images can be retrieved from subsequent frames of movies. For each image, the encoder produces a sequence of style codes, which represent the input to subsequent layers of StyleGAN generator. For transparency, we restrict our attention to the \(l\)-th layer in the following description. For \(n\) images, we have \(n\) style codes \(\mathbf{w}_{1},\dots,\mathbf{w}_{n}\), in which \(\mathbf{w}_{i}\) is the representation of \(\mathbf{x}_{i}\) in the \(l\)-th layer (we drop the index of the layer for simplicity). Making use of conditional INF, we find a disentangled representation of \(\mathbf{w}_{i}\) as
\[(\mathbf{c}_{i},\mathbf{s}_{i})=\mathcal{F}(\mathbf{w}_{i}|\text{layer}=l).\]
Figure 3: Architecture of PluGeN4Faces. Given the representation of the input image as a sequence of style codes, PluGeN4Faces uses an INF to model labeled attributes as individual latent dimensions. The remaining characteristics of the image (including the person's identity) are modeled in separate dimensions using the contrastive loss.

To force a structure on the non-attribute variables in which images of the same person are represented by similar non-attribute vectors, we apply the following contrastive loss:
\[\sum_{i\neq j}\|\mathbf{s}_{i}-\mathbf{s}_{j}\|^{2}=2n\sum_{i=1}^{n}\|\mathbf{s}_ {i}-\mathbf{m}\|^{2}, \tag{2}\]
where the mean \(\mathbf{m}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{s}_{i}\) is used to reduce the number of comparisons [27]. Minimization of (2) leads to mapping the set of \(n\) input images to similar values of non-attributes vectors. We apply this loss to images representing the same person.
To sum up, the complete loss of PluGeN4Faces combines the introduced contrastive loss (2) with the negative log-likelihood (1). The first loss component requires a set of images representing the same person, while the second uses images with labeled attributes.
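A PyTorch sketch of the contrastive term (2) for one person and one StyleGAN layer is given below; the tensor shapes are illustrative assumptions.

```python
import torch

def contrastive_loss(s: torch.Tensor) -> torch.Tensor:
    """s: (n, N-M) non-attribute vectors of n images of the same person.
    Uses the mean-based form 2n * sum_i ||s_i - m||^2 from Eq. (2)."""
    n = s.shape[0]
    m = s.mean(dim=0, keepdim=True)
    return 2 * n * ((s - m) ** 2).sum()

s = torch.randn(5, 504, requires_grad=True)  # e.g. 5 movie frames of one person
loss = contrastive_loss(s)
loss.backward()
```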
**Inference.** To edit attributes of a real image \(\mathbf{x}\), we find its style codes \(\mathbf{w}_{1},\dots,\mathbf{w}_{k}\) using the encoder network. Next, we map each style code \(\mathbf{w}_{i}\) using the inverse of the cINF to obtain the attribute and non-attribute vectors \((\mathbf{c}_{i},\mathbf{s}_{i})\), for \(i=1,\dots,k\). The requested attributes are modified in each attribute vector \(\mathbf{c}_{i}\), and then they are mapped back by the cINF to the style codes. The synthesis network generates the image with edited attributes from the modified style codes.
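The control flow of this editing pipeline can be sketched as follows; the encoder, flow, and generator arguments are placeholders standing in for the real pretrained networks, not the authors' code.

```python
import torch

def edit_image(x, encoder, flow, flow_inv, generator,
               attr_idx, new_value, k=18):
    """Encode -> disentangle per layer -> overwrite one attribute -> synthesize."""
    ws = encoder(x)                          # k style codes, shape (k, 512)
    edited = []
    for i in range(k):
        c, s = flow_inv(ws[i], layer=i)      # disentangled representation
        c = c.clone()
        c[attr_idx] = new_value              # edit the requested attribute only
        edited.append(flow(c, s, layer=i))   # map back to the style space
    return generator(torch.stack(edited))
```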
## 4 Experiments
**Experimental setting.** We consider the Flickr-Faces-HQ (FFHQ) dataset containing 70 000 high-quality images of resolution \(1024\times 1024\). The Microsoft Face API was used to label 8 attributes in each image (gender, glasses, hair/bald, facial hair/beard, expression/smile, age, pitch, and yaw).
Additionally, to explicitly control the person's identity, we use images retrieved from video clips. More precisely, we use images from videos and celebrity interviews scraped from YouTube: 573 videos, an average of 19.33 images per video, and 12 194 images in total. As in the case of the FFHQ dataset, the attributes of every image are labeled using the Microsoft Face API.
To evaluate the proposed disentanglement model, access to an independent face attribute classifier is needed. For this purpose, we train a ResNet-18 model [11] on FFHQ and 10 000 randomly generated StyleGAN face images. The model is trained with 8 outputs in a multi-label manner, treating the Microsoft Face API labels as targets. We standardize the labels and apply the shrinkage loss [21], as we find that it helps with dataset imbalance. We use the same loss for binary and continuous labels, as this works equally well for classification [13]. Although not all of the face attribute labels are binary, we call this model _classifier_ in the remainder of this paper to avoid any confusion with the other models used in the experiments.
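One common form of the shrinkage loss, a squared error modulated by a sigmoid so that easy (small-error) examples are down-weighted, is sketched below; the exact variant and hyperparameters used by the authors may differ.

```python
import torch

def shrinkage_loss(pred: torch.Tensor, target: torch.Tensor,
                   a: float = 10.0, c: float = 0.2) -> torch.Tensor:
    """l^2 / (1 + exp(a * (c - l))) with l = |pred - target|."""
    l = (pred - target).abs()
    return (l ** 2 / (1 + torch.exp(a * (c - l)))).mean()

pred = torch.zeros(4, 8)    # 8 attribute outputs per image
target = torch.randn(4, 8)  # standardized Face API labels
print(shrinkage_loss(pred, target))
```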
We use StyleGAN (version 2) as a backbone model, which was trained on the FFHQ dataset. Real images are encoded into the extended latent space \(\mathcal{W}_{*}^{k}\) of StyleGAN, where \(k=18\), using the encoder network [30]. In consequence, every image is represented using a sequence of style codes \(\{\mathbf{w}_{1},\dots,\mathbf{w}_{k}\}\in\mathcal{W}_{*}^{k}\), where \(\mathbf{w}_{i}\in\mathbb{R}^{512}\). The encoder is trained on the combination of images from the FFHQ and movie datasets.
PluGeN4Faces is instantiated using conditional RealNVP flow model [6] that operates on the individual latent codes \(\mathbf{w}_{i}\in\mathbb{R}^{512}\) of StyleGAN. The condition is an identifier \(i\) of the style code (being the input to the \(i\)-th StyleGAN layer) represented as a one-hot vector.
As a baseline, we choose two state-of-the-art conditional models, PluGeN and StyleFlow, which can be used with a pre-trained StyleGAN. PluGeN uses NICE flow model to transform individual style codes \(\mathbf{w}_{i}\) to disentangled space. In other words, PluGeN uses a single shared NICE model (with the same parameters) as a mapping between each style code and the target disentangled space. StyleFlow is parameterized by the conditional continuous flow, where the conditioning factor corresponds to the labeled attributes. Similarly to PluGeN, StyleFlow uses a single flow, which is applied to various style codes.
**Qualitative results.** In this section, we illustrate sample results produced by the proposed model. First, we perform single edits of binary attributes. Next, we consider sequential edits, where subsequent modifications of binary attributes are added one by one. In both cases, we perform the minimal modification needed to change the decision of the attribute classifier. More precisely, we gradually change the attribute and inspect the reaction of the attribute classifier to the modified attribute of the generated image. If the classifier recognizes the attribute of the generated image with sufficient confidence, we stop the modification and return the generated image. By making use of an independent classifier, we are guaranteed a fair comparison regardless of the scale used by the models.
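The minimal-modification protocol can be sketched as a simple search loop; all callables, the step size, and the confidence threshold below are illustrative placeholders.

```python
def minimal_edit(apply_edit, classifier, attr_idx,
                 v0=0.0, step=0.1, conf=0.8, max_steps=50):
    """Increase the attribute value until the classifier accepts the edit."""
    v = v0
    img = apply_edit(v)
    for _ in range(max_steps):
        if classifier(img)[attr_idx] >= conf:
            break
        v += step
        img = apply_edit(v)
    return img, v
```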
Figure 4 presents the results of single (left) and sequential edits (right). At first glance, all considered models give visually appealing effects and successfully perform the requested modifications. Observe, however, that PluGeN and StyleFlow changed the ethnicity and age of the person in the top left example when modifying the attribute "hair". Such behavior is not acceptable and does not occur with PluGeN4Faces (see the 1st row of Figure 1, where PluGeN4Faces added hair without changing the ethnicity). It is impressive that all models were able to combine the attribute "beard" with a woman's face in the bottom left example. Nevertheless, the face produced by PluGeN4Faces has more female features than the ones generated by PluGeN and StyleFlow. Looking at sequential edits (right), it is evident that PluGeN4Faces kept the color of clothes and background unchanged, which is not the case for PluGeN and StyleFlow. Moreover, the type of glasses is also unaffected by attribute manipulations performed by PluGeN4Faces. On the downside, it should be noted that all models make the face slightly older when the attributes "bald" or "beard" are used.

Figure 4: Single attribute manipulations (left) and sequential edits of multiple attributes (right).

\begin{table}
\begin{tabular}{l|c c c c|c c c c|c c c c}
\hline \hline
 & \multicolumn{4}{c}{**PluGeN4Faces (ours)**} & \multicolumn{4}{c}{**PluGeN**} & \multicolumn{4}{c}{**StyleFlow**} \\
\hline
 & FR & ArcFace & Raw & Raw & FR & ArcFace & Raw & Raw & FR & ArcFace & Raw & Raw \\
 & MSE \(\downarrow\) & MSE \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & MSE \(\downarrow\) & MSE \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & MSE \(\downarrow\) & MSE \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) \\
\hline
male & 0.20 & **0.25** & **30.34** & **0.84** & **0.20** & 0.26 & 29.96 & 0.83 & 0.25 & 0.35 & 26.97 & 0.75 \\
female & **0.22** & **0.28** & **29.58** & **0.84** & 0.24 & 0.31 & 26.49 & 0.80 & 0.27 & 0.38 & 26.91 & 0.74 \\
glasses & 0.40 & 0.65 & 20.97 & 0.65 & 0.42 & 0.64 & 19.62 & 0.64 & **0.36** & **0.53** & **22.20** & **0.66** \\
no glasses & **0.12** & **0.11** & **39.05** & **0.95** & 0.12 & 0.11 & 37.37 & 0.93 & 0.20 & 0.21 & 27.93 & 0.77 \\
bald & **0.14** & **0.18** & **29.50** & **0.82** & 0.22 & 0.27 & 24.32 & 0.74 & 0.21 & 0.28 & 27.45 & 0.72 \\
hair & **0.07** & **0.04** & 38.67 & **0.95** & 0.10 & 0.07 & 33.19 & 0.90 & 0.10 & 0.09 & **38.77** & 0.88 \\
old & 0.45 & **0.67** & **22.75** & **0.66** & 0.45 & 0.72 & 20.65 & 0.62 & **0.45** & 0.70 & 20.63 & 0.57 \\
young & **0.43** & **0.63** & **22.71** & **0.69** & 0.46 & 0.75 & 20.61 & 0.63 & 0.43 & 0.73 & 21.40 & 0.60 \\
beard & 0.29 & 0.35 & 23.54 & 0.75 & 0.33 & 0.47 & 21.25 & 0.67 & **0.21** & **0.23** & **31.18** & **0.80** \\
no beard & **0.10** & **0.07** & **39.58** & **0.94** & 0.11 & 0.09 & 35.11 & 0.91 & 0.15 & 0.15 & 32.10 & 0.83 \\
smile & **0.11** & **0.07** & **35.75** & **0.93** & 0.14 & 0.10 & 29.87 & 0.86 & 0.17 & 0.16 & 29.83 & 0.79 \\
no smile & 0.19 & 0.16 & 29.63 & **0.86** & 0.22 & 0.21 & 24.31 & 0.74 & **0.17** & **0.15** & **30.96** & 0.81 \\
up & **0.22** & **0.23** & 24.58 & **0.76** & 0.24 & 0.27 & 22.30 & 0.71 & 0.26 & 0.35 & **25.32** & 0.67 \\
down & **0.16** & **0.14** & 28.63 & **0.84** & 0.18 & 0.17 & 26.13 & 0.80 & 0.18 & 0.22 & **33.44** & 0.78 \\
right & **0.25** & **0.32** & 19.88 & **0.60** & 0.25 & 0.32 & 19.30 & 0.59 & 0.29 & 0.41 & **23.72** & 0.55 \\
left & **0.22** & **0.27** & 21.58 & **0.65** & 0.22 & 0.27 & 20.88 & 0.64 & 0.26 & 0.36 & **26.64** & 0.60 \\
\hline
avg & **0.22** & **0.28** & **28.54** & **0.79** & 0.24 & 0.31 & 25.71 & 0.75 & 0.25 & 0.33 & 27.84 & 0.72 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Identity disentanglement. For each image, we change the values of the attributes listed in rows and compare the relation between the original input image and the modified one in terms of 4 measures: (i–ii) MSE between image embeddings taken from the Face Recognition and ArcFace models, (iii) PSNR and (iv) SSIM applied to raw images.
We also illustrate the manipulations of continuous attributes by showing the path between two extreme values of a given attribute, see Figure 5. Although the requested modifications have been successfully realized by the models, PluGeN4Faces was less invasive to the images. PluGeN could not avoid adding glasses when changing the age (left); it modified the gender of the child's face when turning the head left (middle right); it changed the color of clothes in the bottom left example. StyleFlow modified the age of a child when turning his head right (middle right) as well as added male features to the face presented in the bottom right example when the head was turned down. PluGeN4Faces was free of the aforementioned drawbacks, which demonstrates that it better disentangles the image space and is able to preserve more of the original features during edits.
**Identity preservation.** In this part, we support our sample results with a quantitative evaluation, which aims at verifying how well PluGeN4Faces disentangles the image representation. To this end, we change a single attribute of a given image and compare the resulting picture with the original image (before modification). Again, for a fair comparison, we employ a classifier and apply the minimal modification which is accepted by the attribute classifier.
Figure 5: Interpolation on the extreme values of continuous attributes.
To compare the difference between images, we apply two approaches. In the first one, we calculate the mean square error (MSE) between embeddings of the original and modified images taken from a pre-trained network. To this end, we employ two networks applicable to processing face images: ArcFace1 [5] and FR2. A model with a lower MSE preserves more features (including identity) from the original image. Second, to explicitly compare the difference between images, we also use the PSNR and SSIM measures applied to raw images. Such measures are well suited to capturing the modification of low-level features such as the background.
Footnote 1: [https://github.com/deepinsight/insightface](https://github.com/deepinsight/insightface)
Footnote 2: [https://github.com/ageitgey/face_recognition](https://github.com/ageitgey/face_recognition)
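Both comparison modes can be computed as in the sketch below; the embedding function is a placeholder for ArcFace/FR, and SSIM is omitted for brevity.

```python
import numpy as np

def embedding_mse(e1: np.ndarray, e2: np.ndarray) -> float:
    """MSE between two face embeddings (lower = identity better preserved)."""
    return float(np.mean((e1 - e2) ** 2))

def psnr(img1: np.ndarray, img2: np.ndarray, max_val: float = 255.0) -> float:
    """PSNR between two raw images (higher = smaller pixel-level change)."""
    mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
```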
Table 1 shows how the proposed measures react to changing subsequent face attributes. Each row corresponds to the requested value of the modified attribute. The results consistently confirm that PluGeN4Faces obtains significantly better scores than PluGeN and StyleFlow in most cases. One can observe that modifying the "age" attribute has a significant effect on the disentanglement measures, which suggests that changing the age leads to changes in a person's identity. Interestingly, modifying gender in face images has only a moderate influence on face identification, which could mean that the models successfully disentangled this attribute from the remaining image information. The smallest changes are observed when manipulating the "smile" and "hair" attributes.
**Attributes disentanglement.** We also verify the disentanglement between labeled attributes in a strict quantitative way. Namely, we force the change of a single attribute and verify whether the values of the other labeled attributes changed as well. Ideally, the values of the remaining attributes should stay intact.
For binary attributes (smile, gender, glasses, hair, and beard), we apply a standard accuracy measure, which shows whether the classifier keeps its original prediction on non-modified attributes. Additionally, we employ a ranking measure, which can be used for discrete as well as continuous attributes, because classifier scores do not have to be discretized in this case. In this approach, we rank the input (non-modified) images based on the scores returned by the classifier on the attribute \(A_{i}\). Next, we change the value of the attribute \(B\) and again calculate the ranking using the classifier scores on the attribute \(A_{i}\). We compare the rankings before and after the change using the Rank Correlation Coefficient (Spearman's \(\rho\)), which gives a maximal value of 1 for two identical rankings. Higher values indicate better disentanglement. We repeat this experiment for all attributes \(A_{1},\dots,A_{k}\).
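The ranking measure can be computed directly with SciPy; the scores below are synthetic stand-ins for classifier outputs.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
scores_before = rng.random(100)                              # classifier scores on A
scores_after = scores_before + 0.05 * rng.normal(size=100)   # after editing B
rho, _ = spearmanr(scores_before, scores_after)
print(f"rank correlation: {rho:.3f}")  # 1.0 would mean perfect disentanglement
```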
\begin{table}
\begin{tabular}{l|c c c c c|c|c}
\hline
 & gender & glasses & bald & beard & smile & avg. & acc. of \\
 & & & & & & & modif. \\
\hline
 & \multicolumn{7}{c}{**PluGeN4Faces (ours)**} \\
\hline
gender & - & 96.99 & **90.90** & 85.75 & 89.27 & 90.72 & **91.94** \\
glasses & **95.25** & - & 92.01 & 86.69 & 89.48 & 90.86 & 99.10 \\
bald & **94.79** & 97.17 & - & **86.98** & **90.23** & **92.29** & **96.19** \\
beard & **94.92** & 96.46 & **93.41** & - & **90.75** & **93.88** & 66.91 \\
smile & **95.84** & **96.13** & 93.41 & 86.86 & - & **93.06** & **98.14** \\
avg. & & & & & & **92.16** & **90.46** \\
\hline
 & \multicolumn{7}{c}{**PluGeN**} \\
\hline
gender & - & **97.70** & 90.69 & **85.81** & 89.87 & **91.02** & 84.28 \\
glasses & 93.28 & - & 92.57 & 86.77 & 89.68 & 90.58 & **99.41** \\
bald & 93.74 & **97.20** & - & 86.48 & 89.87 & 91.82 & 72.37 \\
beard & 86.82 & **97.14** & 93.03 & - & 90.34 & 91.83 & 75.93 \\
smile & 92.17 & 96.05 & **93.45** & 86.75 & - & 92.10 & 97.28 \\
avg. & & & & & & 91.47 & 85.86 \\
\hline
 & \multicolumn{7}{c}{**StyleFlow**} \\
\hline
gender & - & 95.38 & 90.46 & 85.65 & **90.23** & 90.43 & 90.52 \\
glasses & 94.48 & - & **92.82** & **87.09** & **90.42** & **91.20** & 98.70 \\
bald & 91.86 & 95.46 & - & 86.77 & 87.32 & 90.35 & 73.80 \\
beard & 83.47 & 95.80 & 92.59 & - & 89.70 & 90.39 & **77.65** \\
smile & 94.92 & 96.11 & 93.39 & **87.34** & - & 92.94 & 76.04 \\
avg. & & & & & & 91.06 & 83.34 \\
\hline
\end{tabular}
\end{table}
Table 2: Attributes disentanglement measured by the accuracy (higher is better). For each image, we change the values of the attributes listed in rows and verify whether the remaining attributes (listed in columns) stay unchanged. We report the percentage of successes (accuracy). In the last column, we also report the accuracy of modifying the requested attribute (listed in rows).

\begin{table}
\begin{tabular}{l|c c c c c c c c}
\hline
 & gender & glasses & bald & beard & smile & age & pitch & yaw \\
\hline
 & \multicolumn{8}{c}{**PluGeN4Faces (ours)**} \\
\hline
gender & - & **87.01** & **90.91** & 78.83 & 95.17 & 96.53 & **98.92** & **99.79** \\
glasses & **93.65** & - & **91.51** & **96.15** & **95.20** & 95.83 & **98.31** & **99.79** \\
bald & **93.98** & **89.18** & - & **96.17** & **96.87** & **98.68** & **99.05** & **99.75** \\
beard & **90.81** & **86.97** & **91.49** & - & **94.48** & 96.58 & 98.54 & **99.71** \\
smile & **95.50** & **88.61** & **95.53** & **96.86** & - & **98.38** & **98.96** & **99.74** \\
age & **86.02** & **93.94** & **83.36** & **87.94** & **89.28** & - & 95.50 & **99.58** \\
pitch & **95.57** & **89.25** & **94.45** & **96.94** & **96.25** & **98.93** & - & **98.82** \\
yaw & 91.41 & 83.49 & 90.59 & 93.66 & 93.35 & 96.96 & 97.60 & - \\
avg & **92.42** & **80.55** & **91.12** & **92.36** & **94.37** & 97.41 & **98.12** & **99.74** \\
\hline
 & \multicolumn{8}{c}{**PluGeN**} \\
\hline
gender & - & 86.86 & 89.73 & **79.93** & **95.56** & **96.84** & 98.53 & 99.65 \\
glasses & 92.52 & - & 91.14 & 95.44 & 94.09 & **96.00** & 97.74 & 99.64 \\
bald & 92.95 & 87.07 & - & 95.14 & 95.33 & 98.11 & 98.73 & 99.60 \\
beard & 85.43 & 85.65 & 88.66 & - & 93.41 & **97.21** & **98.57** & 99.47 \\
smile & 90.66 & 85.87 & 94.02 & 93.73 & - & 98.19 & 98.51 & 99.59 \\
age & 80.06 & 38.11 & 76.66 & 79.38 & 89.00 & - & **96.13** & 99.44 \\
pitch & 94.47 & 85.30 & 94.02 & 96.41 & 95.84 & 98.62 & - & 99.74 \\
yaw & **92.42** & **84.31** & **92.48** & **95.18** & **94.62** & **98.32** & **98.01** & - \\
avg & 89.78 & 79.02 & 89.53 & 90.74 & 93.97 & **97.61** & 98.03 & 99.59 \\
\hline
 & \multicolumn{8}{c}{**StyleFlow**} \\
\hline
gender & - & 80.42 & 87.13 & 65.90 & 94.37 & 95.64 & 97.76 & 99.42 \\
glasses & 91.11 & - & 90.69 & 93.99 & 93.48 & 95.03 & 97.65 & 99.46 \\
bald & 89.72 & 83.20 & - & 93.86 & 92.88 & 97.28 & 97.97 & 99.03 \\
beard & 80.51 & 84.55 & 89.33 & - & 93.80 & 95.89 & 97.84 & 99.08 \\
smile & 92.84 & 86.65 & 92.73 & 95.57 & - & 97.81 & 98.30 & 99.62 \\
age & 82.13 & 34.16 & 75.60 & 80.92 & 88.34 & - & 93.24 & 98.74 \\
pitch & 90.44 & 82.07 & 91.69 & 94.46 & 94.15 & 97.76 & - & 99.51 \\
yaw & 87.72 & 78.93 & 86.40 & 92.49 & 91.91 & 95.45 & 95.
Table 2 shows that all models obtain an average accuracy on non-target attributes above 90% and around 80% on the attributes being modified, which means that it is still more difficult to perform the modification than to keep the values of the other features. Taking the average of the accuracy scores reveals that PluGeN4Faces outperforms PluGeN and StyleFlow in both metrics. Looking at the ranking correlation presented in Table 3, we observe that the advantage of PluGeN4Faces over PluGeN and StyleFlow is even larger: it gives higher scores in 41 out of 56 cases.
The lowest correlation scores were obtained when we modified the age attribute (which aligns with the conclusion of the previous experiment). It was almost impossible to keep the ranking on the glasses attribute, which might be explained by the fact that the training set does not contain young people wearing glasses. The sample results presented in Figure 5 also showed that increasing the age attribute accidentally leads to adding glasses. Analogous negative behavior occurs in the case of the beard and hair attributes, which are highly correlated with age. This analysis shows that it is very difficult to overcome the bias introduced in a training set and provide high-quality disentanglement between some face attributes.
## 5 Conclusion
We introduced PluGeN4Faces for disentangling face attributes from the person's identity. The proposed model works as a plugin to the pre-trained StyleGAN model, which makes it extremely easy to use in practice. Our key idea relies on applying contrastive learning to images retrieved from movie frames that contain information about a person's identity. Our experiments, supported by rigorous quantitative analysis, demonstrate that PluGeN4Faces is focused on manipulating the requested attributes and is less invasive to the remaining image attributes than the existing methods.
|
2301.00201 | Exploring Singularities in point clouds with the graph Laplacian: An
explicit approach | We develop theory and methods that use the graph Laplacian to analyze the
geometry of the underlying manifold of point clouds. Our theory provides
theoretical guarantees and explicit bounds on the functional form of the graph
Laplacian, in the case when it acts on functions defined close to singularities
of the underlying manifold. We also propose methods that can be used to
estimate these geometric properties of the point cloud, which are based on the
theoretical guarantees. | Martin Andersson, Benny Avelin | 2022-12-31T13:48:42Z | http://arxiv.org/abs/2301.00201v1 | # Exploring singularities in point clouds with the graph Laplacian: an explicit approach
###### Abstract.
We develop theory and methods that use the graph Laplacian to analyze the geometry of the underlying manifold of point clouds. Our theory provides theoretical guarantees and explicit bounds on the functional form of the graph Laplacian, in the case when it acts on functions defined close to singularities of the underlying manifold. We also propose methods that can be used to estimate these geometric properties of the point cloud, which are based on the theoretical guarantees.
Key words and phrases: Graph Laplacian, geometry, singularities.
2020 Mathematics Subject Classification: Primary 58K99; Secondary 68R99, 60B99.
## 1. Introduction
High dimensional data is common in many research problems across academic fields. It is often assumed that a data set \(X=\{X_{i}\}_{i}^{n}\subset\mathbb{R}^{N}\) lies on a lower-dimensional set \(\Omega\) and is in fact a sample from a probability distribution over \(\Omega\). It is also often assumed that \(\Omega\) can be represented as the union of several manifolds \(\Omega_{i}\), where each \(\Omega_{i}\) represents a different class in a classification problem. For instance, if a data set contains two classes, \(i\) and \(j\), class \(i\) might be contained in \(\Omega_{i}\) and class \(j\) in \(\Omega_{j}\), with the two classes potentially being disjoint. However, classification is not always so clear-cut: for instance, in the MNIST dataset, handwritten digits "\(1\)" \(\in\Omega_{1}\) and "\(7\)" \(\in\Omega_{7}\) can appear very similar, suggesting that \(\Omega_{1}\cap\Omega_{7}\neq\emptyset\). Therefore, understanding geometric situations such as intersections is of interest in classification problems.
In the manifold model of data, an intersection between two different manifolds \(\Omega_{i},\Omega_{j}\) is either represented just as such, or it can be viewed as a singularity if we consider \(\Omega=\Omega_{i}\cup\Omega_{j}\) as a single manifold. Other regions in \(\Omega\) that can be viewed as singular, such as boundaries and edges, may also be of interest as they can signify important features in the data.
To study such singularities, we use the graph Laplacian \(L_{n,t}\). This operator, which depends on the number of data points \(n\) and a parameter \(t\), can act on functions defined on the data set \(X\). As \(n\) tends to infinity and \(t\) tends to \(0\), \(L_{n,t}\) converges to the Laplace-Beltrami operator in the interior of a single manifold [1]. In this work, we primarily study the behavior of \(x\to L_{n,t}f(x)\) for functions \(f\), when \(x\) is close to singular points.
Our contribution in this paper is primarily an extension and reframing of work done in [2]. At the same time, we also focus on the specific case when the function \(f\) is assumed to be of the form \(f(x)=v\cdot x\), where \(v\) is a unit vector. We also consider more restricted classes of manifolds.
Since \(L_{n,t}\) converges to Laplace-Beltrami, a second order differential operator, in the interior of \(\Omega\), we expect that for \(f\) as above \(L_{n,t}f(x)\approx 0\). However, for singular points like intersections, the limit operator is of first order [2], and \(L_{n,t}f(x)\neq 0\), which can be seen in Fig. 1.
Our results show how \(x\to L_{t}f(x)\) and, through a finite-sample bound, how \(x\to L_{n,t}f(x)\) behaves. More specifically, given \(x_{0}\in\Omega_{i}\) near some singularity, and \(x\) in the ball \(B_{R}(x_{0})\), including the case when \(x\not\in\Omega_{i}\), we show how the function \(x\to L_{n,t}f(x)\) deviates from being constantly \(0\) and has specific functional forms. These forms depend on the type of singularity. In [2] they showed what these forms are, up to some asymptotically defined error term, as \(t\to 0\), We build on this to get explicit expressions of \(L_{t}f(x)\) when \(t\) is fixed.
Overview of results: First, in Section 4.1 we consider the case that \(\Omega\) is flat manifold of dimension \(d\), and where we have a geometric situation similar to Fig. 3.
To set up the results, we start with an \(x_{0}\in\Omega\), and let \(x\in B_{R}(x_{0})\), where \(R=\sqrt{t}r_{0}>0\), and use \(\hat{x}\) to denote the projection of \(x\) to \(\Omega\). We also define \(v_{n,\Omega}\) as the projection of \(v\) onto \(x-\hat{x}\), and \(v_{n,\partial\Omega}\) is the projection of \(v\) onto the outwards normal of \(\partial\Omega\). Then we show the following:
* In Theorem 1, we let \(\left\|x-x_{0}\right\|=r\sqrt{t}\) and let \(\theta\) be the angle between the vectors \(x-x_{0}\) and \(\hat{x}-x_{0}\). If \(x\) is not close to \(\partial\Omega\), then \[L_{t}f(x)=A(x)t^{\frac{d+1}{2}}v_{n,\Omega}\sin(\theta)re^{-\sin^{2}(\theta)r^ {2}}+t^{\frac{d+1}{2}}B(x).\] The function \(A\) is close to being constantly equal to \(\pi^{d/2}\), and \(B\) can be made, uniformly, arbitrarily small. Both functions have explicit bounds.
* Theorem 2 shows what happens when \(x\) is close to \(\partial\Omega\): \[L_{t}f(x)=\widehat{A}_{1}(x)t^{\frac{d+1}{2}}v_{n,\Omega}\sin( \theta)re^{-\sin^{2}(\theta)r^{2}}\\ +\widehat{A}_{2}(x)t^{\frac{d}{2}}v_{n,\partial\Omega}e^{-\sin^{2 }(\theta)r^{2}}\quad+B(x)t^{\frac{d+1}{2}}e^{-r_{0}^{2}},\] where functions \(\widehat{A}_{1},\widehat{A}_{2}\) and \(B\) have explicitly computable bounds.
In Section 4.2 and Section 4.3 we prove more general results:
* In Theorem 3 we relax the conditions on \(\Omega\), considering non-flat manifolds, and prove a weaker version of Theorem 1.
* In Theorem 4 we relax the conditions further, and allow for noise when sampling from \(\Omega\).
To connect \(L_{t}\) to \(L_{n,t}\), in Section 4.4 we prove two finite-sample bounds.
Finally, in Section 5.1 we propose methods to find intersections in data and estimate the angle of such intersections, which are motivated by the aforementioned theorems and Corollary 4.5. We also provide numerical experiments, in Section 5, to test these methods.
## 2. Earlier work
The framework of assuming an underlying low-dimensional manifold of data, in conjunction with graph-related tools and in particular the graph Laplacian, has been used extensively. Some examples include work in clustering [3, 4, 5, 6, 7], dimensionality reduction [8, 9], and semi-supervised learning [10].
Several of the approaches to study data sets that use the graph Laplacian leverage that if the manifold is smooth enough and well-behaved, then the graph Laplacian approximates some well-understood operator (for instance the Laplace-Beltrami operator [11]), which has useful mathematical properties.
Therefore, the question of convergence properties of the graph Laplacian is useful and important, and it has been partly answered in [1, 12, 13, 9]. Particularly relevant for this paper, and highly influential on it, is the asymptotic behavior of the convergence near singularities of the manifold, which was shown in [2].
## 3. Basic mathematical objects and theory
In this section, we provide more precise definitions and introduce the basic mathematical theory we will be using to present and prove our results. This is similar to the problem setup in [2].
### Conditions on manifolds
We will consider sets of the form \(\Omega=\cup_{i=1}^{m}\Omega_{i}\), where each \(\Omega_{i}\) is a smooth and compact \(d\)-dimensional Riemannian submanifold of \(\mathbb{R}^{N}\). We will assume that if \(\Omega_{i}\) and \(\Omega_{j}\), \(i\neq j\), have a non-empty intersection, then this intersection has dimension lower than \(d\).
Associated to \(\Omega\) will be a probability measure with density \(p:\Omega\to\mathbb{R}\) such that the restriction of \(p\) to \(\Omega_{i}\) is smooth, and there are constants \(a\) and \(b\) such that \(0<a\leq p\leq b\).
If \(x\in\Omega_{i}\), we can consider the tangent space \(T_{\Omega_{i},x}\simeq\mathbb{R}^{d}\), which we will identify as a subspace of the ambient space \(\mathbb{R}^{N}\). More precisely, given open subsets \(U\subset\mathbb{R}^{d}\) and \(W\subset\Omega_{i}\) (\(W\) is open in the subspace topology of \(\Omega_{i}\)), and a coordinate chart \(\alpha:U\to W\) such that \(\alpha(0)=x\), we define \(T_{\Omega_{i},x}\) as the image of \(\mathbb{R}^{d}\) under the action of the Jacobian. We denote the Jacobian \(D\alpha:U\to\mathbb{R}^{N\times d}\), evaluated at \(0\), by \(D\alpha(0)\). The best linear approximation to \(u\mapsto\alpha(u)\) is of course given by \(u\mapsto x+D\alpha(0)u\), and \(x+T_{\Omega_{i},x}\) is the best flat approximation to \(\Omega_{i}\) around \(x\).

Figure 1. Graph Laplacian \(L_{n,t}\) acting on a linear function \(f\). Purple color shows positive, and green color negative, values of \(L_{n,t}f\); lack of color indicates values near \(0\).
The definition of \(\Omega\) implies that a point \(x\in\Omega\) can have more than one associated tangent space. For example, if \(x\in\Omega_{i}\cap\Omega_{j}\) and \(i\neq j\), then both \(T_{\Omega_{i},x}\) and \(T_{\Omega_{j},x}\) exist, and they can be different.
As a note on notation, we will denote the interior of a manifold \(\Omega_{i}\) by \(\operatorname{Int}\Omega_{i}\), and its boundary by \(\partial\Omega_{i}\).
### Types of singularities
The following are what we will refer to as singular points, which will be of four different kinds. Given \(x\in\Omega=\cup\Omega_{i}\), we have the following types:
**(Type 1)**: There is a submanifold \(\Omega_{i}\) such that \(x\in\partial\Omega_{i}\).
**(Type 2)**: There are submanifolds \(\Omega_{i}\neq\Omega_{j}\) such that \(x\in\operatorname{Int}\Omega_{i}\cap\operatorname{Int}\Omega_{j}\).
**(Type 3)**: There are submanifolds \(\Omega_{i}\neq\Omega_{j}\) such that \(x\in\partial\Omega_{i}\cap\operatorname{Int}\Omega_{j}\).
**(Type 4)**: There are submanifolds \(\Omega_{i}\neq\Omega_{j}\) such that \(x\in\partial\Omega_{i}\cap\partial\Omega_{j}\).
The different types above can of course have non-empty intersection with each other, and a _non-singular_ point is simply a point \(x\in\operatorname{Int}\Omega_{i}\) such that \(x\notin\Omega_{j}\) whenever \(j\neq i\). See Fig. 2 for two examples of singularities.
### Integration on \(\Omega\)
We will integrate scalar-valued functions, \(f:\Omega\to\mathbb{R}\), over \(\Omega\). When formulating integration of scalar-valued functions over submanifolds of \(\mathbb{R}^{N}\), we follow the approach in [14]. Because we need some preliminary results concerning integration on \(\Omega\), we make some important definitions explicit.
First, let \(x_{1},\ldots,x_{k}\) be vectors in \(\mathbb{R}^{N}\) for \(k\leq N\). If \(I=(i_{1},i_{2},\ldots,i_{k})\) is a \(k\)-tuple of integers such that \(i_{1}<i_{2}<\cdots<i_{k}\), define \(X_{I}\in\mathbb{R}^{k\times k}\) as the \(k\times k\) matrix containing only rows \(i_{1},\ldots,i_{k}\) of the matrix \(X=(x_{1},\ldots,x_{k})\). Now we can define the _volume function_ \(V:\mathbb{R}^{N\times k}\to\mathbb{R}\) by \(V(X)=\sqrt{\det(X^{t}X)}=\left[\sum_{I}\det^{2}X_{I}\right]^{1/2}\), where \(I\) ranges over \(k\)-tuples as above, see [14, Theorem 21.4].
Figure 2. There is a singularity in the intersection of the lines above. The left figure shows a point of Type 4, and the right figure shows a point of Type 2.

In general, given a coordinate chart \(\alpha:U\to W\), where \(U\subset\mathbb{R}^{d}\), \(W\subset\Omega_{i}\subset\mathbb{R}^{N}\) are open subsets, and \(D\alpha\) is the Jacobian of \(\alpha\), we can express integration over \(W\) as
\[\int_{W}f\,\mathrm{d}V=\int_{U}f\circ\alpha\,\mathrm{V}(D\alpha).\]
In the coming proofs, when integrating around a point \(x\in\operatorname{Int}\Omega_{i}\), we will change coordinates to the standard basis in \(T_{\Omega_{i},x}=\mathbb{R}^{d}\). By this we mean that we can find open sets \(W\subset\Omega_{i}\) around \(x\) such that the projection map \(\pi:W\to B\subset x+T_{\Omega_{i},x}\) is a diffeomorphism, where \(x+T_{\Omega_{i},x}\coloneqq\{\,x+y\mid y\in T_{\Omega_{i},x}\,\}\). To integrate over \(T_{\Omega_{i},x}\) we use the map \(\pi^{-1}\) precomposed with an inclusion map.
More specifically and without loss of generality, by translation and an orthonormal coordinate change, we can assume that \(T_{\Omega_{i},x}=\mathbb{R}^{d}\times\{0\}^{N-d}\). In this coordinate system we can write
\[\alpha:U\xrightarrow{i}x+T_{\Omega_{i},x}\xrightarrow{\pi^{-1}}W\subset \Omega_{i}, \tag{3.1}\]
where \(i\) is the natural inclusion map and \(U\) an open subset of \(\mathbb{R}^{d}\).
### Important bounds
The following bounds will be used later in our proofs: First, let \(T_{\Omega,x},U,W,\pi\) be as in Section 3.3. Then for any \(y\in W\),
\[\|y-\pi(y)\|\leq O(\|x-\pi(y)\|^{2}). \tag{3.2}\]
This follows since \(\Omega_{i}\) is smooth and the tangent space represents the best flat approximation of \(\Omega_{i}\) around \(x\).
To formulate the second bound, we need the lemma below.
**Lemma 3.1**.: _Let \(U,W,x,y,\Omega_{i},\pi,i,\alpha\) be as in Section 3.3. Then the following holds for the volume function \(V\):_
\[V(D\alpha(y))=1+O(\|x-\pi(y)\|^{2}).\]
Proof.: Since \(\alpha=\pi^{-1}\circ i\), and the tangent space is the best flat approximation of \(\Omega_{i}\), we can parametrize \(W\) by \(\alpha(u)=(u,g(u))\). It is then easy to see that for \(i=1,\dots,d\) we have
\[\partial_{i}\alpha(y)=(e_{i},\partial_{i}g(u)),\]
where \(\partial_{i}g(0)=0\) and \(\|\partial_{i}g(u)\|=O(\|u\|)\). Now
\[\det D\alpha_{I}=\begin{cases}1&\text{if }I=(1,2,\dots,d)\\ O(\|u\|)&\text{otherwise.}\end{cases}\]
If we Taylor expand \(x\to\sqrt{x}\), we get
\[V(D\alpha)=\left(\sum_{I}(\det D\alpha_{I})^{2}\right)^{1/2}=1+O(\|u\|^{2}),\]
and by applying the above on \((u,0)=x-\pi(y)\) we are finished.
Further, since we have a finite union \(\Omega=\cup_{i}\Omega_{i}\) and each \(\Omega_{i}\) is compact, (3.2) and the previous lemma imply that we can find a uniform bound \(L\) such that for all tuples \((U,W,x,y,\pi,\Omega_{i})\)
\[\|y-\pi(y)\|\leq L\left\|x-\pi(y)\right\|^{2} \tag{3.3}\]
and
\[|V(D\alpha)-1|\leq L\left\|x-\pi(y)\right\|^{2} \tag{3.4}\]
holds.
#### 3.4.1. \((L,r)\)-regular manifolds
To formulate our results we will need some measure of how regular, with regard to curvature, our set \(\Omega\) is. The following definition captures the necessary information.
**Definition 3.2**.: Let \(\Omega=\cup\Omega_{i}\) be a union of compact submanifolds in \(\mathbb{R}^{N}\). We also let \(r>0\) be the largest radius such that any point \(x\in\operatorname{Int}\Omega\) allows coordinate charts \(\alpha:U\to B_{r}(x)\cap\Omega_{i}\), where \(U\subset\mathbb{R}^{d}\) and \(B_{r}(x)\subset\mathbb{R}^{N}\) is an open ball of radius \(r\) around \(x\). Further, assume also that conditions (3.3) and (3.4) hold over all tuples \((U,W,x,y,\pi,\Omega_{i})\) for some \(L>0\). Then we say that \(\Omega\) is _\((L,r)\)-regular_.
**Example 3.3**.: Any smooth and compact submanifold is \((L,r)\)-regular. For instance the graph of the function \(x\to x^{2}\) over the compact interval \([-1,1]\) is \((1,1)\)-regular.
### Graph Laplacian
In this section we introduce the graph Laplacian and how it acts on real-valued functions defined on \(\mathbb{R}^{N}\).
Given \(n\) i.i.d. random samples \(X=\{X_{1},\ldots,X_{n}\}\) from the distribution with density \(p\) on \(\Omega\), we build a weighted fully connected graph \(G=(V,E)\) as follows: We let each sample \(X_{i}\) represent a vertex \(i\), and for vertices \(i,j\in V\) the weight on \((i,j)\in E\) is given by
\[W_{n,t}(i,j):=W_{n,t}(X_{i},X_{j})=\frac{1}{n}K_{t}(X_{i},X_{j})=\frac{1}{n}e^ {-\frac{\|X_{i}-X_{j}\|^{2}}{t}}.\]
The function \(W_{n,t}\) is naturally viewed as an \(n\times n\) matrix, and the variable \(t\) is in the literature often referred to as the _bandwidth_ of the _kernel_\(K_{t}\).
**Remark 3.4**.: In the limit analysis as \(n\to\infty\), it is useful to also normalize by \(\frac{1}{t^{d/2+1/2}}\). But, since we do not know the dimension \(d\) a priori, we will work without this normalization.
We define the diagonal weighted degree matrix as
\[D_{n,t}(i,i)=\sum_{j}W_{n,t}(i,j),\]
and the _graph Laplacian_\(L_{n,t}\) as
\[L_{n,t}=D_{n,t}-W_{n,t}.\]
**Remark 3.5**.: This is often referred to as the _unnormalized graph Laplacian_. There are other normalizations of this matrix which are used, for example, in [6, 7, 13]. One difference between these normalizations is their limit properties.
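For concreteness, these matrices can be assembled directly from a sample. The following short sketch is our own illustration (not code from any reference), using a dense \(n\times n\) kernel matrix:

```python
import numpy as np

def graph_laplacian(X, t):
    """Assemble W_{n,t}, D_{n,t} and the unnormalized L_{n,t} = D_{n,t} - W_{n,t}
    for a sample X of shape (n, N)."""
    n = X.shape[0]
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)  # pairwise ||X_i - X_j||^2
    W = np.exp(-sq / t) / n                                    # W_{n,t}(i, j) = K_t(X_i, X_j) / n
    D = np.diag(W.sum(axis=1))                                 # weighted degree matrix D_{n,t}
    return D - W
```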
Given the fully connected graph \(G=(V,E)\), the graph Laplacian above can be seen as an operator acting on arbitrary functions \(f:V\to\mathbb{R}\) in the following way:
\[L_{n,t}f(X_{i})=\frac{1}{n}\sum_{j}K_{t}(X_{i},X_{j})(f(X_{i})-f(X_{j})), \quad(X_{i},X_{j})\in E.\]
We extend this operator to act on functions \(f\in C_{c}(\mathbb{R}^{N},\mathbb{R})\) by the canonical choice
\[L_{n,t}f(x)=\frac{1}{n}\sum_{j}K_{t}(x,X_{j})(f(x)-f(X_{j})),\quad x\in\mathbb{R }^{N}. \tag{3.5}\]
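For a concrete illustration of (3.5), the following sketch (again our own; the toy data set of two crossing segments is chosen purely for illustration) evaluates \(L_{n,t}f\) at arbitrary points \(x\in\mathbb{R}^{N}\) for a linear function \(f(x)=v\cdot x\):

```python
import numpy as np

def L_nt_linear(x, X, v, t):
    """Evaluate L_{n,t} f(x) as in (3.5) for f(y) = v . y, with samples X of shape (n, N)."""
    K = np.exp(-np.sum((X - x) ** 2, axis=1) / t)  # kernel weights K_t(x, X_j)
    return np.mean(K * (x @ v - X @ v))            # (1/n) sum_j K_t(x, X_j)(f(x) - f(X_j))

# Toy data: two segments in R^2 crossing at the origin (a Type 2 singularity).
rng = np.random.default_rng(0)
s = rng.uniform(-1.0, 1.0, size=40_000)
X = np.vstack([np.column_stack([s[:20_000],  s[:20_000]]),    # points on the line y = x
               np.column_stack([s[20_000:], -s[20_000:]])])   # points on the line y = -x
v = np.array([0.0, 1.0])
print(L_nt_linear(np.array([0.02, 0.02]), X, v, t=1e-3))  # near the crossing: expected visibly nonzero
print(L_nt_linear(np.array([0.70, 0.70]), X, v, t=1e-3))  # interior point: expected close to 0
```

This is the behavior visible in Fig. 1: the values are small in the interior of each manifold and grow in magnitude near the singular set.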
Our main results will be stated in terms of the expected operator:
\[L_{t}f(x)=\mathbb{E}_{p}[L_{n,t}f(x)]=\int_{\Omega}K_{t}(x,y)(f(x)-f(y))p(y)\, \mathrm{d}y. \tag{3.6}\]
That this is well-defined follows from the assumptions that \(X_{1},\ldots,X_{n}\) are i.i.d., that \(f\) is continuous and that \(\Omega\) is compact.
One immediate consequence of the linearity of the integral is that
\[\begin{split} L_{t}f(x)&=\int_{\Omega}K_{t}(x,y)(f(x)-f(y))p(y)\,\mathrm{d}y\\ &=\sum_{i}\int_{\Omega_{i}}K_{t}(x,y)(f(x)-f(y))p(y)\,\mathrm{d}y.\end{split} \tag{3.7}\]
In our approach it is useful to work with the _restricted Laplacian_\(L_{t}^{i}\), which is defined by
\[L_{t}^{i}f(x)=\int_{\Omega_{i}}K_{t}(x,y)(f(x)-f(y))p(y)\,\mathrm{d}y. \tag{3.8}\]
### Gamma functions
In the proofs of several of our results we will need to handle the _Gamma function_\(\Gamma(\cdot)\), and both the _lower_ and _upper incomplete gamma functions_, \(\gamma(\cdot,\cdot)\) and \(\Gamma(\cdot,\cdot)\) respectively. These are well-known and are defined by the equations
\[\Gamma(a) =\int_{0}^{\infty}t^{a-1}e^{-t}\,\mathrm{d}t,\] \[\gamma(a,x) =\int_{0}^{x}t^{a-1}e^{-t}\,\mathrm{d}t,\] \[\Gamma(a,x) =\int_{x}^{\infty}t^{a-1}e^{-t}\,\mathrm{d}t.\]
In this paper both \(a\) and \(x\) are non-negative real numbers.
We will need the following bounds: First, if \(a\geq 1\), then \(t^{a-1}\geq x^{a-1}\) and
\[\Gamma(a,x)\geq x^{a-1}\int_{x}^{\infty}e^{-t}\,\mathrm{d}t=x^{a-1}e^{-x}. \tag{3.9}\]
Secondly, if \(e^{x}>2^{a}\), then by [15, Theorem 4.4.3],
\[\Gamma(a,x)\leq ax^{a-1}e^{-x}. \tag{3.10}\]
Finally, we need the lower bound
\[\gamma(a,a)\geq\frac{1}{2}\Gamma(a). \tag{3.11}\]
This holds because \(\gamma(a,x)\) can be viewed as an unnormalized version of the cumulative distribution function of the Gamma distribution, for which it is well known that the median is less than the mean \(a\).
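These bounds are easy to sanity-check numerically. In the sketch below (our own, for illustration only) we use SciPy's regularized incomplete gamma functions, so that \(\Gamma(a,x)\) corresponds to `gamma(a) * gammaincc(a, x)` and \(\gamma(a,x)\) to `gamma(a) * gammainc(a, x)`:

```python
import numpy as np
from scipy.special import gamma, gammainc, gammaincc

a, x = 3.0, 4.0                          # here a >= 1 and e^x > 2^a, as required
upper = gamma(a) * gammaincc(a, x)       # the upper incomplete gamma Gamma(a, x)
assert upper >= x ** (a - 1) * np.exp(-x)           # (3.9)
assert upper <= a * x ** (a - 1) * np.exp(-x)       # (3.10)
assert gamma(a) * gammainc(a, a) >= 0.5 * gamma(a)  # (3.11): gamma(a, a) >= Gamma(a) / 2
```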
## 4. Main results
Now that we have the necessary definitions and mathematical background, we are ready to present and prove our main results. Before stating the theorems, we will provide a brief section that explains the geometry of some terms that will be used in the theorem statements. This will help make the theorems easier to understand.
**Remark 4.1**.: Some of our results are given in the particular case when \(\Omega=\cup\Omega_{i}\) is such that each \(\Omega_{i}\) is flat. This is easier to analyze and gives better bounds, but it is also motivated by a particular use-case: Sets of the form
\[\Omega_{i}=\{W\in\mathbb{R}^{k}:|f_{W}(x)-g(x)|=0\quad\text{for all }x\in\mathcal{D}\},\]
where \(f_{W}\) is a neural network with weights \(W\) and ReLU activation functions. Here \(g\) is a target function, and \(\mathcal{D}\) some dataset. That is, these are the zero sets of the loss one tries to minimize during training of a common type of neural network.
### General structure of results
By (3.7) it is enough to understand the restricted Laplacian \(L_{t}^{i}\) defined in (3.8). Because of this, our results are formulated to show the behavior of \(L_{t}^{i}\). Depending on what type of singularity is being examined, it is easy to extend the results to the full Laplacian. In Corollary 4.5 we give one example of how to extend the results to the sum \(\sum_{i=1}^{2}L_{t}^{i}\) when one is close to an intersection of two manifolds.
#### Geometry and notation for Section 4.1
We will in several theorems also formulate the function \(x\to L_{t}^{i}f(x)\) partly in terms of new coordinates \((r,\theta)\). Here \(r\) is defined by the relation \(\left\|x-x_{0}\right\|=\sqrt{t}r\), and given the projection \(\hat{x}\) of \(x\) to a plane \(\Omega_{i}\), we define \(\theta\in[0,\pi/2]\) to be the angle between the vectors \(x-x_{0}\) and \(\hat{x}-x_{0}\), as in the schematic in Fig. 3. By simple geometry, it also follows that \(\left\|\hat{x}-x\right\|=r\sqrt{t}\sin\theta\).
Given a vector \(v\in\mathbb{R}^{N}\), we will have reason to write the expression \(v\cdot(\hat{x}-x)\) as
\[v\cdot(\hat{x}-x)=r\sqrt{t}\sin(\theta)v\cdot\frac{\hat{x}-x}{\left\|\hat{x}- x\right\|}=r\sqrt{t}\sin(\theta)v_{n,\Omega_{i}}(x),\]
where we have defined
\[v_{n,\Omega_{i}}(x)\coloneqq v\cdot\frac{\hat{x}-x}{\left\|\hat{x}-x\right\|}.\]
In other words, \(v_{n,\Omega_{i}}\) is the projection of \(v\) onto a unit normal vector of \(\Omega_{i}\), but it depends on \(x\). We define this function to be \(0\) when \(x=\hat{x}\), and we note that for \(x\neq\hat{x}\) this function is constant up to its sign. This implies that evaluating \(r\sqrt{t}\sin(\theta)v_{n,\Omega_{i}}(x)\) is the same as keeping \(v_{n,\Omega_{i}}\) fixed but allowing \(\theta\) to change sign depending on which side of \(\Omega_{i}\) the point \(x\) lies, i.e., as if we had fixed the coordinate system in which we measure the angle \(\theta\). In our theorem statements we will suppress the \(x\)-dependency of \(v_{n,\Omega_{i}}\), to increase readability.
Additionally, in Theorem 2 we will have a term \(v_{n,\partial\Omega_{i}}\) that is specific to that theorem. This will be defined in the case where there is a boundary close to \(x\). In Fig. 3, this would imply there is a boundary of \(\Omega_{1}\) nearby.
To give the definition of this term, we first let \(\hat{x}_{\partial\Omega_{i}}\) be the projection of \(\hat{x}\) to \(\partial\Omega_{i}\). We can now define a unit normal at \(\hat{x}_{\partial\Omega_{i}}\), denoted by \(n_{\partial\Omega_{i}}\). Two choices are natural: a normal pointing either towards or away from \(\Omega_{i}\). We define \(n_{\partial\Omega_{i}}\) as the latter. Given a vector \(v\in\mathbb{R}^{N}\), we can define
\[v_{n,\partial\Omega}\coloneqq v\cdot n_{\partial\Omega}.\]
In Theorem 2 we will be close to part of the boundary \(\partial\Omega\) where \(n_{\partial\Omega}\) is constant. This implies that, unlike \(v_{n,\Omega_{i}}\), \(v_{n,\partial\Omega}\) does not depend on \(x\), but is (locally) constant.
#### Geometry and notation for Section 4.2

To help with the geometric picture for general manifolds, the situation is as explained in Section 4 and Fig. 3: the terms \(x,x_{0},\hat{x},\theta\) and \(v_{n,\Omega_{i}}\) are in the same relation to each other as in Section 4, but instead of projecting \(x\) to a flat manifold \(\Omega_{i}\) we project \(x\) to the (flat) tangent plane \(T_{\Omega_{i},x_{0}}\). In that sense the geometry for more general manifolds is not more difficult, but handling the error terms is more involved.
### Flat manifolds
In this section we assume that \(\Omega=\cup_{i}\Omega_{i}\), where each \(\Omega_{i}\) is a flat manifold; that is, each coordinate chart around \(x\in\operatorname{Int}\Omega_{i}\) is an isometry between an open ball \(U\subset\mathbb{R}^{d}\) and an open neighborhood of \(x\) in \(\Omega_{i}\).
In Theorem 1 we give a result concerning the behavior of \(x\to L_{t}^{i}f(x)\) when we are _not_ close to the boundary \(\partial\Omega\). This case is easier to prove, and we give explicit bounds of all terms involved, and express them with elementary functions.
In Theorem 2 we show what happens when we are close to \(\partial\Omega\), but we have more involved expressions for some terms.
In the following theorems, it is the point \(x_{0}\) one should think of as potentially being a singular point, see Fig. 3, and the theorems show us how \(x\to L_{t}^{i}f(x)\) behaves in a neighborhood around this singular point. By combining Theorem 1 and Theorem 2, it is possible to consider several types of singularities defined in Section 3.2.
Figure 3. Schematic picture of the geometry of Theorem 1, where \(\Omega_{1}\) is the object of interest and \(x\in\Omega_{2}\) for visualization purposes.
**Theorem 1**.: _Let \(f(x)=v\cdot x\) for some unit vector \(v\in\mathbb{R}^{N}\) and assume that \(p\) is the uniform density over \(\Omega=\cup_{i}\Omega_{i}\). Let \(x_{0}\in\Omega_{i}\) and assume that \(\partial\Omega_{i}\cap B_{2R}(x_{0})=\varnothing\) for \(R=r_{0}\sqrt{t}\), where \(r_{0}>2\). Further, \(x\in B_{R}(x_{0})\), and \(v_{n,\Omega_{i}}\), \(r\) and \(\theta\) are as described in Section 4. If \(t\leq\frac{R^{2}}{d/2+1}\), \(d\geq 1\) and \(r<1\), then we have that_
\[L_{t}^{i}f(x)=t^{d/2+1/2}\left(A(d,r_{0},\theta)v_{n,\Omega_{i}}\sin\theta re^{-\sin^{2}\theta r^{2}}+B(x)e^{-r_{0}^{2}}\right),\]
_where \(A,B\) are real-valued functions. The function \(B\) depends on \(x\) and is uniformly bounded by \(|B(x)|\leq 2^{\frac{d+1}{2}}r_{0}^{d}|\mathbb{S}^{d-1}|\); and \(A\) depends on \(x\) only through \(\theta\), and is bounded by_
\[\max\left(\pi^{d/2},\,2\pi^{d/2}-|\mathbb{S}^{d-1}|2^{d/2}r_{0}^{d-1}e^{-r_{0}^{2}+1}\right)\leq A(d,r_{0},\theta)\leq 2\pi^{d/2}.\]
Proof.: Since \(x\to L_{t}^{i}f(x)\) is translation and rotation invariant, we can without loss of generality assume that \(\Omega_{i}\) is oriented in \(\mathbb{R}^{N}\) in such a way that it is a subset of \(\mathbb{R}^{d}\times\{0\}^{N-d}\).
We want to evaluate
\[L_{t}^{i}f(x)=\int_{\Omega_{i}}K_{t}(x,y)(f(x)-f(y))p\,\mathrm{d}y.\]
We begin by splitting the integral above into
\[\int_{\Omega_{i}}K_{t}(x,y)(f(x)-f(y))p\,\mathrm{d}y =\int_{B_{R}(x)\cap\Omega_{i}}K_{t}(x,y)(f(x)-f(y))p\,\mathrm{d}y\] \[\quad+\int_{\Omega_{i}\smallsetminus B_{R}(x)}K_{t}(x,y)(f(x)-f(y) )p\,\mathrm{d}y\] \[=I_{1}+I_{2}. \tag{4.1}\]
For estimating \(I_{2}\), by translation invariance we can WLOG assume that \(x=0\). Now we make a change of variables and rescale \(y\), which allows us to say that
\[|I_{2}| =\left|\int_{\Omega_{i}\smallsetminus B_{R}(0)}K_{t}(0,y)(f(0)-f(y) )p\,\mathrm{d}y\right|\] \[=\left|\int_{\Omega_{i}\smallsetminus B_{r_{0}\sqrt{t}}(0)}e^{-|y|^ {2}/t}v\cdot(-y)p\,\mathrm{d}y\right|\] \[=\left|\int_{\left(\frac{1}{\sqrt{t}}\Omega_{i}\right)\smallsetminus B _{r_{0}}(0)}e^{-|y|^{2}}v\cdot(-y\sqrt{t})t^{d/2}p\,\mathrm{d}y\right|\] \[\leq t^{d/2+1/2}\int_{\mathbb{R}^{d}\smallsetminus B_{r_{0}}}e^{-|y |^{2}}\|y\|p\,\mathrm{d}y.\]
Now, by first changing to spherical coordinates and integrating out the angular parts, we deduce that
\[|I_{2}|\leq t^{d/2+1/2}\left|\mathbb{S}^{d-1}\right|p\int_{r_{0}}^{\infty}e^{-s^{2}}s^{d}\,\mathrm{d}s\leq pt^{d/2+1/2}\left|\mathbb{S}^{d-1}\right|\Gamma\left(\frac{d+1}{2},r_{0}^{2}\right). \tag{4.2}\]
To finalize the bound of \(I_{2}\), we note that it follows from the assumption \(t\leq\frac{R^{2}}{d/2+1}\) that \(r_{0}^{2}>\frac{d+1}{2}\), and we can use (3.10) and (4.2) to conclude
\[|I_{2}|\leq B(x)t^{d/2+1/2}e^{-r_{0}^{2}}, \tag{4.3}\]
where \(B(x)\) is some function such that
\[B(x)\leq\frac{d+1}{2}r_{0}^{d}p|\mathbb{S}^{d-1}|.\]
To bound \(I_{1}\), we use the following simple geometric fact:
\[\|x-y\|^{2}=\|\hat{x}-y\|^{2}+\|\hat{x}-x\|^{2}=\|\hat{x}-y\|^{2}+\sin^{2} \theta r^{2}t,\]
which implies that
\[e^{-\|x-y\|^{2}/t}=e^{-\sin^{2}\theta r^{2}}e^{-\|\hat{x}-y\|^{2}/t}.\]
From the above we can conclude
\[I_{1} =e^{-\sin^{2}\theta r^{2}}\int_{B_{R}(x)\cap\Omega_{i}}e^{-|\hat{ x}-y|^{2}/t}v\cdot(x-y)p\,\mathrm{d}y\] \[=e^{-\sin^{2}\theta r^{2}}\Bigg{(}\int_{B_{R}(x)\cap\Omega_{i}}e^ {-|\hat{x}-y|^{2}/t}v\cdot(x-\hat{x})p\,\mathrm{d}y\] \[\qquad\qquad+\int_{B_{R}(x)\cap\Omega_{i}}e^{-|\hat{x}-y|^{2}/t} v\cdot(\hat{x}-y)p\,\mathrm{d}y\Bigg{)}\] \[=e^{-r^{2}\sin^{2}\theta}(II+III). \tag{4.4}\]
It is easier to integrate over a ball centered around \(\hat{x}\), and to this end we define \(\delta\geq 0\) by
\[\delta=\sqrt{R^{2}-tr^{2}\sin^{2}\theta}. \tag{4.5}\]
Then since \(\hat{x}\) is the orthogonal projection of \(x\), we have that \(B_{R}(x)\cap\Omega_{i}=B_{\delta}(\hat{x})\cap\Omega_{i}\).
Let us focus on \(II\): we use (4.5) and change to spherical coordinates, which yields
\[II=v\cdot(\hat{x}-x)t^{d/2}\int_{B_{\delta/\sqrt{t}}(\hat{x})\cap\Omega_{i}}e^{-|\hat{x}-y|^{2}}p\,\mathrm{d}y=v\cdot(\hat{x}-x)t^{d/2}|\mathbb{S}^{d-1}|p\int_{0}^{\delta/\sqrt{t}}e^{-s^{2}}s^{d-1}\,\mathrm{d}s\] \[=v\cdot(\hat{x}-x)t^{d/2}|\mathbb{S}^{d-1}|p\,\gamma(d/2,\delta^{2}/t)=v\cdot\frac{\hat{x}-x}{\|\hat{x}-x\|}t^{d/2+1/2}r\sin\theta|\mathbb{S}^{d-1}|p\,\gamma(d/2,\delta^{2}/t). \tag{4.6}\]
To estimate the RHS of (4.6) we will bound the \(\gamma\) from above and below: Using \(r_{0}^{2}\geq\frac{d+2}{2}\), \(r<1\) and the definition of \(\delta\), we get
\[\frac{d}{2}\leq r_{0}^{2}-\sin^{2}\theta r^{2}=\frac{\delta^{2}}{t}.\]
By (3.11) we now see that
\[\frac{1}{2}\Gamma(d/2)\leq\gamma(d/2,d/2)\leq\gamma(d/2,\delta^{2}/t). \tag{4.7}\]
Further, an application of (3.9) yields
\[\gamma(d/2,\delta^{2}/t) \leq\gamma(d/2,r_{0}^{2})=\Gamma(d/2)-\Gamma(d/2,r_{0}^{2})\leq \Gamma(d/2)-(r_{0}^{2})^{d/2-1}e^{-r_{0}^{2}}\] \[=\Gamma(d/2)-r_{0}^{d-2}e^{-r_{0}^{2}}. \tag{4.8}\]
Now (4.6)-(4.8) together with \(|\mathbb{S}^{d-1}|=\frac{2\pi^{d/2}}{\Gamma(d/2)}\) finally gives
\[II=A(d,r_{0},\theta)v_{n,\Omega_{i}}t^{d/2+1/2}r\sin\theta,\]
where
\[\max\left(p\pi^{d/2},\,2\pi^{d/2}p-p\left|\mathbb{S}^{d-1}\right|r_{0}^{d-2}e^{-r_{0}^{2}}\right)\leq A(d,r_{0},\theta)\leq 2p\pi^{d/2}. \tag{4.9}\]
Finally, \(III=0\). This follows from the fact that \(B_{R}(x)\cap\partial\Omega_{i}=\emptyset\), the rotational symmetry of \(K\), and the fact that the integrand is odd around \(\hat{x}\). Collecting (4.1), (4.3), (4.4) and (4.6) we get
\[L_{t}^{i}f(x)=t^{d/2+1/2}\left(A(d,r_{0},\theta)v_{n,\Omega_{i}}\sin\theta re^{-\sin^{2}\theta r^{2}}+B(x)e^{-r_{0}^{2}}\right).\]
The following theorem is an extension of Theorem 1 to the case when the ball \(B_{R}(x_{0})\cap\partial\Omega_{i}\neq\emptyset\), which gives rise to an additional term in the expression of \(L_{t}^{i}f(x)\). We again refer to the schematic picture of Fig. 3 and comments in Section 4 for explanation of the coordinates \((r,\theta)\), function \(v_{n,\Omega_{i}}\) and constant \(v_{n,\partial\Omega_{i}}\).
**Theorem 2**.: _Let \(f(x)=v\cdot x\) for some unit vector \(v\in\mathbb{R}^{N}\), and assume that \(p\) is the uniform density over \(\Omega=\cup_{i}\Omega_{i}\). Let \(x_{0}\in\Omega_{i}\) and assume that \(\partial\Omega_{i}\cap B_{2R}(x_{0})\) is part of a \(d-1\) dimensional plane for \(R=r_{0}\sqrt{t}\), where \(r_{0}>2\). Further, \(x\in B_{R}(x_{0})\), and \(v_{n,\Omega_{i}}\), \(v_{n,\partial\Omega_{i}}\), \(r\) and \(\theta\) are as described in Section 4. If \(t\leq\frac{R^{2}}{d/2+1}\), \(d\geq 1\) and \(r<1\), then we have that_
\[L_{t}^{i}f(x) =\widehat{A}_{1}(x)t^{\frac{d+1}{2}}v_{n,\Omega_{i}}\sin(\theta) re^{-\sin^{2}(\theta)r^{2}}+\widehat{A}_{2}(x)t^{\frac{d}{2}}v_{n,\partial \Omega_{i}}e^{-\sin^{2}(\theta)r^{2}}\] \[\quad+B(x)t^{\frac{d+1}{2}}e^{-r_{0}^{2}},\]
_for explicitly computable function \(\widehat{A}_{2}\), and with explicitly computable bounds of function \(\widehat{A}_{1}\). The function \(B\) has the same bounds as in Theorem 1._
**Remark 4.2**.: The function \(\widehat{A}_{1}\) is bounded by
\[\frac{1}{2\delta_{0}}\left(e^{-k_{0}^{2}}\gamma\left(\frac{d-1}{2},\delta_{0} ^{2}-k_{0}^{2}\right)-\frac{2(\delta_{0}^{2}-k_{0}^{2})^{\frac{d-1}{2}}}{d-1} \right)\leq\widehat{A}_{1}\leq\Gamma\left(\frac{d-1}{2}\right)\sqrt{\pi}\]
and \(\widehat{A}_{2}\) is given by
\[\widehat{A}_{2}=\frac{|\mathbb{S}^{d-2}|}{2}\left(e^{-\delta_{0}^{2}}\frac{(\delta_{0}^{2}-k_{0}^{2})^{(d-1)/2}}{d-1}+\frac{1}{2}e^{-k_{0}^{2}}\gamma\left(\frac{d-1}{2},\delta_{0}^{2}-k_{0}^{2}\right)\right).\]
To define \(k_{0}\) and \(\delta_{0}\), we recall the geometric picture of Section 4. Then \(K\) is the projection of \((\hat{x}-\hat{x}_{\partial\Omega_{i}})\) to \(n_{\partial\Omega_{i}}\), \(k_{0}=K/\sqrt{t}\), and \(\delta_{0}=\sqrt{r_{0}^{2}-r^{2}\sin^{2}\theta}\).
Proof.: We will follow the proof of Theorem 1 and modify where needed. Let \(I_{2},II\) and \(III\) be defined as in (4.1) and (4.4). Then, since \(I_{2}\) is bounded like in (4.3), we only need to find bounds for \(II\) and \(III\).
Let \(\delta\) be defined as in (4.5) and define \(\delta_{0}=\delta/\sqrt{t}\). Recall also the fact that \(B_{R}(x)\cap\Omega_{i}=B_{\delta}(\hat{x})\cap\Omega_{i}\). Now the difference in bounding \(II\) and \(III\) to the proof of Theorem 1 is that \(B_{\delta}(\hat{x})\cap\partial\Omega_{i}\) is nonempty. Since, by assumption, \(\partial\Omega_{i}\) is part of a \(d-1\)-dimensional flat space, \(B_{\delta}(\hat{x})\cap\Omega_{i}\) is a \(d\)-dimensional ball, but missing a spherical cap.
We now use cylindrical coordinates \((h,\varrho,\varphi)\) to describe the domain \(B_{\delta/\sqrt{t}}(\hat{x})\cap\Omega_{i}\). In these new coordinates we are centered around \(\hat{x}\), and \((\varrho,\varphi)\) are coordinates for a \(d-1\)-dimensional ball tangential to \(\partial\Omega_{i}\), while the perpendicular coordinate \(h\) is oriented along the outwards normal of \(\partial\Omega_{i}\). Let us denote this unit normal by \(n_{\partial\Omega}\), and the projection of \(\hat{x}\) to \(\partial\Omega\) by \(\hat{x}_{\partial\Omega}\). We now set \(K=(\hat{x}-\hat{x}_{\partial\Omega})\cdot n_{\partial\Omega}=\sqrt{t}k_{0}\), where \(-\delta_{0}\leq k_{0}\leq\delta_{0}\).
Then, with \(III\) defined in (4.4) we get
\[III=\int_{-\delta}^{K}\int_{0}^{\sqrt{\delta^{2}-h^{2}}}\int_{\mathbb{S}^{d-2}} K_{t}(\hat{x},y)v\cdot(\hat{x}-y)\varrho^{d-2}\,\mathrm{d}\varphi\,\mathrm{d} \varrho\,\mathrm{d}h.\]
We split \(v\) into a normal component \(v_{n}=(v\cdot n_{\partial\Omega})n_{\partial\Omega}\) and a component \(v_{T}=v-v_{n}\) which is tangential to the boundary \(\partial\Omega\). Then, since the function \(y\to v_{T}\cdot(\hat{x}-y)\) is odd as a function centered around \(\hat{x}\), and the domain of integration is symmetric around \(\hat{x}\), we know that the tangential component of \(III\) satisfies
\[III_{T}:=\int_{-\delta}^{K}\int_{0}^{\sqrt{\delta^{2}-h^{2}}}\int_{\mathbb{S}^ {d-2}}K_{t}(\hat{x},y)v_{T}\cdot(\hat{x}-y)\varrho^{d-2}\,\mathrm{d}\varphi\, \mathrm{d}\varrho\,\mathrm{d}h=0.\]
By definition of \(v_{n,\partial\Omega}\), we have that \(v_{n}\cdot(\hat{x}-y)=v_{n,\partial\Omega}(n_{\partial\Omega}\cdot(\hat{x}-y) )=v_{n,\partial\Omega}h\), which implies that
\[III =v_{n,\partial\Omega}\int_{-\delta}^{K}\int_{0}^{\sqrt{\delta^{2 }-h^{2}}}\int_{\mathbb{S}^{d-2}}K_{t}(\hat{x},y)h\varrho^{d-2}\,\mathrm{d} \varphi\,\mathrm{d}\varrho\,\mathrm{d}h\] \[=v_{n,\partial\Omega}\int_{-\delta}^{K}\int_{0}^{\sqrt{\delta^{2 }-h^{2}}}\int_{\mathbb{S}^{d-2}}e^{-h^{2}/t-\varrho^{2}/t}h\varrho^{d-2}\, \mathrm{d}\varphi\,\mathrm{d}\varrho\,\mathrm{d}h\] \[=t^{d/2}v_{n,\partial\Omega}\int_{-\delta_{0}}^{k_{0}}he^{-h^{2}} \int_{0}^{\sqrt{\delta_{0}^{2}-h^{2}}}\int_{\mathbb{S}^{d-2}}e^{-\varrho^{2}} \varrho^{d-2}\,\mathrm{d}\varphi\,\mathrm{d}\varrho\,\mathrm{d}h.\]
Continuing with the two inner integrals,
\[\int_{0}^{\sqrt{\delta_{0}^{2}-h^{2}}}\int_{\mathbb{S}^{d-2}}e^{- \varrho^{2}}\varrho^{d-2}\,\mathrm{d}\varphi\,\mathrm{d}\varrho =\frac{|\mathbb{S}^{d-2}|}{2}\int_{0}^{\delta_{0}^{2}-h^{2}}e^{-s }s^{d/2-3/2}\,\mathrm{d}s\] \[=\frac{|\mathbb{S}^{d-2}|}{2}\gamma\left(\frac{d-1}{2},\delta_{0} ^{2}-h^{2}\right).\]
Using this expression in the full integral and applying partial integration in the second equality below yields
\[III =t^{d/2}v_{n,\partial\Omega}\frac{|\mathbb{S}^{d-2}|}{2}\int_{-\delta_{0}}^{k_{0}}e^{-h^{2}}h\gamma\left(\frac{d-1}{2},\delta_{0}^{2}-h^{2}\right)\,\mathrm{d}h\] \[=t^{d/2}v_{n,\partial\Omega}\frac{|\mathbb{S}^{d-2}|}{2}\left(\frac{1}{2}\left[-e^{-h^{2}}\gamma\left(\frac{d-1}{2},\delta_{0}^{2}-h^{2}\right)\right]_{-\delta_{0}}^{k_{0}}\right.\] \[\qquad-\frac{1}{2}e^{-\delta_{0}^{2}}\int_{-\delta_{0}}^{k_{0}}(\delta_{0}^{2}-h^{2})^{(d-3)/2}h\,\mathrm{d}h\right)\] \[=t^{d/2}v_{n,\partial\Omega}\frac{|\mathbb{S}^{d-2}|}{2}\left(\frac{1}{2}e^{-k_{0}^{2}}\gamma\left(\frac{d-1}{2},\delta_{0}^{2}-k_{0}^{2}\right)+e^{-\delta_{0}^{2}}\frac{(\delta_{0}^{2}-k_{0}^{2})^{(d-1)/2}}{d-1}\right).\]
Thus, we know that
\[III=t^{d/2}v_{n,\partial\Omega}\frac{|\mathbb{S}^{d-2}|}{2}\left(e^{-\delta_{0 }^{2}}\frac{(\delta_{0}^{2}-k_{0}^{2})^{(d-1)/2}}{d-1}+\frac{1}{2}e^{-k_{0}^{2} }\gamma\left(\frac{d-1}{2},\delta_{0}^{2}-k_{0}^{2}\right)\right). \tag{4.10}\]
We now address the integral \(II\) defined in (4.4), which means we need to calculate
\[J\coloneqq\int_{B_{R}(x)\cap\Omega_{i}}e^{-\|\hat{x}-y\|^{2}/t}p\,\mathrm{d}y.\]
After a change to cylindrical coordinates as for \(III\), we rewrite this integral, up to the constant factor \(pt^{d/2}|\mathbb{S}^{d-2}|/2\), as
\[J=\int_{-\delta_{0}}^{k_{0}}e^{-h^{2}}\gamma\left(\frac{d-1}{2},\delta_{0}^{2} -h^{2}\right)\,\mathrm{d}h.\]
We can immediately bound \(J\) from above by
\[\Gamma\left(\frac{d-1}{2}\right)\int_{-\delta_{0}}^{k_{0}}e^{-h^{2}}\,\mathrm{ d}h\leq\Gamma\left(\frac{d-1}{2}\right)\int_{-\infty}^{\infty}e^{-h^{2}}dx= \Gamma\left(\frac{d-1}{2}\right)\sqrt{\pi}. \tag{4.11}\]
Now we bound \(J\) from below: Since the integrand is positive, we can without loss of generality assume that \(k_{0}<0\). Then a change of variables \(h=-\sqrt{\delta_{0}^{2}-y}\) yields that
\[J \geq e^{-\delta_{0}^{2}}\int_{0}^{\delta_{0}^{2}-k_{0}^{2}}e^{y} \gamma\left(\frac{d-1}{2},y\right)\frac{1}{2\sqrt{\delta_{0}^{2}-y}}\,\mathrm{ d}y\] \[\geq e^{-\delta_{0}^{2}}\frac{1}{2\delta_{0}}\int_{0}^{\delta_{0} ^{2}-k_{0}^{2}}e^{y}\gamma\left(\frac{d-1}{2},y\right)\,\mathrm{d}y.\]
Using partial integration above we then get
\[J \geq\frac{e^{-\delta_{0}^{2}}}{2\delta_{0}}\left[e^{y}\gamma\left(\frac{d-1}{2},y\right)-\frac{y^{\frac{d-1}{2}}}{\frac{d-1}{2}}\right]_{0}^{\delta_{0}^{2}-k_{0}^{2}}\] \[=\frac{e^{-\delta_{0}^{2}}}{2\delta_{0}}\left(e^{\delta_{0}^{2}-k_{0}^{2}}\gamma\left(\frac{d-1}{2},\delta_{0}^{2}-k_{0}^{2}\right)-\frac{2(\delta_{0}^{2}-k_{0}^{2})^{\frac{d-1}{2}}}{d-1}\right).\]
Simplifying further gives us
\[J\geq\frac{1}{2\delta_{0}}\left(e^{-k_{0}^{2}}\gamma\left(\frac{d-1}{2}, \delta_{0}^{2}-k_{0}^{2}\right)-e^{-\delta_{0}^{2}}\frac{2\left(\delta_{0}^{2 }-k_{0}^{2}\right)^{\frac{d-1}{2}}}{d-1}\right). \tag{4.12}\]
Thus, equation (4.10) and the bounds in (4.11) and (4.12) prove the theorem.
### General manifolds
In this section we no longer assume that each \(\Omega_{i}\) is flat, but allow it to be more general, as defined in Section 3.1. We will also assume that \(\Omega\) is \((L,r)\)-regular, see Definition 3.2. The type of singularity we deal with for a more general manifold will be of Type 2, and we will assume we are not too close to any boundary.
**Theorem 3** (General manifold).: _Let \(f(x)=v\cdot x\) for some unit vector \(v\in\mathbb{R}^{N}\) and assume that \(p\) is the uniform density over a \((L,2R)\)-regular union of manifolds \(\Omega=\cup\Omega_{i}\). Let \(x_{0}\in\Omega_{i}\) and assume that \(\partial\Omega_{i}\cap B_{2R}(x_{0})=\varnothing\) for \(R=r_{0}\sqrt{t}\), where \(r_{0}>2\). Further, \(x\in B_{R}(x_{0})\), and \(v_{n,\Omega_{i}}\), \(r\) and \(\theta\) are as described in Section 4. If \(L4R^{2}\leq\frac{1}{2}\), \(t\leq\frac{R^{2}}{d/2+1}\), \(d\geq 1\) and \(r<1\), then we have that_
\[L_{t}^{i}f(x)=t^{d/2+1/2}\widehat{A}(x)v_{n,\Omega_{i}}r\sin\theta e^{-r^{2} \sin^{2}\theta}+t^{d/2}C_{L,R}(x)4p\pi^{d/2}+e^{-r_{0}^{2}}D(x).\]
_In the above, \(\widehat{A}\) is a function such that_
\[|A(d,r,\theta)-\widehat{A}(x)|\leq(1+3C_{L,R})A(d,r,\theta)\]
_where \(A(d,r,\theta)\) is as in Theorem 1; \(C_{L,R}\) is a function such that_
\[|C_{L,R}(x)|\leq LR^{2}(1+4LR^{2})+(4LR^{2})^{2};\]
_and \(|D(x)|\leq\operatorname{diam}(\Omega)\)._
Proof.: We begin by splitting up the domain \(\Omega_{i}\):
\[\begin{split} L_{t}^{i}f(x)&=\int_{\Omega_{i}}K_{t }(x,y)(f(x)-f(y))p\,\mathrm{d}y\\ &=\int_{\Omega_{i}\cap B_{R}(x)}K_{t}(x,y)(f(x)-f(y))p\,\mathrm{d }y\\ &\quad+\int_{\Omega_{i}\setminus B_{R}(x)}K_{t}(x,y)(f(x)-f(y))p \,\mathrm{d}y\\ &=I+II.\end{split} \tag{4.13}\]
We first note that
\[II=\int_{\Omega_{i}\setminus B_{R}(x)}K_{t}(x,y)(f(x)-f(y))p\,\mathrm{d}y\leq e ^{-R^{2}/t}\operatorname{diam}(\Omega). \tag{4.14}\]
To estimate \(I\) we will make a change of variables to the tangent space at \(x_{0}\) and use arguments similar to those in the proof of Theorem 1. Specifically, let \(\pi:\Omega_{i}\cap B_{R}(x)\to T_{\Omega_{i},x_{0}}\cap B_{R}(x)\) be the projection map, and \(\alpha=\pi^{-1}\circ i:\mathbb{R}^{d}\cap B_{R}(0)\to\Omega_{i}\cap B_{R}(x)\) a coordinate chart as in (3.1). We will use \(\alpha\) to integrate over \(T_{\Omega_{i},x_{0}}\).
To simplify notation, we will use \(\hat{x}\) and \(\hat{y}\) to denote both \(\pi(x),\pi(y)\in\mathbb{R}^{N}\), and sometimes implicitly assume the projection \(i^{-1}\) such that \(\hat{x},\hat{y}\in\mathbb{R}^{d}\). The space in which these points lie should be clear from context.
Before making the coordinate change, we find bounds relating \(K(x,y)\) to \(K(x,\hat{y})\): We recall that \(K_{t}(x,y)=e^{\frac{-|x-y|^{2}}{t}}\), and from the triangle inequality we get
\[e^{-\frac{|x-\hat{y}|^{2}}{t}-\frac{|y-\hat{y}|^{2}}{t}}\leq e^{-\frac{|x-y|^ {2}}{t}}\leq e^{-\frac{|x-\hat{y}|^{2}}{t}+\frac{|y-\hat{y}|^{2}}{t}}. \tag{4.15}\]
Since \(\Omega_{i}\) is \((L,2R)\)-regular, we use (3.3) and the fact that \(y\in B_{2R}(x_{0})\) to conclude
\[\|y-\hat{y}\|\leq L\left\|x_{0}-\hat{y}\right\|^{2}\leq L4R^{2},\]
which together with (4.15) yields
\[e^{-(L4R^{2})^{2}}K_{t}(x,\hat{y})\leq K_{t}(x,y)\leq e^{(L4R^{2})^{2}}K_{t}( x,\hat{y}).\]
Furthermore, since \(L4R^{2}\leq\frac{1}{2}\) we have the bounds
\[e^{(L4R^{2})^{2}}\leq 1+(L4R^{2})^{2}\quad\text{and}\quad e^{-(L4R^{2})^{2}} \geq 1-(L4R^{2})^{2}.\]
Thus,
\[|K_{t}(x,y)-K_{t}(x,\hat{y})|\leq(L4R^{2})^{2}K_{t}(x,\hat{y}). \tag{4.16}\]
Replacing \(K_{t}(x,y)\) with \(K_{t}(x,\hat{y})\) in \(I\) we get
\[I=\int_{\Omega_{i}\cap B_{R}(x)}K_{t}(x,\hat{y})(f(x)-f(y))p\,\mathrm{d}y+E_{1}, \tag{4.17}\]
and using (4.16) it holds that
\[|E_{1}|\leq C_{L,R}\left|\int_{\Omega_{i}\cap B_{R}(x)}K_{t}(x,\hat{y})(f(x)-f(y) )p\,\mathrm{d}y\right|. \tag{4.18}\]
We now decompose the integral in (4.17) as follows
\[\int_{\Omega_{i}\cap B_{R}(x)}K_{t}(x,\hat{y})(f(x)-f(y))p\, \mathrm{d}y=\int_{\Omega_{i}\cap B_{R}(x)}K_{t}(x,\hat{y})(f(x)-f(\hat{y}))p\, \mathrm{d}y\] \[\qquad+\int_{\Omega_{i}\cap B_{R}(x)}K_{t}(x,\hat{y})(f(\hat{y})- f(y))p\,\mathrm{d}y=I_{1}+I_{2} \tag{4.19}\]
The quantity \(I_{2}\) will be treated like an error term. Using (3.3) we see that
\[|I_{2}|\leq\int_{\Omega_{i}\cap B_{R}(x)}K_{t}(x,\hat{y})L\left\|\hat{y}-x_{0} \right\|^{2}p\,\mathrm{d}y.\]
Now we make a coordinate change with \(\alpha\) and use the bound on the volume form in (3.4) to get
\[\int_{\Omega_{i}\cap B_{R}(x)}K_{t}(x,\hat{y})L\left\|\hat{y}-x_{0 }\right\|^{2}p\,\mathrm{d}y\] \[\qquad\leq LR^{2}\int_{T_{\Omega_{i},x_{0}}\cap B_{R}(x)}K_{t}(x, \hat{y})(1+L\left\|x_{0}-\hat{y}\right\|^{2})p\,\mathrm{d}\hat{y}\] \[\qquad\leq LR^{2}(1+L4R^{2})\int_{T_{\Omega_{i},x_{0}}\cap B_{R}( x)}K_{t}(x,\hat{y})p\,\mathrm{d}\hat{y}\] \[\qquad\leq C_{L,R}\int_{T_{\Omega_{i},x_{0}}\cap B_{R}(x)}K_{t}(x,\hat{y})p\,\mathrm{d}\hat{y}.\]
The RHS of the above display can be handled similarly to (4.6), which means
\[|I_{2}|\leq C_{L,R}\left|\mathbb{S}^{d-1}\right|t^{d/2}p\Gamma(d/2)=C_{L,R}t^{ d/2}2p\pi^{d/2}.\]
We proceed now with \(I_{1}\) from (4.19), which we want to estimate as accurately as possible. Using the coordinate change \(\alpha\) and (3.4) we write
\[I_{1} =e^{-r^{2}\sin^{2}\theta}\int_{\Omega_{i}\cap B_{R}(x)}K_{t}(\hat {x},\hat{y})(f(x)-f(\hat{y}))p\,\mathrm{d}y\] \[=e^{-r^{2}\sin^{2}\theta}\widehat{C}\int_{T_{\Omega_{i},x_{0}} \cap B_{R}(x)}K_{t}(\hat{x},\hat{y})(f(x)-f(\hat{y}))p\,\mathrm{d}\hat{y}, \tag{4.20}\]
where \(\widehat{C}(x)\) is such that \(|\widehat{C}-1|\leq C_{L,R}\).
The integral on the right in (4.20) is exactly \(II\) from (4.4), which we compute as in (4.6):
\[\int_{T_{\Omega_{i},x_{0}}\cap B_{R}(x)}K_{t}(\hat{x},\hat{y})(f(x)-f(\hat{y}))p\,\mathrm{d}\hat{y}=A(d,r_{0},\theta)v_{n,\Omega_{i}}t^{d/2+1/2}r\sin\theta, \tag{4.21}\]
where \(A(d,r_{0},\theta)\) is as in (4.9). Now, from (4.19)-(4.21) we have
\[I_{1}+I_{2}=\widehat{C}A(d,r_{0},\theta)v_{n,\Omega_{i}}t^{d/2+1/2}r\sin\theta e^{-r^{2}\sin^{2}(\theta)}+C_{L,R}t^{d/2}2p\pi^{d/2}.\]
This combined with the split in (4.17) and (4.18) gives us
\[I =I_{1}+I_{2}+E_{1}=(1+C_{L,R})(I_{1}+I_{2})\] \[=(1+C_{L,R})\left(\widehat{C}A(d,r_{0},\theta)v_{\Omega_{i}}t^{d /2+1/2}r\sin\theta e^{-r^{2}\sin^{2}(\theta)}+C_{L,R}t^{d/2}2p\pi^{d/2}\right)\]
Defining \(\widehat{A}(x)\coloneqq(1+C_{L,R})\widehat{C}A(d,r,\theta)\), and using that \(C_{L,R}\leq 1\) implies \(C_{L,R}^{2}\leq C_{L,R}\), \(I\) can be written as

\[I=t^{d/2+1/2}\widehat{A}(x)v_{n,\Omega_{i}}r\sin\theta e^{-r^{2}\sin^{2}\theta}+t^{d/2}C_{L,R}4p\pi^{d/2}. \tag{4.22}\]
Also, since \(|(1+C_{L,R})\widehat{C}|\leq 1+3C_{L,R}\), we see that
\[|A(d,r,\theta)-\widehat{A}(x)|\leq(1+3C_{L,R})A(d,r,\theta).\]
Finally then, the bounds in (4.22) and (4.14) give us
\[\int_{\Omega_{i}}K_{t}(x,y)(f(x)-f(y))p\,\mathrm{d}y=I+II\] \[=t^{d/2+1/2}\widehat{A}(x)v_{n,\Omega_{i}}r\sin\theta e^{-r^{2}\sin^{2}\theta}+t^{d/2}C_{L,R}4p\pi^{d/2}+e^{-r_{0}^{2}}D(x).\]
The next lemma gives useful bounds on \(L_{t}^{i}f(x)\) when \(x\) is non-singular.
**Lemma 4.3**.: _Given the conditions of Theorem 3 and the additional assumption that \(x\in\Omega_{i}\), we have that_
\[L_{t}^{i}f(x)=t^{d/2+1/2}\widehat{A}(x)8LR^{2}+t^{d/2}C_{L,R}(x)4p\pi^{d/2}+D(x)e^{-r_{0}^{2}}.\]
Proof.: First applying Theorem 3 to \(L_{t}^{i}f(x)\), and then using the \((L,2R)\)-regularity of \(\Omega_{i}\), we bound the expression \(r\sin\theta e^{-r^{2}\sin^{2}\theta}\) in the following way: first, by (3.3) we get
\[|r\sin\theta|\leq L\left\|x_{0}-\hat{x}\right\|^{2}\leq L4R^{2}.\]
Then, after the substitution \(z=r\sin\theta\), we want to bound a function of the form \(h(z)=ze^{-z^{2}}\). Taylor expansion of \(h(z)\) gives that

\[|h(z)|\leq|z+2z^{2}|\leq 2|z|,\]

for \(z\leq\frac{1}{2}\). Thus, for \(L4R^{2}\leq\frac{1}{2}\), we have that
\[\left|r\sin\theta e^{-r^{2}\sin^{2}\theta}\right|\leq 8LR^{2}.\]
The conclusion follows.
**Remark 4.4**.: The result in Lemma 4.3 can be used together with both Theorem 1 and Theorem 3 to analyze the behavior of the mapping \(x\to Lf(x)\) around intersections.
In the proof of the following corollary, the geometry is as in Section 4, projecting \(x\) specifically to the tangent plane \(T_{\Omega_{1},x_{0}}\).
**Corollary 4.5**.: _Let \(f(x)=v\cdot x\) for some unit vector \(v\in\mathbb{R}^{N}\) and assume that \(p\) is the uniform density over an \((L,2R)\)-regular union of manifolds \(\Omega=\cup_{i=1}^{2}\Omega_{i}\). Let \(x_{0}\in\Omega_{1}\cap\Omega_{2}\) and assume that \(\partial\Omega_{i}\cap B_{2R}(x_{0})=\varnothing\) for \(i\in\{1,2\}\) and \(R=r_{0}\sqrt{t}\), where \(r_{0}>2\). If \(L4R^{2}\leq\frac{1}{2}\), \(t\leq\frac{R^{2}}{d/2+1}\) and \(d\geq 1\), then for \(x\in B_{R}(x_{0})\cap\Omega_{2}\) such that \(\|x-x_{0}\|=r\sqrt{t}\) for \(r<1\), we have that_
\[L_{t}f(x)=t^{d/2+1/2}\widehat{A}(x)v_{n,\Omega_{1}}r\sin\theta e^{-r^{2}\sin^{2}\theta}+t^{d/2+1/2}\widehat{A}(x)8LR^{2}\]

\[+t^{d/2}C_{L,R}(x)8p\pi^{d/2}+2e^{-r_{0}^{2}}D(x).\]
_In the above, \(\theta\) and \(v_{n,\Omega_{1}}\) are as in Section 4, with \(\Omega_{i}=\Omega_{1}\). Functions \(\widehat{A},C_{L,R}\) and \(D\) are as in Theorem 3._
Proof.: We apply Theorem 3 to \(L_{t}^{1}f(x)\) and Lemma 4.3 to \(L_{t}^{2}f(x)\):

\[L_{t}f(x) =L_{t}^{1}f(x)+L_{t}^{2}f(x)\] \[=t^{d/2+1/2}\widehat{A}(x)v_{n,\Omega_{1}}r\sin\theta e^{-r^{2}\sin^{2}\theta}+t^{d/2+1/2}\widehat{A}(x)8LR^{2}\] \[\qquad+t^{d/2}C_{L,R}(x)8p\pi^{d/2}+2e^{-r_{0}^{2}}D(x).\]
### Manifolds with noise
In the previous results, we assumed that the samples used to evaluate \(L_{n,t}f(x)\) are taken directly from \(\Omega\). However, in many applications it is more realistic to expect that the samples only approximately lie on some manifold.
One way to model this is to take the operator
\[L_{n,t}f(x)=\frac{1}{n}\sum_{j=1}^{n}K_{t}(x,X_{j})(f(x)-f(X_{j})),\]
and replace \(X_{j}\) by \(X_{j}+\epsilon_{j}\), where \(\epsilon_{j}\sim\mathcal{N}(0,\sigma^{2}I)\), which yields
\[L_{n,t,\epsilon}f(x)=\frac{1}{n}\sum_{j=1}^{n}K_{t}(x,X_{j}+\epsilon_{j})(f(x )-f(X_{j}+\epsilon_{j})).\]
The following theorem gives us the expected value of this operator:
**Theorem 4** (Stochastic version).: _Let \(L_{n,t,\epsilon}\) be as above, and let \(\mathbb{E}_{\epsilon}[\,\cdot\,]=\mathbb{E}[\,\cdot\,|\,X_{1},\ldots,X_{n}]\) denote expectation with regard to the random variables \((\epsilon_{1},\ldots,\epsilon_{n})\). Then_
\[\mathbb{E}_{\epsilon}L_{n,t,\epsilon}f(x)=\frac{t^{N/2+1}}{(2\sigma^{2}+t)^{N/2+1}}\frac{1}{n}\sum_{j=1}^{n}K_{2\sigma^{2}+t}\left(x,X_{j}\right)(f(x)-f(X_{j})).\]
Proof.: To simplify notation, let \(h_{j}=x-X_{j}\).
\[\mathbb{E}_{\epsilon}L_{n,t,\epsilon}f(x)=\frac{1}{n}\sum_{j=1}^{n}\mathbb{E}_{\epsilon}K_{t}(x,X_{j}+\epsilon_{j})(f(x)-f(X_{j}+\epsilon_{j}))=\frac{1}{n}\sum_{j=1}^{n}\mathbb{E}_{\epsilon}e^{-\|h_{j}-\epsilon_{j}\|^{2}/t}v\cdot(h_{j}-\epsilon_{j}) \tag{4.23}\]
Let us compute a single term in the sum in (4.23): since the expectation is w.r.t. \(\epsilon\) we can treat \(h=h_{j}\) as fixed, and since \(z\coloneqq-\epsilon_{j}\sim\mathcal{N}(0,\sigma^{2}I)\) by symmetry of the Gaussian, algebraic manipulations give us
\[\mathbb{E}_{z}e^{-\|h+z\|^{2}/t}v\cdot(h+z) =(2\pi\sigma^{2})^{-N/2}\int_{\mathbb{R}^{N}}e^{-\|h+z\|^{2}/t}e^{-\|z\|^{2}/(2\sigma^{2})}v\cdot(h+z)\,\mathrm{d}z\] \[=(2\pi\sigma^{2})^{-N/2}\int_{\mathbb{R}^{N}}e^{-\left(\|h\|^{2}/t+2\langle h,z\rangle/t+\|z\|^{2}/t+\|z\|^{2}/(2\sigma^{2})\right)}v\cdot(h+z)\,\mathrm{d}z\] \[=(2\pi\sigma^{2})^{-N/2}e^{-\frac{\|h\|^{2}}{2\sigma^{2}+t}}\int_{\mathbb{R}^{N}}e^{-\frac{1}{kt}\left\|z+kh\right\|^{2}}v\cdot(h+z)\,\mathrm{d}z.\]
In the last step we completed the square and used \(k=\frac{2\sigma^{2}}{2\sigma^{2}+t}\). This last integral can be viewed as the expectation
\[(\pi kt)^{-N/2}\int_{\mathbb{R}^{N}}e^{-\frac{1}{kt}\left\|z+kh\right\|^{2}}v\cdot(h+z)\,\mathrm{d}z=\mathbb{E}_{X}[v\cdot(h+X)]=(1-k)v\cdot h,\]
where \(X\sim N(-kh,I\frac{kt}{2})\). Then we can conclude that
\[\mathbb{E}_{z}e^{-\left\|h+z\right\|^{2}/t}v\cdot(h+z)=v\cdot h\frac{t^{N/2+1} }{(2\sigma^{2}+t)^{N/2+1}}e^{-\frac{\left\|h\right\|^{2}}{2\sigma^{2}+t}}.\]
The above theorem implies that if \(t^{\prime}=t+2\sigma^{2}\) then, up to the normalization factor \((t/t^{\prime})^{N/2+1}\), \(L_{n,t,\epsilon}\) and \(L_{n,t^{\prime}}\) are the same in expectation. This also shows the relationship between the limit operators of \(\mathbb{E}_{\epsilon}L_{n,t,\epsilon}\) and \(L_{n,t}\), namely that:
\[\lim_{n\to\infty}\mathbb{E}_{\epsilon}L_{n,t,\epsilon}f(x)=\lim_{n\to\infty}L_{n,t^{\prime}}f(x)=L_{t^{\prime}}f(x)=L_{t+2\sigma^{2}}f(x).\]
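As a quick illustration (our own sketch, not code from the references), the identity of Theorem 4 can be checked by Monte Carlo: averaging the noisy operator over many noise draws should match the rescaled noise-free operator at bandwidth \(2\sigma^{2}+t\), for any fixed sample.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n, t, sigma = 3, 2000, 0.05, 0.1
X = rng.uniform(-1.0, 1.0, size=(n, N))         # any fixed sample; the identity is exact in expectation
v = rng.normal(size=N); v /= np.linalg.norm(v)  # unit vector defining f(y) = v . y
x = np.zeros(N)

def L(x, X, bandwidth):
    K = np.exp(-np.sum((X - x) ** 2, axis=1) / bandwidth)
    return np.mean(K * (x @ v - X @ v))

# Monte Carlo estimate of E_eps[ L_{n,t,eps} f(x) ]
lhs = np.mean([L(x, X + sigma * rng.normal(size=X.shape), t) for _ in range(5000)])

t_prime = 2 * sigma ** 2 + t                    # effective bandwidth from Theorem 4
rhs = (t / t_prime) ** (N / 2 + 1) * L(x, X, t_prime)
print(lhs, rhs)                                 # should agree up to Monte Carlo error
```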
### Finite sample bounds
Our next result is a finite-sample bound based on Hoeffding's inequality. This bound quantifies the maximal error of the operator \(L_{n,t}\) with respect to the limit operator \(L_{t}\) over the entire manifold, when the operator is evaluated only at the known data points. We assume that the \(X_{i}\) have a uniform density \(p\) over \(\Omega\).
**Theorem 5**.: _Let \(f(x)=v\cdot x\) for \(x\in\Omega\), where \(\Omega\) is flat. Then_
\[P\left(\max_{i}\left|L_{n,t}f(X_{i})-\frac{n-1}{n}L_{t}f(X_{i})\right|>\epsilon\right)\leq 2n\exp\left(-\frac{2n\epsilon^{2}}{(1+\pi^{d/2}t^{d/2}p)^{2}M^{2}}\right)\]
_where \(M=\sup_{x,y\in\Omega}\left\|v\cdot(x-y)\right\|\)._
Proof.: Using the union bound we get
\[\begin{split} P\bigg{(}&\max_{i}\left|L_{n,t}f(X_{i} )-\frac{n-1}{n}L_{t}f(X_{i})\right|>\epsilon\bigg{)}\\ &\leq\sum\limits_{i=1}^{n}P\left(\left|L_{n,t}f(X_{i})-\frac{n-1 }{n}L_{t}f(X_{i})\right|>\epsilon\right).\end{split} \tag{4.24}\]
Using the definitions of \(L_{n,t}\) and \(L_{t}\), see (3.5) and (3.6), and using that the random variables \(X_{1},\ldots,X_{n}\) are i.i.d., we can replace each \(X_{i}\) by \(X_{1}\) in each summand of (4.24). Let \(Z\) be an independent copy of \(X_{1}\). Then each summand in (4.24) equals
\[\begin{split} P\bigg{(}&\left|\frac{1}{n}\sum \limits_{j=1}^{n}K_{t}(X_{1},X_{j})(f(X_{1})-f(X_{j}))\right.\\ &\left.-\frac{n-1}{n}\mathbb{E}_{Z}[K_{t}(X_{1},Z)(f(X_{1})-f(Z)) ]\right|>\epsilon\bigg{)}.\end{split} \tag{4.25}\]
To simplify notation, we denote
\[W_{i}(x)=K_{t}(x,X_{i})(f(x)-f(X_{i}))\quad\text{and}\quad Y_{i}(x)=W_{i}(x)- \mathbb{E}_{X_{i}}[W_{i}(x)].\]
We now rewrite (4.25) as
\[P\left(\left|\frac{1}{n-1}\sum\limits_{i=2}^{n}Y_{i}(X_{1})\right|>\frac{n}{n- 1}\epsilon\right).\]
Now by the tower property we have that
\[P\left(\left|\frac{1}{n-1}\sum_{i=2}^{n}Y_{i}(X_{1})\right|>\frac{n}{n-1}\epsilon \right)=\mathbb{E}\left[P\left(\left|\frac{1}{n-1}\sum_{i=2}^{n}Y_{i}(X_{1}) \right|>\frac{n}{n-1}\epsilon\mid X_{1}\right)\right]\]
In order to use Hoeffding's inequality we need to show that \(Y_{i}(x)\) is a bounded random variable for all \(x\in\Omega\). First
\[|Y_{i}|\leq|W_{i}|+|\mathbb{E}[W_{i}]|\leq M+\left|\int_{\Omega}K_{t}(x,x_{i}) (f(x)-f(x_{i}))p\,\mathrm{d}x_{i}\right|\leq(1+\pi^{d/2}t^{d/2}p)M, \tag{4.26}\]
where \(M=\sup_{x,y\in\Omega}\left\|v\cdot(x-y)\right\|\). Now Hoeffding's inequality states that (where \(C_{n}=\frac{n}{n-1}\))
\[\mathbb{P}\left(\left|\frac{1}{n-1}\sum_{i=2}^{n}Y_{i}\right|>C_{n}\epsilon \mid X_{1}\right)\leq 2\exp\left(-\frac{2(n-1)C_{n}^{2}\epsilon^{2}}{(1+\pi^{d/2}t^ {d/2}p)^{2}M^{2}}\right)\]
and the proof is complete after taking expectations.
Next is an extension to a more general type of manifold.
**Corollary 4.6**.: _Let \(\Omega\) be a \(d\)-dimensional \((L,R)\)-regular manifold,_
\[\{z_{1},z_{2},\ldots,z_{K}\}\subset\Omega,\]
_and \(\mathbf{B}=\{B_{R}(z_{i})\}_{i=1}^{K}\) be a set of open balls in \(\mathbb{R}^{N}\) such that_
\[\cup_{i=1}^{K}B_{R}(z_{i})\cap\Omega=\Omega.\]
_Then the following inequality holds:_
\[P\left(\max_{i}\left|L_{n,t}f(X_{i})-\frac{n-1}{n}L_{t}f(X_{i})\right|>\epsilon\right)\leq 2n\exp\left(-\frac{2n\epsilon^{2}}{\left(1+K(1+LR^{2})t^{d/2}\pi^{d/2}p\right)^{2}M^{2}}\right),\]
_where \(M=\sup_{x,y\in\Omega}\left\|v\cdot(x-y)\right\|\)._
Proof.: We begin by proving a simple inequality: since \(\pi_{z_{i}}\) is a projection to a plane, the vectors \(\hat{x}-x\) and \(\hat{y}-y\) are both normal to that plane, and hence perpendicular to \(\hat{x}-\hat{y}\), so we have that
\[\left\|x-y\right\|^{2}=\left\|\hat{x}-\hat{y}\right\|^{2}+\left\|x-\hat{x}-(y -\hat{y})\right\|^{2}.\]
This implies that \(\left\|x-y\right\|^{2}\geq\left\|\hat{x}-\hat{y}\right\|^{2}\), and thus \(e^{-\left\|x-y\right\|^{2}}\leq e^{-\left\|\hat{x}-\hat{y}\right\|^{2}}\). The last inequality will be used later.
Now we just need to adapt the proof of Theorem 5, and the only part that we need to change is the upper bound of
\[\left|\int_{\Omega}K_{t}(x,x_{i})(f(x)-f(x_{i}))p\,\mathrm{d}x_{i}\right|\]
in (4.26). We let \(\pi_{z_{i}}\) be the projection from \(B_{R}(z_{i})\cap\Omega\) to \(T_{\Omega,z_{i}}\) and denote \(\hat{x}\coloneqq\pi_{z_{i}}(x)\).
\[\left|\int_{\Omega}K_{t}(x,x_{i})(f(x)-f(x_{i}))p\,\mathrm{d}x_{i}\right|\] \[\qquad=\left|\sum_{i=1}^{K}\int_{B_{R}(z_{i})\cap\Omega}K_{t}(x,x _{i})(f(x)-f(x_{i}))p\,\mathrm{d}x_{i}\right|\] \[\qquad\leq\sum_{i=1}^{K}\left|\int_{B_{R}(z_{i})\cap\Omega}K_{t}( x,x_{i})(f(x)-f(x_{i}))p\,\mathrm{d}x_{i}\right|.\]
Focusing now on one term, we use (3.4) and that \(e^{-\left\|x-y\right\|^{2}}\leq e^{-\left\|\hat{x}-\hat{y}\right\|^{2}}\) to conclude
\[\left|\int_{B_{R}(z_{i})\cap\Omega}K_{t}(x,x_{i})(f(x)-f(x_{i}))p \,\mathrm{d}x_{i}\right|\] \[\qquad\leq(1+LR^{2})\left|\int_{B_{R}(z_{i})\cap T_{\Omega,z_{i} }}K_{t}(x,x_{i})(f(x)-f(x_{i}))p\,\mathrm{d}x_{i}\right|\] \[\qquad\leq(1+LR^{2})\left|\int_{B_{R}(z_{i})\cap T_{\Omega,z_{i} }}K_{t}(\hat{x},\hat{x}_{i})(f(x)-f(x_{i}))p\,\mathrm{d}x_{i}\right|\] \[\qquad\leq(1+LR^{2})M\left|\int_{B_{R}(z_{i})\cap T_{\Omega,z_{i} }}K_{t}(\hat{x},\hat{x}_{i})p\,\mathrm{d}x_{i}\right|\] \[\qquad\leq(1+LR^{2})M\left|\int_{\mathbb{R}^{d}}K_{t}(\hat{x}, \hat{x}_{i})p\,\mathrm{d}x_{i}\right|\] \[\qquad\leq(1+LR^{2})Mt^{d/2}\pi^{d/2}p.\]
Thus,
\[\left|\int_{\Omega}K_{t}(x,x_{i})(f(x)-f(x_{i}))p\,\mathrm{d}x_{i}\right|\leq K (1+LR^{2})Mt^{d/2}\pi^{d/2}p.\]
The result now follows by the same reasoning as in Theorem 5.
## 5. Numerical Experiments
### Estimating singularities
In the following experiments we demonstrate how to estimate the point of intersection and the intersection angle \(\theta\) of a union of manifolds \(\Omega=\Omega_{1}\cup\Omega_{2}\). We will assume that we have both a set of samples \(X\subset\Omega\), distributed according to the associated density on \(\Omega\), and an additional set of points \(Y\) from a curve \(\Gamma\subset\Omega_{i}\), for some \(i\in\{1,2\}\). The curve \(\Gamma\) intersects \(\Omega_{1}\cap\Omega_{2}\), and we assume that no other singularity is very close; this is the situation depicted in Fig. 5 and Fig. 8.
#### 5.1.1. Outline of experiments and choice of estimators
Given the set of \(m\) points \(Y=\{y_{1},\ldots,y_{m}\}\), where each \(y_{i}\in\Gamma\), we evaluate \(L_{n,t}f\) on \(Y\). This gives us a set of values \(P=\{p_{1},\ldots,p_{m}\,:\,p_{i}=L_{n,t}f(y_{i}),\ y_{i}\in\Gamma\}\). We know that these, with enough samples, will be close to
\[L_{t}f(x_{i})=t^{d/2+1/2}\left(A(d,r_{0},\theta_{i})v_{n,\Omega_{i}}\sin\theta_{i}r_{i}e^{-\sin^{2}\theta_{i}r_{i}^{2}}\right)+\mathrm{Error}(x_{i},r_{0},L), \tag{5.1}\]
where the error term, which depends on \(x_{i}\), \(r_{0}\) and \(L\), can be quantified with the bounds in Section 4.1 and Section 4.2. We can always, by choosing \(t\) small enough, make \(r_{0}\) as large as we want, and the function \(A(d,r_{0},\theta)\)
can be made arbitrarily close to \(2\pi^{d/2}\). The constant \(L\) is an upper bound on how much curvature there is in \(\Omega\).
**Remark 5.1**.: In Theorem 3 we have a function \(\widehat{A}(x)\) instead of \(A(d,r_{0},\theta)\), but \(\widehat{A}(x)\) can be made arbitrarily close to \(A(d,r_{0},\theta)\) by choosing \(t\) or \(L\) small enough.
The right-hand side of (5.1) depends on \(r_{i}=\left\|x_{i}-x_{0}\right\|/\sqrt{t}\) and on the angle \(\theta_{i}=\angle\left(x_{i}-x_{0},\hat{x}_{i}-x_{0}\right)\). Here \(\hat{x}_{i}\) is the projection of \(x_{i}\) onto either a manifold \(\Omega_{i}\) or a tangent plane, as in Theorem 3, for some point \(x_{0}\). Thus, one can say firstly that if \(\left|L_{t}f(x_{i})\right|\) exceeds the error term, then there must be a point \(x_{0}\) nearby such that \(\left|v_{n,\Omega_{i}}\sin\theta_{i}\right|>0\). This in itself does not allow us to distinguish whether \(\Omega_{1}\) and \(\Omega_{2}\) are merely close together or actually intersect. But if we can find points \(x_{i},x_{j}\) such that \(L_{t}f(x_{i})>\left|\text{Error term}\right|\) and \(L_{t}f(x_{j})<-\left|\text{Error term}\right|\), then we can, because \(v_{n,\Omega}\sin\theta_{i}\) can only change sign on \(\Gamma\) when passing through an intersection.
Further, looking at \(g(r,\theta)=v_{n,\Omega}r\sin\theta e^{-r^{2}\sin^{2}\theta}\), we notice that \(g\) only depends on \(x=r\sin\theta\) (up to the sign of \(v_{n,\Omega_{i}}\)). With some abuse of notation \(g(r,\theta)=g(x)\), which is a rescaled (and possibly flipped) version of the function \(h(x)=xe^{-x^{2}}\). See Fig. 4 for the graph of \(h\).
Differentiating, \(h^{\prime}(x)=e^{-x^{2}}(1-2x^{2})\), so one easily sees that the minimum and maximum of \(h\) are attained at the points \(z_{1}=-\frac{1}{\sqrt{2}}\) and \(z_{2}=\frac{1}{\sqrt{2}}\). The point of intersection will correspond to the midpoint of these two points. In general then, we can estimate the point \(s\) where \(\Gamma\) intersects \(\Omega_{1}\cap\Omega_{2}\) by the midpoint of the points attaining the maximum and minimum of the set \(P\), as in
\[\hat{s}=\frac{\arg\max_{x_{i}}(P)+\arg\min_{x_{i}}(P)}{2}\]
We can also get an estimator \(\hat{\theta}\) of \(\theta\). First we let \(\hat{r}_{\max}\) be an estimate of the scaled distance from the intersection at which \(g(r,\theta)\) is maximal, namely \(\hat{r}_{\max}=\left\|\hat{s}-\arg\max_{x_{i}}P\right\|/\sqrt{t}\). Then, since the maximum of \(g\) is attained where \(r\sin\theta=\frac{1}{\sqrt{2}}\), we have \(\hat{r}_{\max}\sin\theta\approx\frac{1}{\sqrt{2}}\), and we can estimate \(\theta\) with
\[\hat{\theta}=\arcsin\left(\frac{1}{\sqrt{2}\hat{r}_{\max}}\right).\]
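A minimal sketch of these two estimators (our own illustrative code; it performs no thresholding against the error terms discussed above):

```python
import numpy as np

def estimate_intersection(Y, P, t):
    """Estimate the crossing point s and the angle theta from p_i = L_{n,t} f(y_i).

    Y: (m, N) array of points on the curve Gamma; P: (m,) array of values p_i."""
    y_max, y_min = Y[np.argmax(P)], Y[np.argmin(P)]  # points attaining the extreme values of P
    s_hat = (y_max + y_min) / 2                      # midpoint estimator of the crossing
    r_max = np.linalg.norm(s_hat - y_max) / np.sqrt(t)
    arg = np.clip(1.0 / (np.sqrt(2.0) * r_max), -1.0, 1.0)  # guard the domain of arcsin
    return s_hat, np.arcsin(arg)
```

Here \(P\) would be computed by evaluating \(L_{n,t}f\) on \(Y\), and the sign test described above should be applied first to confirm that an intersection is actually present.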
In the following we test these methods of estimation on angles between hypersurfaces in \(\mathbb{R}^{3}\) given by \(\theta\in\{\pi/2,\pi/4,\pi/8,\pi/16\}\), and with \(t=10^{-3}\).
**Remark 5.2**.: Performing these experiments inside \(\mathbb{R}^{3}\) is mainly a convenience here for visualization purposes. The reason for this is that we are only working with points on the space \(\Omega\), which has an intrinsically low dimension. However, choosing a function \(f(x)=v\cdot x\) for \(L_{t}^{i}\) to act on becomes more difficult when the dimension is increased. This is because it is harder to orient \(v\) in a way which makes \(v_{n,\Omega_{i}}\) large; this is especially true if you choose \(v\) randomly, as we do here.
For both flat and curved manifolds we perform \(100\) runs with random choices of \(f(x)=v\cdot x\), and we sample \(v\) using the uniform distribution on \(\mathbb{S}^{2}\). For each such run we sample \(2\times 10^{4}\) points from both \(\Omega_{1}\) and \(\Omega_{2}\), from a bounded region near the intersection, and evaluate \(L_{n,t}f\) on \(10^{3}\) uniformly sampled points of \(\Gamma\). These last evaluations give us our set \(P\).
#### 5.1.2. Flat manifolds
Here we test these methods in the case where \(\Omega_{1},\Omega_{2}\) are flat. Since we are integrating over two flat manifolds,
\[L_{t}f(x)=\sum_{i=1}^{2}\int_{\Omega_{i}}K_{t}(x,y)(f(x)-f(y))\,\mathrm{d}y.\]
Using Theorem 1 together with Lemma 4.3 we get that for \(x_{i}\in\Gamma\)
\[L_{t}f(x_{i})=t^{d/2+1/2}\left(A(d,r_{0},\theta_{i})v_{n,\Omega_{i}}\sin \theta_{i}r_{i}e^{-\sin^{2}\theta_{i}r_{i}^{2}}+2B(x_{i})e^{-r_{0}^{2}}\right),\]
where \(\theta_{i}\) and \(r_{i}\) are in relation to \(x_{i}\) and \(\Omega_{i}\) as explained in Section 4. For example in Fig. 5, \(\theta_{i}\) is the angle of the red and green planes.
**Remark 5.3**.: There is some slight abuse of notation here in that we rather have two different functions \(B_{1}(x_{i})\) and \(B_{2}(x_{i})\), one for each manifold \(\Omega_{1},\Omega_{2}\), and we have implicitly defined a new function \(B(x_{i})\coloneqq\frac{B_{1}(x_{i})+B_{2}(x_{i})}{2}\).
Let us first notice that since the manifolds are flat, the angle \(\theta_{i}=\theta\), where \(\theta\in[0,\frac{\pi}{2}]\) is fixed. Then, since \(|B(x)|\leq 2^{\frac{d+1}{2}}r_{0}^{d}\left|\mathbb{S}^{d-1}\right|\), it is sufficient that
\[\max_{i}L_{n,t}f(x_{i})>2t^{d/2+1/2}2^{\frac{d+1}{2}}r_{0}^{d}\left|\mathbb{S} ^{d-1}\right|e^{-r_{0}^{2}}\]
and
\[\min_{i}L_{n,t}f(x_{i})<-2t^{d/2+1/2}2^{\frac{d+1}{2}}r_{0}^{d}\left|\mathbb{S} ^{d-1}\right|e^{-r_{0}^{2}},\]
to be able to say, with some probability, that \(\Gamma\) intersects \(\Omega_{1}\cap\Omega_{2}\).
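In code, this test amounts to comparing the extreme values of \(P\) against the error bound just stated; a minimal sketch for the flat case:

```python
from math import gamma, pi, exp

def detects_intersection(values, t, d, r0):
    """Declare an intersection if L_{n,t}f exceeds +E and falls below -E,
    where E is the flat-manifold error bound stated above."""
    S = 2 * pi ** (d / 2) / gamma(d / 2)   # area |S^{d-1}| of the unit sphere
    E = 2 * t ** (d / 2 + 0.5) * 2 ** ((d + 1) / 2) * r0 ** d * S * exp(-r0 ** 2)
    return max(values) > E and min(values) < -E
```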
In Fig. 5 we see our samples of \(\Omega\) and \(\Gamma\), and in Fig. 6 we see an example of the values we get in \(P\). Finally, in Fig. 7 we see how well this approach works in trying to learn both \(\theta\) and \(r\).
Figure 6. \(L_{n,t}f\) evaluated on \(\Gamma\). Flat manifolds.
#### 5.1.3. Curved manifolds
Here we test these methods in the case that \(\Omega=\Omega_{1}\cup\Omega_{2}\) is \((L,2R)\)-regular, with \(L=0.5\) and \(R\) having no upper bound. The setup is the same as what we see in Fig. 8.
Using Corollary 4.5 we have that
\[L_{t}f(x_{i})=t^{d/2+1/2}\widehat{A}(x_{i})v_{n,\Omega_{i}}r_{i}\sin\theta_{i}e^{-r_{i}^{2}\sin^{2}\theta_{i}}+t^{d/2+1/2}\widehat{A}(x_{i})8LR^{2}+t^{d/2}C_{L,R}(x_{i})8p\pi^{d/2}+2e^{-r_{0}^{2}}D(x_{i}).\]
Let us denote \(C\coloneqq LR^{2}(1+4LR^{2})+(4LR^{2})^{2}\) and \(D\coloneqq\operatorname{diam}(\Omega)\). Then since \(\widehat{A}\leq 2\pi^{d/2}\), \(C_{L,R}\leq C\) and \(\widehat{A}(x)\leq(1+3C)2\pi^{d/2}\), we need that
\[\max_{i}L_{n,t}f(x_{i})>2\left(t^{d/2+1/2}(1+3C)8LR^{2}+t^{d/2}C8p\pi^{d/2}+2e ^{-r_{0}^{2}}\right)\]
and
\[\min_{i}L_{n,t}f(x_{i})<-2\left(t^{d/2+1/2}(1+3C)8LR^{2}+t^{d/2}C8p\pi^{d/2}+2e^{-r_{0}^{2}}\right)\]
to be able to say, with some probability, that \(\Gamma\) intersects \(\Omega_{1}\cap\Omega_{2}\). Each term above can be made arbitrarily small by making \(L\) small and \(r_{0}\) large enough.
**Remark 5.4**.: Since there is curvature, we cannot expect \(\theta_{i}=\theta\) for every \(i=1,\ldots,n\), or even between any pair of them. However, we can still estimate the location of the intersection as before, and estimating \(\theta\) in this way provides some information about the intersection, even if it is not as strong as in the case without curvature. The range of possible values for \(\theta\), due to curvature, can be bounded by knowing the curvature constant \(L\).
In Fig. 8 we see our samples of \(\Omega\) and \(\Gamma\), in Fig. 9 we see an example of the values we get in \(P\). Finally, in Fig. 10 we see how well this approach works in trying to learn both \(\theta\) and \(r\).
Figure 7. Estimates of \(\theta\) and \(s\) on flat manifolds.
Figure 8. Samples of \(\Omega=\Omega_{1}\cup\Omega_{2}\) and \(\Gamma\) (blue), where \(\Omega_{1}\) (green) and \(\Omega_{2}\) (red) have curvature.
Figure 9. \(L_{n,t}f\) evaluated on \(\Gamma\). Curved manifolds.
## 6. Final remarks
In this paper we built upon the work of [2] and developed explicit versions of their asymptotic analysis of \(x\to L_{t}f(x)\). Our results are the strongest and most useful in the case of flat manifolds, and the motivation to focus on this scenario comes partly from Remark 4.1.
While the bounds in Theorem 3 are weaker, our numerical experiments suggest that this approach can be useful for gaining geometric information about the union of more general manifolds \(\Omega=\cup_{i}\Omega_{i}\). In [2], the authors mainly considered sets \(\Omega=\cup_{i=1}^{n}\Omega_{i}\) with \(n\leq 2\). Our approach of splitting \(L_{t}\) into components \(L_{t}^{i}\) makes it easy to directly apply our theorems for \(n\geq 2\), allowing us to consider a wider range of singularities. For example, we can extend the framework to examine points that are both of Type 1 and Type 2, or to study intersections of more than two manifolds. A drawback of this approach is that the error terms are compounded when they are just added together for each \(L_{t}^{i}\), but whether this is a problem will depend on the specific application.
In our numerical experiments, we assumed that \(\Omega=\cup_{i=1}^{2}\Omega_{i}\) and had access to samples of a continuous curve \(\Gamma\), which allowed us to estimate geometric properties near intersections. Future work could involve extending our framework to other types of singularities and developing similar tests and estimators. It would also be interesting to explore methods that do not rely on direct access to such curves.
Similar theorems can be proven for other kernels besides the Gaussian one, as many ideas used in our proofs are not specific to the Gaussian case, but rather rely mainly on symmetries of \(K_{t}\). Investigating the use of other kernels and comparing their performance in different scenarios is a promising direction for future research.
## Acknowledgments
The first author was supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. The second author was supported by the Swedish Research Council grant dnr: 2019-04098.
|
2309.15748 | Quantization of Yang-Mills Theory in de Sitter Spacetime | In this paper, we analyze the quantization of Yang-Mills theory in the de
Sitter spacetime. It is observed that the Faddeev--Popov ghost propagator is
divergent in this spacetime. However, this divergence is removed by using an
effective propagator, which is suitable for perturbation theory. To show that
the quantization of Yang--Mills theory in the de Sitter is consistent, we
quantize it using first-class constraints in the temporal gauge. We also
demonstrate that this is equivalent to quantizing the theory in the Lorentz
gauge. | Aasiya Shaikh, Mir Faizal, Naveed Ahmad Shah | 2023-09-27T16:04:51Z | http://arxiv.org/abs/2309.15748v2 | # Quantization of Yang-Mills Theory on de Sitter Spacetime
###### Abstract
In this paper, we will analyze the quantization of Yang-Mills theory on de Sitter spacetime. It will be observed that the Faddeev-Popov ghost propagator is divergent in de Sitter spacetime. However, this divergence will be removed in an effective propagator, which is suitable for perturbation theory. To show that the quantization of Yang-Mills theory in de Sitter spacetime is consistent, we also quantize it using first-class constraints. We will be able to demonstrate that this is equivalent to quantizing the theory in the Lorentz gauge.
\({}^{1}\)Raja Ramanna Centre for Advanced Technology, Indore-452013, Madhya Pradesh, India.
\({}^{2,3}\)Canadian Quantum Research Center 204-3002 32 Ave Vernon, BC V1T 2L7 Canada.
\({}^{2}\)Irving K. Barber School of Arts and Sciences, University of British Columbia - Okanagan, Kelowna, BC V1V 1V7, Canada.
## 1 Introduction
Observations of type Ia supernovae indicate that our universe will asymptotically approach de Sitter spacetime [1, 3, 4, 5, 6]. At such large scales, quantum effects can be neglected, and the system can be described using classical cosmology. However, quantum field theory in de Sitter spacetime becomes important in inflationary cosmology [7, 8, 9, 10]. The de Sitter spacetime is important in various interesting models of inflation. It is used in brane-antibrane models [11, 12] and \(D3/D7\) systems [13, 14], where the inflaton field corresponds to an open string. The de Sitter spacetime is also important in Kähler moduli [15, 16] and fibre inflation [17], where the inflaton fields correspond to closed strings. In these models, the realization of inflation depends crucially on the uplifting mechanism for de Sitter moduli stabilization [18]. So, de Sitter spacetime is important in most models of inflationary cosmology.
It may be noted that Yang-Mills theory has also been used to model inflationary cosmology [19]. In this analysis, a ten-dimensional Einstein-Yang-Mills theory has been used to study cosmological solutions. It has been demonstrated that in such a theory, it is possible to obtain cosmological solutions with static extra dimensions. Furthermore, for such cosmological models, the scale factor of the four-dimensional Friedmann-Lemaitre-Robertson-Walker metric is an exponential function of time. Thus, it has been argued that such models can be used for analyzing inflation, and so it is important to study Yang-Mills theories in de Sitter spacetime. Yang-Mills field theory in de Sitter spacetime is also
important for analyzing physical systems using the gauge/gravity duality. In the usual gauge/gravity duality, a quantum field theory in AdS spacetime is dual to a CFT on flat spacetime [20]. It is important to analyze cosmological singularities using this duality, but it has been observed that the boundary theory dual to such singularities also becomes singular [21, 22]. However, it is possible to analyze such singularities if the dual field theory is taken as a gauge theory on de Sitter spacetime [23, 24]. This makes it important to analyze Yang-Mills theory in de Sitter spacetime.
Even though it is important to study quantum field theory in de Sitter spacetime, there are several issues with its consistency. There is a problem with linearization instabilities in de Sitter spacetime [25]. There are also problems with certain values of gauge parameters for perturbative quantum gravity [26, 27]. There are several other problems relating to infrared divergences, and it has been argued that it would not be possible to integrate by parts due to such infrared divergences [28, 29, 30, 31]. Even though there are problems with perturbative quantum gravity, it is possible to analyze perturbative quantum gravity in the first-order formalism as a gauge theory of the spin connection [32]. In fact, for higher curvature terms, this theory resembles Yang-Mills theory, with the gauge group being the Lorentz group. Inflation has also been studied using gravity with higher-order curvature terms [33]. Thus, such a system can also be analyzed as a Yang-Mills theory, and consistent results can be obtained using such a system.
However, there are certain issues even with Yang-Mills theories in de Sitter spacetime. When Yang-Mills theory is quantized using the path integral approach, ghost fields are introduced into the theory. It has been shown [34] that the ghost propagators are infrared divergent in de Sitter spacetime. This issue can be resolved by introducing a mass that is taken to zero at the end of the perturbative calculation. This mass term breaks BRST invariance, but it has been shown [35] that the zero modes, which cause the infrared divergences, can be removed in a BRST-invariant way, producing a theory equivalent to the one obtained by adding a mass term. However, integration by parts was used to obtain this result, and it has also been argued that it might not be possible to use integration by parts in perturbative calculations in de Sitter space [28, 29, 30, 31]. So, it is important to understand if Yang-Mills theories can be consistently quantized in de Sitter spacetime. This motivates us to use constraint quantization [36, 37] to quantize Yang-Mills theory in de Sitter spacetime. In fact, as we will be able to demonstrate that the theory can be consistently quantized using constraint quantization, it also follows that this theory can be consistently quantized using the BRST-invariant formalism, because BRST quantization has been demonstrated to be equivalent to constraint quantization [46]. We will thus be able to demonstrate that it is possible to quantize a Yang-Mills theory, without imposing a gauge, using this formalism. Thus, it is possible to obtain a consistent quantum Yang-Mills theory in de Sitter spacetime, and so any infrared divergence in Yang-Mills theory cannot be physical.
## 2 Effective Propagator
The de Sitter spacetime has the topology \(S^{N}\times R\). So, it is possible to take an \(N\)-dimensional space-like slice \(\Sigma=S^{N}\) through the spacetime. It may be noted that for analyzing Yang-Mills theory using constraint analysis, it is not required to restrict the space-like slice to \(S^{N}\); the analysis will hold for any spacetime with the topology \(\Sigma\times R\). So, the metric for such a spacetime can be written as
\[ds^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu}=-dt^{2}+h_{ij}(t,x)dx^{i}dx^{j}\;. \tag{1}\]
The action for the Yang-Mills theory is given by
\[S=\int d^{d+1}x\sqrt{h}L=-\int d^{d+1}x\frac{1}{4}\sqrt{h}F^{\mu\nu}_{A}F^{A}_{ \mu\nu}\;, \tag{2}\]
where
\[F^{A}_{\mu\nu}=\partial_{\mu}A^{A}_{\nu}-\partial_{\nu}A^{A}_{\mu}+gC^{A}_{\; \;BC}A^{B}_{\mu}A^{C}_{\nu}\;, \tag{3}\]
\(C^{A}_{\;\;BC}\) are the structure constants of the group defined by
\[[t_{A},t_{B}]=iC^{C}_{\;\;AB}t_{C}\;, \tag{4}\]
\(t_{A}\) are the generators of the group and \(g\) is the coupling constant.
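As a concrete instance of the conventions in (3) and (4), for \(SU(2)\) one may take \(t_{A}=\sigma_{A}/2\), with \(\sigma_{A}\) the Pauli matrices, and \(C^{C}_{\;\;AB}=\epsilon_{ABC}\). The following minimal numerical check of (4) is our own illustration, not part of the original analysis.

```python
import numpy as np

# Pauli matrices; t_A = sigma_A / 2 generate su(2)
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)
t = sigma / 2

def eps(a, b, c):
    """Levi-Civita symbol on {0, 1, 2}: the su(2) structure constants."""
    return (a - b) * (b - c) * (c - a) / 2

# verify [t_A, t_B] = i eps_{ABC} t_C for all index pairs
for A in range(3):
    for B in range(3):
        lhs = t[A] @ t[B] - t[B] @ t[A]
        rhs = sum(1j * eps(A, B, C) * t[C] for C in range(3))
        assert np.allclose(lhs, rhs)
```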
This theory can be quantized using the path integral by first choosing a gauge, for example the Lorentz gauge \(\nabla^{\mu}A^{A}_{\mu}=0\). This gauge fixing condition can be implemented at a quantum level by adding a gauge fixing term and a ghost term to the original action. Thus, the total effective action for Yang-Mills theory can be written as
\[S_{gf}+S_{gh}=\int d^{d+1}x\sqrt{h}\left[B_{A}\nabla^{\mu}A^{A}_{\mu}+\frac{\alpha}{2}B^{A}B_{A}+\nabla^{\mu}\bar{c}_{A}D_{\mu}c^{A}\right], \tag{5}\]
where \(c^{A}\) and \(\bar{c}^{A}\) are the ghost and anti-ghost fields, respectively. The gauge conditions are implemented using the auxiliary field \(B_{A}\). Now using the total action, which is given by the sum of the original action, the gauge fixing term and the ghost term, \(S_{T}=S+S_{gf}+S_{gh}\), the path integral can be defined as
\[Z=\int DA\;Dc\;D\bar{c}\;DBe^{iS_{T}}. \tag{6}\]
The correlation functions can be calculated from this path integral using the usual methods. The propagator for the gauge fields can be directly calculated in de Sitter spacetime. However, in de Sitter spacetime, there is a problem in quantizing the theory in this way, as it has been demonstrated that in four dimensions the ghost propagator is infrared divergent [34]. Here we will generalize this result to \(d+1\) dimensions, and we will demonstrate that the ghost propagator is infrared divergent in \(d+1\) dimensions as well. However, it is possible to obtain an effective propagator in \(d+1\) dimensions by adding a mass term, and then setting the mass to zero at the end of the calculations. Thus, we first write the equations of motion for the ghost and anti-ghost fields,
\[\nabla^{\mu}\nabla_{\mu}c^{A}=0,\qquad\nabla^{\mu}\nabla_{\mu}\bar{c}^{A}=0. \tag{7}\]
Now we can Euclideanize this system from \(S^{N}\times R\) to \(S^{N+1}\), and choose a Euclidean vacuum state for this theory. Then it is possible to write
\[\langle 0|T[c^{A}(x)\bar{c}^{B}(x^{\prime})]|0\rangle=i\delta^{AB}D_{0} (x,x^{\prime}), \tag{8}\]
where \(D_{0}(x,x^{\prime})\) would satisfy
\[\nabla^{\mu}\nabla_{\mu}D_{0}(x,x^{\prime})=-\delta(x,x^{\prime}). \tag{9}\]
Now for the spherical harmonics in \(S^{N+1}\), we have
\[-\nabla^{\mu}\nabla_{\mu}Y^{L\sigma}=L(L+N)Y^{L\sigma}, \tag{10}\]
here \(\sigma\) denotes all the labels other than \(L\). We can write the \(\delta(x,x^{\prime})\) and \(D_{0}(x,x^{\prime})\) on \(S^{N+1}\) as
\[\delta(x,x^{\prime}) = \sum_{L=0}^{\infty}\sum_{\sigma}Y^{L\sigma}(x)Y^{L\sigma}(x^{ \prime}),\] \[D_{0}(x,x^{\prime}) = \sum_{L=0}^{\infty}\sum_{\sigma}k_{L}Y^{L\sigma}(x)Y^{L\sigma}(x ^{\prime}), \tag{11}\]
where \(k_{L}\) is a constant equal to \(k_{L}=1/[L(L+N)]\). However, for \(L=0\), this coefficient is not well defined, since the eigenvalue \(L(L+N)\) vanishes. So, we regulate this propagator by adding a small mass \(m^{2}\), and obtain
\[D_{m^{2}}(x,x^{\prime}) = \sum_{L=0}^{\infty}\sum_{\sigma}\frac{Y^{L\sigma}(x)Y^{L\sigma}(x^{\prime})}{L(L+N)+m^{2}} \tag{12}\] \[= \frac{1}{Vm^{2}}+\sum_{L=1}^{\infty}\sum_{\sigma}\frac{Y^{L\sigma}(x)Y^{L\sigma}(x^{\prime})}{L(L+N)+m^{2}},\]
where \(V\) is the volume of \(S^{N+1}\), so that the constant harmonic is \(Y^{00}=1/\sqrt{V}\). This propagator diverges in the zero-mass limit, and the divergence comes from the constant mode. However, as these fields couple to the gauge fields through a derivative coupling, \(-igC^{C}_{AB}\nabla^{\mu}\bar{c}^{C}A^{A}_{\mu}c^{B}\), the constant modes do not contribute to the perturbative calculations. This is similar to the four-dimensional case [34]. Thus, we can write an effective propagator by subtracting the constant mode, and then taking the zero-mass limit,
\[D_{0}^{eff}(x,x^{\prime})=\lim_{m^{2}\to 0}\left[D_{m^{2}}(x,x^{\prime})-\frac{1}{Vm^{2}}\right]. \tag{13}\]
This propagator is finite in the zero-mass limit and can be used to perform perturbative calculations. However, the zero mode can appear again in the full theory due to the BRST symmetry. It might be possible to generalize the analysis done to remove such modes in a BRST-invariant way [35], but this analysis depends on integration by parts. However, it can be argued that there are problems with performing integration by parts in de Sitter spacetime [28, 29, 30, 31]. So, it is important to understand if it is possible to quantize Yang-Mills theory in a consistent way in de Sitter spacetime, which can be done by quantizing it using an alternative approach. In this paper, we will quantize the Yang-Mills theory in de Sitter spacetime using constraint quantization.
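The finiteness of (13) can be illustrated numerically. The sketch below evaluates the truncated mode sums (12) and (13) on \(S^{N+1}\), assuming the standard addition theorem for orthonormal spherical harmonics, \(\sum_{\sigma}Y^{L\sigma}(x)Y^{L\sigma}(x^{\prime})=(n_{L}/V)\,C_{L}^{\lambda}(\cos\gamma)/C_{L}^{\lambda}(1)\) with \(\lambda=N/2\), \(n_{L}\) the degeneracy, and \(\gamma\) the geodesic angle; the truncation level and sample values are illustrative.

```python
from math import comb, gamma, pi
from scipy.special import eval_gegenbauer

def sphere_volume(N):
    """Volume of the unit sphere S^{N+1}."""
    d = N + 1
    return 2 * pi ** ((d + 1) / 2) / gamma((d + 1) / 2)

def D_m2(cos_g, N, m2, L_max=300):
    """Truncated mode sum (12) for the regulated ghost propagator."""
    d, lam, V = N + 1, N / 2.0, sphere_volume(N)
    total = 0.0
    for L in range(L_max + 1):
        n_L = (2 * L + d - 1) * comb(L + d - 2, L) // (d - 1)   # degeneracy
        harm = n_L / V * eval_gegenbauer(L, lam, cos_g) / eval_gegenbauer(L, lam, 1.0)
        total += harm / (L * (L + N) + m2)
    return total

def D_eff(cos_g, N, m2, L_max=300):
    """Effective propagator (13): subtract the constant zero mode."""
    return D_m2(cos_g, N, m2, L_max) - 1.0 / (sphere_volume(N) * m2)

# D_m2 grows like 1/m^2 as m^2 -> 0, while D_eff stays finite
for m2 in (1e-2, 1e-4, 1e-6):
    print(m2, D_m2(0.3, N=3, m2=m2), D_eff(0.3, N=3, m2=m2))
```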
## 3 Constraints
In this section, we will again analyze the Yang-Mills theory on a spacetime with topology \(S^{N}\times R\). So, first we will review this theory as a classical field theory, and analyze its constraints. The canonical momenta \(\Pi^{\mu}_{A}=\partial L/\partial\dot{A}^{A}_{\mu}\) are given by
\[\Pi^{\mu}_{A}=\sqrt{h}F^{\mu t}_{A}\;. \tag{14}\]
There is therefore a set of primary constraints \(\phi_{A}\) given by
\[\phi_{A}=\Pi^{t}_{A}\approx 0\;, \tag{15}\]
where \(\approx\) denotes a weak equality which can be imposed only after the Poisson brackets have been evaluated. The canonical Hamiltonian is given by
\[H_{c}=\int d^{d}x\left[\frac{\Pi^{k}_{A}\Pi^{A}_{k}}{2\sqrt{h}}+\frac{1}{4} \sqrt{h}F^{ij}_{A}F^{A}_{ij}+gC^{A}_{\phantom{A}BC}A^{B}_{k}\Pi^{k}_{A}A^{C}_{ t}-A^{A}_{t}\partial_{k}\Pi^{k}_{A}\right]\;. \tag{16}\]
Consistency of the primary constraints requires that \(\dot{\phi_{A}}=\{\phi_{A},H_{T}\}\approx 0\), where \(\{\quad,\quad\}\) denotes the Poisson bracket, \(H_{T}\) is the total Hamiltonian defined by \(H_{T}=H_{C}+u^{A}\phi_{A}\) and \(u^{A}\) are arbitrary parameters. This gives the set of secondary constraints
\[\chi_{A}=D_{k}\Pi^{k}_{A}=\partial_{k}\Pi^{k}_{A}-gC^{B}_{\phantom{B}CA}A^{C}_ {k}\Pi^{k}_{B}\approx 0\;, \tag{17}\]
where \(D_{k}\Pi^{k}_{A}\) is the gauge covariant derivative of \(\Pi^{k}_{A}\). It can be shown that \(\dot{\chi}_{A}=gC^{B}_{\phantom{B}CA}A^{C}_{t}\chi_{B}\approx 0\), so that there are no more constraints. The constraints \(\phi_{A}\) and \(\chi_{A}\) are first class constraints since \(\{\phi_{A}(x),\phi_{B}(y)\}=0\), \(\{\phi_{A}(x),\chi_{B}(y)\}=0\) and \(\{\chi_{A}(x),\chi_{B}(y)\}=gC^{C}_{\phantom{B}AB}\chi_{C}(x)\delta(x,y)\approx 0\). The canonical Hamiltonian can now be written as
\[H_{c}=\int d^{d}x\left[\frac{\Pi^{k}_{A}\Pi^{A}_{k}}{2\sqrt{h}}+\frac{1}{4} \sqrt{h}F^{ij}_{A}F^{A}_{ij}-A^{A}_{t}\chi_{A}\right]\;. \tag{18}\]
In this section the classical Yang-Mills theory will be quantized using Dirac's method for quantizing theories with first class constraints. In this approach the classical dynamical variables are promoted to operators that satisfy the standard commutation relations
\[[A^{A}_{\mu}(x),\Pi^{\nu}_{B}(y)]=i\delta^{\nu}_{\mu}\delta^{A}_{B}\delta^{d}(x,y) \tag{19}\]
and
\[[A^{A}_{\mu}(x),A^{B}_{\nu}(y)]=[\Pi^{\mu}_{A}(x),\Pi^{\nu}_{B}(y)]=0\;. \tag{20}\]
A state vector \(|\Psi>\) is introduced that satisfies the Schrödinger equation
\[i\frac{d}{dt}|\Psi>=H|\Psi> \tag{21}\]
where
\[H=\int d^{d}x\left[\frac{\Pi^{k}_{A}\Pi^{A}_{k}}{2\sqrt{h}}+\frac{1}{4}\sqrt{ h}F^{ij}_{A}F^{A}_{ij}\right] \tag{22}\]
and the constraints are imposed on the state vector
\[\phi_{A}|\Psi>=0\qquad\text{and}\qquad\chi_{A}|\Psi>=0\;. \tag{23}\]
These conditions can also be written as \(\phi_{A}\approx 0\) and \(\chi_{A}\approx 0\). The constraints satisfy
\[[\phi_{A}(x),\phi_{B}(y)]=[\phi_{A}(x),\chi_{B}(y)]=0 \tag{24}\]
and
\[[\chi_{A}(x),\chi_{B}(y)]=igC^{C}_{\ AB}\chi_{C}(x)\delta(x,y)\;. \tag{25}\]
This shows that these commutators vanish or weakly vanish, which is necessary for consistency.
In this approach a gauge condition has not been imposed, but constraints follow directly from the formalism. The constraint \(\Pi^{t}|\Psi>=0\) gives
\[\frac{\delta}{\delta A_{t}}|\Psi>=0\;, \tag{26}\]
which implies that the wave functional is independent of \(A_{t}\). The second constraint \(D_{k}\Pi^{k}_{A}|\Psi>=0\) implies that the wave functional is invariant under time-independent gauge transformations.
Physically observable quantities must weakly commute with the constraints, implying that they can depend on \(A^{A}_{t}\) only through functions of \(A^{A}_{t}\) that involve the constraints. If we neglect such operators, physically observable quantities will be independent of \(A^{A}_{t}\). Dynamical variables that depend on \(\Pi^{t}_{A}\) will annihilate the state vector, so we can restrict ourselves to dynamical variables that do not depend on \(A^{A}_{t}\) and \(\Pi^{t}_{A}\). This theory is identical to the theory obtained by quantizing in the temporal gauge, where one sets \(A^{A}_{t}=0\) and imposes the constraint \(\chi_{A}|\Psi>=0\) (see [43] and [44] for a discussion of the quantization of Yang-Mills theory in the temporal gauge in flat spacetime). One can, in fact, impose the condition \(A^{A}_{t}=0\) as an additional constraint in Dirac's approach. In this case we have the first-class constraints \(\chi_{A}\approx 0\) and two sets of second-class constraints \(\Pi^{t}_{A}\approx 0\) and \(A^{A}_{t}\approx 0\). The first-class constraints are imposed on the state vector, as before. The second-class constraints become operator constraints, \(A^{A}_{t}=0\) and \(\Pi^{t}_{A}=0\), and the commutator is replaced by the Dirac bracket \([\;\;,\;]_{D}\), where
\[[A^{A}_{k}(x),\Pi^{l}_{B}(y)]_{D}=[A^{A}_{k}(x),\Pi^{l}_{B}(y)]=i\delta^{l}_{k}\delta^{A}_{B}\delta^{d}(x,y)\;, \tag{27}\]
\[[A^{A}_{k}(x),A^{B}_{l}(y)]_{D}=[A^{A}_{k}(x),A^{B}_{l}(y)]=0\;, \tag{28}\]
and
\[[\Pi^{k}_{A}(x),\Pi^{l}_{B}(y)]_{D}=[\Pi^{k}_{A}(x),\Pi^{l}_{B}(y)]=0\;. \tag{29}\]
Therefore, imposing the gauge constraint \(A^{A}_{t}=0\) does not affect the theory.
To complete the theory, an inner product needs to be defined on the state space. If the inner product is defined as an integral over all of the \(A^{A}_{k}\), it will be divergent [44]. Instead, one defines it as an integral over physical degrees of freedom [44, 45]. It is also possible to use refined algebraic quantization to define the inner product in this system [46, 47, 48, 49]. This can be done by first representing all constraints in this system by \(\tilde{\Lambda}_{a}\) (\(\tilde{\Lambda}^{+}_{a}=\tilde{\Lambda}_{a}\)), such that they satisfy \([\tilde{\Lambda}_{a},\tilde{\Lambda}_{b}]=if^{c}_{ab}\tilde{\Lambda}_{c}\), for some structure constants \(f^{c}_{ab}\). Now \(L_{a}\), \(a=\overline{1,M}\), are the generators of the Lie algebra, such that \([L_{a},L_{b}]=if^{c}_{ab}L_{c}\). So, it is possible to define \(\mu^{a}L_{a}\rightarrow\exp(i\mu^{a}L_{a})\), for the corresponding Lie group \(G\). As \(\tilde{\Lambda}_{a}\) form a representation of the Lie algebra, \(\exp(i\mu^{a}\tilde{\Lambda}_{a})\) will form a representation of the group, \(\tilde{T}(\exp(i\mu^{a}L_{a}))=\exp(i\mu^{a}\tilde{\Lambda}_{a})\). The adjoint representation of the Lie
algebra can be defined as \(Ad(L_{a})\), and so \((Ad(L_{a})\rho)^{c}=if^{c}_{ab}\rho^{b}\), while \(Ad\{g\}\) is the adjoint representation of the group \((Ad\{g\}\rho)^{c}=(\exp(A))^{c}_{b}\rho^{b}\) with \(A^{c}_{b}=-\mu^{a}f^{c}_{ab}\), \(g=\exp(i\mu^{a}L_{a})\). The inner product can now be expressed using the integral over gauge group [46, 47, 48, 49]
\[\int d_{R}g\,(\det Ad\{g\})^{-1/2}(\Phi,\tilde{T}(g)\Phi), \tag{30}\]
where \(d_{R}g\) is the right-invariant Haar measure on the group. This has been done using the Giulini-Marolf group averaging formula. Thus, it is possible to use refined algebraic quantization for defining an inner product in this system.
## 4 Quantization in the Lorentz Gauge
In this section, we will demonstrate that the constraints obtained in the Lorentz gauge are the same as those obtained in the temporal gauge. In the previous section, it was shown how the temporal gauge can be imposed as an additional constraint in Dirac's approach with the Lagrangian given by (2). However, this cannot be done with the Lorentz gauge because it contains \(\dot{A}^{A}_{t}\), which cannot be written in terms of the canonical momenta (14). However, as the theory can be consistently quantized in the temporal gauge, we can use the argument used in [35] and define the BRST transformations for this theory. In fact, the BRST transformations can also be developed in the Hamiltonian formalism using the BFV approach [38, 39]. Using these BRST transformations, it is possible to define finite field BRST (FFBRST) transformations, which are a symmetry of the action, but not a symmetry of the generating functional of the theory [40, 41]. Hence, it would be possible to use the FFBRST transformations to go from the theory in the temporal gauge to the theory in the Lorentz gauge. However, here we will take a more direct approach and directly demonstrate that the constraints obtained in the Lorentz gauge are the same as the constraints obtained in the temporal gauge. It may be noted that the relation between the first- and second-class constraints has also been obtained using the FFBRST transformations [42].
Now we can again choose the Lorentz gauge as the gauge fixing condition
\[\nabla^{\mu}A^{A}_{\mu}=0\;. \tag{31}\]
To quantize in the Lorentz gauge we will consider starting with the gauge fixed Lagrangian
\[L_{GF}=-\frac{1}{4}\sqrt{h}\;Tr\left(F^{\mu\nu}F_{\mu\nu}\right)-\frac{1}{2} \sqrt{h}\left(\nabla^{\mu}A^{A}_{\mu}\right)\left(\nabla^{\nu}A_{A\nu}\right)\;. \tag{32}\]
It may be noted that this term can be related to the term used in the Feynman path integrals by integrating away the auxiliary field and also choosing a suitable value of \(\alpha\). For the theory that follows from this Lagrangian to be equivalent to Yang-Mills theory the constraint \(\nabla_{\mu}A^{\mu}_{A}\approx 0\) must be imposed. The canonical momenta are given by
\[\Pi^{\mu}_{A}=\sqrt{h}\left[F^{\mu t}_{A}-g^{\mu t}\nabla^{\nu}A_{A\nu}\right]\;. \tag{33}\]
Now
\[\Pi^{t}_{A}=\sqrt{h}\nabla^{\nu}A_{A\nu}\;, \tag{34}\]
so there is a set of primary constraints \(\phi_{A}\) given by
\[\phi_{A}=\Pi_{A}^{t}\approx 0\;. \tag{35}\]
The canonical Hamiltonian is given by
\[H_{c} = \int d^{d}x\left\{\frac{\Pi_{A}^{k}\Pi_{k}^{A}}{2\sqrt{h}}+\frac{1}{4}\sqrt{h}F_{A}^{ij}F_{ij}^{A}-A_{t}^{A}\chi_{A}\right. \tag{36}\] \[\left.+\left[\frac{\Pi_{t}^{A}}{2\sqrt{h}}+^{(3)}\nabla^{k}A_{k}^{A}-h^{ij}\dot{h}_{ij}A_{t}^{A}\right]\Pi_{A}^{t}\right\}\;.\]
where \(\chi_{A}=\partial_{k}\Pi_{A}^{k}-gC_{CA}^{B}A_{k}^{C}\Pi_{B}^{k}\) and \({}^{(3)}\nabla\) is the covariant derivative on the three dimensional surfaces defined by \(t=\)constant. Requiring that \(\dot{\phi_{A}}=\left\{\phi_{A},H_{T}\right\}\approx 0\) gives the secondary constraint
\[\chi_{A}\approx 0\;. \tag{37}\]
Thus, the constraints obtained by imposing the Lorentz gauge are the same as those obtained when no gauge condition is imposed.
Now consider quantizing the theory. The constraints \(\phi_{A}\) and \(\chi_{A}\) annihilate the state vector, and the state vector satisfies the Schrödinger equation with the Hamiltonian given by (22). There is an ordering ambiguity in the last term in the Hamiltonian (36), since it involves \(A_{t}^{A}\) and \(\Pi_{A}^{t}\), which do not commute. We have taken the ordering as given in (36) so that \(\Pi_{A}^{t}\) appears on the right and annihilates the state vector. Thus, quantizing the theory in the Lorentz gauge gives the same quantum theory as quantizing the theory without imposing a gauge condition.
## 5 Conclusion
In this paper, we have analyzed the quantization of Yang-Mills theory in de Sitter spacetime. We first generalized the previous work on the effective propagator for Yang-Mills theory in de Sitter spacetime. Then we analyzed this theory using Dirac constraint quantization. In fact, we did not impose any gauge, but analyzed the theory as a system of first-class constraints. It was demonstrated that this analysis is consistent with quantizing the theory in the Lorentz gauge. This analysis was performed in \(d+1\)-dimensional spacetime.
It would be interesting to use the results obtained in this paper to analyze different physical systems. It has been demonstrated that constraint quantization and the calculations done using the Feynman diagrams produce similar results [50]. This was done by using a systematic expansion of all constraint equations in canonical quantum gravity. In fact, it was demonstrated that this method generates the conventional Feynman diagrammatic technique for graviton loops. It would be interesting to use this correspondence and analyze the Yang-Mills theory for different interesting physical processes. It was observed that the constraints obtained by imposing the Lorentz gauge are the same as those obtained when no gauge condition is imposed. This result was expected because the theory quantized without imposing a gauge is equivalent to the theory quantized in the temporal gauge. However, if the theory can be consistently quantized in any gauge, then it can be transformed into a different gauge
using the gaugeon formalism [51, 52]. As the theory was demonstrated to be consistent in temporal gauge, it could be converted into the Lorentz gauge by using the quantum gauge transformations in the gaugeon formalism. Thus, it would be interesting to analyze this system using the gaugeon formalism.
It may also be noted that the results of this paper can be used to study other interesting gauge theories on de Sitter spacetime, and more general geometries which have the topology \(\Sigma\times R\). It would be interesting to analyze the quantization of Chern-Simons-matter theories using this method. It may be noted that Chern-Simons-matter theories have been studied using the BRST transformations in different gauges [53, 54]. It would also be interesting to quantize the Chern-Simons-matter theory without imposing a gauge. It is expected that this will again be equivalent to quantizing the theory in the temporal gauge. It would also be possible to quantize the Chern-Simons-matter theory in the Lorentz gauge and demonstrate that it is equivalent to the temporal gauge.
## Acknowledgments
I would like to thank D. N. Vollick for useful discussions.
|
2303.18237 | Aerostack2: A Software Framework for Developing Multi-robot Aerial
Systems | The development of autonomous aerial systems, particularly for multi-robot
configurations, is a complex challenge requiring multidisciplinary expertise.
Unlike ground robotics, aerial robotics has seen limited standardization,
leading to fragmented development efforts. To address this gap, we introduce
Aerostack2, a comprehensive, open-source ROS 2 based framework designed for
creating versatile and robust multi-robot aerial systems. Aerostack2 features
platform independence, a modular plugin architecture, and behavior-based
mission control, enabling easy customization and integration across various
platforms. In this paper, we detail the full architecture of Aerostack2, which
has been tested with several platforms in both simulation and real flights. We
demonstrate its effectiveness through multiple validation scenarios,
highlighting its potential to accelerate innovation and enhance collaboration
in the aerial robotics community. | Miguel Fernandez-Cortizas, Martin Molina, Pedro Arias-Perez, Rafael Perez-Segui, David Perez-Saura, Pascual Campoy | 2023-03-31T17:52:51Z | http://arxiv.org/abs/2303.18237v2 | # Aerostack2: A Software Framework for Developing Multi-robot Aerial Systems
###### Abstract
In recent years, the robotics community has witnessed the development of several software stacks for ground and articulated robots, such as Navigation2 and MoveIt. However, the same level of collaboration and standardization is yet to be achieved in the field of aerial robotics, where each research group has developed its own frameworks. This work presents Aerostack2, a framework for the development of autonomous aerial robotics systems that aims to address the lack of standardization and fragmentation of efforts in the field. Built on ROS 2 middleware and featuring an efficient modular software architecture and multi-robot orientation, Aerostack2 is a versatile and platform-independent environment that covers a wide range of robot capabilities for autonomous operation. Its major contributions include providing a logical level for specifying missions, reusing components and sub-systems for aerial robotics, and enabling the development of complete control architectures. All major contributions have been tested in simulation and real flights with multiple heterogeneous swarms. Aerostack2 is open source and community oriented, democratizing access to this technology for autonomous drone system developers.
**Source code:**
[https://github.com/aerostack2/aerostack2](https://github.com/aerostack2/aerostack2)
**Documentation:**
[https://aerostack2.github.io/](https://aerostack2.github.io/)
## I Introduction
In recent years, the robotics community has witnessed the development of several software stacks focused on the control and guidance of ground robots and articulated robots. Navigation2 [1] and MoveIt [2] are two such examples that have gained widespread adoption. However, the same level of collaboration and standardization has not been observed in the field of aerial robotics. Research groups have tended to develop their own frameworks, resulting in isolated efforts that are difficult to integrate.
Furthermore, even when frameworks have been developed, they often have a narrow focus, such as low-level control, which limits their usefulness in more comprehensive applications. This fragmentation of efforts can make it challenging to take advantage of the strengths of each framework in a common application.
To address these challenges, this paper proposes a collaborative framework for aerial robotics that brings together users and developers to work towards a common goal. Building on the work of others, we aim to enhance research in aerial robotics and accelerate their application in industry fields. This paper presents an overview of our proposed framework and discusses its potential impact in the field.
The presented software framework Aerostack2 is an evolution of a former one called Aerostack [3] which was developed and successfully used in our research lab for more than six years, not only for research purposes, but also for industrial projects and international robotics competitions like MBZIRC 2020 or IMAV 2017.
Our framework, presented in this paper, incorporates important changes and improvements to enable more efficient and effective robotic systems. Specifically, the framework operates using ROS 2 (Robot Operating System 2 [4]), a widely used middleware for robotics that provides valuable tools to create modular and distributed robotic systems. Additionally, the framework is built on a more efficient modular software architecture and enables multi-robot orientation. Moreover, the framework aims to foster community creation within the ROS 2 ecosystem, as seen in other widely used software frameworks in the Robotics Community.
### Contributions
Compared to other existing solutions in the drone sector, Aerostack2 is an original tool mainly because of the following characteristics:
* _Platform Independence_. Aerostack2 is a general environment, not dedicated to specific aerial platforms, so it can be used with different types of drones. For example, Aerostack has been used on drones with Pixhawk controllers or with commercial platforms (e.g., DJI).
* _Versatility_. Aerostack2 covers a wide range of robot capabilities for autonomous operation with different types of mechanism related to flight control, spatial localization, aerial navigation planning methods, forms of communication between drones, etc.
* _Easy mission specification_. Aerostack2 provides a logical level that helps developers formulate the tasks to be done by autonomous aerial robots. Compared to the level provided by ROS 2 programming, this logical level abstracts details (e.g., using the notion of robot behaviors) and simplifies the specification of missions. In addition, an existing robotic system developed using Aerostack2 may be used by operators to formulate aerial missions with the help of mission specification tools (e.g., user interfaces, behavior trees, etc.).
* _Simplify the engineering process in aerial robot systems_. Aerostack2 provides pre-programmed components that provide autonomous operation capabilities, encapsulating specialized algorithms (computer vision, sensor data fusion, automatic planning, motion controllers, etc.). These components are designed in a general and adaptable way to be used in the construction of multiple aerial robots. Individual Aerostack2 components (e.g., a flight motion controller) can be reused to build specific aerial robotic applications. In this case, the developer does not use the complete framework, but only separate components for specific functionalities. Component reuse may be done at different levels of granularity (e.g., simple algorithms, complex robot behaviors, etc.). For example, a developer who designs a new algorithm for a certain aerial robot capability (e.g., a flight motion controller) may test such an algorithm by reusing a partial architecture provided by Aerostack2.
* _Open-source_. The Aerostack2 environment is open source and free of charge, which facilitates universal access to this technology by autonomous drone developers. Aerostack is offered with a BSD-3-Clause license that allows free distribution and modification of the software.
## II Related Work
There are numerous flight systems of different nature that allow UAVs to fly. A common division in the literature is between low-level control systems, typically the flight controller, and high-level control systems, flight, or aerial stacks.
Low-level control systems vary from open source hardware (OSH) and open source software (OSS) to proprietary commercial controllers. In 2018 Ebeid et al. presented a survey of open-source hardware and software comparing their main features [5]. Table I shows a comparison of relevant flight controllers.
Several high-level control systems have been published in recent years. Table II compares existing aerial stacks with the proposed solution of this publication.
Aerostack [3] is a software framework that helps developers design and build the complete control architecture of aerial robotic systems, integrating multiple heterogeneous computational solutions (e.g., computer vision algorithms, motion controllers, self-localization and mapping methods, motion planning algorithms, etc.). Aerostack was developed in our research laboratory using ROS. The experience in its use and the development of successive versions has been an important antecedent for the creation of the new framework presented in this paper.
AerialCore [6] is an aerial system built using ROS Noetic and meant to be executed entirely onboard. It can be deployed on any multi-rotor vehicle, given it is equipped with a PX4-compatible flight controller, for both indoor and outdoor use. It supports multi-robot experiments using Nimbro network communication and provides both agile flying and robust control.
Agilicious [7] is a co-designed hardware and software framework tailored to autonomous and agile quadrotor flight, which has been developed and used since 2016 at the Robotics and Perception Group (RPG) of the University of Zurich. It is completely open-source and open-hardware and supports both model-based and neural network-based controllers. Also, it provides high thrust-to-weight and torque-to-inertia ratios for agility, onboard vision sensors, GPU-accelerated compute hardware for real-time perception and neural-network inference, a real-time flight controller, and a versatile software stack.
KumarRobotics flight stack [8] allows a quadrotor to navigate autonomously in cluttered and GPS-denied environments. It consists of a set of modules that work together to allow fast autonomous navigation of an aerial robot through an unknown environment. The system has been designed so that all sensing and computation occurs onboard the robot. Once the robot has been launched, there is no human interaction necessary for the robot to navigate to the goal.
CrazyChoir [9] is a ROS 2 toolbox that allows users to run simulations and experiments on swarms of Crazyflie nano-quadrotors. CrazyChoir implements several tools to model swarms of Crazyflie nano-quadrotors and to run distributed, complex tasks both in simulation and experiments.
UAV Abstraction Layer (UAL) [10] is a software layer to abstract users of unmanned aerial vehicles from the specific hardware of the platform and the autopilot interfaces. Its main objective is to simplify the development and testing of higher-level algorithms in aerial robotics by trying to standardize and simplify the interfaces with unmanned aerial vehicles. Unmanned aerial vehicle abstraction layer supports operation with PX4 and DJI autopilots (among others),
\begin{table}
\begin{tabular}{|l|l|l|c|} \hline
**Flight Controller** & **Open Source** & **Simulation** & **Rate Input** \\ \hline \hline Pixhawk/PX4 & OSH, OSS & SITL, HITL & ✓ \\ \hline Ardupilot & OSS & SITL, HITL & ✓ \\ \hline Paparazzi & OSH, OSS & SITL, HITL & ✓ \\ \hline Crazyflie & OSH, OSS & SITL, HITL & ✓ \\ \hline DJI Matrice & Proprietary & HITL & ✓ \\ \hline Parrot & Proprietary & SITL & ✗ \\ \hline Skydio & Proprietary & - & ✗ \\ \hline \end{tabular}
OSH: Open Source Hardware, OSS: Open Source Software, SITL: Software In The Loop, HITL: Hardware In The Loop.
\end{table} TABLE I: Comparison of relevant low-level control systems.
which are current leading manufacturers. Besides, unmanned aerial vehicle abstraction layer can work seamlessly with simulated or real platforms and provides calls to issue standard commands such as taking off, landing or pose, and velocity controls.
XTDrone [11] is a UAV simulation platform based on PX4, ROS, and Gazebo. XTDrone supports multirotors (including quadrotors and hexarotors), fixed wings, VTOLs (including quadplanes, tailsitters, and tiltrotors), and other unmanned systems (such as UGVs, USVs, and robotic arms). It is convenient to deploy the algorithm to real UAVs after testing and debugging on the simulation platform.
RotorS [12] is a modular Micro Aerial Vehicle (MAV) simulation framework, which allows a quick start to perform research on MAVs. The simulator was designed in a modular way so that different controllers and state estimators can be used interchangeably, while incorporating new MAVs is reduced to a few steps. The provided controllers can be adapted to a custom vehicle by simply changing a parameter file. Different controllers and state estimators can be compared with the provided evaluation framework. All components were designed to be analogous to their real-world counterparts. This allows the usage of the same controllers and state estimators, including their parameters, in the simulation as on the real MAV.
Generalized Autonomy Aviation System (GAAS) [13] is an open-source program designed for fully autonomous VTOL aircraft and drones. GAAS provides a fully autonomous flight platform based on lidar, HD-map relocalization, path planning, and other modules for aircraft. In contrast to the autopilot technology previously available only for consumer-grade drones, GAAS aims for robust, fully autonomous flight for human-carrying vehicles and can be easily combined with national air traffic control. The whole framework is loosely coupled, so you can customize your own modules and easily add them to GAAS.
Of the nine high-level control systems analyzed, it is observed that (1) all of them are open source; (2) six have a modular structure; (3) three have been tested outside the laboratory, three on a real robot only in the laboratory, and three only in simulation; (4) only one of the systems, CrazyChoir, uses ROS 2 as middleware, compared to ROS used by the rest; (5) six systems have undergone an update in the last six months; (6) only two, AerialCore and UAL, support references in different frames; (7) seven of the systems support acro control of the aircraft; (8) three systems have a multi-agent approach; (9) three other systems support more than one flight platform; and (10) only AerialCore has an architecture oriented towards the use of plugins.
## III A Stack of Software Components for Aerial Robotics
Aerostack2 framework is organized in the form of a software stack with components distributed in different hierarchical layers, as can be seen in Fig. 1. Components are organized hierarchically in several layers in such a way that components at one layer may use components of the lower layers (corresponding to less complex functionalities) but they do not use components of the higher layers. The layers are the following (from bottom to top):
* _Middleware_. At the lowest level is the software that supports the Aerostack2 environment, which consists of the Linux operating system and ROS (Robot Operating System), as well as general software libraries (e.g. OpenCV).
* _Inter-process communication_. It includes components to facilitate communication between processes that operate concurrently. These are message types for information exchange that define data structures (specific to aerial robotics) that are common to facilitate process interoperability.
* _Interfaces with platforms and sensors_. These are components that serve as interfaces with multiple kinds of aerial platforms and sensors. Aerostack2 has several interfaces that allow operating with both physical platforms (e.g., with Pixhawk or with DJI platforms) and simulated platforms (e.g., using simulated drones with the Gazebo environment) besides different types of sensor (e.g., USB cameras, RealSense depth camera).
* _Basic robotics functions_. Aerostack2 includes a set of software components that implement specialized algorithms corresponding to essential aerial robotics functions for autonomous operation such as state estimation,
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|} \hline
**Flight Stack** & **Open Source** & **Modular** & **Tested** & **Middleware** & **Soft. last update** & **Multi-frame** & **Rate output** & **Multi-agent** & **Multi-platform** & **Plugin oriented** \\ \hline \hline Aerostack [3] & ✓ & ✓ & S, RL, RO & ROS & 10/2021 & ✗ & ✓ & ✓ & ✓ & ✗ \\ \hline AerialCore [6] & ✓ & ✓ & S, RL, RO & ROS & 03/2023 & ✓ & ✓ & ✓ & ✗ & ✓ \\ \hline Agilicious [7] & ✓ & ✓ & S, RL & ROS & 03/2023 & ✗ & ✓ & ✗ & ✗ & ✗ \\ \hline KumarRobotics [8] & ✓ & ✗ & S, RL, RO & ROS & 12/2022 & ✗ & ✓ & ✗ & ✓ & ✗ \\ \hline CrazyChoir [9] & ✓ & ✗ & S, RL & ROS 2 & 02/2023 & ✗ & ✓ & ✓ & ✗ & ✗ \\ \hline UAL [10] & ✓ & ✗ & S, RL & ROS & 12/2022 & ✓ & ✗ & ✗ & ✓ & ✗ \\ \hline XTDrone [11] & ✓ & ✓ & S & ROS & 03/2023 & ✗ & ✓ & ✗ & ✗ & ✗ \\ \hline RotorS [12] & ✓ & ✓ & S & ROS & 07/2021 & ✗ & ✓ & ✗ & ✗ & ✗ \\ \hline GAAS [13] & ✓ & ✓ & S & ROS & 10/2021 & ✗ & ✗ & ✗ & ✗ & ✗ \\ \hline \hline
**Aerostack2 (Ours)** & ✓ & ✓ & S, RL, RO & ROS 2 & 03/2023 & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \end{tabular}
* S: Simulation, RL: real experiments in the lab, RO: real experiments outside the lab.
\end{table} TABLE II: Comparison of relevant open-sourced high-level control systems.
motion control, and other basic functions (e.g., emergency handling, etc.).
* _Behaviors_. This level includes a set of components corresponding to different robot behaviors provided by Aerostack2 for autonomous operation. Each component encapsulates the algorithms used to implement a particular behavior (e.g., take off, hover, generate trajectory, etc.) together with mechanisms for execution monitoring to facilitate the specification of mission plans.
* _Mission control_. This level includes components that facilitate the specification of missions for autonomous drone operation. For example, behavior trees can be used to indicate and visualize with a hierarchical graphical structure the tasks performed by the drone. On the other hand, Aerostack2 also provides an API (application programming interface) that allows one to specify missions in a flexible way using the Python language (a sketch of such a mission script is shown after this list). Additionally, Aerostack2 provides tools for the user to monitor and manually control the mission execution.
* _Applications_. The top level corresponds to the specific applications built with the components of the lower levels. Aerostack2 has examples of applications that can serve developers as a reference and guide on how to build drones that operate autonomously. In addition to applications with real drones, Aerostack2 has multiple applications in simulated environments with varying degrees of complexity to facilitate the learning of this technology.
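As an example of this mission-specification level, the sketch below shows the style of a Python mission script. The module path, class name, and method signatures are modeled on the public Aerostack2 repository, but should be treated as assumptions rather than the exact released interface.

```python
# Hypothetical single-drone mission in the style of the Aerostack2 Python API.
import rclpy
from as2_python_api.drone_interface import DroneInterface  # assumed module path

rclpy.init()
drone = DroneInterface("drone0")        # namespace of the aerial robot

drone.arm()                             # each call activates a behavior and
drone.offboard()                        # reports success or failure
if drone.takeoff(height=2.0, speed=0.5):
    drone.go_to(5.0, 0.0, 2.0, speed=1.0)   # goal-based behavior: reach a waypoint
    drone.land(speed=0.3)

drone.shutdown()
rclpy.shutdown()
```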
It is important to note that this modular organization of components is open to be used at any layer. This means that developers who use Aerostack2 may use directly the top-level components but also may use separately any intermediate layer or other individual component (e.g., the state estimator) for building a particular application. The following sections describe in more detail each layer of the software stack.
### _Interprocess communication_
In order to organize each robot control architecture with multiple interacting processes operating concurrently, Aerostack2 has a standard data channel that is shared among the processes. To facilitate process interoperability, the content of the data channel follows conventions defined by standards that are applicable to aerial robotics. These standards are defined with respect to data structures and common names for communication mechanisms between processes (e.g., ROS 2 topics, services, and actions). Our framework follows some standards that have been defined by the ROS 2 community for aerial robots, and others (e.g., more specific message types or names of ROS 2 topics, services, and actions) have been defined specifically in Aerostack2. Table III illustrates some of the ROS topics used by Aerostack2 together with standard names and standard message types.
The content of the standard data channel is distributed in the following main groups (a minimal ROS 2 node using these naming conventions is sketched after the list):
* Sensor measurements. Sensor measurements are values corresponding to direct measurements recorded by sensors. This includes, for example, images from cameras, data from IMU, data from GPS, etc.
* Actuator commands. These messages correspond to commands that are directly understandable by aerial platforms, such as thrust or, depending on the platform, values for the desired localization.
* Self localization. These values correspond to the robot localization in the environment together with kinematic
Fig. 1: Overview of the software components provided by the Aerostack2 environment.
values (e.g., speed) as they are believed by the robot. These values may be obtained, for example, by fusing sensor measurements with the help of extended Kalman filters (EKF).
* Motion reference. These messages are motion values that should be considered as goals by motion controllers. This includes, for example, desired values for pose, speed, trajectory, etc.
* Others: multi-robot communication messages, alert messages corresponding to emergency situations, operation mode of the aerial platform, etc.
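The sketch below shows a minimal ROS 2 node wired to this standard channel, using two of the names listed in Table III; the control logic itself is purely illustrative.

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Imu
from geometry_msgs.msg import TwistStamped

class ChannelDemo(Node):
    def __init__(self):
        super().__init__("channel_demo")
        # sensor measurements group (Table III)
        self.create_subscription(Imu, "sensor_measurement/imu", self.imu_cb, 10)
        # motion reference group (Table III)
        self.pub = self.create_publisher(TwistStamped, "motion_reference/twist", 10)

    def imu_cb(self, msg: Imu) -> None:
        ref = TwistStamped()
        ref.header.stamp = self.get_clock().now().to_msg()
        ref.twist.linear.z = 0.5   # e.g., request a constant climb rate
        self.pub.publish(ref)

def main():
    rclpy.init()
    rclpy.spin(ChannelDemo())

if __name__ == "__main__":
    main()
```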
### _Platforms and sensors interfaces_
The common data channel presented in the previous section includes sensor and actuator data represented in a generic way that is independent of the physical platform used. This makes it possible to make part of the control architecture independent of the different platforms to be used, which facilitates the reuse of its components. The way to connect the architecture with each platform is through the construction of interfaces between the platform and the communication channel.
Aerostack2 incorporates a _Platform_ abstraction class responsible for managing the capabilities associated with the direct integration of various aerial platforms into the framework. This abstraction facilitates the integration of new platforms into the framework by providing guidance on integration steps, overloading functions of the Platform class, and ensuring compatibility with the entire framework.
The proposed framework's interaction with the Platform interface facilitates the integration of both physical and simulated interfaces, without requiring the rest of the framework to distinguish between the two. This feature is a fundamental pillar that strengthens the sim2real capabilities of the framework.
The responsibility of this interface is to gather sensor measurements from the aircraft and transmit them to the Communication Layer. Additionally, it is tasked with receiving actuator commands and other requests from the various layers of the Aerostack2 framework and relaying them to the aircraft in a platform-specific manner.
Moreover, there may be instances where additional sensors or actuators are required for a specific application, such as controlling a mounted arm or manipulating a different sensor. To address this, Aerostack2 incorporates the Sensor abstraction class, which simplifies the management of external sensors. It should be noted that the Aerostack2 framework is fully compatible with all ROS 2 drivers, enabling easy integration with previous community efforts.
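The following is a conceptual Python sketch of the Platform abstraction just described; the actual Aerostack2 base class is implemented in C++, and the method names used here are hypothetical.

```python
from abc import ABC, abstractmethod

class AerialPlatformBase(ABC):
    """Translates between the generic data channel and a concrete autopilot."""

    @abstractmethod
    def own_send_command(self, command) -> bool:
        """Forward a generic actuator command using the platform's protocol."""

    @abstractmethod
    def own_set_arming_state(self, armed: bool) -> bool:
        """Arm or disarm the aircraft."""

class SimulatedPlatform(AerialPlatformBase):
    """A simulator backend receives the same generic commands as hardware,
    which is what keeps the rest of the framework sim/real agnostic."""

    def own_send_command(self, command) -> bool:
        print("sim <-", command)
        return True

    def own_set_arming_state(self, armed: bool) -> bool:
        return True
```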
### _Basic Robotic Functions_
The Aerostack2 platform encompasses a collection of software components that perform fundamental robotic functions in aerial robotics. These components correspond to functions such as motion control, state estimation, and other essential functions (e.g., emergency handling, etc.) that support the autonomous operation of various types of aerial robots.
In general, Aerostack2 components are designed to be general and reusable, with alternative algorithms that are implemented in the form of plug-ins and are selected based on each particular situation and aerial robotic application. But this way of structuring the components gains full sense when talking about basic robotic functions, due to the importance of these modules within the functioning of the entire system. In this architecture, the basic functions are managed by a function manager, which is responsible for loading the plugins with each concrete algorithm (e.g., a PID controller) and managing how they interact with the rest of the framework. The plugin selector can also provide meta-control features, such as plugin replacement and plugin bypass (which occurs when the motion reference can be directly handled by the aerial platform), whereas the input and output adapters adjust the input or output of the function plugin to the rest of the Aerostack2 framework. In Fig. 3 a schema of the architecture of a basic function is shown.
Specifically, the two core functions of the framework are the following, the connection of which is shown in Fig. 4:
* Motion Control: This module listens to the motion reference commands generated from the top-level layers
Fig. 3: General scheme of how a basic robotic function is implemented in Aerostack2.
Fig. 2: Aerial platform scheme.
and converts them into actuator command signals that will be followed by the concrete aerial platform. Inside this module, the controllers process the references to generate control signals. The Controller Manager is responsible for ensuring a suitable combination of motion references, which have to be preprocessed according to the current working plugin preferences (e.g., the motion references shall be expressed in the reference frame that the controller expects), and also for adapting the actuation command signals, post-processing them into the format desired by the aerial platform. Moreover, the plugin selector has the capability to load multiple controllers in the form of plugins, being aware of the operation of each one and making the appropriate selection.
* State Estimator: This module combines the information received from different sensors to estimate the state of the aircraft, where the state refers to the position and speed of each aircraft over time. This module can load multiple state estimation algorithms in the form of plugins. The State Estimator Manager is responsible for generating the transformation trees (TF trees) that will be used by the rest of the framework. In addition, it is aware of the available plugins and is able to make the selection based on the environment.
In this category of components, there are also other components, such as emergency handlers, that implement emergency procedures to react quickly to unexpected situations, such as battery discharge or approaching forbidden areas. Multiple agents can trigger these emergencies and, depending on their severity, a module in the corresponding low-level layer will handle them. For example, if the severity is low, the controller may be able to handle it, but if it is severe, the aerial platform itself should act.
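To make this manager/plugin organization concrete, the sketch below reproduces the pattern in plain Python. All class and method names are hypothetical stand-ins used only for illustration; the actual Aerostack2 interfaces are ROS 2 based.

```
# Illustrative sketch of the function manager / plugin pattern (hypothetical
# names; not the actual Aerostack2 interfaces).
from abc import ABC, abstractmethod


class ControllerPlugin(ABC):
    """A concrete control algorithm loaded by the function manager."""

    @abstractmethod
    def compute_control(self, state, reference):
        """Map the current state and motion reference to an actuator command."""


class PIDPlugin(ControllerPlugin):
    def compute_control(self, state, reference):
        # A trivial proportional action, for illustration only.
        return 1.2 * (reference - state)


class FunctionManager:
    """Loads plugins, selects one, and adapts inputs/outputs for the framework."""

    def __init__(self):
        self.plugins = {}
        self.active = None

    def register(self, name, plugin):
        self.plugins[name] = plugin

    def select(self, name):
        # Meta-control: plugin replacement at runtime.
        self.active = self.plugins[name]

    def step(self, state, reference):
        # Input/output adapters would convert frames and units here.
        return self.active.compute_control(state, reference)


manager = FunctionManager()
manager.register('pid', PIDPlugin())
manager.select('pid')
print(manager.step(state=0.0, reference=1.0))
```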
### _Behaviors_
Aerostack2 uses a specialized type of component, called _behavior_, which provides a logical layer to formulate mission plans in a uniform and more simplified way (compared to the direct use of state estimators and actuator controllers). Each behavior corresponds to a specific robot skill related, for example, to flight motion, such as taking off, landing, hovering and following a path, or other abilities (e.g., video recording, communication with other agents, etc.).
Using behaviors, a mission plan is expressed as a controlled sequence of activations (or deactivations) of multiple behaviors that may operate concurrently. Each behavior activation initiates the execution of a particular task described with certain parameters (e.g., following a particular path described with a list of waypoints). The result of each behavior execution is described in terms of success or failure, which is useful to determine the next step to be done during the mission.
We distinguish between two types of robot behaviors, according to execution goals1: (1) _goal-based_ behaviors that are defined to reach a final state or attain a goal (for example, in an aerial robot, the behavior taking off), and (2) _recurrent_ behaviors that perform an activity recurrently or maintain a desired state (for example, a behavior for visual marker recognition).
Footnote 1: This division is also mentioned in the literature of robotics. For example, these two categories have been distinguished using the terms _servo_, for recurrent behaviors, and _ballistic_ for goal-based behaviors [14].
The notion of behavior in robotics has been used in the behavior-based paradigm [15][16] (which mainly corresponds to reactive behaviors) and in behavior-based systems [17] (with both reactive and deliberative behaviors). In particular, a behavior in Aerostack2 may correspond not only to reactive behaviors (e.g., hovering) but also to
Fig. 4: General scheme of basic robotics functions connection in Aerostack2.
| **ROS topic name** | **Message type** | **Description** |
| --- | --- | --- |
| motion_reference/pose | geometry_msgs/PoseStamped | Actuator command for the multirotor specifying x, y and z (m: meters) and roll, pitch and yaw (rad: radians). |
| motion_reference/twist | geometry_msgs/TwistStamped | Actuator command for the multirotor specifying vx, vy and vz (m/s: meters per second) and roll rate, pitch rate and yaw rate (rad/s: radians per second). |
| motion_reference/trajectory | as2_msgs/TrajectoryPoint | Reference trajectory point specifying x, y and z (m: meters), vx, vy and vz (m/s: meters per second), ax, ay and az (m/s²: meters per second squared) and yaw angle (rad: radians). |
| sensor_measurement/camera | sensor_msgs/Image | Raw image data received from a camera (a general camera). |
| sensor_measurement/imu | sensor_msgs/Imu | Inertial Measurement Unit data. |
| sensor_measurement/gps | sensor_msgs/NavSatFix | GPS (Global Positioning System) coordinates. |

TABLE III: Example ROS topics and messages used by Aerostack2 for interprocess communication.
skills similar to mental abilities that use complex internal representations (e.g., generating a trajectory to reach a destination).
The software components that implement behaviors in the Aerostack2 framework are specialized modules for robust plan execution following a distributed organization that facilitates maintainability and flexibility to add new behaviors in the future. Each behavior provides the following two main types of functions (Fig. 5):
* _Task refinement_. Robot behaviors execute tasks that cannot be executed directly by actuators because they are formulated in an abstract terminology as it is used to specify mission plans by the user. Robot behaviors refine these tasks into more detailed operations (e.g., references to controllers, etc.). This is done in an adaptive way in continuous interaction with the environment (e.g., waiting a certain amount of time to complete each step or selecting the most appropriate method according to the current situation of the environment).
* _Execution monitoring_. Each robot behavior observes and assesses how each task is executed (e.g., by contrasting observations with expectations), which is very important to facilitate a robust operation. In contrast to other centralized approaches, Aerostack2 distributes execution monitoring into behaviors. For example, each behavior checks locally whether the current situation of the environment satisfies the assumptions to operate correctly (e.g., a behavior for visual marker recognition checks that the lighting assumptions in the environment are satisfied). Execution monitoring also detects when the execution has reached a prefixed goal or when there is a failure in the execution. For example, in the case of a task to follow a certain path, a failure can be detected if the robot does not arrive at the destination point within a maximum expected time, or if the robot moves in the direction opposite to the one expected towards the destination point.
Each software component that implements a behavior encapsulates the details of the algorithms used to execute the task, providing a uniform interface that is common for all behaviors. This interface is used to control the execution of the behavior with the following basic services: start, pause, resume and stop. In addition, a service called modify is used to change parameters of a behavior without the need to stop its execution. The behavior interface is also used to inform about the execution of the behavior with two separate outputs: the execution state of the behavior (e.g., idle, running, or paused), and a periodic feedback during the execution of behavior.
The manner in which the behaviors have been implemented is fully consistent with standard ROS 2 actions. Each behavior is linked to its own ROS 2 action, with added functionalities that enable actions to be paused, resumed, or even have their goals modified during execution.
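A minimal sketch of this uniform contract is given below; the class and method names are hypothetical, and in Aerostack2 the equivalent functionality is realized through the extended ROS 2 actions just mentioned.

```
# Illustrative sketch of the uniform behavior interface (hypothetical names).
import enum


class BehaviorState(enum.Enum):
    IDLE = 0
    RUNNING = 1
    PAUSED = 2


class Behavior:
    def __init__(self):
        self.state = BehaviorState.IDLE
        self.goal = None

    # Control services: start, pause, resume, stop, modify.
    def start(self, goal):
        self.goal, self.state = goal, BehaviorState.RUNNING

    def pause(self):
        self.state = BehaviorState.PAUSED

    def resume(self):
        self.state = BehaviorState.RUNNING

    def stop(self):
        self.state = BehaviorState.IDLE

    def modify(self, goal):
        # Change parameters without stopping the execution.
        self.goal = goal

    # Informative outputs: execution state plus periodic feedback.
    def feedback(self):
        return {'state': self.state.name, 'goal': self.goal}
```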
### _Mission control and supervision_
Aerostack2 provides several mechanisms that help developers specify a mission plan and supervise its execution. The developer can write a mission plan to specify the set of tasks that a robot must perform on a particular mission. The specification of mission plans can be formulated in the form of robot behaviors. In this way, the user can specify the mission more easily by indicating complex tasks that the robot executes autonomously (e.g., following a certain path).
One of the solutions provided by Aerostack2 to specify missions is a Python API (Application Programming Interface). This is a convenient method for users familiar with computer programming languages and provides high flexibility for formulating plans with complex control regimes. Aerostack2 provides an API with a set of functions to activate/deactivate behaviors or to directly send commands through motion reference messages. Listing 1 shows a simple example of a mission written in Python using the API provided by Aerostack2.
As an alternative to the Python API, the developer may use other mechanisms to specify mission plans. For example, mission plans may be formulated using behavior trees [18]. The modularity and hierarchical structure of behavior trees are useful during mission plan design, but also during mission execution, thanks to graphical monitoring.
Fig. 5: Characteristics of the robot behavior used in Aerostack2 to implement robot skills following a distributed and modular approach.
Aerostack2 incorporates custom behavior tree nodes that can activate and deactivate robot behaviors. Fig. 6 shows a mission sketched in a behavior tree.
```
import rclpy
from as2_python_api.drone_interface import DroneInterface

rclpy.init()

# Initialize drone interface
drone_interface = DroneInterface("drone_sim_0")

# Arm and change into offboard mode
drone_interface.arm()
drone_interface.offboard()

# Take off
drone_interface.takeoff(height=3.0, speed=1.0)

# Go to
drone_interface.go_to(0, 5, 3, speed=1.0)

# Follow path
drone_interface.follow_path(
    [[5, 0, 3],
     [5, 5, 3],
     [0, 0, 3]],
    speed=1.0)

# Land
drone_interface.land(speed=0.5)

# Exit
drone_interface.shutdown()
rclpy.shutdown()
exit(0)
```
Listing 1: Python Mission Example
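Complementing Listing 1, the following schematic sketch (hypothetical names, not the actual Aerostack2 node set) illustrates how behavior-tree nodes can wrap behavior activations and propagate success or failure upward:

```
# Schematic behavior-tree sketch (hypothetical names): a sequence node ticks
# its children in order; each leaf activates a behavior and reports the result.
class ActivateBehavior:
    def __init__(self, behavior, goal):
        self.behavior, self.goal = behavior, goal

    def tick(self):
        return self.behavior.run(self.goal)  # True on success, False on failure


class Sequence:
    def __init__(self, children):
        self.children = children

    def tick(self):
        return all(child.tick() for child in self.children)


class StubBehavior:
    """Stand-in for a real behavior client, for illustration only."""

    def __init__(self, name):
        self.name = name

    def run(self, goal):
        print(f'{self.name} -> {goal}')
        return True


mission = Sequence([
    ActivateBehavior(StubBehavior('takeoff'), {'height': 3.0}),
    ActivateBehavior(StubBehavior('follow_path'), {'waypoints': [[5, 0, 3], [5, 5, 3]]}),
    ActivateBehavior(StubBehavior('land'), {'speed': 0.5}),
])
assert mission.tick()
```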
It is important to note that, using these previous methods to specify mission plans, the developer has to take into account additional operational aspects to coordinate the execution considering the relationships between concurrent behaviors (e.g., incompatibility or dependency). Such coordination mechanisms may be difficult to design and implement in complex autonomous robots. In these situations, additional solutions can be used based on automatic coordination of behaviors [19].
In addition to the tools mentioned for the specification of the mission plan, Aerostack2 also provides other tools to monitor execution. This includes two user interface tools that use information related to basic aerial robotic functions: (1) an alphanumeric viewer to monitor the state of specific variables (e.g., sensor measurements, values corresponding to state estimation, references for controllers) and (2) an interface for keyboard teleoperation which can be used by the user to operate manually an aerial robot with the help of the keyboard.
The alphanumeric viewer is a powerful tool to get an overview of the system's operation, being very useful for research and development of framework modules.
The keyboard teleoperation, in turn, is a useful tool to manipulate the drone in a simple way, sending position and speed commands, which allows one to check the system behavior or take control when the autonomous logic fails.
In this category of components, Aerostack2 also provides a graphical user interface for using the software framework through a web-based application. This type of tool is very convenient to facilitate rapid planning of a mission for one or several drones using graphical resources (e.g., a geographic map and graphical references), which can be done online or offline, allowing repeatability of missions, as shown in Fig. 8. This user interface is also useful to show graphically details of the mission execution, allowing its monitoring and modification in real time.
The schematic of this interface is shown in Fig. 7, where many users can connect through different devices and communicate with the agents through the network. This architecture provides the ability to plan and supervise missions from different devices, making the system robust and powerful to use. In addition, since it is a web interface, it can be used from any device, increasing its accessibility.
Fig. 6: Mission plan defined via a behavior tree.
Fig. 7: Aerostack2 Web-GUI system overview
## IV Experiments & Discussion
This section presents some experiments that show the capabilities of Aerostack2 performing drone swarming missions.
### _Sim2Real_
This experiment studies the ease of moving from an experiment performed in a simulation environment to the real world. In this scenario, two drones have to cross two gates in a coordinated way.
The simulation was performed using Gazebo Ignition. The mission was planned using the Python API of Aerostack2. For this mission, just the basic behaviors are used: platform behaviors for offboarding and arming, and motion behaviors for taking off, flying, and landing. The localization of the drones and the gates is provided by the ground truth of the simulator, and for control, a PID controller with trajectory references is used.
The real experiment was carried out in the CAR Robotics Arena, an area of 60 m² with a surrounding safety net. An Optitrack motion capture (_mocap_) system provides the position of drones and gates within the capture area. The UAVs used for this experiment were two Bitcraze Crazyflie 2.1 with IR markers for mocap localization. Due to the limited payload of these micro-UAVs, the computing was performed by the ground station computer.
Table IV shows a comparison between the components used in simulation and real experiments. This shows that it
Fig. 11: Trajectories performed by the drones during the gate crossing experiments.
Fig. 8: Aerostack2 Web-GUI Mission Planning view.
Fig. 10: Gate crossing real environment.
Fig. 9: Gate crossing simulation environment.
is only needed to change the platform and the state estimation component (in this case, a plugin inside of it) to translate the experiment from simulation to the real world.
### _Heterogeneous swarm_
In this experiment, two different platforms are used at the same time to perform a cooperative mission. These platforms, shown in Fig. 12, were:
* Pixhawk F450: autopilot PX4 Pixhawk 4 mini on DJI F450 frame with Nvidia Jetson Xavier NX for onboard computing, GPS, and Rangefinder as an altimeter.
* DJI M210: DJI Matrice 210 RTK v2 with Nvidia Jetson AGX Xavier for onboard computing.
This scenario represents the use of a swarm consisting of two drones to perform a simple aerial inspection. The mission was planned using the Web GUI, generating a path of GPS waypoints for both drones from an aerial image of the flying area. Once the mission was generated, both drones took off at the same time, and each one traveled along the path, landing when they finished.
In this case, the use of two different models of drones does not make a difference for Aerostack2 when planning the mission and controlling the drones. The computing was performed onboard the drones, each drone having an instance of Aerostack2, and they communicated their position to the ground station for monitoring, allowing a high level of autonomy.
### _Use cases_
Aerostack2 has been used for several applications:
* **Wind turbine inspection:** in the context of an industrial project that aims to increase the autonomy of drones in wind turbine inspection, some experiments have been carried out. Fig. 14 shows a drone controlled by Aerostack2 in a simulated environment following the movement of the wind turbine during inspection. Subsequently, this simulated wind turbine was used to give references to a real drone during a flight to simulate an inspection in the real world, as shown in Fig. 15. This shows the capability of Aerostack2 to be used to mix real-world flying with simulated information.
* **Photovoltaic plant inspection:** as part of the development of an industrial project, the simulation of an inspection of a photovoltaic plant was performed with a swarm of drones. This simulation was made using Gazebo and the photorealistic simulator Flightmare [20]. The mission was planned with the Web GUI, defining the limits of the inspection area. Then, a path
| **Module** | **Component** |
| --- | --- |
| Mission Control | Web GUI |
| Behaviors | MB, PB |
| State Estimation | GPS |
| Motion Control | PID controller |
| Platforms | DJI matrice, PX4 |

TABLE V: Aerostack2 modules used in the heterogeneous swarm experiment. (MB) Motion behaviors, (PB) Platform behaviors.
| **Module** | **Simulation** | **Real** |
| --- | --- | --- |
| Mission Control | Python API | Python API |
| Behaviors | MB, PB, TGB | MB, PB, TGB |
| State Estimation | Ground Truth | Motion Capture System |
| Motion Control | PID controller | PID controller |
| Platforms | Gazebo Simulator | Crazyflie |

TABLE IV: Comparison between Aerostack2 modules used in simulation and real experiments. (MB) Motion behaviors, (PB) Platform behaviors, (TGB) Trajectory Generator behavior.
Fig. 12: Aerial platform used during the heterogeneous swarm experiment.
Fig. 13: Screen of the UI web monitoring the swarm during an outdoor flight. DJI M210 in red and Pixhawk F450 in blue.
Fig. 14: Wind turbine inspection in simulation.
planning algorithm generates the path over the lines of panels that must be followed to inspect the area, based on the desired distance between survey points. After this, another algorithm is responsible for dividing the route into the individual paths that each drone must follow [21]. This shows that Aerostack2 can be used for the development of industrial projects using a swarm of drones. Fig. 16 shows images taken during this experiment.
* **Gate crossing:** the AI Nanocopter Challenge within IMAV'22 consists of flying one Bitcraze Crazyflie drone inside a small arena populated with obstacles and drone racing gates. Aerostack2 was able to pilot the UAV over WiFi and, using a color segmentation algorithm, pass through several gates, achieving the record for the highest number of gates passed among all participants. Fig. 17 (left) shows the drone during this competition.
* **Package delivery:** IMAV'22 also proposed a package delivery challenge consisting of flying a UAV carrying a package and delivering it to a certain place. The maximum weight of the UAV with the payload was 5 kg. This involved the construction of an ad hoc drone using a Pixhawk autopilot on an F450 frame with a package delivery device. In this scenario, Aerostack2 was able to plan the mission with the Web GUI and fly the drone in an outdoor environment using GPS references, as shown in Fig. 17 (right).
* **Maritime Operation:** the MBZIRC'23 competition proposes a maritime operation of search and evidence collection. This challenge consists of several drones searching for a specific boat in a vast maritime area, coordinated with an unmanned surface vehicle (USV) to take some packages from this boat. The main difficulties of this scenario were the complete lack of GPS signal, the communication distances, and the coordinated maneuvers required to transport the packages. For this phase, the scenario was recreated in a realistic simulation environment built on Gazebo. Aerostack2 served as the framework for the trials of this competition, showing its capability to fly several coordinated drones, use custom localization and communication algorithms, and implement neural networks for object detection, as shown in Fig. 18.
Fig. 16: Photovoltaic plant swarm simulated inspection.
Fig. 17: Using Aerostack2 during IMAV’22 in different challenges: AI nanocopter (left) and Package Delivery (right).
Fig. 15: Wind turbine simulated inspection in a real environment. References from a simulated moving wind turbine (left) were used to move a real drone (right).
The use of Aerostack2 in these applications shows the capability of this framework to be used in very different scenarios, with completely different requirements.
## V Conclusions and Future Work
This paper has presented an innovative Open Source framework for the development of aerial robot systems. The framework's key capabilities, including multirobot orientation, platform independence, versatility, and modularity, have been demonstrated through a series of experiments conducted in diverse scenarios, both in simulation and in the real world.
Future work will involve the continued development of new behaviors, metacontrol capabilities, and the expansion of the number of behaviors, controllers, state estimators, and platforms supported by the system through collaboration with the aerial robotics community.
## Acknowledgments
**Funding.** This work has been supported by the project COPILOT ref. Y2020/EMT6368 "Control, Monitoring and Operation of Photovoltaic Solar Power Plants by means of synergic integration of Drones, IoT and advanced communication technologies", funded by Madrid Government under the R&D Synergic Projects Program.
We acknowledge the support of the European Union through the Horizon Europe Project No. 101070254 CORESENSE.
This work has also been supported by the project INSERTION ref. ID2021-127648OBC32, "UAV Perception, Control and Operation in Harsh Environments", funded by the Spanish Ministry of Science and Innovation under the program "Projects for Knowledge Generating". The work of the third author is supported by the Grant FPU20/07198 of the Spanish Ministry for Universities. The work of the fifth author is supported by the Spanish Ministry of Science and Innovation under its Program for Technical Assistants PTA2021-020671.
**Data and Materials.** All materials are open-source and accessible at [https://github.com/aerostack2/aerostack2](https://github.com/aerostack2/aerostack2) under the BSD-3-Clause license.
|
2310.20690 | A direct proof for the positive definiteness of four point metric spaces | We provide a direct and elementary proof for the fact that every four point
metric space is positive definite, which was first proved by Meckes based on
some embedding theorems of metric spaces. As an outcome of the direct proof, we
also provide a condition for the magnitude of a finite metric space to obey the
inclusion-exclusion principle with respect to a specific choice of subspaces. | Kiyonori Gomi | 2023-10-31T17:53:52Z | http://arxiv.org/abs/2310.20690v2 | # A direct proof for the positive definiteness of four point metric spaces
###### Abstract.
We provide a direct and elementary proof for the fact that every four point metric space is positive definite, which was first proved by Meckes based on some embedding theorems of metric spaces. As an outcome of the direct proof, we also provide a condition for the magnitude of a finite metric space to obey the inclusion-exclusion principle with respect to a specific choice of subspaces.
Key words and phrases: metric space, positive definite, magnitude, inclusion-exclusion principle. 2010 Mathematics Subject Classification: 51F99, 54E35.
###### Contents
* 1 Introduction
* 2 The proof
* 2.1 The positive definiteness
* 2.2 Preliminary
* 2.3 Proof
* 3 The inclusion-exclusion principle
* 3.1 Magnitude
* 3.2 A generalization of the key formula
* 3.3 The inclusion-exclusion principle
* 3.4 Comparison of conditions
## 1. Introduction
Let \(X\) be a finite set, and \(d\) a metric on \(X\). The finite metric space \((X,d)\) is said to be _positive definite_[5] if the _similarity matrix_ or the _zeta matrix_ of \((X,d)\)
\[\zeta_{X}=(e^{-d(i,j)})_{i,j\in X}\]
is positive definite as a symmetric matrix. This property stems from the study of the _magnitude_[5] of \((X,d)\). Intuitively, this is a numerical invariant which counts the number of points in \(X\) taking the effect of the metric \(d\). The magnitude of a positive definite metric space behaves nicely, and various conditions for the positive definiteness have been studied in [5, 7].
It is clear that the \(1\)-point metric space is positive definite. In view of Sylvester's criterion, it is also clear that every \(2\)-point metric space is positive definite, because
the determinant of the zeta matrix for \(X_{2}=\{1,2\}\) is
\[\det\zeta_{X_{2}}=\left|\begin{array}{cc}1&Z_{12}\\ Z_{21}&1\end{array}\right|=1-Z_{12}^{2},\]
where \(Z_{ij}=e^{-d(i,j)}\) satisfies \(1-Z_{ij}>0\) if \(i\neq j\). We can see that \(3\)-point metric spaces \(X_{3}=\{1,2,3\}\) are also positive definite in the same way using the following expression of the determinant [5] (Proposition 2.4.15)
\[\det\zeta_{X_{3}} =1-Z_{12}^{2}-Z_{13}^{2}-Z_{23}^{2}+2Z_{12}Z_{13}Z_{23}\] \[=(1-Z_{12})(1-Z_{13})(1-Z_{23})+(1-Z_{12})(Z_{12}-Z_{13}Z_{23})\] \[\quad+(1-Z_{13})(Z_{13}-Z_{12}Z_{23})+(1-Z_{23})(Z_{23}-Z_{12}Z_{13}).\]
Note that the triangle inequality is equivalent to \(Z_{ij}-Z_{ik}Z_{kj}\geq 0\) for \(i,j,k\in X\). There exists a \(5\)-point metric space which is not positive definite [5] (Example 2.2.7). Hence \(n\)-point metric spaces are generally not positive definite if \(n\geq 5\). For \(4\)-point metric spaces, positive definiteness was first established by Meckes [7] (Theorem 3.6 (4)), where the method of proof is to embed \(4\)-point metric spaces into a positive definite normed space.
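For the reader's convenience, the factorization of \(\det\zeta_{X_{3}}\) above is easy to confirm symbolically; the following short check (a sanity test only, not part of the argument) uses sympy:

```
# Symbolic sanity check of the 3-point determinant factorization (sympy).
import sympy as sp

Z12, Z13, Z23 = sp.symbols('Z12 Z13 Z23')
det = sp.Matrix([[1, Z12, Z13], [Z12, 1, Z23], [Z13, Z23, 1]]).det()
factored = ((1 - Z12) * (1 - Z13) * (1 - Z23)
            + (1 - Z12) * (Z12 - Z13 * Z23)
            + (1 - Z13) * (Z13 - Z12 * Z23)
            + (1 - Z23) * (Z23 - Z12 * Z13))
assert sp.expand(det - factored) == 0
```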
It is plausible that one can show the positive definiteness of \(4\)-point metric spaces more directly without invoking an embedding theorem. However, such a proof seems to be not yet available in the literature. Then the purpose of this paper is to provide such a direct and elementary proof.
The key to our proof is the formula for the determinant of the zeta matrix of the metric space \(X_{4}=\{1,2,3,4\}\) given by completing the square
\[\det\zeta_{X_{4}}=-(1-Z_{34}^{2})(Z_{12}-b_{0})^{2}+\frac{\Delta_{134}\Delta_{ 234}}{1-Z_{34}^{2}},\]
where \(b_{0}\) is given by
\[b_{0}=\frac{Z_{13}Z_{23}+Z_{14}Z_{24}-Z_{14}Z_{23}Z_{34}-Z_{13}Z_{24}Z_{34}}{ 1-Z_{34}^{2}},\]
and the determinants of the zeta matrices of the subspaces \(\{1,3,4\}\) and \(\{2,3,4\}\) are denoted by \(\Delta_{134}\) and \(\Delta_{234}\), respectively. Once the expression above is recognized, the elementary method can be applied to proving the positivity of \(\det\zeta_{X_{4}}\), which leads to the positive definiteness of \(X_{4}\).
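The key formula itself can be verified in the same spirit; a minimal symbolic check (again only a sanity test) is:

```
# Symbolic check of the completing-the-square formula for det(zeta_{X_4}).
import sympy as sp

Z12, Z13, Z14, Z23, Z24, Z34 = sp.symbols('Z12 Z13 Z14 Z23 Z24 Z34')
zeta = sp.Matrix([[1, Z12, Z13, Z14],
                  [Z12, 1, Z23, Z24],
                  [Z13, Z23, 1, Z34],
                  [Z14, Z24, Z34, 1]])
b0 = (Z13*Z23 + Z14*Z24 - Z14*Z23*Z34 - Z13*Z24*Z34) / (1 - Z34**2)
D134 = sp.Matrix([[1, Z13, Z14], [Z13, 1, Z34], [Z14, Z34, 1]]).det()
D234 = sp.Matrix([[1, Z23, Z24], [Z23, 1, Z34], [Z24, Z34, 1]]).det()
rhs = -(1 - Z34**2) * (Z12 - b0)**2 + D134 * D234 / (1 - Z34**2)
assert sp.cancel(zeta.det() - rhs) == 0
```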
The key formula of \(\det\zeta_{X_{4}}\) above seems to single out a particular metric on \(X_{4}\), namely, one satisfying the equation \(Z_{12}=b_{0}\). By a direct calculation, the magnitude \(\operatorname{Mag}(X_{4})\) of the metric space \(X_{4}\) subject to \(Z_{12}=b_{0}\) turns out to satisfy the so-called _inclusion-exclusion principle_ with respect to the subspaces \(A=\{1,3,4\}\) and \(B=\{2,3,4\}\):
\[\operatorname{Mag}(X_{4})=\operatorname{Mag}(A)+\operatorname{Mag}(B)- \operatorname{Mag}(A\cap B).\]
Furthermore, as will be established in Theorem 3.9, this can be generalized to the \(n\)-point metric space \(X_{n}=\{1,2,\ldots,n\}\) with \(n\geq 3\) and its subspaces \(A=\{1,3,4,\ldots,n\}\) and \(B=\{2,3,4,\ldots,n\}\). In addition, our condition \(Z_{12}=b_{0}\) is generally not covered by a condition for the inclusion-exclusion principle which has been widely known [4, 5] (See Definition 3.1).
It is natural to ask whether one can generalize the condition \(Z_{12}=b_{0}\) for the inclusion-exclusion principle so as to be applicable to any choice of subspaces. We hope this problem to be solved in a future work.
The paper is organized as follows: In SS2 is presented our direct proof for the positive definiteness of four point metric spaces, after a few notations are introduced for the sake of clarity. SS3 is devoted to the inclusion-exclusion principle under the condition \(Z_{12}=b_{0}\). We start with a brief review of the magnitude of a finite metric space and the inclusion-exclusion principle widely known so far. Then the key formula is generalized, and our version of the inclusion-exclusion principle is established. The converse implication is also studied. Finally, the conditions for the inclusion-exclusion principle are compared.
## 2. The proof
### The positive definiteness
The goal is to provide a direct proof of:
**Theorem 2.1** ([7]).: _Every \(4\)-point metric space is positive definite._
By Sylvester's criterion and the positive definiteness of \(1\)-, \(2\)- and \(3\)-point metric spaces, Theorem 2.1 will follow from:
**Theorem 2.2**.: _For any \(4\)-point metric space, the determinant of its zeta matrix takes a positive value._
We shall prove Theorem 2.2 in the remainder of this section.
### Preliminary
We realize a \(4\)-point set as \(X_{4}=\{1,2,3,4\}\). Let \(\mathcal{M}_{4}\) be the set of metrics \(d\) on \(X_{4}\). Through the map
\[d\mapsto(e^{-d(1,2)},e^{-d(1,3)},e^{-d(1,4)},e^{-d(2,3)},e^{-d(2,4)},e^{-d(3,4)}),\]
this set \(\mathcal{M}_{4}\) is bijective to the following subset in the open cube \((0,1)^{6}\subset\mathbb{R}^{6}\)
\[M_{4}=\big{\{}(Z_{ij})_{1\leq i<j\leq 4}\in(0,1)^{6}\big{|}\ Z_{ij}-Z_{ik}Z_{kj} \geq 0\ (i,j,k\ \text{distinct})\big{\}},\]
where, for convenience, we put \(Z_{ji}=Z_{ij}\) for \(i<j\). It should be noted that we are considering genuine \(4\)-point metric spaces, so that the degeneracies, such as \(d(1,2)=0\), are excluded. The determinant of the zeta matrix of \((X_{4},d)\) is identified with the following polynomial function \(\Delta\) in \(Z=(Z_{ij})_{1\leq i<j\leq 4}\in M_{4}\)
\[\Delta= \left|\begin{array}{cccc}1&Z_{12}&Z_{13}&Z_{14}\\ Z_{12}&1&Z_{23}&Z_{24}\\ Z_{13}&Z_{23}&1&Z_{34}\\ Z_{14}&Z_{24}&Z_{34}&1\end{array}\right|\] \[= 1-Z_{12}^{2}-Z_{13}^{2}-Z_{14}^{2}-Z_{23}^{2}-Z_{24}^{2}-Z_{34}^{2}\] \[\quad+2Z_{12}Z_{13}Z_{23}+2Z_{12}Z_{14}Z_{24}+2Z_{13}Z_{14}Z_{34}+2Z_{23}Z_{24}Z_{34}\] \[\quad-2Z_{13}Z_{14}Z_{23}Z_{24}-2Z_{12}Z_{14}Z_{23}Z_{34}-2Z_{12}Z_{13}Z_{24}Z_{34}\] \[\quad+Z_{12}^{2}Z_{34}^{2}+Z_{13}^{2}Z_{24}^{2}+Z_{14}^{2}Z_{23}^{2}.\]
Then Theorem 2.2 is equivalent to the statement that \(\Delta>0\) on \(M_{4}\).
We introduce some functions in \(Z=(Z_{ij})_{1\leq i<j\leq 4}\in M_{4}\). For any \(i,j,k\in X_{4}\) such that \(i<j<k\), we define a polynomial function \(\Delta_{ijk}\) on \(M_{4}\) by
\[\Delta_{ijk}=1-Z_{ij}^{2}-Z_{ik}^{2}-Z_{jk}^{2}+2Z_{ij}Z_{ik}Z_{jk}.\]
This corresponds to the determinant of the zeta matrix of the 3-point set \(\{i,j,k\}\subset X_{4}\) with the induced metric. Hence \(\Delta_{ijk}>0\) on \(M_{4}\). For \(Z\in M_{4}\), we define \(b_{\pm}\) by
\[b_{-}=\max\{Z_{13}Z_{23},Z_{14}Z_{24}\},\qquad\quad b_{+}=\min\bigg{\{}\frac{Z_ {23}}{Z_{13}},\frac{Z_{13}}{Z_{23}},\frac{Z_{24}}{Z_{14}},\frac{Z_{14}}{Z_{24}} \bigg{\}}.\]
If \(Z\in M_{4}\), then we have
\[0<b_{-}\leq Z_{12}\leq b_{+}\leq 1.\]
For \(Z\in M_{4}\), we also define \(b_{0}\) by
\[b_{0}=\frac{Z_{13}Z_{23}+Z_{14}Z_{24}-Z_{14}Z_{23}Z_{34}-Z_{13}Z_{24}Z_{34}}{1 -Z_{34}^{2}}.\]
**Lemma 2.3**.: _For \(Z\in M_{4}\), we have \(b_{-}\leq b_{0}\leq b_{+}\)._
Proof.: The inequality \(b_{-}\leq b_{0}\) follows from the following expressions of \(b_{0}\)
\[b_{0} =Z_{13}Z_{23}+\frac{(Z_{14}-Z_{13}Z_{34})(Z_{24}-Z_{23}Z_{34})}{1 -Z_{34}^{2}}\] \[=Z_{14}Z_{24}+\frac{(Z_{13}-Z_{14}Z_{34})(Z_{23}-Z_{24}Z_{34})}{1 -Z_{34}^{2}}.\]
To show \(b_{0}\leq b_{+}\), we note that the following four cases can occur:
\[b_{+}=\frac{Z_{23}}{Z_{13}},\qquad\qquad b_{+}=\frac{Z_{13}}{Z_{23}},\qquad \qquad b_{+}=\frac{Z_{24}}{Z_{14}},\qquad\qquad b_{+}=\frac{Z_{14}}{Z_{24}}.\]
These cases are equivalent by permutations of points on \(X_{4}\). For instance, the case \(b_{+}=Z_{23}/Z_{13}\) is transformed into the case \(b_{+}=Z_{13}/Z_{23}\) by the permutation (12) exchanging the points \(1\) and \(2\). Similarly, the case \(b_{+}=Z_{23}/Z_{13}\) is transformed to the cases \(b_{+}=Z_{24}/Z_{14}\) and \(b_{+}=Z_{14}/Z_{24}\) by the permutations (34) and (12)(34), respectively. These permutations leave the function \(b_{0}\) on \(M_{4}\) invariant. Thus, it suffices to prove that \(b_{0}\leq Z_{23}/Z_{13}\) for any \(Z\in M_{4}\).
Now, we make use of the following formula valid for all \(Z\in M_{4}\)
\[\frac{Z_{23}}{Z_{13}}-b_{0}=\frac{Z_{23}(1-Z_{13}^{2})(1-Z_{34}^{2})-Z_{13}(Z_ {14}-Z_{13}Z_{34})(Z_{24}-Z_{23}Z_{34})}{Z_{13}(1-Z_{34}^{2})}.\]
This is linear as a function in \(Z_{24}\). If \(Z\in M_{4}\), then \(Z_{24}\) is subject to
\[Z_{24}\leq\min\bigg{\{}\frac{Z_{23}}{Z_{34}},\frac{Z_{34}}{Z_{23}}\bigg{\}}.\]
We put \(\tilde{b}_{+}=\min\{Z_{23}/Z_{34},Z_{34}/Z_{23}\}\). With these preliminaries, we shall show that the function \(f=Z_{23}/Z_{13}-b_{0}\) in \(Z\) is non-negative on the following subset of the open cube \((0,1)^{6}\)
\[\widetilde{M}_{4}=\bigg{\{}(Z_{ij})_{1\leq i<j\leq 4}\in(0,1)^{6}\bigg{|} \begin{array}{c}Z_{14}\geq Z_{13}Z_{34},\ Z_{34}\geq Z_{13}Z_{14},\\ Z_{23}\geq Z_{24}Z_{34},\ Z_{34}\geq Z_{23}Z_{24}\end{array}\bigg{\}},\]
which contains \(M_{4}\) as a subset. If \(Z\in\widetilde{M}_{4}\), then \(Z_{24}\leq\tilde{b}_{+}\). If \(Z\in\widetilde{M}_{4}\) satisfies \(Z_{24}=Z_{23}/Z_{34}\), then the value of \(f\) at this \(Z\) is
\[f(Z_{24}=\tfrac{Z_{23}}{Z_{34}})=\frac{Z_{23}(Z_{34}-Z_{13}Z_{14})}{Z_{13}Z_{34 }}\geq 0.\]
If \(Z\in\widetilde{M}_{4}\) satisfies \(Z_{24}=Z_{34}/Z_{23}\), then the value of \(f\) at this \(Z\) is
\[f(Z_{24}=\tfrac{Z_{34}}{Z_{23}})=\frac{Z_{23}(Z_{34}-Z_{13}Z_{14})}{Z_{13}Z_{34 }}+\frac{(Z_{23}^{2}-Z_{34}^{2})(Z_{14}-Z_{13}Z_{34})}{Z_{23}Z_{34}(1-Z_{34}^{2 })}.\]
This is non-negative, because \(1>Z_{24}=Z_{34}/Z_{23}\) implies \(Z_{23}>Z_{34}\). It follows that if \(Z\in\widetilde{M}_{4}\) satisfies \(Z_{24}=\tilde{b}_{+}\), then the value of \(f\) at this \(Z\) is non-negative. On \(\widetilde{M}_{4}\), the function \(f\) is decreasing in \(Z_{24}\). Thus, for any \(Z\in\widetilde{M}_{4}\), we have
\[Z^{\prime}=(Z_{12},Z_{13},Z_{14},Z_{23},\tilde{b}_{+},Z_{34})\in\widetilde{M}_ {4},\]
for which \(f(Z)\geq f(Z^{\prime})\geq 0\). Hence \(f=Z_{23}/Z_{13}-b_{0}\geq 0\) on \(M_{4}\subset\widetilde{M}_{4}\).
### Proof
We now prove that \(\Delta>0\) on \(M_{4}\). The polynomial function \(\Delta\) is quadratic in the variable \(Z_{12}\). Completing the square, one has the expression
\[\Delta=-(1-Z_{34}^{2})(Z_{12}-b_{0})^{2}+\frac{\Delta_{134}\Delta_{234}}{1-Z_{ 34}^{2}}.\]
Our method of proof is a case-by-case estimate of \(\Delta\) by using the expression above. For \(Z\in M_{4}\), the following two cases can occur:
1. \((L)\): \(b_{-}\leq Z_{12}\leq b_{0}\),
2. \((R)\): \(b_{0}\leq Z_{12}\leq b_{+}\).
The case \((L)\) is further divided into two cases:
1. \((L_{1})\): \(b_{-}=Z_{13}Z_{23}\leq Z_{12}\leq b_{0}\),
2. \((L_{2})\): \(b_{-}=Z_{14}Z_{24}\leq Z_{12}\leq b_{0}\),
and the case \((R)\) is divided into four cases:
1. \((R_{1})\): \(b_{0}\leq Z_{12}\leq b_{+}=Z_{23}/Z_{13}\),
2. \((R_{2})\): \(b_{0}\leq Z_{12}\leq b_{+}=Z_{13}/Z_{23}\),
3. \((R_{3})\): \(b_{0}\leq Z_{12}\leq b_{+}=Z_{24}/Z_{14}\),
4. \((R_{4})\): \(b_{0}\leq Z_{12}\leq b_{+}=Z_{14}/Z_{24}\).
The determinant of the zeta matrix is clearly invariant under the permutations of the points on \(X_{4}\). Thus, for example, \(\Delta>0\) in the case \((L_{1})\) and that in \((L_{2})\) are equivalent, by the permutation \((34)\) exchanging the points \(3\) and \(4\) in \(X_{4}\). As a result, it suffices to prove the claims \(\Delta>0\) in the cases \((L_{1})\) and \((R_{1})\) only. These positivity claims are respectively shown in Proposition 2.7 and Proposition 2.8 below, by which the proof of Theorem 2.2 will be completed.
To show Proposition 2.7 and Proposition 2.8, we prepare for lemmas.
**Lemma 2.4**.: _If \(Z=(Z_{ij})\in M_{4}\) satisfies \(Z_{14}=\frac{Z_{34}}{Z_{13}}\) and \(Z_{23}=\frac{Z_{12}}{Z_{13}}\), then the value of \(\Delta\) at this \(Z\) is positive:_
\[\Delta({}_{Z_{14}=\frac{Z_{34}}{Z_{13}},Z_{23}=\frac{Z_{12}}{Z_{13}}})>0.\]
Proof.: Let \(M_{4}^{\prime}\subset M_{4}\) be the subset
\[M_{4}^{\prime}=\{Z\in M_{4}|\ Z_{13}Z_{14}=Z_{34},Z_{13}Z_{23}=Z_{12}\}.\]
The value of \(\Delta\) at \(Z\in M_{4}^{\prime}\) can be expressed as
\[\Delta({}_{Z_{14}=\frac{Z_{34}}{Z_{13}},Z_{23}=\frac{Z_{12}}{Z_{13}}})=(1-Z_{13}^{2})\bigg{(}-\bigg{(}Z_{24}-\frac{Z_{12}Z_{34}}{Z_{13}}\bigg{)}^{2}+\frac{(Z_{13}^{2}-Z_{12}^{2})(Z_{13}^{2}-Z_{34}^{2})}{Z_{13}^{4}}\bigg{)}.\]
For \(Z\in M_{4}^{\prime}\), we let \(b_{-}^{\prime}\) and \(b_{+}^{\prime}\) be given by
\[b_{-}^{\prime} =\max\{Z_{12}Z_{14},Z_{23}Z_{34}\}=\frac{Z_{12}Z_{34}}{Z_{13}},\] \[b_{+}^{\prime} =\min\left\{\frac{Z_{14}}{Z_{12}},\frac{Z_{12}}{Z_{14}},\frac{Z_{3 4}}{Z_{23}},\frac{Z_{23}}{Z_{34}}\right\}=\min\left\{\frac{Z_{34}}{Z_{12}Z_{13 }},\frac{Z_{12}Z_{13}}{Z_{34}},\frac{Z_{13}Z_{34}}{Z_{12}},\frac{Z_{12}}{Z_{13 }Z_{34}}\right\}\] \[=\min\left\{\frac{Z_{12}Z_{13}}{Z_{34}},\frac{Z_{13}Z_{34}}{Z_{12 }}\right\}.\]
If \(Z\in M_{4}^{\prime}\), then \(b_{-}^{\prime}\leq Z_{24}\leq b_{+}^{\prime}\). Hence \(\Delta({}_{Z_{14}=\frac{Z_{34}}{Z_{13}},Z_{23}=\frac{Z_{12}}{Z_{13}}})\) is decreasing as a function in \(Z_{24}\). As a result, for any \(Z\in M_{4}^{\prime}\), we have an element
\[Z^{\prime}=(Z_{12},Z_{13},Z_{14},Z_{23},b_{+}^{\prime},Z_{34})\in M_{4}^{ \prime},\]
for which we have
\[\Delta({}_{Z_{14}=\frac{Z_{34}}{Z_{13}},Z_{23}=\frac{Z_{12}}{Z_{13}}})=\Delta( Z)\geq\Delta(Z^{\prime})=\Delta({}_{Z_{14}=\frac{Z_{34}}{Z_{13}},Z_{23}=\frac{Z_{12}}{ Z_{13}},Z_{24}=b_{+}^{\prime}}).\]
Now, the proof will be completed by showing:
1. In the case that \(b_{+}^{\prime}=\frac{Z_{12}Z_{13}}{Z_{34}}\), we have \(\Delta({}_{Z_{14}=\frac{Z_{34}}{Z_{13}},Z_{23}=\frac{Z_{12}}{Z_{13}},Z_{24}=b_{+}^{\prime}})>0\).
2. In the case that \(b_{+}^{\prime}=\frac{Z_{13}Z_{34}}{Z_{12}}\), we have \(\Delta({}_{Z_{14}=\frac{Z_{34}}{Z_{13}},Z_{23}=\frac{Z_{12}}{Z_{13}},Z_{24}=b_{+}^{\prime}})>0\).
These two cases turn out to be equivalent, because the permutation \((13)(24)\) on \(X_{4}\) induces the following transformation on \(M_{4}^{\prime}\subset M_{4}\)
\[(Z_{12},Z_{13},Z_{14},Z_{23},Z_{24},Z_{34})\leftrightarrow(Z_{34},Z_{13},Z_{2 3},Z_{14},Z_{24},Z_{12}).\]
Hence it suffices to prove only (a): In the case that \(b_{+}^{\prime}=Z_{12}Z_{13}/Z_{34}\), we have
\[\Delta({}_{Z_{14}=\frac{Z_{34}}{Z_{13}},Z_{23}=\frac{Z_{12}}{Z_{13}},Z_{24}= \frac{Z_{12}Z_{13}}{Z_{34}}})\\ =\frac{(1-Z_{13}^{2})(Z_{13}^{2}-Z_{34}^{2})}{Z_{13}^{4}Z_{34}^{ 2}}(Z_{34}^{2}(Z_{13}^{2}-Z_{12}^{2})-Z_{12}^{2}Z_{13}^{2}(Z_{13}^{2}-Z_{34}^ {2})).\]
From \(Z_{34}/Z_{13}=Z_{14}<1\) and \(Z_{12}/Z_{13}=Z_{23}<1\), we respectively get
\[Z_{34}<Z_{13},\hskip 56.905512ptZ_{12}<Z_{13}.\]
From \(Z_{12}Z_{13}/Z_{34}=b_{+}^{\prime}\leq Z_{13}Z_{34}/Z_{12}\), we get \(Z_{12}^{2}\leq Z_{34}^{2}\). Using the expression
\[Z_{34}^{2}(Z_{13}^{2}-Z_{12}^{2})-Z_{12}^{2}Z_{13}^{2}(Z_{13}^{2}-Z_{34}^{2}) \\ =(Z_{34}^{2}-Z_{12}^{2})(Z_{13}^{2}-Z_{12}^{2}+Z_{12}^{2}Z_{13}^{ 2})+Z_{12}^{2}(1-Z_{13}^{2})(Z_{13}^{2}-Z_{12}^{2}),\]
we find that \(\Delta({}_{Z_{14}=\frac{Z_{34}}{Z_{13}},Z_{23}=\frac{Z_{12}}{Z_{13}},Z_{24}= \frac{Z_{12}Z_{13}}{Z_{34}}})>0\).
**Lemma 2.5**.: _If \(Z=(Z_{ij})\in M_{4}\) satisfies \(Z_{13}=\frac{Z_{12}}{Z_{23}}\) and \(Z_{14}=\frac{Z_{12}}{Z_{24}}\), then the value of \(\Delta\) at this \(Z\) is positive:_
\[\Delta({}_{Z_{13}=\frac{Z_{12}}{Z_{23}},Z_{14}=\frac{Z_{12}}{Z_{24}}})>0.\]
Proof.: To begin with, we notice that the inequality
\[Z_{12}<\min\{Z_{13},Z_{23},Z_{14},Z_{24}\}\]
holds true if \(Z\in M_{4}\) is subject to \(Z_{13}=Z_{12}/Z_{23}\) and \(Z_{14}=Z_{12}/Z_{24}\). We prove the present lemma by using the following expression valid for general \(Z\in M_{4}\)
\[\Delta=-(1-Z_{12}^{2})(Z_{34}-c_{0})^{2}+\frac{\Delta_{123}\Delta_{124}}{1-Z_{1 2}^{2}},\]
where \(c_{0}\) is defined by
\[c_{0}=\frac{Z_{13}Z_{14}+Z_{23}Z_{24}-Z_{12}Z_{13}Z_{24}-Z_{12}Z_{14}Z_{23}}{1-Z_{ 12}^{2}}.\]
We define \(c_{\pm}\) by
\[c_{-}=\max\{Z_{13}Z_{14},Z_{23}Z_{24}\},\qquad\quad c_{+}=\min\bigg{\{}\frac{Z_ {14}}{Z_{13}},\frac{Z_{13}}{Z_{14}},\frac{Z_{24}}{Z_{23}},\frac{Z_{23}}{Z_{24}} \bigg{\}}.\]
If \(Z\in M_{4}\), then \(c_{-}\leq Z_{34}\leq c_{+}\). If \(Z\in M_{4}\) satisfies \(Z_{13}=Z_{12}/Z_{23}\) and \(Z_{14}=Z_{12}/Z_{24}\), then we get
\[c_{-}=\max\bigg{\{}\frac{Z_{12}^{2}}{Z_{23}Z_{24}},Z_{23}Z_{24}\bigg{\}},\qquad \qquad c_{+}=\min\bigg{\{}\frac{Z_{24}}{Z_{23}},\frac{Z_{23}}{Z_{24}}\bigg{\}}.\]
Since \(\Delta\) is quadratic in \(Z_{34}\), if \(Z\in M_{4}\) satisfies \(c_{0}\leq Z_{34}\leq c_{+}\), then we have
\[Z^{\prime}=(Z_{12},Z_{13},Z_{14},Z_{23},Z_{24},c_{+})\in M_{4},\]
for which \(\Delta=\Delta(Z)\geq\Delta(Z^{\prime})=\Delta({}_{Z_{34}=c_{+}})\). Also, if \(Z\in M_{4}\) satisfies \(c_{-}\leq Z_{34}\leq c_{0}\), then \(\Delta\geq\Delta({}_{Z_{34}=c_{-}})\). Imposing the constraints, we have:
1. If \(Z\in M_{4}\) satisfies \(c_{0}\leq Z_{34}\leq c_{+}\), \(Z_{13}=Z_{12}/Z_{23}\) and \(Z_{14}=Z_{12}/Z_{24}\), then \[\Delta({}_{Z_{13}=\frac{Z_{12}}{Z_{23}},Z_{14}=\frac{Z_{12}}{Z_{24}}})\geq\Delta({}_{Z_{13}=\frac{Z_{12}}{Z_{23}},Z_{14}=\frac{Z_{12}}{Z_{24}},Z_{34}=c_{+}}).\]
2. If \(Z\in M_{4}\) satisfies \(c_{-}\leq Z_{34}\leq c_{0}\), \(Z_{13}=Z_{12}/Z_{23}\) and \(Z_{14}=Z_{12}/Z_{24}\), then \[\Delta({}_{Z_{13}=\frac{Z_{12}}{Z_{23}},Z_{14}=\frac{Z_{12}}{Z_{24}}})\geq\Delta({}_{Z_{13}=\frac{Z_{12}}{Z_{23}},Z_{14}=\frac{Z_{12}}{Z_{24}},Z_{34}=c_{-}}).\]
The proof will be completed by showing the positivity of the last terms.
(a) Suppose that \(Z\in M_{4}\) satisfies \(c_{0}\leq Z_{34}\leq c_{+}\), \(Z_{13}=Z_{12}/Z_{23}\) and \(Z_{14}=Z_{12}/Z_{24}\). Then there are two cases: \(c_{+}=Z_{24}/Z_{23}\) and \(c_{+}=Z_{23}/Z_{24}\). These two cases are equivalent by the permutation (34) of the points \(3\) and \(4\) in \(X_{4}\). Hence it suffices to study the case that \(c_{+}=Z_{24}/Z_{23}\). Then, we have
\[\Delta({}_{Z_{13}=\frac{Z_{12}}{Z_{23}},Z_{14}=\frac{Z_{12}}{Z_{24}},Z_{34}=c_{+}}) =\Delta({}_{Z_{13}=\frac{Z_{12}}{Z_{23}},Z_{14}=\frac{Z_{12}}{Z_{24}},Z_{34}=\frac{Z_{24}}{Z_{23}}})\] \[=\frac{(1-Z_{23}^{2})(Z_{24}^{2}-Z_{12}^{2})(Z_{23}^{2}-Z_{24}^{2})}{Z_{23}^{2}Z_{24}^{2}}>0,\]
because \(Z_{12}/Z_{24}=Z_{14}<1\) and \(Z_{24}/Z_{23}=Z_{34}<1\).
(b) Suppose that \(Z\in M_{4}\) satisfies \(c_{-}\leq Z_{34}\leq c_{0}\), \(Z_{13}=Z_{12}/Z_{23}\) and \(Z_{14}=Z_{12}/Z_{24}\). In the case that \(c_{-}=Z_{23}Z_{24}\), we have
\[\Delta({}_{Z_{13}=\frac{Z_{12}}{Z_{23}},Z_{14}=\frac{Z_{12}}{Z_{24}},Z_{34}=Z_{23}Z_{24}})\\ =\frac{(1-Z_{23}^{2})(1-Z_{24}^{2})}{Z_{23}^{2}Z_{24}^{2}}\bigg{(}Z_{23}^{2}Z_{24}^{2}-Z_{12}^{2}Z_{23}^{2}-Z_{12}^{2}Z_{24}^{2}+Z_{12}^{2}Z_{23}^{2}Z_{24}^{2}\bigg{)}.\]
From \(c_{-}=Z_{23}Z_{24}\), one has \(Z_{12}\leq Z_{23}Z_{24}\). By the expression
\[Z_{23}^{2}Z_{24}^{2}-Z_{12}^{2}Z_{23}^{2}-Z_{12}^{2}Z_{24}^{2}+Z_{12}^{2}Z_{23}^{2}Z_{24}^{2}\\ =(1-(1-Z_{23}^{2})(1-Z_{24}^{2}))(Z_{23}^{2}Z_{24}^{2}-Z_{12}^{2})+Z_{23}^{2}Z_{24}^{2}(1-Z_{23}^{2})(1-Z_{24}^{2}),\]
we see \(\Delta({}_{Z_{13}=\frac{Z_{12}}{Z_{23}},Z_{14}=\frac{Z_{12}}{Z_{24}},Z_{34}=Z_{23}Z_{24}})>0\). In the case that \(c_{-}=Z_{12}^{2}/(Z_{23}Z_{24})\), we have \(Z_{12}\geq Z_{23}Z_{24}\) from \(Z_{12}^{2}/(Z_{23}Z_{24})=c_{-}\geq Z_{23}Z_{24}\), and

\[\Delta({}_{Z_{13}=\frac{Z_{12}}{Z_{23}},Z_{14}=\frac{Z_{12}}{Z_{24}},Z_{34}=\frac{Z_{12}^{2}}{Z_{23}Z_{24}}}) =\frac{(Z_{23}^{2}-Z_{12}^{2})(Z_{24}^{2}-Z_{12}^{2})(1-Z_{23}^{2}-Z_{24}^{2}+Z_{12}^{2})}{Z_{23}^{2}Z_{24}^{2}}\] \[\geq\frac{(Z_{23}^{2}-Z_{12}^{2})(Z_{24}^{2}-Z_{12}^{2})(1-Z_{23}^{2}-Z_{24}^{2}+Z_{23}^{2}Z_{24}^{2})}{Z_{23}^{2}Z_{24}^{2}}\] \[=\frac{(Z_{23}^{2}-Z_{12}^{2})(Z_{24}^{2}-Z_{12}^{2})(1-Z_{23}^{2})(1-Z_{24}^{2})}{Z_{23}^{2}Z_{24}^{2}},\]
which is also positive.
**Lemma 2.6**.: _If \(Z=(Z_{ij})\in M_{4}\) satisfies \(Z_{12}=Z_{13}Z_{23}\), then the value of \(\Delta\) at this \(Z\) is positive:_
\[\Delta({}_{Z_{12}=Z_{13}Z_{23}})>0.\]

Proof.: We can express \(\Delta({}_{Z_{12}=Z_{13}Z_{23}})\) as follows

\[\Delta({}_{Z_{12}=Z_{13}Z_{23}})=-(1-Z_{23}^{2})(Z_{14}-Z_{13}Z_{34})^{2}+(1-Z_{13}^{2})\Delta_{234},\]
which can be thought of as a quadratic function in \(Z_{14}\). If \(Z\in M_{4}\), then
\[\max\{Z_{12}Z_{24},Z_{13}Z_{34}\}\leq Z_{14}\leq\min\bigg{\{}\frac{Z_{24}}{Z_{ 12}},\frac{Z_{12}}{Z_{24}},\frac{Z_{34}}{Z_{13}},\frac{Z_{13}}{Z_{34}}\bigg{\}}.\]
Using \(Z_{12}=Z_{13}Z_{23}\) and \(Z_{23}Z_{34}\leq Z_{24}\), one has
\[\min\bigg{\{}\frac{Z_{24}}{Z_{12}},\frac{Z_{12}}{Z_{24}},\frac{Z_{ 34}}{Z_{13}},\frac{Z_{13}}{Z_{34}}\bigg{\}} =\min\bigg{\{}\frac{Z_{24}}{Z_{13}Z_{23}},\frac{Z_{13}Z_{23}}{Z_{ 24}},\frac{Z_{34}}{Z_{13}},\frac{Z_{13}}{Z_{34}}\bigg{\}}\] \[=\min\bigg{\{}\frac{Z_{34}}{Z_{13}},\frac{Z_{13}Z_{23}}{Z_{24}} \bigg{\}}.\]
Put \(c_{+}=\min\{Z_{34}/Z_{13},Z_{13}Z_{23}/Z_{24}\}\). Then the above expression of \(\Delta({{\it z}}_{12}{=}Z_{13}Z_{23})\) leads to
\[\Delta({}_{Z_{12}=Z_{13}Z_{23}})\geq\Delta({}_{Z_{12}=Z_{13}Z_{23},Z_{14}=c_{+}}).\]
Now, the proof will be completed by showing the positivity of the last term. In the case that \(c_{+}=Z_{34}/Z_{13}\), we have
\[\Delta({}_{Z_{12}=Z_{13}Z_{23},Z_{14}=c_{+}})=\Delta({}_{Z_{12}=Z_{13}Z_{23},Z_{14}=\frac{Z_{34}}{Z_{13}}})=\Delta({}_{Z_{14}=\frac{Z_{34}}{Z_{13}},Z_{23}=\frac{Z_{12}}{Z_{13}}}).\]

This is positive by Lemma 2.4. In the case that \(c_{+}=Z_{13}Z_{23}/Z_{24}\), we have

\[\Delta({}_{Z_{12}=Z_{13}Z_{23},Z_{14}=c_{+}})=\Delta({}_{Z_{12}=Z_{13}Z_{23},Z_{14}=\frac{Z_{13}Z_{23}}{Z_{24}}})=\Delta({}_{Z_{13}=\frac{Z_{12}}{Z_{23}},Z_{14}=\frac{Z_{12}}{Z_{24}}}).\]
This is positive by Lemma 2.5.
**Proposition 2.7** (\(L_{1}\)).: _We have \(\Delta>0\) if \(Z=(Z_{ij})\in M_{4}\) satisfies_
\[b_{-}=Z_{13}Z_{23}\leq Z_{12}\leq b_{0}.\]
Proof.: Under the assumption of the proposition, \(\Delta\) is increasing as a function in \(Z_{12}\). Thus, if \(Z\in M_{4}\), then we have
\[Z^{\prime}=(Z_{13}Z_{23},Z_{13},Z_{14},Z_{23},Z_{24},Z_{34})\in M_{4},\]
for which
\[\Delta=\Delta(Z)\geq\Delta(Z^{\prime})=\Delta({}_{Z_{12}=Z_{13}Z_{23}}).\]
The last term is positive by Lemma 2.6.
**Proposition 2.8** (\(R_{1}\)).: _We have \(\Delta>0\) if \(Z=(Z_{ij})\in M_{4}\) satisfies_
\[b_{0}\leq Z_{12}\leq b_{+}=\frac{Z_{23}}{Z_{13}}.\]
Proof.: Under the present assumption, the function \(\Delta\) is decreasing in \(Z_{12}\). Thus, if \(Z\in M_{4}\) satisfies \(b_{0}\leq Z_{12}\leq b_{+}=Z_{23}/Z_{13}\), then we have
\[Z^{\prime}=(Z_{23}/Z_{13},Z_{13},Z_{14},Z_{23},Z_{24},Z_{34})\in M_{4},\]
for which we have
\[\Delta=\Delta(Z)\geq\Delta(Z^{\prime})=\Delta({}_{Z_{12}=\frac{Z_{23}}{Z_{13}}}).\]
Note that the permutation (13) exchanging the points \(1\) and \(3\) on \(X_{4}\) transforms the constraint \(Z_{23}=Z_{12}Z_{13}\) into \(Z_{12}=Z_{13}Z_{23}\). Thus, by Lemma 2.6, we see
\[\Delta({}_{Z_{12}=\frac{Z_{23}}{Z_{13}}})=\Delta({}_{Z_{23}=Z_{12}Z_{13}})>0,\]
and the proof is completed.
## 3. The inclusion-exclusion principle
### Magnitude
We here review the magnitude of a finite metric space, and its inclusion-exclusion principle [5].
Let \(X=(X,d)\) be a finite metric space. As in SS1, the zeta matrix of \((X,d)\) is defined by
\[\zeta_{X}=(e^{-d(i,j)})_{i,j\in X}=(Z_{ij})_{i,j\in X},\]
where \(Z_{ij}=e^{-d(i,j)}\). We write \(\Delta_{X}=\det\zeta_{X}\) for the determinant of the zeta matrix. A _weighting_ (or a _weight_ for short) on \(X\) is a vector \(\vec{w}_{X}\in\mathbb{R}^{n}\) such that \(\zeta_{X}\vec{w}_{X}=\vec{1}\), where \(n\) is the cardinality of \(X\), and \(\vec{1}\in\mathbb{R}^{n}\) denotes the vector whose entries are \(1\). In general, \(X\) may not have a weight. If \(\Delta_{X}\neq 0\), then \(\vec{w}_{X}=\zeta_{X}^{-1}\vec{1}\) is the unique weight on \(X\). When \(X\) admits a weight \(\vec{w}_{X}\), its magnitude \(\operatorname{Mag}(X)\) is defined as the sum of the entries of \(\vec{w}_{X}\). If \(\Delta_{X}\neq 0\), then \(\operatorname{Mag}(X)\) agrees with the sum of the entries of \(\zeta_{X}^{-1}\). Using the standard inner product \(\langle\,\ \rangle\) on \(\mathbb{R}^{n}\), we can express the magnitude as
\[\operatorname{Mag}(X)=\langle\vec{1},\vec{w}_{X}\rangle=\langle\vec{1},\zeta_ {X}^{-1}\vec{1}\rangle.\]
For example, we have
\[\operatorname{Mag}(\{1,2\})=\frac{2}{1+Z_{12}},\quad\operatorname{Mag}(\{1,2, 3\})=1+\frac{2(1-Z_{12})(1-Z_{13})(1-Z_{23})}{\Delta_{123}},\]
where \(\Delta_{123}=\Delta_{\{1,2,3\}}\).
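These closed formulas are easy to test numerically; the following small numpy sketch compares the \(3\)-point formula with the sum of the entries of \(\zeta_{X}^{-1}\):

```
# Numerical check: the magnitude equals the sum of the entries of the
# inverse zeta matrix, matching the closed 3-point formula above.
import numpy as np

d12, d13, d23 = 1.0, 1.5, 2.0  # an arbitrary 3-point metric
Z12, Z13, Z23 = np.exp(-d12), np.exp(-d13), np.exp(-d23)
zeta = np.array([[1, Z12, Z13], [Z12, 1, Z23], [Z13, Z23, 1]])

mag = np.linalg.inv(zeta).sum()
formula = 1 + 2 * (1 - Z12) * (1 - Z13) * (1 - Z23) / np.linalg.det(zeta)
assert np.isclose(mag, formula)
```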
For the magnitude of \(X\) to satisfy the inclusion-exclusion principle with respect to subspaces \(A,B\subset X\), a condition has been known [5]. For clarity, we reformulate the condition in [5] as follows:
**Definition 3.1**.: Let \(A,B\subset X\) be subspaces. We define conditions (C1) and (C2) as follows:
1. For any \(a\in A\) and \(b\in B\), there exists \(c\in A\cap B\) such that \[d(a,b)=d(a,c)+d(c,b).\]
2. Either of the following holds true:
* (C2\({}_{a}\)) Any \(a\in A\) admits \(\pi(a)\in A\cap B\) such that \[d(a,c)=d(a,\pi(a))+d(\pi(a),c)\] for all \(c\in A\cap B\).
* (C2\({}_{b}\)) Any \(b\in B\) admits \(\pi(b)\in A\cap B\) such that \[d(b,c)=d(b,\pi(b))+d(\pi(b),c)\] for all \(c\in A\cap B\).
The condition (C2\({}_{a}\)) is termed "\(A\) projects to \(A\cap B\)" in [4], and "\(A\cap B\) is gated in \(A\)" in [1, 2]. The condition "\(A\) projects to \(B\)" in [5] is equivalent to (C1) and (C2\({}_{a}\)).
Now, we state the inclusion-exclusion principle essentially due to Leinster:
**Theorem 3.2** ([4, 5]).: _Let \(A\) and \(B\) be subspaces of a finite metric space \(X\) such that \(A\cup B=X\) and \(A\cap B\neq\emptyset\). Suppose that:_
* \(\operatorname{Mag}(A)\) _and_ \(\operatorname{Mag}(B)\) _are defined; and_
* _(C1) and (C2) are satisfied._
_Then \(\operatorname{Mag}(A\cap B)\) and \(\operatorname{Mag}(X)\) are defined, and_

\[\operatorname{Mag}(X)=\operatorname{Mag}(A)+\operatorname{Mag}(B)-\operatorname{Mag}(A\cap B).\]
The statement above is not exactly the same as that in [5] (Proposition 2.3.2), but can be shown by applying arguments in [4, 5].
### A generalization of the key formula
Let us realize an \(n\)-point set as \(X_{n}=\{1,2,\ldots,n\}\), and consider a metric \(d\) on \(X_{n}\). The key formula for the direct proof in SS2 was the expression of \(\Delta_{X_{4}}\) given by completing the square. We here provide its generalization to \(\Delta_{X_{n}}\). For its description, we start with a lemma to be used repeatedly:
**Lemma 3.3**.: _Suppose \(n\geq 2\). For a real number \(x\in\mathbb{R}\), vectors \(\vec{a},\vec{b}\in\mathbb{R}^{n-1}\), and a symmetric matrix \(M\) of size \(n-1\), we consider a square matrix of size \(n\)_
\[\left(\begin{array}{cc}x&{}^{t}\vec{a}\\ \vec{b}&M\end{array}\right),\]
_where \({}^{t}(\ )\) stands for the transpose. Then the determinant of the above matrix is_
\[\left|\begin{array}{cc}x&{}^{t}\vec{a}\\ \vec{b}&M\end{array}\right|=x|M|-\langle\vec{a},\widetilde{M}\vec{b}\rangle,\]
_where \(\widetilde{M}\) is the adjugate matrix (the transpose of the cofactor) of \(M\). Thus, in particular, if \(|M|\neq 0\), then \(M^{-1}=\widetilde{M}/|M|\) and_
\[\left|\begin{array}{cc}x&{}^{t}\vec{a}\\ \vec{b}&M\end{array}\right|=|M|(x-\langle\vec{a},M^{-1}\vec{b}\rangle).\]
Proof.: By the linearity of the determinant in the first column, we get
\[\left|\begin{array}{cc}x&{}^{t}\vec{a}\\ \vec{b}&M\end{array}\right|=\left|\begin{array}{cc}x&{}^{t}\vec{a}\\ 0&M\end{array}\right|+\left|\begin{array}{cc}0&{}^{t}\vec{a}\\ \vec{b}&M\end{array}\right|=x|M|+\left|\begin{array}{cc}0&{}^{t}\vec{a}\\ \vec{b}&M\end{array}\right|.\]

In view of Cramer's formula, the last term is identified with \(-\langle\vec{a},\widetilde{M}\vec{b}\rangle\).
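A quick numerical illustration of the lemma (random symmetric \(M\), using numpy):

```
# Numerical check of Lemma 3.3: |[x, a^T; b, M]| = x|M| - <a, adj(M) b>.
import numpy as np

rng = np.random.default_rng(0)
n = 5
x = rng.standard_normal()
a, b = rng.standard_normal(n - 1), rng.standard_normal(n - 1)
M = rng.standard_normal((n - 1, n - 1))
M = (M + M.T) / 2  # symmetrize; generically invertible

full = np.block([[np.array([[x]]), a[None, :]], [b[:, None], M]])
adjM = np.linalg.det(M) * np.linalg.inv(M)  # adjugate of an invertible matrix
assert np.isclose(np.linalg.det(full), x * np.linalg.det(M) - a @ adjM @ b)
```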
**Definition 3.4**.: For \(n\geq 3\), let \(A,B\subset X_{n}=\{1,2,\ldots,n\}\) be the subspaces
\[A=\{1,3,4,\ldots,n\},\qquad\qquad\qquad B=\{2,3,4,\ldots,n\}.\]
Assuming \(\Delta_{A\cap B}\neq 0\), we define \(b_{-}\) and \(b_{0}\) by
\[b_{-} =\max\{Z_{1j}Z_{2j}|\ j=3,4,\ldots,n\},\] \[b_{0} =-\frac{1}{\Delta_{A\cap B}}\begin{vmatrix}0&{}^{t}\vec{a}\\ \vec{b}&\zeta_{A\cap B}\end{vmatrix}=-\frac{1}{\Delta_{A\cap B}}\begin{vmatrix} 0&Z_{13}&Z_{14}&Z_{15}&\cdots&Z_{1n}\\ Z_{23}&1&Z_{34}&Z_{35}&\cdots&Z_{3n}\\ Z_{24}&Z_{34}&1&Z_{45}&\cdots&Z_{4n}\\ Z_{25}&Z_{35}&Z_{45}&1&\cdots&Z_{5n}\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\ Z_{2n}&Z_{3n}&Z_{4n}&Z_{5n}&\cdots&1\end{vmatrix}\]
where vectors \(\vec{a},\vec{b}\in\mathbb{R}^{n-2}\) are given by
\[{}^{t}\vec{a}=(Z_{13},Z_{14},\ldots,Z_{1n}),\qquad\qquad\qquad{}^{t}\vec{b}= (Z_{23},Z_{24},\ldots,Z_{2n}).\]
By Lemma 3.3, we have
\[\Delta_{A}=\begin{vmatrix}1&{}^{t}\vec{a}\\ \vec{a}&\zeta_{A\cap B}\end{vmatrix}=\Delta_{A\cap B}-\langle\vec{a},\widetilde{\zeta}_{A\cap B}\vec{a}\rangle=\Delta_{A\cap B}(1-\langle\vec{a},\zeta_{A\cap B}^{-1}\vec{a}\rangle),\]
where \(\Delta_{A\cap B}\neq 0\) is assumed for the last equality. Replacing \(\vec{a}\) in the above with \(\vec{b}\), we get the corresponding formula for \(\Delta_{B}\). We also have
\[b_{0}=\langle\vec{a},\zeta_{A\cap B}^{-1}\vec{b}\rangle.\]
**Lemma 3.5**.: _In the setup of Definition 3.4, we have_

\[\widetilde{\zeta}_{B}=\left(\begin{array}{cc}\Delta_{A\cap B}&-\Delta_{A\cap B}\,{}^{t}(\zeta_{A\cap B}^{-1}\vec{b})\\ -\Delta_{A\cap B}\,\zeta_{A\cap B}^{-1}\vec{b}&\Delta_{B}\,\zeta_{A\cap B}^{-1}+\Delta_{A\cap B}\,(\zeta_{A\cap B}^{-1}\vec{b})\,{}^{t}(\zeta_{A\cap B}^{-1}\vec{b})\end{array}\right).\]
Proof.: Using \(x\in\mathbb{R}\), \(\vec{y}\in\mathbb{R}^{n-2}\) and a symmetric matrix \(N\) of size \(n-2\), we can express \(\widetilde{\zeta}_{B}\) as
\[\widetilde{\zeta}_{B}=\left(\begin{array}{cc}x&{}^{t}\vec{y}\\ \vec{y}&N\end{array}\right).\]
The adjugate of \(\zeta_{B}\) is subject to \(\zeta_{B}\widetilde{\zeta}_{B}=\Delta_{B}E\), where \(E\) is the identity matrix. Under the assumption \(\Delta_{A\cap B}\neq 0\), this equation in \(x,\vec{y},N\) is uniquely solved, yielding the expression in the statement.
**Proposition 3.6**.: _In the setup of Definition 3.4, we have_
\[\Delta_{X_{n}}=-\Delta_{A\cap B}(Z_{12}-b_{0})^{2}+\frac{\Delta_{A}\Delta_{B} }{\Delta_{A\cap B}}.\]
Proof.: By definition, we have
\[\Delta_{X_{n}}=\begin{vmatrix}1&Z_{12}&{}^{t}\vec{a}\\ Z_{12}&1&{}^{t}\vec{b}\\ \vec{a}&\vec{b}&\zeta_{A\cap B}\end{vmatrix}\]
Taking the derivative with respect to \(Z_{12}\), we readily see
\[\Delta_{X_{n}}=-\Delta_{A\cap B}(Z_{12}-b_{0})^{2}+\Delta_{A\cap B}b_{0}^{2} +\Delta(Z_{12}=0).\]
Therefore the proof will be completed by showing that \(\Delta_{A\cap B}b_{0}^{2}+\Delta(_{Z_{12}=0})\) agrees with \(\Delta_{A}\Delta_{B}/\Delta_{A\cap B}\). Lemma 3.3 leads to
\[\Delta(_{Z_{12}=0})=\left|\begin{array}{ccc}1&0&\overset{t}{a}\\ 0&1&\overset{t}{b}\\ \vec{a}&\vec{b}&\zeta_{A\cap B}\end{array}\right|=\Delta_{B}-\big{\langle} \left(\begin{array}{c}0\\ \vec{a}\end{array}\right),\widetilde{\zeta}_{B}\left(\begin{array}{c}0\\ \vec{a}\end{array}\right)\big{\rangle}.\]
Lemma 3.5 then allows us to have
\[\big{\langle}\left(\begin{array}{c}0\\ \vec{a}\end{array}\right),\widetilde{\zeta}_{B}\left(\begin{array}{c}0\\ \vec{a}\end{array}\right)\big{\rangle} =\big{\langle}\vec{a},\Delta_{B}\zeta_{A\cap B}^{-1}\vec{a}+\Delta_{A\cap B}(\zeta_{A\cap B}^{-1}\vec{b})\,{}^{t}(\zeta_{A\cap B}^{-1}\vec{b})\,\vec{a}\big{\rangle}\] \[=\Delta_{B}\langle\vec{a},\zeta_{A\cap B}^{-1}\vec{a}\rangle+\Delta_{A\cap B}\big{\langle}\vec{a},\langle\zeta_{A\cap B}^{-1}\vec{b},\vec{a}\rangle\,\zeta_{A\cap B}^{-1}\vec{b}\big{\rangle}\] \[=\Delta_{B}\bigg{(}1-\frac{\Delta_{A}}{\Delta_{A\cap B}}\bigg{)}+\Delta_{A\cap B}\langle\zeta_{A\cap B}^{-1}\vec{b},\vec{a}\rangle\langle\vec{a},\zeta_{A\cap B}^{-1}\vec{b}\rangle\] \[=\Delta_{B}-\frac{\Delta_{A}\Delta_{B}}{\Delta_{A\cap B}}+\Delta_{A\cap B}b_{0}^{2}.\]
Hence \(\Delta(_{Z_{12}=0})=\Delta_{A}\Delta_{B}/\Delta_{A\cap B}-\Delta_{A\cap B}b_{0 }^{2}\), and the proof is completed.
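A numerical illustration of Proposition 3.6 for \(n=5\), with a metric induced by random planar points so that the triangle inequality holds automatically:

```
# Numerical check of Proposition 3.6 for n = 5 (numpy).
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(size=(5, 2))
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
Z = np.exp(-D)

AB = Z[2:, 2:]                                  # zeta matrix of A ∩ B
a, b = Z[0, 2:], Z[1, 2:]
dAB = np.linalg.det(AB)
dA = np.linalg.det(np.delete(np.delete(Z, 1, 0), 1, 1))  # drop point 2
dB = np.linalg.det(np.delete(np.delete(Z, 0, 0), 0, 1))  # drop point 1
b0 = a @ np.linalg.solve(AB, b)

assert np.isclose(np.linalg.det(Z),
                  -dAB * (Z[0, 1] - b0) ** 2 + dA * dB / dAB)
```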
### The inclusion-exclusion principle
We now show the inclusion-exclusion principle under the condition \(Z_{12}=b_{0}\).
**Lemma 3.7**.: _For \(n\geq 3\), let \(A,B\subset X_{n}=\{1,2,\ldots,n\}\) be the subspaces_
\[A=\{1,3,4,\ldots,n\},\qquad\qquad\qquad B=\{2,3,4,\ldots,n\}.\]
_Suppose that \(\Delta_{B}\Delta_{A\cap B}\neq 0\). Then the magnitude of \(B\) is expressed as_
\[\operatorname{Mag}(B)=\operatorname{Mag}(A\cap B)+\frac{\Delta_{A\cap B}}{ \Delta_{B}}(1-\langle\vec{w}_{A\cap B},\vec{b}\rangle)^{2},\]
_where \(\vec{w}_{A\cap B}\) is the weight on \(A\cap B\). Also the weight \(\vec{w}_{B}\) on \(B\) is expressed as_
\[\vec{w}_{B}=\left(\begin{array}{c}\beta\\ \vec{w}_{B}^{\prime}\end{array}\right),\qquad\qquad\qquad\left\{\begin{array}[] {l}\beta=\frac{\Delta_{A\cap B}}{\Delta_{B}}(1-\langle\vec{b},\vec{w}_{A\cap B }\rangle),\\ \vec{w}_{B}^{\prime}=\vec{w}_{A\cap B}-\beta\zeta_{A\cap B}^{-1}\vec{b}.\end{array}\right.\]
Proof.: The expressions directly follow from Lemma 3.5.
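The expression for \(\operatorname{Mag}(B)\) in Lemma 3.7 can be tested in the same way:

```
# Numerical check of Lemma 3.7 (numpy).
import numpy as np

rng = np.random.default_rng(2)
pts = rng.uniform(size=(5, 2))
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
Z = np.exp(-D)

B = Z[1:, 1:]                                   # B = {2, 3, ..., n}
AB = Z[2:, 2:]                                  # A ∩ B = {3, ..., n}
b = Z[1, 2:]
w = np.linalg.solve(AB, np.ones(len(AB)))       # weight on A ∩ B
mag = lambda M: np.linalg.inv(M).sum()

rhs = mag(AB) + np.linalg.det(AB) / np.linalg.det(B) * (1 - w @ b) ** 2
assert np.isclose(mag(B), rhs)
```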
**Lemma 3.8**.: _For \(n\geq 3\), let \(A,B\subset X_{n}=\{1,2,\ldots,n\}\) be the subspaces_
\[A=\{1,3,4,\ldots,n\},\qquad\qquad\qquad B=\{2,3,4,\ldots,n\}.\]
_Suppose \(\Delta_{X_{n}}\Delta_{A}\Delta_{B}\Delta_{A\cap B}\neq 0\), and define \(\alpha,\beta\in\mathbb{R}\) by_
\[\alpha=\frac{\Delta_{A\cap B}}{\Delta_{A}}(1-\langle\vec{a},\vec{w}_{A\cap B}\rangle),\qquad\quad\beta=\frac{\Delta_{A\cap B}}{\Delta_{B}}(1-\langle\vec{b},\vec{w}_{A\cap B}\rangle).\]
_Then we have_
\[\operatorname{Mag}(X_{n})-\operatorname{Mag}(A)-\operatorname{Mag }(B)+\operatorname{Mag}(A\cap B)\] \[=\frac{b_{0}-Z_{12}}{\Delta_{X_{n}}}\bigg{(}(b_{0}-Z_{12})\Delta_ {A}\alpha^{2}+(b_{0}-Z_{12})\Delta_{B}\beta^{2}+2\frac{\Delta_{A}\Delta_{B}}{ \Delta_{A\cap B}}\alpha\beta\bigg{)}.\]
Proof.: By Lemma 3.5, we can express the inverse of \(\zeta_{X_{n}}\) as
\[\zeta_{X_{n}}^{-1}=\left(\begin{array}{ccc}x&s&\overset{t}{\vec{p}}\\ s&y&\overset{t}{\vec{q}}\\ \vec{p}&\vec{q}&N\end{array}\right),\]
where \(x,y,s\in\mathbb{R}\) are
\[x =\frac{\Delta_{B}}{\Delta_{X_{n}}}, y =\frac{\Delta_{A}}{\Delta_{X_{n}}}, s =-(Z_{12}-b_{0})\frac{\Delta_{A\cap B}}{\Delta_{X_{n}}},\]
the vectors \(\vec{p},\vec{q}\in\mathbb{R}^{n-2}\) and the symmetric matrix \(N\) of size \(n-2\) are
\[\vec{p} =-\frac{\Delta_{B}}{\Delta_{X_{n}}}\zeta_{A\cap B}^{-1}\vec{a}+(Z_ {12}-b_{0})\frac{\Delta_{A\cap B}}{\Delta_{X_{n}}}\zeta_{A\cap B}^{-1}\vec{b},\] \[\vec{q} =(Z_{12}-b_{0})\frac{\Delta_{A\cap B}}{\Delta_{X_{n}}}\zeta_{A \cap B}^{-1}\vec{a}-\frac{\Delta_{A}}{\Delta_{X_{n}}}\zeta_{A\cap B}^{-1}\vec{b},\] \[N =\zeta_{A\cap B}^{-1}+\frac{\Delta_{B}}{\Delta_{X_{n}}}(\zeta_{A \cap B}^{-1}\vec{a})^{t}(\zeta_{A\cap B}^{-1}\vec{a})+\frac{\Delta_{A}}{ \Delta_{X_{n}}}(\zeta_{A\cap B}^{-1}\vec{b})^{t}(\zeta_{A\cap B}^{-1}\vec{b})\] \[\quad-(Z_{12}-b_{0})\frac{\Delta_{A\cap B}}{\Delta_{X_{n}}}\big{(} (\zeta_{A\cap B}^{-1}\vec{a})^{t}(\zeta_{A\cap B}^{-1}\vec{b})+(\zeta_{A\cap B }^{-1}\vec{b})^{t}(\zeta_{A\cap B}^{-1}\vec{a})\big{)}.\]
Using the above expression and Lemma 3.7, we have
\[\operatorname{Mag}(X_{n}) =\operatorname{Mag}(A\cap B)+\frac{\Delta_{A}\Delta_{B}}{\Delta_{ X_{n}}\Delta_{A\cap B}}(\operatorname{Mag}(A)+\operatorname{Mag}(B)-2 \operatorname{Mag}(A\cap B))\] \[\quad-2(Z_{12}-b_{0})\frac{\Delta_{A\cap B}}{\Delta_{X_{n}}}(1- \langle\vec{w}_{A\cap B},\vec{a}\rangle)(1-\langle\vec{w}_{A\cap B},\vec{b} \rangle).\]
This formula and Proposition 3.6 lead to
\[\operatorname{Mag}(X_{n}) =\operatorname{Mag}(A)+\operatorname{Mag}(B)-\operatorname{Mag}( A\cap B)\] \[\quad+(Z_{12}-b_{0})^{2}\frac{\Delta_{A\cap B}}{\Delta_{X_{n}}}( \operatorname{Mag}(A)+\operatorname{Mag}(B)-2\operatorname{Mag}(A\cap B))\] \[\quad-2(Z_{12}-b_{0})\frac{\Delta_{A\cap B}}{\Delta_{X_{n}}}(1- \langle\vec{w}_{A\cap B},\vec{a}\rangle)(1-\langle\vec{w}_{A\cap B},\vec{b} \rangle).\]
Using Lemma 3.7 again, we complete the proof.
**Theorem 3.9**.: _For \(n\geq 3\), let \(A,B\subset X_{n}=\{1,2,\ldots,n\}\) be the subspaces_
\[A =\{1,3,4,\ldots,n\}, B =\{2,3,4,\ldots,n\}.\]
_Suppose that:_
* \(\Delta_{A}\Delta_{B}\Delta_{A\cap B}\neq 0\)_, and_
* \(Z_{12}=b_{0}\)_._
_Then \(\operatorname{Mag}(X_{n})\) is defined, and_
\[\operatorname{Mag}(X_{n})=\operatorname{Mag}(A)+\operatorname{Mag}(B)- \operatorname{Mag}(A\cap B).\]
Proof.: By Proposition 3.6, we have \(\Delta_{X_{n}}\neq 0\), and the magnitude of \(X_{n}\) is defined. Then Lemma 3.8 completes the proof.
Under some assumptions, we can also show the converse of Theorem 3.9:
**Theorem 3.10**.: _For \(n\geq 3\), let \(A,B\subset X_{n}=\{1,2,\ldots,n\}\) be the subspaces_
\[A =\{1,3,4,\ldots,n\}, B =\{2,3,4,\ldots,n\}.\]
_Suppose \(\Delta_{X_{n}}\Delta_{A}\Delta_{B}\Delta_{A\cap B}\neq 0\), and define \(\delta\in\mathbb{R}\) by_
\[\delta=\operatorname{Mag}(X_{n})-\operatorname{Mag}(A)-\operatorname{Mag}(B)+ \operatorname{Mag}(A\cap B).\]
* _In the case that_ \(n=3\)_,_ \(\delta=0\) _implies_ \(Z_{12}=b_{0}\)_._
* _In the case that_ \(n\geq 4\)_, we assume_
1. \(X_{n}\) _is positive definite;_
2. \(A\) _and_ \(B\) _are positive weighting; and_
3. \(Z_{12}\leq b_{0}\)_._
_Then \(\delta=0\) implies \(Z_{12}=b_{0}\)._
Note that a metric space is said to be _positive weighting_ if it admits a weight whose components are positive [5]. It is known that metric spaces consisting of at most three points are positive weighting [5] (Proposition 2.4.15). As reviewed in §1, the metric space \(X_{n}\) is positive definite if \(n\leq 4\). Thus, in the case of \(n=4\), the assumptions (i) and (ii) are redundant.
Proof of Theorem 3.10.: (a) In the case that \(n=3\), we have an expression
\[\delta=-\frac{2(Z_{12}-b_{0})(\Delta_{X_{3}}-(1-Z_{12})(Z_{12}-Z_{13}Z_{23}))}{ (1+Z_{12})(1+Z_{13})\Delta_{X_{3}}}.\]
Then \(\delta=0\) implies \(Z_{12}=b_{0}\), since the following factor is positive:
\[\Delta_{X_{3}}-(1-Z_{12})(Z_{12}-Z_{13}Z_{23})\] \[=(1-Z_{12})(1-Z_{13})(1-Z_{23})\] \[\quad+(1-Z_{13})(Z_{13}-Z_{12}Z_{23})+(1-Z_{23})(Z_{23}-Z_{12}Z_{1 3}).\]
(b) In general, if a finite metric space is positive definite, then so is its subspace [5] (Lemma 2.4.2). Hence \(\Delta_{A}\), \(\Delta_{B}\) and \(\Delta_{A\cap B}\) are positive by (i). By Lemma 3.7 and (ii), we also see that \(\alpha\) and \(\beta\) are positive. Thus, further assuming (iii), we get the positivity of the factor
\[(b_{0}-Z_{12})\Delta_{A}\alpha^{2}+(b_{0}-Z_{12})\Delta_{B}\beta^{2}+2\frac{ \Delta_{A}\Delta_{B}}{\Delta_{A\cap B}}\alpha\beta\]
in the formula of \(\delta\) in Lemma 3.8. Therefore \(\delta=0\) implies \(Z_{12}=b_{0}\).
_Remark 3.11_.: Suppose \(n\geq 2\). For \(i=0,\ldots,n-1\), we define a subspace \(X^{(i)}\) of the \(n\)-point metric space \(X_{n}=\{1,2,\ldots,n\}\) by \(X^{(i)}=\{i+1,i+2,\ldots,n\}\). For \(i=1,\ldots,n-1\), we also define a vector \(\vec{x}_{i}\in\mathbb{R}^{n-i}\) by \({}^{t}\vec{x}_{i}=(Z_{ij})_{j=i+1}^{n}\). If \(\Delta_{X^{(i)}}\neq 0\) for \(i=0,1,\ldots,n-1\), then Lemma 3.7 leads to the following formula for the magnitude of \(X_{n}=X^{(0)}\):
\[\operatorname{Mag}(X^{(0)}) =\frac{\Delta_{X^{(1)}}}{\Delta_{X^{(0)}}}(1-\langle\vec{w}_{X^{( 1)}},\vec{x}_{1}\rangle)^{2}+\operatorname{Mag}(X^{(1)})\] \[=\sum_{i=1}^{n-1}\frac{\Delta_{X^{(i)}}}{\Delta_{X^{(i-1)}}}(1- \langle\vec{w}_{X^{(i)}},\vec{x}_{i}\rangle)^{2}+1,\]
where \(\vec{w}_{X^{(i)}}=\zeta_{X^{(i)}}^{-1}\vec{1}\in\mathbb{R}^{n-i}\) is the weight on \(X^{(i)}\). A consequence of the formula above is another proof for the inequalities \(\operatorname{Mag}(X)\geq\operatorname{Mag}(Y)\geq 1\) valid for any non-empty subspace \(Y\) in a positive definite finite metric space \(X\), which have been shown in [5] (Corollary 2.4.4, Corollary 2.4.5).
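As a sanity check, the telescoping formula above is easy to verify numerically. The following sketch (ours, not part of the original text) compares the sum with the direct evaluation \(\operatorname{Mag}(X)=\langle\vec{1},\zeta_{X}^{-1}\vec{1}\rangle\), taking \(Z_{ij}=e^{-d(i,j)}\), for a random Euclidean configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.normal(size=(6, 2))                        # a random 6-point subspace of R^2
d = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
Z = np.exp(-d)                                       # zeta_X with Z_ij = e^{-d(i,j)}

def magnitude(Z):
    """Mag(X) = <1, zeta^{-1} 1>, i.e. the sum of the weights."""
    return np.linalg.solve(Z, np.ones(len(Z))).sum()

n = len(Z)
total = 1.0                                          # Mag(X^{(n-1)}) = 1
for i in range(1, n):
    Zi = Z[i:, i:]                                   # zeta on X^{(i)} = {i+1, ..., n}
    w = np.linalg.solve(Zi, np.ones(n - i))          # weight on X^{(i)}
    x = Z[i - 1, i:]                                 # x_i = (Z_{ij}), j = i+1, ..., n
    total += (np.linalg.det(Zi) / np.linalg.det(Z[i - 1:, i - 1:])) * (1 - w @ x) ** 2
print(np.isclose(total, magnitude(Z)))               # True
```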
### Comparison of conditions
We compare the condition for the inclusion-exclusion principle in Theorem 3.9 with that in Theorem 3.2 applied to our choice of the subspaces.
**Lemma 3.12**.: _For \(n\geq 3\), let \(A\) and \(B\) be the following subspaces of \((X_{n},d)\)_
\[A=\{1,3,4,\ldots,n\},\qquad\qquad\qquad B=\{2,3,4,\ldots,n\}.\]
1. _(C1) is equivalent to_ \(b_{-}=Z_{12}\)_._
2. _Suppose_ \(\Delta_{A\cap B}\neq 0\)_. Then (C2) implies_ \(b_{-}=b_{0}\)_._
Proof.: The equivalence in (a) is clear, so we prove (b). Note that \(b_{-}\) and \(b_{0}\) are invariant under the permutation (12) exchanging \(1,2\in X_{n}\), while this permutation exchanges the conditions (C1\({}_{a}\)) and (C1\({}_{b}\)). Therefore it suffices to show that (C1\({}_{a}\)) implies \(b_{-}=b_{0}\). Note also that \(b_{-}\) and \(b_{0}\) are invariant under any permutation of \(A\cap B\). Thus, in (C1\({}_{a}\)), we can assume that \(\pi(1)=3\). In this case, (C1\({}_{a}\)) is equivalent to the triangle equalities \(Z_{1j}=Z_{13}Z_{3j}\) for \(4\leq j\leq n\). Then the triangle inequality \(Z_{23}\geq Z_{2j}Z_{3j}\) leads to
\[Z_{1j}Z_{2j}=Z_{13}Z_{3j}Z_{2j}\leq Z_{13}Z_{23}.\]
Therefore \(b_{-}=Z_{13}Z_{23}\). Now, noting the relation of vectors
\[(0,Z_{13},Z_{14},Z_{15},\dots,Z_{1n})=Z_{13}(0,1,Z_{34},Z_{35},\dots,Z_{3n}),\]
we can directly verify \(b_{0}=Z_{13}Z_{23}\).
In the case that \(n=3\), we always have \(b_{-}=b_{0}=Z_{13}Z_{23}\). Hence no difference arises. In the case that \(n=4\), we have \(b_{-}\leq b_{0}\) by Lemma 2.3. Therefore there exists a metric on \(X_{4}\) such that \(b_{-}<b_{0}=Z_{12}\). (An example is provided in Remark 3.15 below.) In this case, our inclusion-exclusion principle in Theorem 3.9 is not covered by that in Theorem 3.2.
_Remark 3.13_.: The converse of Lemma 3.12 (b) holds true if \(n=3,4\): This is clear in the case of \(n=3\). In the case of \(n=4\), we can see that \(b_{-}=b_{0}\) implies (C2) by using the expressions of \(b_{0}\) proving \(b_{-}\leq b_{0}\) in the proof of Lemma 2.3. In the case of \(n=5\), there exists an example such that the converse does not hold.
To describe the example, let \(S^{1}\subset\mathbb{R}^{2}\) be the unit circle centred at the origin. By the geodesic distance, \(S^{1}\) gives rise to a metric space. We write \(p(\theta)=(\cos\theta,\sin\theta)\in S^{1}\) for the point specified by \(\theta\in\mathbb{R}/2\pi\mathbb{Z}\). As in Figure 1 (left), let \(X_{5}=\{1,2,3,4,5\}\subset S^{1}\) be the subspace consisting of
\[1=p(0),\qquad 2=p(3\pi/4),\qquad 3=p(\pi/2),\qquad 4=p(-\pi/2),\qquad 5=p(\pi).\]
In the case of \(n=5\), we can generally express \(b_{0}\) as
\[b_{0}=Z_{13}Z_{23}+\frac{(Z_{14}-Z_{13}Z_{34})P+(Z_{15}-Z_{13}Z_{35})Q}{\Delta _{345}},\]
where \(P\) and \(Q\) are
\[P =(1-Z_{35}^{2})(Z_{24}-Z_{23}Z_{34})-(Z_{25}-Z_{23}Z_{35})(Z_{45}- Z_{34}Z_{35})\] \[=(1-Z_{35}^{2})(Z_{24}-Z_{25}Z_{45})-(Z_{23}-Z_{25}Z_{35})(Z_{34}- Z_{35}Z_{45}),\] \[Q =(1-Z_{34}^{2})(Z_{25}-Z_{23}Z_{35})-(Z_{24}-Z_{23}Z_{34})(Z_{45}- Z_{34}Z_{35})\] \[=(1-Z_{34}^{2})(Z_{25}-Z_{24}Z_{45})-(Z_{23}-Z_{24}Z_{34})(Z_{35}- Z_{34}Z_{45}).\]
For the present example, we have \(Z_{12}=Z_{13}Z_{23}\), so that \(b_{-}=Z_{13}Z_{23}\). From \(Z_{24}=Z_{25}Z_{45}\) and \(Z_{34}=Z_{35}Z_{45}\), it follows that \(P=0\). Since \(Z_{15}=Z_{13}Z_{35}\), we find \(b_{0}=Z_{13}Z_{23}=b_{-}\). However, (C2) is not satisfied.
_Remark 3.14_.: We have \(b_{-}=b_{0}\) if \(n=3\), and \(b_{-}\leq b_{0}\) if \(n=4\), as shown in Lemma 2.3. However, in the case that \(n=5\), there exists an example such that \(b_{0}<b_{-}\).
Using the notations in Remark 3.13, we let \(X_{5}=\{1,2,3,4,5\}\subset S^{1}\) be the subspace consisting of
\[1=p(0),\qquad 2=p(\pi/2),\qquad 3=p(\pi/4),\qquad 4=p(-\pi/2),\qquad 5=p(\pi),\]
as illustrated in Figure 1 (right). We have \(b_{-}=Z_{13}Z_{23}\) because \(Z_{12}=Z_{13}Z_{23}\). We also have \(P<0\) by \(Z_{24}=Z_{23}Z_{34},Z_{25}>Z_{23}Z_{35}\) and \(Z_{45}>Z_{34}Z_{35}\). Finally, by \(Z_{14}>Z_{13}Z_{34}\) and \(Z_{15}=Z_{13}Z_{35}\), we see \(b_{0}<b_{-}\).
_Remark 3.15_.: _Magnitude homology [6]_ is a notion which categorifies the magnitude of a finite metric space. If a finite metric space \((X,d)\) satisfies (C1) and (C2), then its magnitude homology fits into a (splitting) _Mayer-Vietoris exact sequence_[1, 8]. Generally, the Mayer-Vietoris sequence for the magnitude homology implies the inclusion-exclusion formula for the magnitude [3, 6]. Therefore a natural question is whether the magnitude homology of a finite metric space subject to \(Z_{12}=b_{0}\) fits into the Mayer-Vietoris sequence. It turns out that \(Z_{12}=b_{0}\) does not generally lead to the Mayer-Vietoris sequence.
This is seen by an example: Consider the metric \(d\) on \(X_{4}=\{1,2,3,4\}\) such that:
* \(d(i,j)=1\) for distinct \(i,j\in A=\{1,3,4\}\), and also \(d(i,j)=1\) for distinct \(i,j\in B=\{2,3,4\}\).
* \(d(1,2)=\log\left(\frac{e^{2}+e}{2}\right)=1.6201\cdots\).
Note that \(b_{-}=e^{-2}<b_{0}=Z_{12}=2e^{-2}/(1+e^{-1})\), hence the inclusion-exclusion principle is satisfied. The subspaces \(A\), \(B\) and \(A\cap B\) can be identified with the metric spaces associated to the complete graphs. Thus, on the one hand, the magnitude homology groups \(MH_{n}^{\ell}(A)\), \(MH_{n}^{\ell}(B)\) and \(MH_{n}^{\ell}(A\cap B)\) of the subspaces are trivial for all \(n\in\mathbb{Z}\) provided that \(\ell=d(1,2)\). On the other hand, we readily see that \(MH_{1}^{\ell}(X_{4})\cong\mathbb{Z}^{2}\) for \(\ell=d(1,2)\). Therefore we just get a sequence which is not exact:
\[\cdots\to\underbrace{MH_{1}^{\ell}(A\cap B)}_{0}\to\underbrace{MH_{1}^{\ell }(A)}_{0}\oplus\underbrace{MH_{1}^{\ell}(B)}_{0}\to\underbrace{MH_{1}^{\ell} (X_{4})}_{\mathbb{Z}^{2}}\to\underbrace{MH_{0}^{\ell}(A\cap B)}_{0}\to\cdots.\]
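Independently of the homological statement, the metric of this remark also gives a concrete numerical check of Theorem 3.9. The sketch below (ours; it assumes \(Z_{ij}=e^{-d(i,j)}\), \(\operatorname{Mag}(X)=\langle\vec{1},\zeta_{X}^{-1}\vec{1}\rangle\), and \(b_{0}=\langle\vec{a},\zeta_{A\cap B}^{-1}\vec{b}\rangle\), consistent with the lemmas above) verifies both \(Z_{12}=b_{0}\) and the inclusion-exclusion identity:

```python
import numpy as np

# The four-point space above: unit distances within A = {1,3,4} and
# B = {2,3,4}, and d(1,2) = log((e^2 + e)/2), chosen so that Z_12 = b_0.
d = np.ones((4, 4)) - np.eye(4)
d[0, 1] = d[1, 0] = np.log((np.e**2 + np.e) / 2)
Z = np.exp(-d)

def mag(idx):
    z = Z[np.ix_(idx, idx)]
    return np.linalg.solve(z, np.ones(len(idx))).sum()

X, A, B, AB = [0, 1, 2, 3], [0, 2, 3], [1, 2, 3], [2, 3]
print(np.isclose(mag(X), mag(A) + mag(B) - mag(AB)))   # True: inclusion-exclusion

a, b = Z[0, 2:], Z[1, 2:]                              # a = (Z_1j), b = (Z_2j)
b0 = a @ np.linalg.solve(Z[2:, 2:], b)
print(np.isclose(Z[0, 1], b0),
      np.isclose(b0, 2 * np.e**-2 / (1 + np.e**-1)))   # True True
```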
|
2309.12027 | Precision in Building Extraction: Comparing Shallow and Deep Models
using LiDAR Data | Building segmentation is essential in infrastructure development, population
management, and geological observations. This article targets shallow models
due to their interpretable nature to assess the presence of LiDAR data for
supervised segmentation. The benchmark data used in this article are published
in NORA MapAI competition for deep learning model. Shallow models are compared
with deep learning models based on Intersection over Union (IoU) and Boundary
Intersection over Union (BIoU). In the proposed work, boundary masks from the
original mask are generated to improve the BIoU score, which relates to
building shapes' borderline. The influence of LiDAR data is tested by training
the model with only aerial images in task 1 and a combination of aerial and
LiDAR data in task 2 and then compared. Shallow models outperform deep learning
models in IoU by 8% using aerial images (task 1) only and 2% in combined aerial
images and LiDAR data (task 2). In contrast, deep learning models show better
performance on BIoU scores. Boundary masks improve BIoU scores by 4% in both
tasks. Light Gradient-Boosting Machine (LightGBM) performs better than RF and
Extreme Gradient Boosting (XGBoost). | Muhammad Sulaiman, Mina Farmanbar, Ahmed Nabil Belbachir, Chunming Rong | 2023-09-21T12:43:11Z | http://arxiv.org/abs/2309.12027v1 | # Precision in Building Extraction: Comparing Shallow and Deep Models using LiDAR Data.
###### Abstract
Building segmentation is essential in infrastructure development, population management, and geological observations. This article targets shallow models, due to their interpretable nature, to assess the value of LiDAR data for supervised segmentation. The benchmark data used in this article were published in the NORA MapAI competition for deep learning models. Shallow models are compared with deep learning models based on Intersection over Union (IoU) and Boundary Intersection over Union (BIoU). In the proposed work, boundary masks are generated from the original masks to improve the BIoU score, which relates to the borderlines of building shapes. The influence of LiDAR data is tested by training models with only aerial images in task 1 and with a combination of aerial images and LiDAR data in task 2, and then comparing the two. Shallow models outperform deep learning models in IoU by 8% using only aerial images (task 1) and by 2% using combined aerial images and LiDAR data (task 2). In contrast, deep learning models show better performance on BIoU scores. Boundary masks improve BIoU scores by 4% in both tasks. Light Gradient-Boosting Machine (LightGBM) performs better than Random Forest (RF) and Extreme Gradient Boosting (XGBoost).
Building Extraction, Machine Learning, LiDAR data.
## I Introduction
Buildings play an essential role in planning policies related to infrastructure and provide data related to population, which helps in management [1]. Geological observation technologies such as satellites and drones provide high-spatial-resolution images and are used in building inspection [2]. Easy access to data repositories makes it possible to work on different applications, like population aggregation [3, 4, 5], urban planning [6], building model reconstruction [7], mapping [8], emergency rescue [9], and pre-disaster building risk assessment [10]. Manual interpretation and vectorization were difficult and time-consuming, and are impossible for large datasets of images. The rapid development of sensors such as Light Detection and Ranging (LiDAR) [5], Polarimetric Synthetic Aperture Radar (POLSAR) [11], and Synthetic Aperture Radar (SAR) [12] provides enriched data for the automatic extraction of buildings. Computer vision provides different methods, like object detection and segmentation, for automation in several applications such as urban planning and disaster recovery [13]. Apart from data availability and image processing techniques, the high spectral and textural similarity between buildings and background objects, shadows of the buildings, and the variety of shapes and colors among buildings make this automation challenging.
Automatic building extraction from remote sensing information is essential for pre-disaster management in rural and urban areas. Researchers have tried different traditional and deep learning algorithms to improve building extraction [14, 15, 16, 17]. Building extraction methods rely on features such as building color [18], spectrum [19], texture [20], shape [21] and context [22, 23]. However, given buildings with diverse roof colors and textures, and lighting and shadow problems due to weather, work is still needed to create a stable model with generalized results [24]. LiDAR is independent of spectral and spatial conditions such as shadow and dark or bright lighting [25], and the depth information provided by LiDAR is useful for extracting ground objects [12, 26] and improving building extraction results on remote sensing images. Furthermore, combining the visual information of optical images with the depth information of LiDAR data can improve the building extraction task further compared to optical images or LiDAR data alone. Fusion of optical images and LiDAR data requires sensor alignment, and data acquisition is usually costly compared to single-source data.
Pixel-based and object-oriented image classification are two common methods for building extraction [12]. Pixel-based methods can improve performance by combining spectral features and point-cloud information [27]. Pixel-based image segmentation can be done using both conventional machine learning and deep learning methods. This article focuses on conventional machine learning methods to extract buildings and assess the significance of LiDAR data.
### _Dataset_
This work uses the dataset from NORA's competition "MapAI: Precision in building segmentation" [28]. The dataset consists of real-world data having noise in different forms, images of varying quality, and large class imbalances. The dataset consists of training, evaluation, and testing images from different regions in the form of aerial images, LiDAR data, and masks. A single image's resolution in both aerial and LiDAR data is 500x500 pixels. The LiDAR data is preprocessed and converted to an image-like matrix, where each value represents the pixel's depth from the LiDAR. The training dataset consists of images from different locations in Denmark, and the test dataset consists of seven locations in Norway. The competition is based on two tasks: 1) classify buildings from the ground in aerial images and 2) use aerial and LiDAR data. In the second task, the fusion of aerial images with LiDAR data is allowed. Figure 1 shows the first 100 images from the training set. NORA's competition score is divided into two tasks: Task 1 is the "Aerial Image Segmentation Task", where only aerial images are allowed for training, and Task 2 is the "Laser Data Segmentation Task", where the model can be trained using LiDAR data with or without aerial images [28].
### _Evaluation Metrics_
Image segmentation can be evaluated using region-based and boundary-based metrics. Intersection over Union (IoU), also known as the Jaccard Index (JI), is used as the region-based metric: it measures the similarity between two binary images, the ground truth \(I_{g}\) and the predicted mask \(I_{p}\), by dividing the intersection area by the union area, as shown in Equation 1 [29]. Boundary Intersection over Union (BIoU) is used as the boundary-based metric: BIoU is the intersection over union of the edged ground truth and the edged prediction mask, where \(d\) denotes the thickness of the edge band measured from the contour line, as in Equation 2 [30].
\[IoU=JI=\frac{Intersection}{Union}=\frac{|I_{g}\cap I_{p}|}{|I_{g}|+|I_{p}|-|I_{g }\cap I_{p}|} \tag{1}\]
\[BIoU=\frac{|(I_{g_{d}}\cap I_{g})\cap(I_{p_{d}}\cap I_{p})|}{|(I_{g_{d}}\cap I_{g})\cup(I_{p_{d}}\cap I_{p})|} \tag{2}\]
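For concreteness, one plausible implementation of these two metrics is sketched below (our code, not the competition's reference implementation; it assumes boolean NumPy masks and realises the boundary band via erosion with a square structuring element, consistent with the 7x7 kernel, i.e. \(d=3\), used later for the boundary masks):

```python
import numpy as np
from scipy.ndimage import binary_erosion

def iou(gt, pred):
    """Eq. (1): region IoU (Jaccard index) of two boolean masks."""
    gt, pred = gt.astype(bool), pred.astype(bool)
    union = np.logical_or(gt, pred).sum()
    return np.logical_and(gt, pred).sum() / union if union else 1.0

def boundary_iou(gt, pred, d=3):
    """Eq. (2): IoU restricted to the band of width d pixels inside
    each mask's contour, obtained here by erosion and subtraction."""
    k = np.ones((2 * d + 1, 2 * d + 1), dtype=bool)
    gt, pred = gt.astype(bool), pred.astype(bool)
    gt_band = gt & ~binary_erosion(gt, structure=k)
    pred_band = pred & ~binary_erosion(pred, structure=k)
    union = np.logical_or(gt_band, pred_band).sum()
    return np.logical_and(gt_band, pred_band).sum() / union if union else 1.0
```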
The dataset is trained and tested on shallow models and compared with deep learning to show the difference in performance. Different filters are interpreted using RF and XGBoost to find the best filters for the given dataset. Boundary masks are created to improve BIoU, and each model is trained with and without a boundary mask to compare the difference. Models are tested on data with and without LiDAR data to find the influence of LiDAR data.
## II Literature
The Random Forest (RF) algorithm was first introduced by [31] and has since grown into a standard non-parametric classification and regression tool for constructing prediction rules based on various types of predictor variables, without making any prior assumption on the form of their association with the response variable. Neural networks, the basis of Deep Learning (DL) algorithms, have been used in the remote sensing community for many years. Deep learning methods have the ability to retrieve complex patterns and informative features from satellite image data. However, before the development of DL, the remote-sensing community had shifted its focus from neural networks to Support Vector Machines (SVM) and ensemble classifiers, e.g., RF, for image classification and other tasks (e.g., change detection) [32]. Results from [33] agree with previous studies [34, 35], which demonstrated that DNNs were only slightly superior to SVM and RF in classification and estimation applications.
However, one of the main problems with deep learning approaches is their hidden layers and "black box" nature [36], which results in a loss of interpretability. Due to the black-box nature of deep learning, it is impossible to measure the significance of LiDAR data. In contrast, RF and XGBoost are interpretable in nature, making it easy to assess the importance of LiDAR data for segmentation. Another limitation of deep learning models is that they are highly dependent on the availability of abundant high-quality ground truth data. On the other hand, recent research shows that SVM and RF (i.e., relatively easily implementable methods) can handle learning tasks with a small amount of training data yet demonstrate results competitive with Convolutional Neural Networks (CNNs) [37]. Although there is an ongoing shift toward the application of deep learning in remote sensing image classification, SVM and RF have still held researchers' attention due to lower computational complexity and higher interpretability compared to deep learning models. RF, in terms of classification accuracy, is the most popular machine learning classifier among shallow models in the remote sensing community [38].
## III Methodology
In this work, RF, XGBoost, and LightGBM are used as pixel classifiers to perform segmentation, and their performance is compared with deep learning models tested on the same dataset. RF is an ensemble of decision trees. Decision trees are well known for interpretability and representability, as they mimic how the human brain makes decisions. Interpretability may reduce prediction accuracy; however, ensembles of decision trees overcome this problem, yielding a strong and robust model in the form of RF, and later extensions in the form of XGBoost and LightGBM.
Bagging trains multiple trees on different subsets of the dataset, each having all features, and then predicts the label using the average or majority vote of these trees. As an extension of bagging, RF uses, along with a random subset of the dataset, a random selection of features for each tree, which helps interpretability. However, in RF, the trees are independent, which prevents the use of knowledge from the previous learner or tree.
Boosting overcomes this independence problem of the trees in bagging by building an ensemble of consecutive trees, where each tree uses the residual from the previous tree to minimize the loss [39]. As an extension of boosting, Gradient Boosting uses both gradient descent and boosting to optimize the loss function. An extreme version of Gradient Boosting is XGBoost, which is more efficient, flexible, and portable due to advanced regularization, which improves generalization.
LightGBM is another version of Gradient Boosting that focuses more on computational efficiency and performance compared to XGBoost. LightGBM reduces the number of splits by employing leaf-wise rather than level-wise splitting [40]. The remainder of this section is divided into two subsections: Feature Extraction and Segmentation.
### _Feature Extraction_
Preprocessing steps are employed in this work to prepare data for training the model. The dataset consists of both aerial images and LiDAR data. In the first step, the blue, green, and red channels are extracted from the aerial image, along with the gray image, as features. The LiDAR data in the dataset is a 2D array with the same dimensions (500x500) as the aerial image. Because of the matching dimensions, the LiDAR data is fused with the other features to exploit its presence for segmentation.
In the second step, boundary masks are created to improve the BIoU metric. A 7x7 kernel is used as a structuring element to erode the original mask image, which removes 3 pixels from each side of all shapes in the image. Figure 2 shows the procedure for creating a BIoU mask: first, the image is eroded with a structuring element filled with 1's, which erodes the shapes in the image equally from all sides and yields an eroded image; second, the eroded image is subtracted from the original image, which yields the BIoU (boundary) mask.
In the third step, features are aligned to train the model. The blue, green, red, gray, and LiDAR features of the first image, each of dimension 500x500, are flattened and placed in a matrix with the original mask as the label. Hence the first tuple of the matrix consists of the first pixel's values for blue, green, red, gray, and LiDAR, with the mask value as the label. The same features of the first image are then duplicated for the boundary mask. In this way, the model is trained with both the original mask and the boundary mask, improving the BIoU metric along with IoU. Figure 3 shows the data preprocessing procedure for this work.
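The preprocessing described above can be sketched compactly as follows (our illustrative code, assuming OpenCV arrays and uint8 masks taking values 0/1; the helper name `pixel_table` is ours, not the paper's):

```python
import numpy as np
import cv2

def pixel_table(aerial_bgr, lidar, mask):
    """Per-pixel training table: rows of (blue, green, red, gray, lidar)
    features with the mask value as the label. Feature rows are then
    duplicated so the boundary mask also serves as a target (Fig. 3)."""
    gray = cv2.cvtColor(aerial_bgr, cv2.COLOR_BGR2GRAY)
    b, g, r = cv2.split(aerial_bgr)
    feats = np.stack([b, g, r, gray, lidar], axis=-1).reshape(-1, 5)

    # Boundary mask (Fig. 2): erode with a 7x7 kernel of ones, then subtract.
    eroded = cv2.erode(mask, np.ones((7, 7), np.uint8))
    bmask = mask - eroded

    X = np.vstack([feats, feats])
    y = np.concatenate([mask.ravel(), bmask.ravel()])
    return X, y
```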
### _Segmentation_
To assess how LiDAR data affects the results of pixel-wise segmentation, only aerial images are used initially, and LiDAR data along with aerial images is used in later experiments. In the first task, only four features (blue, green, red, and gray) are used in training, validation, and
Fig. 1: An image of resolution 5000x5000 pixels is divided into 100 images, each with an equal resolution of 500x500 pixels.
Fig. 2: BIoU mask generation: first step, eroding; second step, subtracting the eroded image from the original image.
testing. In the second task, LiDAR data is also used, to exploit its presence for segmentation. In Figure 4, step 1 presents feature extraction and preprocessing of the data for segmentation. In this step, features are extracted from the images and an image-based dataset is prepared to be trained on traditional machine learning algorithms.
In the second step, three classifiers (RF, XGBoost, and LightGBM) are trained on the data. In the case of a deep learning model, a complete image is used as input to the model for training, and segment maps are provided as output; such a segment map consists of \(n\) channels, one for each label class. Contrary to deep learning models, the classifiers used in this work are fed the information of one pixel at a time during model training. The trained models are then stored for validation and testing purposes.
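A minimal sketch of this pixel-wise training and prediction loop is shown below (ours; the hyperparameters follow our reading of Table I, and the random arrays stand in for the real pixel table built earlier):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

rng = np.random.default_rng(0)
X = rng.random((10_000, 5))                  # stand-in for pixel_table() output
y = rng.integers(0, 2, 10_000)               # stand-in labels
X_test = rng.random((250_000, 5))            # one row per test pixel (500 x 500)

models = {
    "RF": RandomForestClassifier(n_estimators=10),
    "XGBoost": XGBClassifier(max_depth=8, min_child_weight=5, gamma=8.3,
                             reg_alpha=17, reg_lambda=0.04,
                             colsample_bytree=0.9),
    "LightGBM": LGBMClassifier(boosting_type="gbdt", objective="binary",
                               num_leaves=100, max_depth=10,
                               learning_rate=0.05),
}
masks = {}
for name, model in models.items():
    model.fit(X, y)                                        # pixel-wise training
    masks[name] = model.predict(X_test).reshape(500, 500)  # reassembled mask
```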
In the third step, the trained models are tested on the testing data of the dataset. The features of each image are extracted, preprocessed, and then tested pixel-wise on the stored models. The models predict each pixel value as either building or foreground. The outputs of testing are then reassembled into a predicted mask; in Figure 4, the prediction component represents the predicted masks for RF, XGBoost, and LightGBM.
In the fourth step, evaluation is performed using IoU and BIoU by comparing the ground truth mask with the predicted mask. In IoU, the intersection of the two masks is divided by their union, while in BIoU, the boundary intersection is divided by the boundary union. IoU validates the raw accuracy of the model, while BIoU validates how well the contours of the buildings are segmented by the model. BIoU is more sensitive to the shape of the building, which is more challenging than IoU.
## IV Experiments
Table I shows the parameter setup for the experiments done in this work. All features mentioned in the table were tested on the given dataset. Features are listed in the table according to the models used for interpretability. The best performance was achieved using Blue, Green, Red, and Gray, as shown in bold text. The remaining experiments were performed using only these best features. The RF classifier was tested on the mentioned estimator values, and 10 showed the
Fig. 4: Segmentation: Preprocessing, Training, Testing, Prediction, and Performance Evaluation.
Fig. 3: Feature Extraction and Data Preprocessing
best score for the dataset. All experiments shown in Table II were performed with the best estimator. Hyperparameters for XGBoost and LightGBM were obtained by tuning on the given dataset using grid-search CV. Furthermore, the learning rate for LightGBM was tested from 0.05 to 0.0009, but the impact on the dataset was minimal compared to the increase in training computation.
RF is a slow classifier compared to XGBoost and LightGBM, and it requires more data for training. As the number of images increases, performance also improves, but after training on 2000 images, performance remains the same; Table II shows that performance with 7000 images is almost the same as with 2000 images. In the Features column, (L) denotes LiDAR data for task 2. The column BMask denotes the boundary mask, and RF was also tested with boundary masks: IoU dropped 5% while BIoU increased 22% in task 1, and almost the same pattern was shown in task 2. A comparison of tasks 1 and 2 shows that including LiDAR data improves both IoU and BIoU scores.
Compared to RF, XGBoost requires less data for training. The best performance was achieved with 1000 training images and did not improve further when using all images. Table II shows better results for XGBoost when the gray channel is not used as a feature, due to its smaller influence on the output label. As with RF, the inclusion of boundary masks improves the performance score, and in each experiment LiDAR data with aerial images performs better than aerial images alone. The performance of XGBoost is better on both metrics compared to RF.
LightGBM can be trained quickly on less data compared to RF and XGBoost. Table II shows results for LightGBM on only 10 images, with and without the boundary mask, which are relatively better than those of RF and XGBoost. As the number of images increases, BIoU also increases. In LightGBM, feature ranking is not possible, so the same four features are used for task 1 and five features for task 2. The inclusion of the boundary mask improves BIoU, as in RF and XGBoost. As with the other models, LiDAR data improves both IoU and BIoU.
Table III compares the models used in this work. Total is the average of IoU and BIoU in both tasks, and Score is the average of both totals in the table. The IoU score of XGBoost is better in task 1 and task 2, while the BIoU score of LightGBM is significantly better than that of the others. LightGBM performs better in the average scores for both tasks and also in the overall score.
Table IV compares the proposed work with the top 3 competitors from the NORA MapAI competition. The proposed work is based on pixel-wise segmentation, neglecting the context of a pixel with its neighbors, which results in lower scores on the BIoU metric in both tasks. Meanwhile, the proposed work claims a better IoU score, as it focuses more on the values of each pixel than on the context of the pixel relative to its neighbors. The segmentation methods of all top three competitors are based on deep learning models, which account for contextual rather than pixel-wise segmentation and hence show better BIoU scores than the proposed one.
Figure 5 shows the generated masks for RF, XGBoost, and LightGBM. The RF mask has more red and blue pixels compared to the others; building roofs could be enhanced with a median filter, which would improve IoU, but it would disturb true-positive pixels at the edges, resulting in a lower BIoU score. XGBoost performs better than RF: both false negatives and false positives are decreased, and roofs illuminated by light are predicted better. LightGBM further improves the prediction by reducing false positives compared to the other two models.
Table V shows the interpretability of the RF and XGBoost models for both tasks, while LightGBM exhibits the same non-interpretable nature as the deep learning models. The table shows how significant these features are for precisely predicting the output (a higher number indicates more significance and vice versa). In task 1, blue and red are the most significant for both RF and XGBoost. Gray shows more significance in the RF model than in the XGBoost model. The main aim of using shallow models in this work is to determine the significance of LiDAR data compared to aerial images. In task 2, LiDAR data is the most significant feature for both the RF and XGBoost models, ahead of the aerial image features, i.e., the blue, red, green, and gray channels of the image.
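The significance values in Table V correspond to impurity-based feature importances of the kind exposed by scikit-learn and XGBoost as `feature_importances_`; that the authors used exactly this attribute is our assumption. A self-contained sketch with stand-in data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((1_000, 5))        # stand-in pixel table (task 2: five features)
y = rng.integers(0, 2, 1_000)     # stand-in building/background labels
rf = RandomForestClassifier(n_estimators=10).fit(X, y)
print(dict(zip(["blue", "green", "red", "gray", "lidar"],
               rf.feature_importances_)))
```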
## V Discussion
In the NORA competition, different deep learning models were tested on the MapAI dataset, even though the problem is one of binary segmentation. Due to the problem's simplicity, and for interpretability, this work targeted boosting models as shallow learning models to perform binary segmentation on the given dataset. It was expected to achieve the same, if not better, results compared to the deep learning models; because of the nature of the different evaluation metrics, the proposed work outperformed deep learning in IoU, while deep learning models scored better in BIoU.
This work focuses on pixel-wise segmentation rather than segmentation using a deep learning algorithm. Like
\begin{table}
\begin{tabular}{|c|c|} \hline Classifier & Parameter \\ \hline \multirow{3}{*}{Features} & **Blue, Green, Red, Gray**, Histogram Equalization, \\ & Morphological, CLAHE Histogram Equalization, \\ & Gabor Filter, Canny Edge Detector \\ \hline RF & n\_estimators (3,4,5,6,8,10,12) \\ \hline \multirow{2}{*}{XGBoost} & colsample\_bytree:0.9, gamma:8.3, max\_depth:8, \\ & min\_child\_weight:5, reg\_alpha:17, reg\_lambda:0.04 \\ \hline \multirow{3}{*}{LightGBM} & learning\_rate:(0.05-0.0009), boosting\_type:gbdt, \\ & objective:binary, metric:[auc, binary\_logloss], \\ & num\_leaves:100, max\_depth:10 \\ \hline \end{tabular}
\end{table} TABLE I: Parameter setup for experiments.
the NORA MapAI competition, this work also uses IoU and BIoU to evaluate the models. The proposed work results
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & & **RF** & \multicolumn{2}{c|}{**XGBoost**} \\ \hline & Blue & 0.368099 & Blue & 0.38609 \\ & Red & 0.299738 & Red & 0.30973 \\
**Task 1** & Green & 0.190407 & Green & 0.200707 \\ & Gray & 0.141755 & Gray & 0.102655 \\ \hline & LiDAR & 0.261568 & LiDAR & 0.233052 \\ & Blue & 0.209854 & Blue & 0.213057 \\
**Task 2** & Red & 0.197456 & Green & 0.207829 \\ & Green & 0.160032 & Red & 0.195888 \\ & Gray & 0.170462 & Gray & 0.150174 \\ \hline \end{tabular}
\end{table} TABLE V: Interpretability of RF and XGBoost
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline & & & & **Task 1** & \multicolumn{2}{c|}{**Task 2**} \\ \hline
**Classifier** & **Features** & **Images** & **BMask** & **IoU** & **BIoU** & **IoU** & **BIoU** \\ \hline & BGRGray(L) & 10 & No & 0.8711 & 0.273 & 0.8812 & 0.3009 \\ & BGRGray(L) & 100 & No & 0.9037 & 0.3235 & 0.9190 & 0.3389 \\ RF & BGRGray(L) & 1000 & No & 0.9237 & 0.3300 & 0.9317 & 0.3445 \\ & BGRGray(L) & 2000 & No & 0.9235 & 0.3350 & 0.9318 & 0.3444 \\ & BGRGray(L) & 7000 & No & 0.9234 & 0.3348 & 0.9321 & 0.3436 \\ & BGRGray(L) & 2000 & Yes & 0.8562 & 0.5460 & 0.8763 & 0.5562 \\ \hline & BGRGray(L) & 100 & No & 0.9011 & 0.2876 & 0.8912 & 0.3010 \\ & BGRGray(L) & 500 & No & 0.8837 & 0.3335 & 0.8854 & 0.3589 \\ XGBoost & BGRGray(L) & 1000 & No & 0.8843 & 0.3501 & 0.8817 & 0.3545 \\ & BGRGray(L) & 7000 & No & 0.8823 & 0.3513 & 0.8821 & 0.3535 \\ & BGR(L) & 1000 & No & 0.8833 & 0.3654 & 0.8845 & 0.3876 \\ & BGR(L) & 1000 & Yes & 0.8802 & 0.5401 & 0.8799 & 0.5672 \\ \hline & BGRGray (L) & 10 & No & 0.8635 & 0.4050 & 0.8782 & 0.424 \\ & BGRGray(L) & 10 & Yes & 0.8933 & 0.443 & 0.8945 & 0.464 \\ LightGBM & BGRGray(L) & 100 & Yes & 0.8831 & 0.5221 & 0.8851 & 0.5331 \\ & BGRGray(L) & 1000 & Yes & 0.8763 & 0.5831 & 0.8783 & 0.5998 \\ \hline \end{tabular}
\end{table} TABLE II: Evaluation Analysis
Fig. 5: Classifier masks: (a) RF, (b) XGBoost, and (c) LightGBM
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & & **Task 1** & \multicolumn{2}{c|}{**Task 2**} \\ \hline
**Classifier** & **IoU** & **BIoU** & **IoU** & **BIoU** \\ \hline FUNDATOR [41] & 0.7794 & 0.6115 & 0.8775 & **0.7857** \\ \hline HVL-ML [42] & 0.7879 & **0.6245** & 0.8711 & 0.7504 \\ \hline DEEPCROP [43] & 0.7902 & 0.6185 & 0.8506 & 0.7461 \\ \hline Proposed & **0.8763** & 0.5831 & **0.8783** & 0.5998 \\ \hline \end{tabular}
\end{table} TABLE IV: Comparison with top 3 competitors from MAPAI Competition
in better IoU than the competitors in the NORA competition, as it focuses more on individual pixel information for segmentation. BIoU is worse than the competition's because RF, XGBoost, and LightGBM do not consider the shapes of the buildings, due to their nature. Table V shows that LiDAR data is more significant than the other features. However, to be interpretable, these models pay a price in prediction accuracy compared to deep learning models. A deep learning model extracts patterns from the image using convolution, where each pattern consists of a group of related pixels; hence deep learning is better at segmenting objects and shapes in the image, which results in better BIoU than the proposed work.
## Acknowledgment
This work is supported by the project EMB3DCAM "Next Generation 3D Machine Vision with Embedded Visual Computing", co-funded under grant number 325748 of the Research Council of Norway.
|
2309.06052 | Saturn's Atmosphere in Northern Summer Revealed by JWST/MIRI | Saturn's northern summertime hemisphere was mapped by JWST/MIRI (4.9-27.9
$\mu$m) in November 2022, tracing the seasonal evolution of temperatures,
aerosols, and chemical species in the five years since the end of the Cassini
mission. The spectral region between reflected sunlight and thermal emission
(5.1-6.8 $\mu$m) is mapped for the first time, enabling retrievals of
phosphine, ammonia, and water, alongside a system of two aerosol layers (an
upper tropospheric haze $p<0.3$ bars, and a deeper cloud layer at 1-2 bars).
Ammonia displays substantial equatorial enrichment, suggesting similar
dynamical processes to those found in Jupiter's equatorial zone. Saturn's North
Polar Stratospheric Vortex has warmed since 2017, entrained by westward winds
at $p<10$ mbar, and exhibits localised enhancements in several hydrocarbons.
The strongest latitudinal temperature gradients are co-located with the peaks
of the zonal winds, implying wind decay with altitude. Reflectivity contrasts
at 5-6 $\mu$m compare favourably with albedo contrasts observed by Hubble, and
several discrete vortices are observed. A warm equatorial stratospheric band in
2022 is not consistent with a 15-year repeatability for the equatorial
oscillation. A stacked system of windshear zones dominates Saturn's equatorial
stratosphere, and implies a westward equatorial jet near 1-5 mbar at this
epoch. Lower stratospheric temperatures, and local minima in the distributions
of several hydrocarbons, imply low-latitude upwelling and a reversal of
Saturn's interhemispheric circulation since equinox. Latitudinal distributions
of stratospheric ethylene, benzene, methyl and carbon dioxide are presented for
the first time, and we report the first detection of propane bands in the 8-11
$\mu$m region. | Leigh N. Fletcher, Oliver R. T. King, Jake Harkett, Heidi B. Hammel, Michael T. Roman, Henrik Melin, Matthew M. Hedman, Julianne I. Moses, Sandrine Guerlet, Stefanie N. Milam, Matthew S. Tiscareno | 2023-09-12T08:39:53Z | http://arxiv.org/abs/2309.06052v1 | # Saturn's Atmosphere in Northern Summer Revealed by JWST/MIRI
Astrochemistry Laboratory Code 691, NASA Goddard Space Flight Center, 8800 Greenbelt Road, Greenbelt MD 20771, USA
## Key Points:
* Saturn's northern summertime hemisphere was mapped by JWST/MIRI to study seasonal evolution of temperatures, aerosols, and composition.
* The data show evidence for changing temperatures and winds in the equatorial oscillation, polar vortices, and interhemispheric stratospheric circulation.
* MIRI spectral coverage and sensitivity enables mapping of several gases for the first time, particularly in ranges inaccessible to Cassini.
###### Abstract
Saturn's northern summertime hemisphere was mapped by JWST/MIRI (4.9-27.9 \(\mu\)m) in November 2022, tracing the seasonal evolution of temperatures, aerosols, and chemical species in the five years since the end of the Cassini mission. We provide algorithms to clean MRS spectral cubes, revealing Saturn's banded structure and discrete meteorological features. The transitional spectral region between reflected sunlight and thermal emission (5.1-6.8 \(\mu\)m) is mapped for the first time, enabling retrievals of phosphine, ammonia, and water, alongside a stacked system of two aerosol layers (an upper tropospheric haze \(p<0.3\) bars, and a deeper cloud layer at 1-2 bars). Ammonia displays substantial equatorial enrichment, suggesting similar dynamical processes to those found in Jupiter's equatorial zone. Saturn's North Polar Stratospheric Vortex has warmed since 2017, is entrained by westward winds at \(p<10\) mbar, and exhibits localised enhancements in several hydrocarbons. The strongest latitudinal temperature gradients are co-located with the peaks of the zonal winds, implying wind decay with altitude. Reflectivity contrasts at 5-6 \(\mu\)m compare favourably with albedo contrasts observed by Hubble, and several discrete vortices are observed. A warm equatorial stratospheric band in 2022 is not consistent with a 15-year repeatability for the equatorial oscillation. A stacked system of windshear zones dominates Saturn's equatorial stratosphere, and implies a westward equatorial jet near 1-5 mbar at this epoch. Lower stratospheric temperatures, and local minima in the distributions of several hydrocarbons, imply low-latitude upwelling and a reversal of Saturn's interhemispheric circulation since equinox. Latitudinal distributions of stratospheric ethylene, benzene, methyl and carbon dioxide are presented for the first time, and we report the first detection of new propane bands in the 8-11 \(\mu\)m region.
## Plain Language Summary
The Saturn system, with its seasonally-varying atmosphere, delicate rings, and myriad satellites, presented an ideal early target for JWST. Saturn's extended disc, rapid rotation, and infrared brightness provided a challenge for the small fields-of-view of the Mid-Infrared Instrument (MIRI), requiring a mosaic to map Saturn's northern summertime hemisphere. This exquisite dataset reveals Saturn's banded structure, discrete vortices, the warm polar vortices, and the continued evolution of an oscillatory pattern of warm and cool anomalies over Saturn's equator. We show evidence that a stratospheric circulation pattern detected by Cassini during northern winter has now fully reversed in northern summer, with the low-latitude stratosphere being cool and depleted in gases due to summertime upwelling. MIRI provides access to spectral regions that were not possible with the Cassini spacecraft, particularly in the 5-7 \(\mu\)m region where reflected sunlight and thermal emission blend together. Our measurements reveal a stacked system of aerosol layers modulating the infrared brightness. Ammonia and phosphine are enriched at Saturn's equator, suggesting strong mixing from the deeper troposphere. The high sensitivity of MIRI enables the first identification of previously unseen emission propane bands, along with the first measurements of the distribution of several gaseous species: tropospheric water, and stratospheric ethylene, benzene, methyl, and carbon dioxide, all of which provide insights into the circulation and chemistry shaping Saturn's seasonal atmosphere.
## 1 Introduction
Spectroscopic mid-infrared observations of the Saturn system by JWST (Gardner et al., 2023) were designed to build on the legacy of discoveries of the Cassini-Huygens mission (2004-2017), exploiting the unprecedented spectral coverage and sensitivity of the MIRI/MRS (Medium Resolution Spectrometer, 4.9-27.9 \(\mu\)m, Wright et al., 2023) integral field units. As part of a Guaranteed-Time programme for giant planet observations during JWST's first cycle of operations (L. Fletcher et al., 2021), Saturn provided an ideal test of the capabilities of this new facility. For example, Saturn's large angular size compared to
the small fields-of-view of MIRI/MRS presented a challenge for mapping extended, rotating, and moving sources. Saturn's spectrum has a large dynamic range, with some regions (e.g., near 6 \(\mu\)m) sufficiently dark as to require long integrations, but others (e.g., near 25 \(\mu\)m) so bright that they are close to the saturation limit of the sensitive MIRI/MRS detectors. Observations of Saturn's small satellites are challenged by scattered light from Saturn's atmosphere and rings. And given that Saturn's forest of molecular emission and absorption features were previously characterised in detail by Cassini, the Saturn observations provided a sensitive check on the calibration of JWST (e.g., wavelength and flux calibration, and the presence of instrumental artefacts).
In this work, we provide a comprehensive first assessment of JWST mid-infrared observations of Saturn in November 2022 as a baseline for a long-term seasonal legacy for MIRI. The Cassini record of Saturn's seasonal evolution came to an end in 2017 (L. N. Fletcher, Sromovsky, et al., 2020), shortly after Saturn passed northern summer solstice.
## 2 JWST MIRI Data Processing
### MIRI/MRS Observations
The MIRI/MRS instrument (Wells et al., 2015) consists of four integral field units (IFUs, channels 1-4) spanning the 4.9-27.9 \(\mu\)m range with spectral resolutions from \(R\sim 1330\) at 27.9 \(\mu\)m to \(R\sim 3710\) at 4.9 \(\mu\)m (Labiano et al., 2021). Each IFU has a different slice width (from 0.176-0.645") and pixel size (from 0.196-0.273"), and thus field of view, such that they provide different spatial coverage and sampling on Saturn's disc in Fig. 2. Although all four IFUs observe simultaneously (channels 1 and 2 on a SHORT detector, channels 3 and 4 on a LONG detector), a grating wheel with three different settings (A, B, C) is needed for full coverage, leading to a short delay between observations of adjacent portions of the spectra. The four IFUs and three grating positions provide 12 individual sub-bands, each with its own wavelength coverage and spectral resolution.
The Saturn system observations were part of the Solar System Guaranteed Time Observations (GTO) awarded to H. Hammel, and collated as programme 1247. The MRS observations were the first science observations to be executed after a brief hiatus of operations (2022-Aug-24 to 2022-Nov-12), when the MRS grating wheel was found to be experiencing increased friction when moving between short (A), medium (B) and long (C) wavelength settings. The Saturn observations were redesigned to be executed in reverse-wavelength order (C, to B, to A), with no detriment to the science, and were executed shortly before Saturn left JWST's field of regard in 2022 (i.e., moving below the 85\({}^{\circ}\) elongation angle from the Sun, as Saturn opposition was earlier in the year on 2022-08-14).
MIRI/MRS provided a full latitude scan from Saturn's equator to the north pole using three separate mosaic tiles, with a final tile capturing the western ring ansa, as shown in Fig. 1. Each tile covered the full 4.9-27.9 \(\mu\)m spectrum, with saturation encountered in the brightest hydrocarbon emission features. Saturn's angular diameter was 16.9" at the time of the observations (9.8 AU from JWST, moving away from the observer at 29 km/s), and the spatial coverage of each tile varies with wavelength, from \(3.2\times 3.7"\) at the shortest wavelength (4.9 \(\mu\)m) to \(6.6\times 7.7"\) at the longest wavelength (27.9 \(\mu\)m).
The MIRI/MRS observations required five separate pointings on 2022-Nov-13 and 2022-Nov-14, as shown in Figure 2. The western ring ansa was observed first (03:00-04:06UT), followed by an offset 90" north of Saturn (04:11-04:27UT) to determine instrumental artefacts in the MRS observations. JWST then pointed to Saturn's northern hemisphere, targeting 45\({}^{\circ}\)N (05:40-06:44UT), 15\({}^{\circ}\)N (06:50-07:55UT), but failed to re-acquire a guidestar to complete the final pointing towards Saturn's north pole. Given that Saturn was about to depart from JWST's field of regard, rapid instructions to re-execute the failed MRS footprint were uploaded to the observatory, enabling the final tile at 75\({}^{\circ}\)N on 2022-Nov-14 (21:58-23:05UT), around 36 hours after the skipped observation was reported.
With the exception of the 15\({}^{\circ}\)N observation, all MRS tiles used 5 groups (i.e., individual 2.8-second frames) with the FASTR1 readout pattern, 8 integrations, and a 4-point extended-source dither pattern (no dithers were used for the offset 'background' frame). The equatorial footprint used 4 groups and 10 integrations, to test the impact of using a smaller number of groups on the ability to radiometrically calibrate MRS (no problems were identified).
### MIRI/MRS Data Processing
The MIRI/MRS observations were reduced using the JWST pipeline version 1.9.4 and calibration reference files 1046. These were applied to the stage-0 UNCAL raw data cubes
Figure 1: Montage of JWST MIRI/MRS observations of Saturn. Panel (a) shows RGB composites of the JWST observations (Saturn: R=10.3 μm, G=10.1 μm, B=11.6 μm; rings: R=15.5 μm, G=14.6 μm, B=13.5 μm) with an HST observation of Saturn in the background (Simon et al., 2023). Panels (b-e) show spatial structure on Saturn at a range of wavelengths as indicated by the grey shaded regions in panel (f). (f) shows the average spectrum of Saturn with specific spectral features labelled. Hubble image credit: NASA, ESA, and Amy Simon (NASA-GSFC); Image Processing: Alyssa Pagan (STScI).
downloaded from MAST. The final outputs of the pipeline (stage-3), when run automatically for the archive, are spectral image cubes for each MRS channel with individual dithers combined, then rotated and interpolated into a sky reference frame. However, the Saturn data needed a significant amount of post-processing before being usable, such that the pipeline was run locally, applying all three data reduction stages separately to each dither position and tile. Stage 1 generates 'slope images' (count rates) from the raw UNCAL data; stage 2 applies wavelength calibration and absolute flux calibration for each exposure; and stage 3 produces spectral image cubes from the input calibrated slope images - all steps are described in the JWST data processing manual. The pipeline's ResidualFringeStep was used to minimise the effect of spectral fringing, although this remained a challenge at longer wavelengths in Channels 3 and 4 (Wright et al., 2023). The default pipeline rotates and
Figure 2: Spatial coverage of MIRI observations relative to Saturn’s disc (a) and mapped to Saturn’s surface (b). Each dot represents the location of a single spaxel with the colour indicating the MIRI channel. The background observation was located 90” to the north of Saturn, so is not shown here (Saturn’s disc has a diameter of \(\sim 17\)”).
interpolates the final cubes into a sky reference frame, but given the significant artefacts from slice to slice (discussed below), we retained the final stage-3 data in the coordinate system of the IFUs (cube_build.coord_system = 'ifualign') to allow later correction of flat field effects. These corrections must be performed before any attempts to combine individual MRS dithers, otherwise artefacts are blended together in the final products. Finally, our pipeline included improved wavelength calibration solutions3 (Argyriou et al., 2023), which were generated by fitting Jupiter and Saturn spectral models (this work) to the MRS data for each and every spaxel (the wavelength solution varied across the IFU), and estimating the required wavelength shift for each spaxel to align models and MRS data in the rest frame. These wavelength solutions were only possible below 15 \(\mu\)m where strong and well-resolved spectral lines were evident, but enabled a significant improvement in spectral fits over the original pipeline.
Footnote 3: Known as FLT-5, now available as the ‘specwcs’ files under calibration reference context jwst_1082.pmap at [https://jwst-crds.stsci.edu/context_table/jwst_1082.pmap](https://jwst-crds.stsci.edu/context_table/jwst_1082.pmap).
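Schematically, the per-spaxel shift estimation can be illustrated by a simple grid search (our sketch, not the authors' code; the in-paper solutions were derived by fitting full Jupiter/Saturn spectral models, and the parameter values here are placeholders):

```python
import numpy as np

def best_shift(wave, observed, model_wave, model_flux,
               max_shift=0.002, n_trials=81):
    """Grid-search the wavelength offset (in microns) for one spaxel
    that best aligns an observed spectrum with a forward-model spectrum
    (model_wave must be increasing for np.interp)."""
    shifts = np.linspace(-max_shift, max_shift, n_trials)
    chi2 = [np.nansum((observed - np.interp(wave + s, model_wave, model_flux)) ** 2)
            for s in shifts]
    return shifts[int(np.argmin(chi2))]
```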
As shown in Figure 3, significant flat field artefacts and saturation remained in the pipeline output cubes in the IFU-aligned frame. Therefore, we developed custom desaturation and flat field correction routines (discussed below) to correct these effects and produce our final science cubes.
### Desaturation
In the brightest parts of Saturn's spectrum, the MRS detector becomes saturated, leading to a loss or corruption of data (e.g. Figure 3g). MIRI observations are split into a series of 'groups,' which each record the measured flux for a part of the whole exposure. The detector saturates when the integrated flux reaches a certain threshold, meaning that even if the full exposure is saturated, the first few groups in the exposure may still have useful data that was recorded before the detector saturated. Therefore, data reduced using different numbers of groups can be used to desaturate (i.e. 'fill in') saturated parts of the spectrum.
Our data processing routine modifies the UNCAL raw data cubes to create versions containing the full range of groups (i.e., for the observations which have 5 groups in total: 1, 2, 3, 4, and 5 group versions are created). These different versions are all run through the standard JWST pipeline, creating 5 different versions of each science cube, each of which effectively has a different integration time. These 5 cubes are then merged into a single desaturated cube by dynamically selecting the highest number of groups possible to maximise SNR while minimising saturation, as shown in Figure 4.
The desaturation routine operates by comparing the different versions of each spectrum (the coloured lines in Figure 4a). The routine works iteratively, starting with the largest number of groups \(n\), then replacing bad regions of the spectrum with the \(n-1\) group spectrum. This is repeated until none of the spectrum contains 'bad' data, or the 1-group spectrum is reached. The following regions of the spectrum are treated as bad data and replaced:
* Regions flagged as saturated by the JWST pipeline (marked as 'full saturation' in Figure 4a).
* Regions that appear partially saturated, but have not been flagged as saturated by the JWST pipeline. Regions are classed as partially saturated where \(n\) group data is \(<90\%\) of the brightness of the \(n-1\) group data.
* Regions where the \(n\) group data is \(>120\%\) of the brightness of the \(n-1\) group data are also flagged as outliers, likely caused by cosmic ray hits.
Figure 3: Example cube slices at different stages of our custom MIRI data reduction process. The first column shows the output of the standard JWST pipeline, which still contains significant flat field effects (a & d), saturation (g), and partial saturation (dark pixels in g & j). The second column shows the data after the desaturation step is applied, and the third column shows the data after the flat field correction is applied.
The desaturation routine is skipped for regions of the spectrum with a SNR \(<300\), as these would not be expected to experience saturation, and high noise levels could lead to false positives when flagging bad regions of the spectrum.
Initial testing of this routine found residual effects at the edge of spectral regions flagged as bad data, so any regions of the spectrum flagged as bad data are expanded by two spectral points. Additionally, the number of groups used for neighbouring spectral points is only allowed to change by 1 (i.e. a spectrum cannot immediately jump from 1 group at one wavelength to 5 groups at the next wavelength). A similar filter is applied in image space, so at a specific wavelength, neighbouring pixels can only vary by 1 group.
The parameters of the desaturation routine were selected by inspecting spectra (e.g. Figure 4) and images (e.g. Figure 3) at a wide range of wavelengths, including regions of the spectrum that do and do not experience saturation. Almost all the spectral range uses the full 5 groups (or 4 groups for the equatorial tile), with only the specific spectral and spatial regions (\(<10\%\) of all spectral points) that experience saturation replaced with fewer groups.
Figure 4: Example spectra showing the desaturation routine. (a) shows the spectra using different numbers of groups, where the 5 group data (in red) is the ‘standard’ pipeline output, which shows both saturation and partial saturation. The black dotted line in (a) shows the desaturated spectrum which is constructed using a varying number of groups, as shown in (b), to minimise saturation while maximising SNR. The majority of the spectral range uses the full 5 group data, and only narrow regions (such as this one, in the \(\nu_{9}\) ethane emission band) require desaturation.
### Flat Field Correction
After desaturation, significant flat field effects remained visible in the cubes, particularly at shorter wavelengths (e.g. Figure 3b,e). These effects typically appeared as regular banding patterns aligned along the IFU slices, together with stripes and swirls that varied with wavelength, and in many cases they completely obscured any detail on Saturn. As shown in Figure 5, these patterns remained fixed in location on the detector for different dithers (and tiles), demonstrating that they are clearly an instrumental artefact.
These observed flat field patterns may be caused by small discrepancies between the reference flat field images used in the pipeline and the 'true' flat field response of the detector. It is also possible that part of the observed apparent flat field is caused by residual artefacts remaining after the pipeline's stray light correction step. For simplicity, we refer to the entire observed pattern as the 'flat field effect', regardless of its origin.
To correct for these flat field effects, we used the Saturn observations themselves to create a flat field for each channel and band. We assumed that the flat field can be treated as a purely multiplicative effect, with a corrected cube created by dividing the observed cube by the synthetic flat cube. Note that the background observation, 90" north of Saturn, did not show the same artefacts, suggesting that the flat field is sensitive to how the target illuminates the detectors and IFU slices.
Our flat generation routine uses a set of four dithered observations to create a synthetic flat field image for each wavelength. We match pairs of pixels that observe (approximately) the same location on the surface of Saturn in different dithers, and assume that any variation in brightness between these pixels is caused by differences in the flat field for these pixels. The ratio of brightness values of all of the pairs of pixels can then be used to construct a flat field image at each wavelength:
1. The input data files for each dither are 'navigated' to calculate the latitude/longitude coordinates and illumination angles for each pixel. This navigation uses the WCS
Figure 5: Example images at 5.07 \(\mu\)m (channel 1-SHORT) before (a-d) and after (f-i) the synthetic flat field (e) is applied. The images are shown in the IFU frame, where north is to the top left. The flat field effects in a-e (such as the bright line across the centre of each image) are fixed in the IFU frame, whereas the observed spatial structure on Saturn varies in position on the detector with the different dithers, allowing flat field structure and real spatial structure to be differentiated.
metadata in the FITS headers (to convert from pixel to celestial coordinates), which is derived from JWST's pointing information. To validate the navigation, we compared the navigated and observed positions of Saturn's limb and rings, and found that no additional manual adjustments were needed.
2. The four dither input images are filtered to set extreme values (outlier pixels and pixels with emission angles \(>\)75\({}^{\circ}\)) to NaN to prevent any outliers contaminating the flat. Wavelengths where the average SNR \(<10\) are skipped as the constructed flat would be too contaminated with noise.
3. Corresponding pixels are identified by matching valid pixels that observe a similar location on Saturn in any of the four input dithers, using the navigated latitude/longitude coordinates for each pixel from step 1. An oval footprint is used around each pixel (in latitude/longitude space), and any pixels that fall within this footprint are treated as observing the same part of Saturn. The height (North-South direction) of the oval \(h\) is set at half of the average difference in latitude between neighbouring pixels, and the width (East-West direction) of the oval is set as \(w=4h\). This elongated oval footprint is used as Saturn has much more variation in the North-South direction than the East-West direction, so the elongated footprint matches more pairs of pixels viewing similar scenes than a simple circular footprint would.
4. For each pair of corresponding pixels, \(A\) and \(B\), we calculate the ratio of observed pixel fluxes \(R_{AB}=O_{A}/O_{B}\). Assuming that the flat field is a multiplicative effect, we can treat each pixel's observed flux \(O_{i}=S_{i}F_{i}\) as the 'true' flux from Saturn \(S_{i}\) multiplied by the flat field for the given pixel \(F_{i}\). As we define these corresponding pixels to observe (approximately) the same region of Saturn's surface, we assume that the original flux from Saturn is equal for both pixels, \(S_{A}=S_{B}\), allowing the calculated ratio to be reduced to \(R_{AB}=(S_{A}F_{A})/(S_{B}F_{B})=F_{A}/F_{B}\), giving the ratio of the flat field values for the two pixels.
5. The set of calculated pixel ratios are then used to construct the flat field image. A flat image is initialised with the central pixel value set to 1, and all other values set to NaN, and we then iteratively construct the flat using the calculated ratios to propagate values. At each iteration, every pixel value in the flat is updated to \(F_{A}^{\prime}=\mathrm{median}(R_{Ai}F_{i})\) where the \(F_{i}\) are all the non-NaN corresponding pixels. After all the updated values are calculated at each iteration, any pixels outside the range \(2/3<F_{i}<3/2\) are set to NaN to protect against outliers. After removing outliers, the flat is then divided by the mean pixel value. The routine is run for 50 iterations to allow the constructed flat to converge on a consistent solution. Figure 6 shows an example flat field at different generation steps.
6. This constructed flat image is slightly under-constrained, as all the pixels can be multiplied by an arbitrary scaling factor and still provide a self-consistent result. Therefore, we scale each flat image so that the mean value of the pixels is unity, i.e. \(\frac{1}{N}\sum_{i}^{N}F_{i}=1\). This ensures that the application of the flat does not change a spectrum calculated by averaging all the pixels in an entire cube.
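For illustration, the iterative propagation of steps 4-6 can be sketched as follows (a minimal Python sketch with assumed data structures, not the actual reduction code; the footprint matching of step 3 is assumed to have already produced the pixel pairs):

```python
import numpy as np

# Sketch of the iterative flat construction. `pairs[(y, x)]` maps each
# pixel to a list of (ratio, (y2, x2)) tuples, where ratio = O_A / O_B for
# pixels observing (approximately) the same location on Saturn.
def build_flat(pairs, shape, centre, n_iter=50):
    flat = np.full(shape, np.nan)
    flat[centre] = 1.0                         # seed the central pixel
    for _ in range(n_iter):
        new = flat.copy()
        for pix, partners in pairs.items():
            vals = [r * flat[p] for r, p in partners if np.isfinite(flat[p])]
            if vals:
                new[pix] = np.median(vals)     # F_A' = median(R_Ai * F_i)
        with np.errstate(invalid="ignore"):
            new[(new < 2 / 3) | (new > 3 / 2)] = np.nan   # reject outliers
        flat = new / np.nanmean(new)           # renormalise each iteration
    return flat / np.nanmean(flat)             # final scaling: mean pixel = 1
```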
Synthetic flat cubes were generated from the four dithers associated with the 15\({}^{\circ}\)N and 45\({}^{\circ}\)N tiles, and then averaged to produce the final flat cubes used to correct the data (the 75\({}^{\circ}\)N tile and ring observation included too much background sky to be useful). The algorithm parameters were refined by studying the flats generated from the different tiles and the quality of the flat corrected data.
Special care was taken to ensure the flats did not contain any features of Saturn's atmosphere, and to ensure that the flats from different tiles produced consistent results. Comparisons of sets of dithered images (e.g. Fig. 5) allow structure from the flat field (fixed in detector location, identical between tiles) and real structure on Saturn (variable in detector location, different in each tile) to be differentiated. Regions of the spectrum with and without significant spatial structure were studied in detail, as well as the entire spectral range using animations that compared sets of dithers at each wavelength (see supplementary
material). The 75\({}^{\circ}\)N and rings tiles also provided useful checks, as these tiles were well corrected, even though they were not used in generating the synthetic flats.
As shown in Figure 5, the synthetic flats are able to correct both small-scale (1-2 pixel) and large-scale (\(>10\) pixel) variations in the sensitivity of the detector, including in regions with significant spatial structure on Saturn. The application of the flats helped to reveal spatial structure in the Saturn observations that was often completely obscured by the sensitivity variations, and prevented any spurious spatial variation from being treated as real spatial variation on Saturn's surface.
### Zonal Averages
Zonal averages were calculated from the observed data using the following routine:
1. All observed pixels, from all tiles and dithers, are binned into 1\({}^{\circ}\) latitude bins.
2. Within each bin, the median spectrum is calculated from all spectra in the bin. The 1/3 of the spectra with the largest RMS relative to this median spectrum are then discarded. This ensures the final zonal averages are protected from the effect of outlier pixels. Median averaging is used here (rather than mean) to ensure any extreme outlier pixels do not cause 'good' spectra to be discarded.
3. The mean spectrum for each bin is calculated from the remaining 2/3 'good' spectra. This mean spectrum is used as the zonal average for each latitude bin, which is then used for spectral modelling in subsequent sections.
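For concreteness, this outlier-resistant averaging can be sketched as follows (a minimal Python sketch with assumed variable names, not the actual analysis code):

```python
import numpy as np

# Sketch of the outlier-resistant zonal averaging. `spectra` is an
# (n_pixels, n_wavelengths) array and `lats` gives each pixel's latitude.
def zonal_average(spectra, lats, bin_edges):
    averages = {}
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = spectra[(lats >= lo) & (lats < hi)]
        if len(in_bin) == 0:
            continue
        median = np.nanmedian(in_bin, axis=0)          # robust reference
        rms = np.sqrt(np.nanmean((in_bin - median) ** 2, axis=1))
        keep = rms <= np.percentile(rms, 200 / 3)      # drop the worst 1/3
        averages[(lo + hi) / 2] = np.nanmean(in_bin[keep], axis=0)
    return averages

# e.g. 1-degree bins spanning the observed latitudes:
# zonal_means = zonal_average(spectra, lats, np.arange(0, 91))
```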
## 3 Dataset Overview
### Saturn's Spectrum
An average of MIRI/MRS observations of Saturn's atmosphere is shown at the bottom of Fig. 1 for 4.9-18.0 \(\mu\)m, omitting MRS data from the longest channel (17.7-27.9 \(\mu\)m) due to ongoing challenges with fringe removal and calibration. Below \(\sim 7.3\)\(\mu\)m, the spectrum is shaped by a combination of scattered reflected light from aerosols (notably within the deepest absorption bands and near 6 \(\mu\)m) and thermal emission. The 5-\(\mu\)m window is sculpted by PH\({}_{3}\) lines below 5.2 \(\mu\)m (\(\nu_{1}\) at 4.3 \(\mu\)m) and NH\({}_{3}\) above 5.2 \(\mu\)m (2\(\nu_{2}\) at 5.32 \(\mu\)m, \(\nu_{4}\) at 6.15 \(\mu\)m), along with narrow absorption bands of H\({}_{2}\)O (5.1-5.4 \(\mu\)m) and AsH\({}_{3}\) (4.9-5.0 \(\mu\)m). Bright emission from the 5-\(\mu\)m window implies low aerosol opacity, so cloud bands and small discrete features appear in silhouette in Fig. 1b-c against the bright background glow from Saturn's 4-6 bar region. Bright reflection near 6 \(\mu\)m provides a means of constraining upper tropospheric aerosols. Hydrocarbon emission from CH\({}_{4}\) (\(\nu_{2}\) at 6.5 \(\mu\)m) and C\({}_{2}\)H\({}_{6}\)
Figure 6: Example flat field at 8.14 \(\mu\)m after the first 5 iterations of the flat field generation routine, and after the final 50th iteration. The flat is seeded with an initial value for the first iteration, which then uses the corresponding pixel ratios (derived from the dithered observations) to propagate the flat values and fill the image. White pixels have a value of NaN, and the NaN values in the final flat field are regions of the detector that do not contain any data.
(\(\nu_{8}\) at 6.8 \(\mu\)m; \(\nu_{6}\) at 7.3 \(\mu\)m) also contribute to this 4.9-7.3 \(\mu\)m range. This range is particularly noteworthy as it has previously been observed only by ISO/SWS as a disc-average (Encrenaz, 2003), and neither Cassini/VIMS (\(R\sim 300\) at 5 \(\mu\)m) nor Cassini/CIRS (\(R\sim 2800\) at 7.0 \(\mu\)m) could observe in the 5.1-6.9 \(\mu\)m range. Thus MIRI channel-1 provides access to tropospheric NH\({}_{3}\) and H\({}_{2}\)O, along with the properties of Saturnian aerosols, in this range for the first time, with spectral resolutions of \(R\sim 3100-3750\).
Aerosol contributions diminish at longer wavelengths in channels 2 and 3 (7.3-18.0 \(\mu\)m), which are dominated by the collision-induced absorption due to H\({}_{2}\) and He, with emission and absorption features superimposed. PH\({}_{3}\) (\(\nu_{2}\) at 10.08 \(\mu\)m, \(\nu_{4}\) at 8.94 \(\mu\)m) and NH\({}_{3}\) (\(\nu_{2}\) at 10.5 \(\mu\)m) provide absorption features that dominate the 8-12 \(\mu\)m range; with strong emission features from methane (CH\({}_{4}\)\(\nu_{4}\) at 7.7 \(\mu\)m), acetylene (C\({}_{2}\)H\({}_{2}\)\(\nu_{5}\) at 13.7 \(\mu\)m) and ethane (C\({}_{2}\)H\({}_{6}\)\(\nu_{9}\) at 12.2 \(\mu\)m); and weaker emission features from CO\({}_{2}\) (\(\nu_{2}\) at 14.9 \(\mu\)m), diacetylene (C\({}_{4}\)H\({}_{2}\)\(\nu_{8}\) at 15.9 \(\mu\)m), methylacetylene (C\({}_{3}\)H\({}_{4}\)\(\nu_{9}\) at 15.8 \(\mu\)m), ethylene (C\({}_{2}\)H\({}_{4}\)\(\nu_{7}\) at 10.5 \(\mu\)m), propane (C\({}_{3}\)H\({}_{8}\)\(\nu_{26}\) at 13.4 \(\mu\)m) and benzene (C\({}_{6}\)H\({}_{6}\)\(\nu_{4}\) at 14.83 \(\mu\)m). The H\({}_{2}\) S(1) quadrupole at 17.03 \(\mu\)m and its associated dimer absorption can be seen in Channel 3-Long, but the S(0) quadrupole at 28.2 \(\mu\)m is just outside the MRS range.
The spectral database used in MRS modelling was initially based on that used for Cassini retrievals (L. N. Fletcher, Orton, et al., 2018), with updates to AsH\({}_{3}\) (Coles et al., 2019) and CH\({}_{3}\) (Adam et al., 2019) from the ExoMol database (Tennyson et al., 2016), and GeH\({}_{4}\) from HITRAN (Gordon et al., 2022). Voigt broadening was used for all bands - the sub-Lorentzian lineshape of Bailly et al. (2004) did not have an impact on the quality of the spectral fits. The line database was used to calculate \(k\)-distributions for each gas within each of the 12 MRS subbands, using the wavelength grid from stage 3 of the standard pipeline, and the wavelength-dependent resolving power of each channel determined from ground-based measurements (Labiano et al., 2021). Note that in-flight commissioning updates to the spectral resolution (Jones et al., 2023) have not been incorporated into our spectral models at this stage.
Collision-induced absorption of H\({}_{2}\) and He was included based on their dimer absorptions (L. N. Fletcher, Gustafsson, & Orton, 2018). During the course of the spectral fitting, residuals between model and data were used to identify missing bands of known species, and to search for any new species. Multiple bands of propane are observed on Saturn for the first time - only \(\nu_{26}\) at 13.4 \(\mu\)m had been included in our line database, based on GEISA (Delahaye et al., 2021), and had been previously used to study the propane distribution (Guerlet et al., 2009; L. N. Fletcher, Orton, et al., 2018). Residuals at high latitudes revealed the presence of the \(\nu_{7}\), \(\nu_{20}\), \(\nu_{21}\) and \(\nu_{8}\) emission bands at 8.63, 9.49, 10.85 and 11.51 \(\mu\)m, respectively, for the first time. These were introduced into our line database using the pseudo-linelist of Sung et al. (2013), and the improvement in spectral residuals is shown in Supplemental Fig. 7.
The GeH\({}_{4}\)\(\nu_{2}\) band at 10.74 \(\mu\)m is too weak to be seen, lost amongst numerous NH\({}_{3}\) absorption features. The AsH\({}_{3}\)\(\nu_{4}\) band at 9.97 \(\mu\)m does have a detectable signature on the edge of a PH\({}_{3}\) absorption line, but in a region of the spectrum that is affected by MRS fringing at the longward end of channel 2-medium. The small spectral feature is reproduced by an abundance of \(\sim 0.4\) ppb, with a decline from equator to pole that cannot be explained by fringing. Nevertheless, precise constraints must await more robust defringing strategies. Finally, we see no evidence of emission from the HCN \(\nu_{2}\) at 14.05 \(\mu\)m, discussed in Section 5.4.
### Spatial Structure
Fig. 7 shows selected wavelengths from the three MIRI/MRS 4.9-27.9 \(\mu\)m cubes spanning from Saturn's equator to the north pole. Given Saturn's relative longitudinal homogeneity, these can be taken as a good approximation to a zonal mean, which is calculated as described in Section 2. Fig. 8 then shows the difference between the zonally-averaged
brightness and the mean brightness temperature spectrum, highlighting strong gradients as a function of latitude. These gradients are compared to the cloud-tracked zonal winds from Cassini in both the continuum and methane bands (Fig. 8c, Garcia-Melendo et al., 2011), showing how thermal-infrared brightness is related to the peaks of the eastward and westward jets. We also compare the MIRI/MRS maps (acquired in November 2022) to visible-light reflectivity scans acquired by Hubble (HST) in September 2022 (Fig. 8b, Simon et al., 2023), to show how brightness temperature and aerosol reflectivity are related. The zonally-averaged reflectivity in ten HST WFC3/UVIS filters has been normalised for plotting purposes, to highlight the similarities in the location of strong brightness gradients.
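For reference, the brightness temperatures used throughout are obtained by inverting the Planck function at each wavelength, i.e. they are the temperature of a blackbody that would reproduce the measured spectral radiance \(B_{\lambda}\):

\[T_{B}(\lambda)=\frac{hc}{k_{B}\lambda}\left[\ln\left(1+\frac{2hc^{2}}{\lambda^{5}B_{\lambda}}\right)\right]^{-1},\]

where \(h\) is Planck's constant, \(c\) the speed of light, and \(k_{B}\) Boltzmann's constant.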
Together, the composite images of Fig. 7 and the zonal-mean brightness in Fig. 8 reveal a wealth of detail. The exquisite sensitivity of MRS, even compared to previous spectroscopic maps from Cassini, reveals Saturn's banded structure in both reflected sunlight and thermal emission. The following three features stand out in the meridional (latitudinal) direction:
* **Belt/Zone Structure:** The strongest meridional gradients in tropospheric and stratospheric brightness temperatures are co-located with the peaks of the eastward and westward jets, as measured at Saturn's cloud-tops (Garcia-Melendo et al., 2011), supporting a geostrophic balance between the winds and temperature gradients via the thermal windshear equation, and the decay of Saturn's tropospheric winds with altitude (Pirraglia et al., 1981; Conrath & Pirraglia, 1983). At mid-latitudes where Ferrel-like meridional circulation cells are expected to dominate (L. N. Fletcher, Kaspi, et al., 2020), Saturn's zones are defined as cool, anticyclonic bands equatorward of eastward jets, whereas the belts are warm, cyclonic bands poleward of eastward jets (Del Genio et al., 2009). MIRI continuum emission from the troposphere (e.g., Fig. 7e-f) reveals subtle cool zones equatorward of eastward jets at 31.5\({}^{\circ}\) (an inflection in the broad equatorial jet), 47.8\({}^{\circ}\), 61.5\({}^{\circ}\), and 78.0\({}^{\circ}\)N, in addition to the broad cool Equatorial Zone at \(<9.2^{\circ}\)N where continuum-band cloud tracking
Figure 7: Composite images created by combining all three Saturn tiles to show the equator-to-pole variation in brightness temperature at different wavelengths. 5.05 \(\mu\)m senses aerosol opacity in the deep troposphere (3-6 bars), whereas 5.207 \(\mu\)m is in a strong NH\({}_{3}\) absorption and senses a blend of thermal emission and reflected sunlight from upper tropospheric aerosols. 7.67 \(\mu\)m senses stratospheric temperatures (0.1-5 mbar) via CH\({}_{4}\) emission, whereas 10.75 \(\mu\)m senses a blend of tropospheric temperature and ammonia opacity near 400-600 mbar. 14.4 and 15.5 \(\mu\)m are primarily sensitive to the H\({}_{2}\)-He continuum, sounding tropospheric temperatures in the 100-to-300-mbar range. Note that each tile observed a different longitude range, but they are shown overlapping here for simplicity - this causes some artificial inconsistencies in brightness at regions of overlap.
reveals a maximum eastward windspeed (Garcia-Melendo et al., 2011). Stratospheric banding is more subtle, but a bright equatorial band is observed in methane emission at 7.67 \(\mu\)m and in the peak of the acetylene emission at 13.7 \(\mu\)m (corresponding to the equatorial stratospheric oscillation, Orton et al., 2008; Blake et al., 2022).
* **North Polar Stratospheric Vortex (NPSV):** The warm NPSV, defined by the strong gradient in stratospheric brightness temperature near 78\({}^{\circ}\)N, is visible throughout the MIRI/MRS dataset, particularly near 7-8 \(\mu\)m sensing stratospheric CH\({}_{4}\), and in regions of tropospheric continuum emission longward of 14 \(\mu\)m. This should be contrasted with generally low polar brightness temperatures in the 5-6.5 \(\mu\)m and 9-11 \(\mu\)m regions that probe higher pressures. The NPSV formed during northern spring (L. N. Fletcher, Orton, et al., 2018) and is expected to have reached its maximum contrast with respect to lower latitudes in 2021-22 (confirmed by ground-based
Figure 8: Variation in Saturn’s zonal average brightness temperature with wavelength and latitude. (a) shows the difference from the mean brightness temperature spectrum (d), where red areas are brighter than the average and blue areas are dimmer. (b) shows the normalised zonal average brightness of HST observations of Saturn in September 2022 (Simon et al., 2023) and (c) shows Saturn’s zonal wind profiles (García-Melendo et al., 2011). Solid and dashed horizontal lines in (a-c) indicate the peaks and troughs of the zonal wind profiles respectively. The background of (c) shows part of the colour composite map of Saturn created from the HST observations.
thermal imaging, Blake et al., 2022), and to decline in visibility in the coming years - see below for discussion of seasonal evolution since the end of the Cassini mission in 2017. Embedded within the NPSV, the central north polar cyclone discovered by Cassini (NPC, L. N. Fletcher et al., 2008) remains visible in the MIRI/MRS maps as a peak in brightness temperature right at the pole (Fig. 7c-f).
* **Aerosol Banding:** Aerosols do not simply condense where it is cold and sublimate where it is warm - a more complicated pattern emerges. Rather, the finescale cloud banding observed at 5 \(\mu\)m is a closer match to that seen in visible light (e.g., Vasavada & Showman, 2005). Deep NH\({}_{3}\) absorption features in the 5.1-5.3 \(\mu\)m range, and in the 6-\(\mu\)m region, also reveal the contribution of reflected sunlight from upper tropospheric aerosols (Fig. 7b)
- without this reflection, there would be almost no radiance from these deep bands. The equatorial region is particularly bright in the 5.5-6.5 \(\mu\)m range in Fig. 8a due to the presence of upper tropospheric aerosols observed in visible light in Fig. 8b.
The deep clouds revealed by MIRI/MRS are shown in more detail in Fig. 9, which uses only the shortest MRS channel (1A). Animations in the Supporting Material show that Saturn's appearance changes dramatically with wavelength, from inside a deep absorption band (sensing sunlight reflected from upper tropospheric aerosols) to the intervening 'continuum' (sensing deeper cloud opacity). The Equatorial Zone (0-9.2\({}^{\circ}\)N) is generally dark (i.e., high aerosol opacity), with a brighter band near 10\({}^{\circ}\)N marking the boundary with the NEB. At 5.0 \(\mu\)m, small low-contrast features are observed up to the prograde jet at 48\({}^{\circ}\)N, but these cannot be seen at 5.2 \(\mu\)m where reflected sunlight creates a more homogeneous appearance. Poleward of the 48\({}^{\circ}\)N jet, the 5-\(\mu\)m thermal emission increases substantially at the same time as the reflectivity sharply declines. We no longer see the 5-\(\mu\)m-bright, aerosol-depleted band near 35-40\({}^{\circ}\)N that had dominated the appearance of the northern hemisphere after the 2010-11 storm (Sromovsky et al., 2016), consistent with a re-population of the band by Saturn's seasonal aerosols in the decade since the storm. This fading of the 5-\(\mu\)m storm emission is consistent with the ground-based record (Bjoraker et al., 2020). Fine banding is observed up to 61.5\({}^{\circ}\)N, where the 5-\(\mu\)m emission again rises substantially as 5.2 \(\mu\)m reflection falls, with bright emission continuing to the latitude of the hexagon at 78.0\({}^{\circ}\)N. Interior to the hexagon, the polar domain in Fig. 7a-b is 5-\(\mu\)m dark, and also dark in reflected sunlight at 5.2 \(\mu\)m, consistent with its appearance in Cassini/VIMS observations in 2016 (Sromovsky et al., 2021).
Seasonally-generated aerosols are not homogeneous over the whole of Saturn's northern hemisphere, but are confined by the banded structure - the reflected sunlight component in Fig. 9b and the 5-6 \(\mu\)m range in Fig. 8a decreases in distinct steps from the equator to the pole, with notable boundaries between the EZ and NEB (9.2\({}^{\circ}\)N) and at 48.7\({}^{\circ}\)N; the thermal 5-\(\mu\)m emission component shows a notable increase between 61.5\({}^{\circ}\)N and 78\({}^{\circ}\)N, consistent with the lowest aerosol opacity there, and with the suggestion of cloud-clearing near 65\({}^{\circ}\)N in 2019-20 (Bjoraker et al., 2020). A four-fold increase in aerosol opacity of upper-tropospheric hazes interior to the hexagon between 2013 and 2016 was observed by Cassini (Sromovsky et al., 2021), possibly accounting for the dark hexagon appearance to JWST in 2022. MIRI does not observe the hexagon latitude as a bright 5-\(\mu\)m band, which had been evident in VIMS observations in 2013 and 2016, suggesting continued increases in aerosol opacity in the polar domain, spreading to lower latitudes, but not yet reaching the 60\({}^{\circ}\)-78\({}^{\circ}\)N range.
Right at the equator, equatorward of 5\({}^{\circ}\)N, both the 5-\(\mu\)m brightness and the reflected sunlight drop to create an unusually dark band. This can be seen in Fig. 8, where the dark band corresponds to subtle colour contrasts in Hubble images from September 2022, and to a peak of reflectivity in several of the individual HST filters, notably the strong CH\({}_{4}\) band at 889 nm. Such a dark band was not evident in VIMS maps in 2006-11, although this region was generally bland and diffuse at 5 \(\mu\)m, with several dark plume-like discrete features bordering it at 6\({}^{\circ}\)-7\({}^{\circ}\)N (L. N. Fletcher et al., 2011). This dark band coincides with the rapid increase in the upper tropospheric winds observed in the 890-nm CH\({}_{4}\) band in Fig. 8c (Garcia-Melendo et al., 2011). We will return to this unique region, and the challenging spectral fits, in Section 5.
#### 3.2.1 Discrete Vortices and Hexagon
Although Saturn remains longitudinally homogeneous at most latitudes, we subtracted zonal averages from the MRS images to search for evidence of discrete features. Such longitudinal contrasts were definitively observed only in the shortest MRS channels in Fig. 9, where significant structure is seen. In particular, we see a bright cloud-free region near 48\({}^{\circ}\)N, 261\({}^{\circ}\)W; at least two dark anticyclonic vortices near 48\({}^{\circ}\)N, 230\({}^{\circ}\)W and 30\({}^{\circ}\)N, 299\({}^{\circ}\)W;
Figure 9: Mapped spatial structure in channel 1-short for low- and mid-latitudes. (d) and (h) show the zonal wind velocities (García-Melendo et al., 2011) and (i) shows an example brightness temperature spectrum. While all panels sense a blend of thermal emission and reflected sunlight, panels (a) and (e) primarily reveal deep thermal emission modulated by overlying aerosol opacity, whereas panels (b)-(c) and (f)-(g) primarily reveal reflectivity variations from aerosols in the upper troposphere. Solid and dashed horizontal lines indicate the peaks and troughs of the zonal wind profiles, respectively. The backgrounds of (a-c & e-g) show the colour composite map of Saturn created from the HST observations.
a patch of high 5-\(\mu\)m brightness at the edge of the equatorial zone near 10\({}^{\circ}\)N, 292\({}^{\circ}\)W; and spatial structure in the bright band surrounding the dark polar domain near 62\({}^{\circ}\)N, 215\({}^{\circ}\)W. In an effort to determine the history and longevity of these features, we compared the MIRI maps (13-14 November 2022) to HST observations (21 September 2022, Simon et al., 2023) and observations by amateur astronomers (using the PVOL database throughout November 2022, Hueso et al., 2018) to search for the presence of these discrete features in multiple datasets. Comparisons of the JWST and HST data are shown in the Supplemental Material.
Unfortunately, there were no conclusive detections of any of the features observed by MIRI in these supporting visible-light datasets. Given the \(\sim\) 7-week time gap between HST and JWST observations this is perhaps unsurprising, despite attempts to account for zonal drifts during this interval in the Supplemental Material. In particular, HST observations in September revealed the continued presence of the anticyclonic vortex (AV) that was generated by the 2010-11 storm (Sayanagi et al., 2013) near 42\({}^{\circ}\)N, 190\({}^{\circ}\)W, and the presence of a pair of vortices near 63\({}^{\circ}\)N, 335\({}^{\circ}\)W that could be related to the coupled vortex system on the 'double jet' near 62\({}^{\circ}\)-67\({}^{\circ}\)N studied by del Rio-Gaztelurrutia et al. (2018). Whilst the shared latitudes are compelling, the limited longitudinal coverage of the MRS maps, combined with the 7-week time gap since Hubble, makes it unlikely that these are the same features. Indeed, the long-lived AV at 42\({}^{\circ}\)N was expected to be near 176\({}^{\circ}\)W on 2022-Nov-13, and was therefore missed by MIRI (Fig. 2). This strongly argues for contemporaneous MIRI/MRS spectroscopy and NIRCam (or HST) imaging in future observing programmes.
Finally, when the MIRI observations had originally been designed (assuming a 2018 launch), we had hoped to detect the vertices of Saturn's polar hexagon during northern summer, both in tropospheric and stratospheric thermal emission (L. N. Fletcher, Orton, et al., 2018). Unfortunately, the decreasing sub-observer latitude in 2022 reduced the chances of success, and no convincing evidence of hexagon vertices can be observed, despite considerable work to clean up and combine the individual channel-1 dithers. We will likely need to wait until the 2040s for our next infrared views of the hexagon itself.
### Seasonal Change Since Cassini
Before modelling the MRS spectra, we compare the calibrated JWST data to other observations in the mid-infrared. Although ground-based studies have been able to monitor morphological changes to Saturn's mid-IR emission since 2017 (Blake et al., 2022), these have typically been calibrated to match a low-latitude average of Cassini/CIRS radiances due to the difficulties arising from variable telluric contamination, making genuine assessments of global-scale temperature changes rather challenging. Fig. 10 compares the MIRI 7-17 \(\mu\)m brightness temperatures with those from Cassini/CIRS in 2017. Three CIRS northern hemisphere maps (2017-Jan-19, 2017-Apr-17 and 2017-Aug-26), acquired from a near-equatorial sub-spacecraft latitude at a spectral resolution of 15 cm\({}^{-1}\), were combined and zonally-averaged onto a 1\({}^{\circ}\) latitude grid. The MIRI/MRS zonal averages were convolved with the CIRS instrument lineshape to achieve the same low spectral resolution, and the differences are shown in Fig. 10. Note that the viewing geometry for each latitude was approximately the same in 2022 and 2017, such that limb darkening/brightening should be negligible.
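The spectral degradation step can be sketched as follows (a minimal Python sketch; a Gaussian kernel is used here as a simple stand-in for the true CIRS instrument lineshape, and the variable names are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Sketch of degrading an MRS spectrum to the CIRS spectral resolution.
def degrade_to_cirs(wavenumber, radiance, fwhm=15.0):
    dk = np.mean(np.diff(wavenumber))            # assumes a uniform cm^-1 grid
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))  # FWHM -> Gaussian sigma
    return gaussian_filter1d(radiance, sigma / dk)
```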
The 7-8 \(\mu\)m CH\({}_{4}\) emission in Fig. 10(c) shows the change in Saturn's equatorial stratospheric oscillation (Orton et al., 2008; Fouchet et al., 2008), which reveals a brighter equatorial band in 2022 compared to 2017. This warm band is also visible at 7.65 \(\mu\)m in Fig. 7. This is consistent with the ground-based record (Blake et al., 2022), which showed continued warming at the equator since the end of the Cassini mission. It is not, however, consistent with expectations from one Saturnian year earlier, when the equatorial band was in its cool phase during the same season in 1993-1995. Thus the semi-annual nature of Saturn's equatorial oscillation remains in doubt (Sinclair et al., 2013; Blake et al., 2022).
The stratosphere from 10\({}^{\circ}\) to 35\({}^{\circ}\)N appears to be cooler in 2022 than in 2017, consistent with the idea of upwelling and adiabatic cooling in the summer hemisphere as part of
Figure 10: Comparison of Cassini/CIRS observations in 2017 to MIRI/MRS observations in 2022. MRS spectra were convolved to the same spectral resolution as CIRS (15 cm\({}^{-1}\)), and three CIRS northern hemisphere maps were zonally averaged onto a latitudinal grid. Brightness temperatures as a function of wavelength are shown in (a)-(c), showing stratospheric CH\({}_{4}\) emission on the left, ethane emission near 12 \(\mu\)m, and continuum H\({}_{2}\)-He absorption on the right. Panel (d) shows the emission angles for MIRI (black) and CIRS (red); these are approximately the same at each latitude, so differences are not due to limb brightening/darkening. CIRS measurements between \(\sim\) 9-10 \(\mu\)m are disregarded (white rectangle) as they had low signal-to-noise.
an interhemispheric circulation from summer to winter, reminiscent of the Earth's Brewer-Dobson circulation (Friedson & Moses, 2012; Bardet et al., 2022). During northern winter (\(L_{s}=310^{\circ}\)), enhancements of stratospheric hydrocarbons detected by Cassini near \(25^{\circ}\)N (Guerlet et al., 2010) were consistent with stratospheric subsidence as part of a meridional circulation from summer to winter (Friedson & Moses, 2012). The cooler brightness temperatures measured by MIRI suggest that this circulation has now reversed by \(L_{s}=150^{\circ}\), having switched direction near equinox in 2009 (Bardet et al., 2022). In the following sections, we will attempt to verify this via measurements of trace hydrocarbon species to determine vertical motions in the northern low-latitude stratosphere.
Poleward of \(60^{\circ}\)N, the MIRI/MRS observations reveal warmer temperatures than the Cassini/CIRS observations, consistent with the continued warming of the NPSV in Fig. 7 as northern summer progressed. This is also true in the northern troposphere, sampled by H\({}_{2}\)-He emission longward of 14 \(\mu\)m, which has warmed since 2017. Radiative models (Guerlet et al., 2014), combined with the viewing geometry from Earth (Blake et al., 2022), predict the visibility of the NPSV will drop considerably in the next 1-2 years as autumn approaches. Cooler MIRI north-polar temperatures at \(\sim 10\), 12.2, and 13.7 \(\mu\)m could be a consequence of differences in spatial and spectral resolution (particularly the spectral convolution in the Q-branches of ethane and acetylene), rather than reflecting a real change between 2017 and 2022.
As a further assessment of seasonal change, we compare zonal-mean scans of ground-based VLT/VISIR observations of Saturn at 7.9, 12.3 and 17.6 \(\mu\)m from 2016 to 2022 (Blake et al., 2022) to the results from CIRS and MIRI (see Supplemental Fig. 25). All three techniques capture the contrasts associated with Saturn's bands, with the 17.6-\(\mu\)m observations confirming the tropospheric warming during northern summer; and 12.3 \(\mu\)m showing the warm equatorial band and increased brightness of the NPSV. However, at 7.9 \(\mu\)m, Supplemental Fig. 25c shows the problems associated with scaling ground-based images to a low-latitude CIRS average, as it missed the stratospheric cooling between \(10^{\circ}\) and \(50^{\circ}\)N, and therefore overestimated the brightness of the NPSV. So whilst the NPSV has warmed since 2017 (by 3-4 K in brightness temperature at 7.9 \(\mu\)m), the magnitude is smaller than that presented in ground-based studies (Blake et al., 2022). In summary, MIRI/MRS observations in 2022 reveal changes to Saturn's equatorial oscillation, unexpected stratospheric cooling equatorward of \(40^{\circ}\)N, tropospheric warming at most latitudes, and the continued warming of the NPSV.
## 4 Spectral Modelling
Although inspection of the cleaned MIRI/MRS cubes can demonstrate the spatial contrasts in Saturn's mid-IR emission associated with cloud banding and discrete storms, further progress can be made by inverting the MIRI spectra to determine Saturn's temperatures, zonal winds, aerosols, and distributions of gaseous species.
### MRS Spectral Fitting
Zonally-averaged MRS spectra (Section 2.5) were fitted using the NEMESIS optimal estimation retrieval algorithm (Irwin et al., 2008), which has been previously applied to Cassini/VIMS and Cassini/CIRS spectra of Saturn. Sources of spectral line data and the generation of \(k\)-distributions were described in Section 3.1. Spectral uncertainties reported by the MRS pipeline (the ERR backplane) were averaged for the pixels used for each latitude, but were found to be unrealistically small - we retain the spectral shape of the uncertainty envelope, but increase the error by factors of 10 to 40 (depending on MIRI channel) to enable spectral fits with a goodness-of-fit of approximately one - this is the equivalent of adding forward-modelling uncertainty during the retrieval process (Irwin et al., 2008).
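One way to derive such an inflation factor is sketched below (illustrative only, not the NEMESIS implementation):

```python
import numpy as np

# Scale the pipeline ERR values so that a trial fit has a reduced
# chi-square of ~1, preserving the spectral shape of the uncertainty
# envelope (equivalent to adding forward-modelling uncertainty).
def inflate_errors(observed, model, err):
    chi2_red = np.mean(((observed - model) / err) ** 2)
    return err * np.sqrt(chi2_red)
```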
Given the broad spectral coverage, and the high spectral resolution, we adopted a multi-stage approach, first fitting undersampled broadband spectra to constrain atmospheric temperatures and aerosols, before fitting narrowband MRS spectra at their native sampling to study specific gaseous species. All MRS subbands were fitted simultaneously, but each used the correct geometry (emission, incidence, and azimuthal angles) to calculate the atmospheric path, as small differences arise from different pointings between tiles and dithers. In addition, we divide the data at 7.3 \(\mu\)m, with longer wavelengths considering only thermal emission and no scattering, but shorter wavelengths considering multiple scattering of both reflected and thermal photons, as has been typical of Cassini studies (see Supplemental Figure 24, which shows that aerosol-free models and cloudy, scattering models converge in the 6.6-7.3 \(\mu\)m range, although this is somewhat dependent on the choice of refractive indices described below). The sequential stages were as follows:
1. **Global Fit:** Fitting the full 7.3-16.3 \(\mu\)m region, sampling every 4th point in the spectrum to accelerate the retrieval process, to estimate the \(T(p)\) profile as a function of latitude, along with initial assessments of gaseous variability. Temperatures, continuous profiles of ethane and acetylene, parameterised profiles of NH\({}_{3}\) and PH\({}_{3}\), and scaled abundances of C\({}_{2}\)H\({}_{4}\), C\({}_{4}\)H\({}_{2}\), C\({}_{3}\)H\({}_{4}\), C\({}_{3}\)H\({}_{8}\), C\({}_{6}\)H\({}_{6}\), CH\({}_{3}\), and CO\({}_{2}\) were retrieved during step 1 (see below for discussion of priors). The gaseous abundances would be refined later, but this provided a first zonal-mean \(T(p)\) structure. Tropospheric temperatures and gas abundances were determined by simultaneously fitting the H\({}_{2}\)-He continuum observed beyond 15 \(\mu\)m and the absorption of NH\({}_{3}\) and PH\({}_{3}\) between 8-12 \(\mu\)m. Stratospheric temperatures are largely controlled by CH\({}_{4}\) emission at 7.8 \(\mu\)m, but also by the simultaneous fitting of temperature and composition from the ethane and acetylene emission. We omit wavelengths beyond 16.3 \(\mu\)m due to challenging fringing/ripples that dominate the long-wavelength spectrum, and the lack of an in-flight MRS calibration for channel 4 at the time of writing. We omit 11.9-12.3 \(\mu\)m due to a known artefact in this MRS subband, whereby light leaks through the MRS dichroic filter from 6.1 \(\mu\)m to 12.2 \(\mu\)m to create a source-dependent artefact in the data. This artefact was identified due to difficulties in fitting Saturn's C\({}_{2}\)H\({}_{6}\) emission band at 12.2 \(\mu\)m simultaneously with CH\({}_{4}\) at 7.8 \(\mu\)m.
2. **Refined Temperatures:** The initial \(T(p)\) from step 1 was then used as a prior for (i) a refined estimate of the stratospheric temperatures, using data from channels 1C and 2A (7.3-8.4 \(\mu\)m) at their full spectral resolution; and (ii) a refined estimate of tropospheric temperatures, ammonia, and phosphine from 8.0-11.5 \(\mu\)m (channels 2A to 2C). Examples of the quality of the spectral fits for latitudes at 20\({}^{\circ}\)N (representative of low latitudes) and 80\({}^{\circ}\)N (representative of the bright polar emission) are shown in Fig. 11a and Fig. 11b.
3. **Aerosol Fitting:** Temperatures were then fixed for aerosol and gaseous retrievals from the 4.9-7.3 \(\mu\)m range (channels 1A to 1C, sampling every 3rd spectral point). Unlike the longer wavelengths, this region requires multiple scattering of reflected and thermal light to fit, and full details of the aerosol model are provided below, with example spectral fits shown in Fig. 11c. The resulting aerosol profiles were then incorporated back into the initial temperature inversions to check for any changes in the \(T(p)\) structure, using only their absorption cross-sections (i.e., without scattering). Changes to the resulting \(T(p)\) were negligible, but would be dependent on the refractive indices of the aerosols, which are not uniquely constrained (see Section 4.3).
4. **Refined Composition:** Finally, we adopted the \(T(p)\) and aerosol cross-sections, alongside the derived NH\({}_{3}\) and PH\({}_{3}\) distributions, as priors for focused retrievals of specific spectral features such as the hydrocarbons, HCN, CO\({}_{2}\) and H\({}_{2}\)O. Examples for a range of gaseous species are shown in Fig. 12, with key features labelled.
Figure 11: Examples of the quality of spectral fits (solid lines) to MIRI/MRS data (points with error bars) at different stages within our multi-stage retrieval. Panel (a) is dominated by stratospheric CH\({}_{4}\) emission, used to determine the vertical temperature structure; panel (b) sounds tropospheric species like PH\({}_{3}\), NH\({}_{3}\) and CH\({}_{3}\)D; panel (c) sounds reflection and absorption by tropospheric aerosols, in addition to PH\({}_{3}\), NH\({}_{3}\), H\({}_{2}\)O, and emission from stratospheric CH\({}_{4}\) and C\({}_{2}\)H\({}_{2}\). Spectra at 80\({}^{\circ}\)N have been offset from the 20\({}^{\circ}\)N spectra by 20 K for clarity. Uncertainties are a scaled version of those reported by the JWST pipeline, as described in Section 4.
### Fitting Temperatures and Gases
Saturn's prior \(T(p)\) at \(p>50\) mbar, as well as PH\({}_{3}\) and NH\({}_{3}\) profiles, are based on a low-latitude (\(\pm 30^{\circ}\) latitude) average from Cassini/CIRS nadir observations (L. N. Fletcher, Orton, Teanby, & Irwin, 2009). The \(T(p)\) is extended deeper than 0.8 bar using a dry adiabatic lapse rate (L. N. Fletcher et al., 2011), and higher (\(p<50\) mbar) using a global average of Cassini/CIRS limb observations (Guerlet et al., 2009), resulting in a prior defined from 1 \(\mu\)bar to 10 bars. We adopt the He/H\({}_{2}\) ratio from Voyager (Conrath & Gautier, 2000), CH\({}_{4}\) and its isotopologues from Cassini/CIRS (L. N. Fletcher, Orton, Teanby, Irwin, & Bjoraker, 2009); C\({}_{2}\)H\({}_{2}\), C\({}_{2}\)H\({}_{6}\) and C\({}_{3}\)H\({}_{8}\) from an average of CIRS limb measurements (Guerlet et al., 2009); all other hydrocarbons (C\({}_{2}\)H\({}_{4}\), C\({}_{4}\)H\({}_{2}\), C\({}_{3}\)H\({}_{4}\), C\({}_{6}\)H\({}_{6}\), CH\({}_{3}\)), and CO\({}_{2}\) come from averages of the seasonal photochemical model of Moses and Greathouse (2005), updated to a finer latitude grid with zero meridional mixing (\(K_{yy}=0\) m\({}^{2}\)/s, Moses et al., 2007). Prior deep abundances for CO (1.0 ppb, Noll & Larson, 1990), GeH\({}_{4}\) (0.4 ppb, Noll & Larson, 1990), H\({}_{2}\)O (0.176 ppb, de Graauw et al., 1997) and AsH\({}_{3}\) (2.2 ppb, L. N. Fletcher et al., 2011) come from previous investigations at 5 \(\mu\)m. HCN uses the stratospheric upper limit from Herschel (22 ppb at \(p<1\) mbar, L. N. Fletcher et al., 2012).
With the exception of CO, AsH\({}_{3}\), and GeH\({}_{4}\), all species were allowed to vary from their priors during the MIRI/MRS retrievals. Depending on the size and strength of their spectral contributions in Fig. 13, gases were either (i) retrieved as full, continuous profiles with height (C\({}_{2}\)H\({}_{2}\), C\({}_{2}\)H\({}_{6}\)); (ii) parameterised in terms of a deep mole fraction, transition pressure, fractional scale height (compared to the atmospheric scale height) up to the tropopause (NH\({}_{3}\), H\({}_{2}\)O and PH\({}_{3}\)); or (iii) simply scaled versions of the prior profiles (i.e., which implicitly assumes that the vertical profile is an accurate representation of Saturn's
Figure 12: Example fits (black line) to observations (points with error bars) in specific regions of Saturn’s spectrum, including those regions sensing stratospheric hydrocarbons (ethane in (a) with the central region omitted as described in the text; acetylene and propane in (b); ethylene in (c); CO\({}_{2}\) and benzene in (d); methylacetylene, diacetylene, and methyl in (e)) and tropospheric absorptions from ammonia, phosphine and (tentatively) water in (c) and (f). Spectra for 80\({}^{\circ}\)N have been offset from 20\({}^{\circ}\)N for clarity by 10 K in (a) and (c); 20 K in (b) and (f) - no offsets were used in (d) and (e).
atmosphere). Contribution functions for these spectral ranges are shown in Fig. 13, indicating the approximate pressure levels to which the different MRS channels are sensitive.
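As an illustration of scheme (ii), a deep mole fraction \(q_{0}\), a transition pressure \(p_{t}\), and a fractional scale height \(f\) define a simple power-law profile if the gas partial pressure is assumed to fall off with \(f\) times the atmospheric scale height (a sketch only; the retrieval code's exact parameterisation may differ):

```python
import numpy as np

# Three-parameter vertical profile: constant deep mole fraction q0 below
# the transition pressure p_t, then a power-law fall-off whose exponent
# follows from a gas scale height of f times the atmospheric scale height.
# A full implementation would also cap the profile at the tropopause.
def parameterised_vmr(p, q0, p_t, f):
    return np.where(p > p_t, q0, q0 * (p / p_t) ** (1.0 / f - 1.0))

# Illustrative values only: a 100-ppm deep abundance with a transition
# at 1 bar and f = 0.15:
# q = parameterised_vmr(p_grid_bar, 100e-6, 1.0, 0.15)
```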
### Fitting Aerosols
Saturn's aerosol distribution is best constrained via remote sensing at visible and near-infrared wavelengths, but the opacity, location, and wavelength-dependent scattering/absorption properties of aerosols contribute significantly in the MIRI/MRS range, particularly at wavelengths below 7.3 \(\mu\)m. Studies in this range remain somewhat limited, and are dominated by investigations of aerosol changes in discrete regions such as the polar domain (Sromovsky et al., 2021; Sanchez-Lavega et al., 2020) and the northern storm band (Sromovsky et al., 2013, 2016). Attempts to study the latitudinal distribution of aerosols
Figure 13: Cloud-free contribution functions (Jacobians showing the rate of change of spectral radiance with temperature) calculated for nadir viewing and a typical mid-latitude composition on Saturn. These contribution functions have been normalised at each wavelength, and darker shading indicates the greatest contribution. No aerosols are included in this model. Key gaseous features have been labelled; note that minor stratospheric species become apparent at higher emission angles (not shown). The y-axis changes in pressure range to emphasise dominant features.
have used Hubble (Stam et al., 2001; Perez-Hoyos et al., 2016), Cassini/ISS (Roman et al., 2013), and Cassini/VIMS (L. N. Fletcher et al., 2011), with the latter using only nightside 5-\(\mu\)m spectra to avoid the complications of reflected sunlight, whereas JWST/MIRI spectra are a blend of thermal emission and reflected sunlight. To our knowledge, there have been no aerosol retrieval studies that utilise the 5.1-6.8 \(\mu\)m domain inaccessible to Cassini.
Given the degeneracies inherent in fitting reflected sunlight spectra, as evidenced by the range of different results in the literature (L. N. Fletcher, Sromovsky, et al., 2020), we initially adopted an Occam's-razor approach to fitting the 4.9-7.3 \(\mu\)m range. Fig. 13 shows that the 5-\(\mu\)m region senses high pressures (3-7 bars) in the absence of aerosols, so we initially considered a single aerosol population near \(\sim 1.5\) bars (i.e., above the primary contribution functions near 5.0-5.2 \(\mu\)m), rather than multiple different compact cloud decks, and allowed the base pressure, top pressure, total opacity and vertical extension to vary freely during the fitting process. We also avoided imposing any particular spectral shape on the aerosol cross-section, single-scattering albedo, and phase functions (modelled via two-term Henyey-Greenstein functions fitting the results of Mie scattering calculations). Values for particle radius (from a standard gamma distribution with a 5% variance) and the spectrally uniform real and imaginary refractive indices were chosen after tests at several latitudes where these values were allowed to vary freely. The radius had a small effect, with \(r=1.0\pm 0.05\)\(\mu\)m selected as a best fit. The spectral fits were largely insensitive to the real refractive index, with values representative of NH\({}_{3}\) ice \(n_{r}\sim 1.4\) (Martonchik et al., 1984) and NH\({}_{4}\)SH solid \(n_{r}\sim 2.3\) (Howett et al., 2007) fitting equally well, so a mean \(n_{r}=1.8\) was selected. The most important parameter was the imaginary refractive index \(n_{i}\), which varies considerably in the infrared depending on the assumed composition of the aerosols. Fits were significantly improved with smaller values of \(n_{i}\) in the 5-\(\mu\)m region than would be typically expected for 'pure' NH\({}_{3}\) and NH\({}_{4}\)SH aerosols - the final selected value was \(n_{i}=1\times 10^{-3}\) over the whole range, resulting in weakly absorbing particles (single scattering albedos \(>0.95\) in this spectral range).
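For reference, the two-term Henyey-Greenstein form used to approximate the Mie results can be sketched as follows (the asymmetry parameters and weighting below are illustrative, not the fitted values):

```python
import numpy as np

# Single-lobe Henyey-Greenstein phase function, normalised so that its
# integral over all solid angles is unity.
def hg(g, cos_theta):
    return (1 - g**2) / (4 * np.pi * (1 + g**2 - 2 * g * cos_theta) ** 1.5)

# Two-term form: a forward-scattering lobe (g1 > 0) blended with a
# back-scattering lobe (g2 < 0) via the weight f.
def two_term_hg(cos_theta, f=0.9, g1=0.7, g2=-0.3):
    return f * hg(g1, cos_theta) + (1 - f) * hg(g2, cos_theta)
```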
We attempted to fit the 4.9-7.3 \(\mu\)m range (step 3, above) using this single cloud layer and the choices of optical properties described above. Retrievals with multiple scattering of reflected and thermal photons are numerically intensive, requiring numerical evaluation of Jacobians at each step of the inversion, so MIRI/MRS spectra from channels 1A, 1B and 1C were undersampled by a factor of 4 to ensure a good fit across the whole 4.9-7.3 \(\mu\)m range. The latitudinally-resolved \(T(p)\) derived from steps 1 and 2 of our retrieval scheme was required to correctly reproduce emission from the CH\({}_{4}\)\(\nu_{2}\) band at 6.5 \(\mu\)m and the C\({}_{2}\)H\({}_{6}\) emission band at 6.8 \(\mu\)m, seen in Fig. 11c, particularly at high latitudes where emission from the warm NPSV dominates. We varied the vertical location and extent of the cloud simultaneously with parametric profiles of NH\({}_{3}\), PH\({}_{3}\) and H\({}_{2}\)O.
This single-cloud model was remarkably successful in fitting most of the 4.9-7.3 \(\mu\)m range, allowing us to then search for discrepancies which might hint at more complicated cloud structure, such as the multi-layer clouds of Sromovsky et al. (2021), or the potential wavelength-dependent absorptions of photochemical hazes such as those of Guerlet et al. (2015). Poor fits in the dark equatorial band and polar latitudes led us to experiment with a second cloud layer in the upper troposphere sitting at higher altitudes than the original layer, providing further degrees of freedom but informed by the observation of hazes in Saturn's upper troposphere (Roman et al., 2013). A real refractive index of \(n_{r}=1.74\) was selected, similar to the value for diphosphine (P\({}_{2}\)H\({}_{4}\)) at 195 K that had been adopted in Cassini/VIMS studies of upper-tropospheric hazes (Sromovsky et al., 2021), based on the expected photochemical production from PH\({}_{3}\). The resulting fits were weakly sensitive to the aerosol radius (\(1.0\pm 0.05\)\(\mu\)m was selected) and imaginary refractive index (\(n_{i}=5\times 10^{-3}\) was selected). This double-cloud scheme can be thought of as representing the upper tropospheric haze and the top-most condensate cloud, with their base pressures, opacities, and vertical extensions all freely varying during the 4.9-7.3 \(\mu\)m fitting, and producing the high-quality fits shown in Fig. 11(c). The results will be discussed in Section 5.
Finally, although Cassini/CIRS observed aromatic and aliphatic hydrocarbon aerosols in the polar stratosphere (\(p<8\) mbar) in limb observations (Guerlet et al., 2015), there is limited need to include them in the nadir MIRI/MRS spectral fitting. The CIRS results suggested a peak in opacity near \(6.9\pm 0.3\)\(\mu\)m, and there is a subtle but compelling residual in the spectral fits in the same location (see Supplemental Fig. 22), which will be the topic of future investigations.
## 5 Saturn's Temperatures, Aerosols and Composition in 2022
The results of the multi-stage retrievals of zonally-averaged temperatures, aerosols, and gaseous species are described in the following subsections.
### Temperatures and Winds
Saturn's zonal-mean temperatures during northern summer are shown in Fig. 14, as derived from step two (i.e., refined spectral fitting at full spectral resolution after a 'global fit' to the 7.3-16.3 \(\mu\)m spectrum). The temperature inversion confirms many of the conclusions available from the brightness temperature maps alone. The troposphere is characterised by a cool EZ; temperature gradients between mid-latitude belts and zones that correlate with the peaks of the cloud-tracked zonal winds; and a warm polar domain. The cool EZ is coincident with the highest aerosol opacity (Section 5.2), and it is therefore possible that our aerosol-free assumption for wavelengths beyond 7.3 \(\mu\)m is inadequate, despite tests in Section 4 suggesting that the derived aerosols had minimal absorption at these wavelengths. Further refinement of the aerosol refractive indices, incorporating wavelengths longer than 7.3 \(\mu\)m, would be needed to fully resolve this potential degeneracy between aerosols and temperatures.
The stratosphere exhibits warming within the polar domain, reaching maximum temperatures of \(154\pm 1\) K within the NPSV (poleward of \(78^{\circ}\)N) and \(160\pm 2\) K within the NPC (poleward of \(87^{\circ}\)N). These peak temperatures are only slightly cooler than those observed within the southern stratosphere in 2004-05 - the SPSV and SPC - by Cassini/CIRS (L. N. Fletcher, Orton, et al., 2018), a seasonal asymmetry possibly due to Saturn's orbital eccentricity. Radiative models demonstrate that the bulk of this warming is driven by seasonal radiative heating (Guerlet et al., 2014; Hue et al., 2016; Blake et al., 2022), but the sharp boundaries are related to dynamics (e.g., stratospheric winds encircling the NPSV and NPC). Temperature inversions indicate a moderate stratospheric cooling at pressures below 300 \(\mu\)bar, with temperatures approaching a \(140\pm 3\) K quasi-isothermal structure up to the base of the thermosphere. While retrieved low-pressure temperatures are somewhat influenced by the choice of prior, this isothermal structure is consistent with radiative modelling (e.g., Guerlet et al., 2014).
The most prominent feature of the MIRI temperature field is the vertical structure of Saturn's Equatorial Stratospheric Oscillation (we refer to this as the SESO, as the semi-annual nature of the oscillation is questionable). During the November-2022 phase, a prominent warm anomaly (\(153\pm 0.8\) K) is observed between the equator and \(\sim\)\(10^{\circ}\)N centred near 0.7 mbar. This is some 12-14 K warmer than temperatures at 0.1 mbar, and is responsible for the warm band of bright stratospheric emission observed in Fig. 7 and Fig. 8. This is accompanied by off-equatorial temperature maxima near \(13^{\circ}\)N, a strong warm anomaly near 0.05 mbar, and a weaker cool anomaly near 1 mbar. Similar vertical patterns were observed \(\sim 16-17\) years earlier by Cassini/CIRS limb spectroscopy (Fouchet et al., 2008) in 2005-06, and are revealed here due to the high spectral resolution of MIRI/MRS.
Latitudinal temperature gradients \(dT/dy\) (where \(y\) is the north-south distance in km) are converted to vertical windshears \(du/dz\) (where \(z\) is altitude) via the thermal wind equation (e.g. Holton, 2004), omitting the equatorial region where the Coriolis parameter tends to zero. These windshears are shown in Fig. 14b, and reveal intriguing shear structure equatorward
of 30\({}^{\circ}\)N (broadly the domain occupied by Saturn's equatorial jet). The positive equatorial shear zone near 1 mbar in 2022 is likely to be the one that was seen near 0.1-0.3 mbar in 2017 by Cassini (L. N. Fletcher et al., 2017), having descended by around a decade in pressure over the intervening five years. Positive and negative shear zones associated with the SESO and its off-equatorial structures appear to move diagonally to higher pressures with decreasing latitude. For example, the negative shear zone in the upper troposphere appears continuously connected to the negative shear zone at 1 mbar and 30\({}^{\circ}\)N. The same connection is seen for the negative shear zone at 10 mbar (5\({}^{\circ}\)N) and 0.05 mbar (30\({}^{\circ}\)N). Such a system of stacked shear zones connecting the equator and off-equatorial features was nicely captured by the model of Bardet et al. (2022).
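For reference, the thermal wind relation used to construct Fig. 14b takes the standard form (e.g. Holton, 2004)

\[\frac{du}{dz}=-\frac{g}{f_{c}T}\frac{dT}{dy},\]

where \(g\) is the gravitational acceleration and \(f_{c}=2\Omega\sin\phi\) is the Coriolis parameter; because \(f_{c}\to 0\) at the equator, the equatorial region is omitted from the windshear calculation, as noted above.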
Windshears derived from nadir infrared data provide a good picture of the shear zones as a function of altitude and latitude, but using them to integrate winds with altitude and across regions of low information content (e.g., the tropopause and lower stratosphere) can generate significant uncertainties (e.g., Fouchet et al., 2008; L. N. Fletcher et al., 2016). Nevertheless, we display the estimated thermal winds in Fig. 14c, assuming that the continuum-band winds measured by Garcia-Melendo et al. (2011) are placed at 500 mbar. Treating these thermal winds with appropriate caution, we infer a localised equatorial westward jet (exceeding -200 m/s) near 1-5 mbar and equatorward of 10\({}^{\circ}\)N, embedded within a larger region of eastward flow that spans the tropics equatorward of 30\({}^{\circ}\)N. The westward jet is below an eastward equatorial jet (\(\sim\) 200 m/s) near 0.1-0.5 mbar, with the peak-to-peak contrast of \(\sim\) 400 m/s between eastward and westward maxima being comparable to that derived from Cassini limb observations (Fouchet et al., 2008). Direct observations of Saturn's stratospheric winds by ALMA (Benmahi et al., 2022) four years earlier (2018) revealed a \(\sim\) 300 m/s eastward jet between 20\({}^{\circ}\)S and 20\({}^{\circ}\)N but with a coarse vertical resolution, covering 0.01-20 mbar. Although this is qualitatively consistent with the broad eastward flow inferred from MIRI in Fig. 14c, the coarse resolution of the ALMA data may average over any oscillatory wind patterns over a decade of pressure. Alternatively, differences between ALMA and MIRI might simply reflect the downward propagation of these stacked zonal jets over four years, and future joint campaigns between ALMA and JWST would be welcome to confirm this, alongside ground-based spectroscopic monitoring of the equatorial CH\({}_{4}\) emission at high spatial and spectral resolution.
Beyond the equator, the correlation between \(dT/dy\) and the cloud-tracked zonal winds causes the decay of the cloud-top winds with height (e.g., the westward jet near 42\({}^{\circ}\)N becomes eastward at \(p<\) 80 mbar), as previously observed by Cassini (Read et al., 2009). Finally, the strong \(dT/dy\) at the edge of the NPSV implies negative windshear and the inference of westward flow around the edge of the vortex for \(p<\) 10 mbar (L. N. Fletcher, Orton, et al., 2018), at a latitude consistent with the westward winds directly observed by ALMA near 74\({}^{\circ}\)N planetographic (Benmahi et al., 2022). This westward wind is zonally symmetric and in balance with the seasonal temperature gradients from radiative heating within the NPSV.
### Aerosols
The zonal-mean distribution of Saturn's aerosols is shown in Fig. 15(a), derived from fits to the 4.9-7.3 \(\mu\)m region using the double-cloud scheme and multiple scattering of reflected and thermal photons as outlined in Section 4.3. Although significant degeneracy exists in the choices of aerosol refractive indices and size distributions, the need for two clouds was evident from the difficulty fitting equatorial latitudes (where clouds are most reflective). The base pressures, vertical extensions, top-most pressures, and total opacity were then varied as free parameters for the two aerosol populations, resulting in the stacked cloud decks observed here.
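To illustrate this parameterization, the sketch below builds a two-layer vertical opacity profile from the free parameters just described (base pressure, top-most pressure, a fractional scale height controlling the vertical extension, and total column opacity). The functional form and the numerical values are assumptions for illustration only, loosely guided by the retrieved layers discussed below; they are not taken from the retrieval code.

```python
import numpy as np

def cloud_profile(p, p_base, p_top, fsh, total_opacity):
    """Opacity per unit ln(p) on a pressure grid p [bars] (decreasing upward):
    zero outside [p_top, p_base], decaying above the base with fractional
    scale height fsh, then scaled so the column integral is total_opacity."""
    shape = np.where((p <= p_base) & (p >= p_top),
                     (p / p_base) ** (1.0 / fsh), 0.0)
    norm = np.trapz(shape[::-1], np.log(p[::-1]))  # column integral over ln(p)
    return total_opacity * shape / norm

p = np.logspace(1, -3, 200)  # 10 bars up to 1 mbar
upper_haze = cloud_profile(p, p_base=0.25, p_top=0.06, fsh=0.3, total_opacity=2.0)
deep_cloud = cloud_profile(p, p_base=1.5, p_top=0.4, fsh=0.1, total_opacity=5.0)
total = upper_haze + deep_cloud  # stacked cloud decks, cf. Fig. 15(a)
```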
Figure 14: Temperatures, windshear, and thermal winds as a function of latitude derived from MIRI/MRS spectroscopy. The peaks of cloud-tracked eastward winds are shown as vertical dashed lines for context. In panels b and c, regions of negative windshear/westward winds are shown as blue with dotted contours; regions of positive windshear/eastward winds are shown as red with solid contours. Temperature contours are every 4 K; windshear and wind contours are logarithmic to show structure, but windshears are labelled at \(\pm\)0.1, \(\pm\)0.5 and \(\pm\)1.0 m/s/km; winds are labelled at 10, 50, 100, 150, and 200 m/s. Absolute values for derived thermal winds are subject to significant uncertainties related to the integration of the thermal wind equation (L. N. Fletcher et al., 2016), so should be used as a guideline to trends only.

The upper cloud, which may comprise a photochemically-produced haze potentially associated with diphosphine (Ferris & Benson, 1980), is optically thickest, highest (a base near 200 mbar), and most extended at the equator, reaching into the lower stratosphere near 60-70 mbar. This upper haze is found deeper (near 300 mbar) and more compact from 18\({}^{\circ}\)-48\({}^{\circ}\)N, and then declines considerably at higher latitudes beyond 60\({}^{\circ}\)N. The latitudinal dependence and altitude of this haze layer is well-matched to the small inflection in the \(T(p)\) profile observed by Cassini/CIRS and known as the 'temperature knee' (L. N. Fletcher, Sromovsky, et al., 2020), suggesting that seasonal heating of this aerosol population is responsible for the change in the curvature of the vertical temperature profile. The absence of this aerosol at high latitudes may be partially responsible for the bright 5-\(\mu\)m emission between 60\({}^{\circ}\)-80\({}^{\circ}\)N in Figs. 7, 8 and 9. The latitudinal distribution of this upper haze also matches that derived from Palomar visible-light observations acquired in 1995 (Stam et al., 2001), and Cassini/ISS determinations of aerosols during 2004-2007 (Roman et al., 2013), which identified haze-top pressures ranging from 40 to 150 mbar, with the thickest and highest at the equator, becoming deeper and thinner at mid-latitudes.
The deeper cloud, which may be associated with condensates of NH\({}_{3}\) (potentially coated in other material, e.g., Sromovsky et al., 2021), resides between 1-2 bars, but with an extension to lower pressures that may merge with the upper haze. The base pressure of this cloud varies with latitude, reaching the lowest pressures (1.2 bars) near 10\({}^{\circ}\)N, which was the site of the highest reflectivity in Fig. 9, and the deepest pressure (2.6 bars) within the polar domain. Indeed, the inversions poleward of 78\({}^{\circ}\)N suggest that this deep cloud resides at \(p>2\) bars and is vertically compact, responsible for both the dark thermal emission at 5 \(\mu\)m and the absence of reflectivity at 5.2 \(\mu\)m - the dark north pole is therefore caused by this deep aerosol layer. The compact nature of this cloud deck appears to be consistent with Cassini observations (L. N. Fletcher et al., 2011; Sromovsky et al., 2021), although we caution that the vertical extension is a rather poorly constrained parameter in these retrievals. The mean pressure of the cloud base is again consistent with that found by Cassini/ISS (\(1.75\pm 0.4\) bars, Roman et al., 2013), even though optical measurements only reveal this deep cloud when there are gaps in the overlying haze.
This simple two-cloud scheme does not provide constraints on two other proposed cloud layers: neither a deep cloud (2.7-4.5 bars, potentially due to NH\({}_{4}\)SH; Sromovsky et al., 2021), nor the stratospheric hazes (near 50 mbar; Roman et al., 2013; Guerlet et al., 2015; Sromovsky et al., 2021). The deep cloud does not seem to be required to reproduce the MIRI data, but this may be due to lack of constraint from reflected sunlight at shorter wavelengths (e.g., from NIRSpec). The very low opacities (\(0.08\pm 0.05\) at 619 nm) and small particle sizes (\(<0.3\)\(\mu\)m) of the stratospheric aerosols inferred by Roman et al. (2013) make it very unlikely that they would contribute significant opacity at longer mid-IR wavelengths. Nevertheless, very subtle residuals in our fits to north polar latitudes near \(6.8\pm 0.2\)\(\mu\)m could be related to the stratospheric aerosols (see Supplemental Fig. 22c), and will be the topic of future studies, given their photochemical nature and potential importance in the balance of radiative heating and cooling in the polar domain (Guerlet et al., 2015).
### Tropospheric Gases
Saturn's tropospheric gases - namely NH\({}_{3}\), PH\({}_{3}\), and H\({}_{2}\)O - are accessible in the 4.9-6.0 \(\mu\)m region (primarily channel 1A) and the 8-11 \(\mu\)m region (spanning channels 2A to 2C). The latter sounds higher altitudes in the upper troposphere, whereas the former provides access to the deeper cloud-forming layers.
#### 5.3.1 Phosphine
Phosphine (PH\({}_{3}\)) is retrieved parametrically from both spectral regions (i.e., varying the deep mole fraction and scale height for a fixed transition pressure of 1 bar), and the results from 5 \(\mu\)m are shown in Fig. 15b. For \(p>1\) bar, the deep abundance varies between 4.0-5.0 ppm over most of the northern hemisphere, consistent with results from Cassini/VIMS (L. N. Fletcher et al., 2011; Sromovsky et al., 2021), but with no notable
contrasts at the equator, and a higher abundance \(6.5\pm 1.0\) ppm poleward of 80\({}^{\circ}\)N. Much of the latitudinal structure is therefore found in the upper troposphere, driven by the fractional scale height of the gas. For \(p<1\) bar, PH\({}_{3}\) is enriched equatorward of 15\({}^{\circ}\)N with evidence for a slight depletion right at the equator. Further enriched bands are found at 33\({}^{\circ}\)N, 46\({}^{\circ}\)N and between 60\({}^{\circ}\)-80\({}^{\circ}\)N, with a general decline in abundance towards the north pole. The equatorial peak and presence of bands of elevated PH\({}_{3}\) were also observed by Cassini/CIRS (L. N. Fletcher, Orton, Teanby, & Irwin, 2009), but the precise locations differ - in particular, there is no good correspondence between mid-latitude PH\({}_{3}\) bands and zones of cooler temperatures, as we might expect if only dynamics (i.e., upwelling) were controlling the mid-latitude distribution. Conversely, there is a good correspondence between higher cloud bases and the elevated PH\({}_{3}\), reinforcing links between PH\({}_{3}\) and aerosols shielding the gas from UV photolysis. This correspondence breaks down in the 5-\(\mu\)m-bright band near 60\({}^{\circ}\)-80\({}^{\circ}\)N, where we have thin clouds but enriched PH\({}_{3}\), suggesting a more complex balance between aerosol shielding and vertical mixing. Where clouds are deepest and most compact poleward of 80\({}^{\circ}\)N, PH\({}_{3}\) could be depleted either by polar subsidence (L. N. Fletcher et al., 2008) or by diminished aerosol shielding.
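The parametric profile used here can be written explicitly: a constant deep mole fraction \(q_{0}\) for \(p\geq p_{tr}\), falling off above the transition as \(q(p)=q_{0}(p/p_{tr})^{(1/f)-1}\) for fractional scale height \(f\). The sketch below is a minimal illustration of this standard parameterization, which (with their own transition pressures) also applies to the NH\({}_{3}\) and H\({}_{2}\)O retrievals in the following subsections; the functional form and the values are assumptions based on the description above, not the retrieval code itself.

```python
import numpy as np

def parametric_profile(p, q_deep, fsh, p_transition=1.0):
    """Mole fraction on a pressure grid p [bars]: constant q_deep for
    p >= p_transition, decaying above the transition with fractional
    scale height fsh (fsh = 1 recovers a well-mixed gas)."""
    q = np.full_like(p, q_deep, dtype=float)
    above = p < p_transition
    q[above] = q_deep * (p[above] / p_transition) ** (1.0 / fsh - 1.0)
    return q

p = np.logspace(0.9, -2, 120)                        # ~8 bars up to 10 mbar
ph3 = parametric_profile(p, q_deep=4.5e-6, fsh=0.5)  # ~4.5 ppm at depth (illustrative)
```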
Inversions from the 10-\(\mu\)m region show a similar morphological structure but primarily sense \(p<1\) bar (see Fig. 13). However, the mid-latitude peaks are at different locations (29\({}^{\circ}\)N and 44\({}^{\circ}\)N), and the deep abundances vary between 7-10 ppm, a factor of \(\sim 2\) higher than those derived from the 5-\(\mu\)m region. This is a known discrepancy between PH\({}_{3}\) abundances derived from the two regions and occurs on both Saturn (L. N. Fletcher et al., 2011) and Jupiter (Giles et al., 2015), and reconciliation will require joint fitting of both spectral domains with multiply-scattering aerosols.
#### 5.3.2 Ammonia
The latitudinal distribution of NH\({}_{3}\) was also retrieved parametrically (with a transition pressure at 1.75 bars), with the 5-\(\mu\)m results shown in Fig. 15c. As previously observed by Cassini/RADAR (Laraia et al., 2013) and VIMS (L. N. Fletcher et al., 2011), MIRI reveals a strong equatorial enhancement within 5\({}^{\circ}\) of the equator, with deep abundances at \(p>1.75\) bars of 450-650 ppm, compared to a minimum of \(\sim 150\) ppm at 10\({}^{\circ}\)N. This equatorial enhancement on Saturn shares similarities with that observed on Jupiter (de Pater et al., 2016; Li et al., 2017), and suggests shared dynamics driving the unique compositions of their cold Equatorial Zones, compared to the strong NH\({}_{3}\) depletion at other latitudes. Despite the strong equatorial maximum at depth, the abundance falls steeply with height equatorward of 20\({}^{\circ}\)N, such that NH\({}_{3}\) at the 200-mbar level displays a local equatorial minimum, perhaps due to enhanced condensation at the cold equatorial temperatures, forming the thicker clouds observed here.
Further local maxima in the deep abundance occur at 26\({}^{\circ}\)N and 49\({}^{\circ}\)N, which are similar in size (but at different latitudes) compared to peaks observed by Cassini/VIMS in 2006 (L. N. Fletcher et al., 2011), suggesting temporal variability in the abundance of NH\({}_{3}\) in the mid-latitude bands. The region near 40\({}^{\circ}\)N is notable as displaying the shallowest gradient in NH\({}_{3}\) for \(p<1\) bar, coinciding with the coldest tropospheric band in Fig. 14a, but is actually a local minimum (\(\sim 100\) ppm) in the deep NH\({}_{3}\) abundance for \(p>1.75\) bars. Similarly, there is a general increase in deep abundance towards the polar domain (\(\sim 600\) ppm for \(p>1.75\) bars), but also a shallower upper tropospheric gradient implying a local polar minimum at 200 mbar. These MIRI results imply that ammonia displays different latitudinal distributions above and below the condensed aerosols near 1.75 bars, suggestive of differing circulation patterns at different heights (L. N. Fletcher, Kaspi, et al., 2020). To reinforce this, NH\({}_{3}\) was also retrieved from the 10-\(\mu\)m region, sensing higher altitudes of the upper troposphere, and resulted in a relatively meridionally uniform distribution similar to that derived by Cassini/CIRS (Hurley et al., 2012). The strongest NH\({}_{3}\) contrasts are therefore only seen in the cloud-forming region sensed near 5 \(\mu\)m, with only small latitudinal gradients in the stably-stratified upper troposphere sensed near 10 \(\mu\)m.
#### 5.3.3 Water
Perhaps the most tantalising prospect offered by MIRI/MRS is the possibility of mapping Saturn's tropospheric water in the 5.1-5.5 \(\mu\)m region, which remained out of reach of Cassini/VIMS because of the lack of spectral coverage beyond 5.1 \(\mu\)m. de Graauw et al. (1997) detected Saturn's tropospheric water with disc-averaged observations from ISO/SWS, fitting it with 0.2 ppm at \(p>3\) bars and generally sub-saturated conditions. H\({}_{2}\)O was retrieved parametrically, with initial testing at mid-latitudes suggesting the transition pressure should lie at \(p>4\) bars (5 bars was chosen), a weak sensitivity to the chosen deep abundance (10 ppm was selected), but a stronger sensitivity to the fractional scale height, so only the latter parameter was allowed to vary.
Tropospheric water lines are difficult to observe among the forest of PH\({}_{3}\) features in Fig. 11f, and are most visible as small notches on either side of the narrow NH\({}_{3}\) feature near 5.2 \(\mu\)m. These H\({}_{2}\)O features are present at all latitudes, but are close to the level of the spectral residuals seen elsewhere in the channel-1 data (see Supplemental Fig. 22c). Nevertheless, the MIRI/MRS data provide tentative evidence of latitudinal variability of H\({}_{2}\)O in Fig. 15d. At 3.3 bars, the water abundance varies from a maximum of \(\sim\) 200 ppb right at the equator (consistent with ISO estimates from de Graauw et al., 1997), to \(\sim\) 10 ppb between 5\({}^{\circ}\)-10\({}^{\circ}\)N, to \(\sim\) 145 ppb from 15\({}^{\circ}\)-35\({}^{\circ}\)N, then declines to \(\sim\) 80 ppb from 40\({}^{\circ}\)-80\({}^{\circ}\)N. Poleward of 15\({}^{\circ}\)N, the changes in abundance with latitude mirror those in the aerosol distribution, with a tendency for enhanced H\({}_{2}\)O where the aerosols have a higher optical depth, although we caution that this could reflect a model-dependent degeneracy given the weakness of the H\({}_{2}\)O features. The low values near 5\({}^{\circ}\)N are very poorly constrained due to the thicker aerosol opacity, but are possibly due to enhanced condensation in the cool EZ. The shallower gradient in the polar domain is possibly due to the warmer upper-tropospheric temperatures there and the absence of upper tropospheric aerosols, but the same is not seen for NH\({}_{3}\). Conversely, the increase in H\({}_{2}\)O right at the equator mimics the equatorial column of NH\({}_{3}\), suggesting a volatile-rich domain at Saturn's equator.
We note that the challenge of calibrating MRS channel 4 (i.e., beyond 18 \(\mu\)m) limits our sensitivity to the distribution of stratospheric H\({}_{2}\)O, which will be the topic of future investigations.
### Stratospheric Chemistry
The spatial distribution of chemical species provides a means to trace Saturn's stratospheric circulation, to understand the photochemical lifetimes of different products, the exogenic supply of oxygenated species, and the potential influence of ionisation on the chemistry of the polar domains. As described in Section 3.1, MIRI/MRS provides access to a host of stratospheric chemicals with a higher spectral resolution and sensitivity than Cassini/CIRS. However, fringing artefacts that still plague wavelengths beyond 10 \(\mu\)m make precise quantitative measurements challenging, particularly for minor species. In the following sections, we present an initial assay of Saturn's stratospheric composition based on MIRI/MRS data.
#### 5.4.1 Acetylene and Ethane
Latitudinal cross-sections of ethane (C\({}_{2}\)H\({}_{6}\)) and acetylene (C\({}_{2}\)H\({}_{2}\)) are shown in Fig. 16, and at the 0.5-mbar level in Fig. 17a,b, based on spectral fits shown in Fig. 12. Cassini observations revealed that both species are time-variable, responding to Saturn's seasonally-evolving circulation and solar flux. The equator-to-pole gradient of C\({}_{2}\)H\({}_{2}\) changes with height, with the upper stratosphere showing polar enrichment (poleward of 60\({}^{\circ}\)N and \(p<0.1\) mbar), but the lower stratosphere showing polar depletion (\(p>0.1\) mbar), due to the steeper vertical gradient of C\({}_{2}\)H\({}_{2}\) near the poles than at other latitudes. C\({}_{2}\)H\({}_{6}\) also shows strong polar enrichment (poleward of 78\({}^{\circ}\)N and \(p<0.1\) mbar, within the NPSV). Both species show enhancements at the equator, as previously observed by Cassini (Guerlet et
al., 2009; Sinclair et al., 2013; Sylvestre et al., 2015) and predicted by photochemical models based on the annual-average insolation (Moses & Greathouse, 2005; Hue et al., 2015), but this too is time variable, with evidence that the equatorial C\({}_{2}\)H\({}_{6}\) peak has strengthened with time whereas C\({}_{2}\)H\({}_{2}\) has remained reasonably constant (L. N. Fletcher, Sromovsky, et al., 2020).
The different latitudinal trends in the lower stratosphere (C\({}_{2}\)H\({}_{2}\) declining towards high latitudes, C\({}_{2}\)H\({}_{6}\) increasing) have been observed previously, with short-lived C\({}_{2}\)H\({}_{2}\) more closely following photochemical predictions of Moses and Greathouse (2005) (Fig. 17a), whereas long-lived C\({}_{2}\)H\({}_{6}\) is more sensitive to stratospheric circulation. Intriguingly, local maxima between 10\({}^{\circ}\)-30\({}^{\circ}\)N observed in both species during northern winter by Cassini (2005-2012, Guerlet et al., 2009; Sylvestre et al., 2015) have now been replaced by local minima between 10\({}^{\circ}\) and 35\({}^{\circ}\)N during northern summer observed by JWST. This supports the idea that wintertime subsidence has been replaced by summertime upwelling in this latitude range (i.e., upwelling of hydrocarbon-depleted air from the lower stratosphere), associated with the seasonal reversal of the inter-hemispheric Hadley cell (Friedson & Moses, 2012; Bardet et al., 2022). This upwelling may provide an explanation for why MIRI observed colder stratospheric temperatures in 2022 (Section 5.1) compared to Cassini in 2017.
Figure 15: Aerosols, phosphine, ammonia and water derived from the 4.9-7.3 \(\mu\)m region. Aerosols in panel (a) are plotted in opacity/km at a reference wavelength of 5 \(\mu\)m, calculated following the scheme in Appendix C of Irwin et al. (2022). A logarithmic colour bar is used to show structure within the aerosol cross-section. For the gases in panels (b)-(d), the contours are also logarithmic to allow for the rapid decline of the abundance with altitude. PH\({}_{3}\) and NH\({}_{3}\) are provided in ppm, H\({}_{2}\)O is given in ppb. Vertical dotted lines show the latitudes of tropospheric eastward jets.
#### 5.4.2 Ethylene
Ethylene was previously only detected within Saturn's storm-perturbed stratosphere (Hesman et al., 2012; Moses et al., 2015), due to the elevated temperatures and a photochemical increase in the C\({}_{2}\)H\({}_{4}\) abundance in 2011. Other than the storm and a reported ground-based detection (Bezard, Moses, et al., 2001), C\({}_{2}\)H\({}_{4}\) had proven elusive until the high sensitivity of MIRI/MRS, which reveals 10.5-\(\mu\)m C\({}_{2}\)H\({}_{4}\) emission within the NPSV for the first time (Fig. 17c, 0.5-mbar abundances of \(4.0\pm 0.5\) ppb poleward of 80\({}^{\circ}\)N). This abundance is a factor of 2-3\(\times\) lower than that expected (but not seen) at the equator due to neutral photochemistry (Moses & Greathouse, 2005). Note that the emission features are only readily detectable in spectra poleward of 70\({}^{\circ}\)N - at lower latitudes, our model provides the highest C\({}_{2}\)H\({}_{4}\) abundance that would be consistent with the non-detection of emission (within uncertainties). The seasonal model overpredicts the estimated low-latitude abundance (\(2.3\pm 0.5\) ppb at 0.5 mbar compared to \(\sim 12\) ppb from the model) by a factor of 4, but the data are more consistent with updated models for Saturn's stratosphere (1-2 ppb at 0.5 mbar from Fig. 6 of Moses et al., 2015).
Nevertheless, the polar maximum in C\({}_{2}\)H\({}_{4}\) in Fig. 17c is unexpected on the grounds of neutral photochemistry, and is suggestive of either subsidence of hydrocarbon-rich air from higher altitudes (leading to the maxima in ethane and acetylene), or due to an enhanced contribution from ion-neutral chemistry at the highest latitudes. Distinguishing these scenarios will require future modelling work for the chemistry of the NPSV.
#### 5.4.3 C\({}_{3}\), C\({}_{4}\) and C\({}_{6}\) Hydrocarbons
**Propane** is detected as a perturbation to the stronger C\({}_{2}\)H\({}_{2}\) lines near 13.37 \(\mu\)m (indicated in Fig. 12), which results in the large uncertainties on the distribution in Fig. 17d. Propane bands \(\nu_{20}\) and \(\nu_{21}\) are observed near 9.4 and 10.7 \(\mu\)m, respectively (see Supplemental Fig. 23), but primarily at high latitudes. The strongest detections are within the warm NPSV, with abundances near \(200\pm 10\) ppb, and a relatively uniform distribution with latitude, consistent with that found by Sylvestre et al. (2015). It is clear that C\({}_{3}\)H\({}_{8}\) does not follow the predictions of seasonal photochemistry, suggesting the influence of meridional circulation on the distribution of long-lived propane (e.g., similar to the argument for C\({}_{2}\)H\({}_{6}\), Moses & Greathouse, 2005).
Figure 16: Distributions of acetylene and ethane, derived as continuous profiles to capture changes in the vertical gradients of these species. Contours are logarithmic, and labelled in ppmv. Vertical dotted lines show the latitudes of tropospheric eastward jets. A cross-section at 0.5 mbar is shown in Fig. 17.
Figure 17: Distributions of stratospheric species at 0.5 mbar: acetylene, ethane, ethylene, propane, CO\({}_{2}\), benzene, methylacetylene (propyne), diacetylene (1,3-butadiyne), and methyl in Saturn’s northern summer, compared to the predictions of the neutral photochemistry model for \(L_{s}=150^{\circ}\) (Moses et al., 2007; Moses & Greathouse, 2005). The photochemical model is shown in red, and is referenced to the right-hand axis so that both the shape of the distribution and differences in absolute abundances can be compared. Error bars are shown in grey, but these do not include uncertainties due to scaling a latitudinally-uniform _a priori_ profile. If the shape of the profile changes significantly with latitude (as is expected for benzene and C\({}_{4}\)H\({}_{2}\) from previous studies), this could influence the retrieved values. Vertical dotted lines show the latitudes of tropospheric eastward jets. Gases in panels (a)-(c) were derived as full vertical profiles; other gases were derived as scale factors for our _a priori_ profiles. Grey points in panel (c) signify a lack of obvious detection of C\({}_{2}\)H\({}_{4}\) emission by eye.
Unlike propane and ethane, the unsaturated hydrocarbons **methylacetylene** (C\({}_{3}\)H\({}_{4}\)) and **diacetylene** (C\({}_{4}\)H\({}_{2}\)) are relatively short-lived species that track the photochemical model predictions at latitudes equatorward of 45\({}^{\circ}\)N, with C\({}_{3}\)H\({}_{4}\) in particular showing the expected equator-to-pole contrast in Fig. 17g. C\({}_{4}\)H\({}_{2}\) shows a local minimum near 20\({}^{\circ}\)-40\({}^{\circ}\)N that has evolved from the local maximum observed by Cassini in 2005-06 (Guerlet et al., 2010), possibly related to the onset of low-latitude stratospheric upwelling during northern summer. Surprisingly, C\({}_{4}\)H\({}_{2}\) then increases towards the NPSV where some of the highest abundances are obtained in Fig. 17h. This is unlikely to be a circulation effect, as C\({}_{4}\)H\({}_{2}\) has only a slightly shorter loss timescale than C\({}_{3}\)H\({}_{4}\)(Guerlet et al., 2010), so we might expect to see the same structure in C\({}_{3}\)H\({}_{4}\). Photolysis of C\({}_{2}\)H\({}_{2}\) is the dominant production mechanism for C\({}_{4}\)H\({}_{2}\)(Moses et al., 2005), so the polar C\({}_{4}\)H\({}_{2}\) may simply be a result of the excess C\({}_{2}\)H\({}_{2}\) within the NPSV at high altitudes in Fig. 16, which is not captured by the photochemical models. Conversely, C\({}_{3}\)H\({}_{4}\) is primarily formed from interconversion of other C\({}_{3}\) hydrocarbons (Moses et al., 2005), which do not display the same enrichment as C\({}_{2}\)H\({}_{2}\). Furthermore, the model of Moses and Greathouse (2005) overestimates equatorial C\({}_{3}\)H\({}_{4}\) and C\({}_{4}\)H\({}_{2}\) by factors of \(\sim 5\) and \(\sim 4\), respectively, similar to that found by Guerlet et al. (2010).
**Methyl** (CH\({}_{3}\), first detected by ISO, Bezard et al., 1998) is also a short-lived species produced directly from methane photolysis (Moses et al., 2000). MIRI/MRS provides the first latitudinally-resolved measurements of CH\({}_{3}\), showing the same equator-to-pole decline as C\({}_{2}\)H\({}_{2}\) and C\({}_{3}\)H\({}_{4}\), along with a local minimum near 30\({}^{\circ}\)N that may be related to the seasonal upwelling. The seasonal model underpredicts the equatorial abundances by a factor of \(\sim 2.5\). Unlike most other hydrocarbon species, CH\({}_{3}\) appears to be most depleted within the NPSV, by a factor of \(\sim 5\) compared to equatorial abundances. This may reflect the annual-average insolation, with less CH\({}_{4}\) photolysis at the highest latitudes, although we note that methyl-methyl recombination reactions are the dominant producer of C\({}_{2}\)H\({}_{6}\), so it may have been mostly converted into the enriched ethane of the NPSV. Alternatively, we note that the CH\({}_{3}\) abundance is very sensitive to the methane homopause pressure (Bezard et al., 1998), so the CH\({}_{3}\) depletion in the NPSV could potentially be caused by subsidence through the upper stratosphere and lower thermosphere that pushes the methane homopause to deeper pressures, providing a consistent picture with the enhanced C\({}_{2}\)H\({}_{4}\), C\({}_{2}\)H\({}_{6}\), and some other hydrocarbon abundances.
**Benzene** (C\({}_{6}\)H\({}_{6}\), first detected by Bezard, Drossart, et al., 2001) also differs substantially from neutral photochemical predictions, which would expect a distribution similar to that of C\({}_{2}\)H\({}_{2}\). Instead, we see the C\({}_{6}\)H\({}_{6}\) emission at all latitudes (e.g., between strong C\({}_{2}\)H\({}_{2}\) lines in Fig. 12), with enhancement by 1.5-2.0\(\times\) within the NPSV compared to mid-latitudes. A similar latitudinal gradient was observed by Cassini/CIRS in the southern hemisphere between 2007 and 2012, with a slight enhancement within the SPSV (Guerlet et al., 2015). Nevertheless, the peak abundance at 0.5 mbar remains 50\(\times\) smaller than the photochemical model prediction of Moses and Greathouse (2005), as previously found by Guerlet et al. (2015), and some 10\(\times\) smaller than the coupled ion-neutral chemistry of Moses et al. (2023). As discussed by Koskinen et al. (2016) and Moses et al. (2023), benzene on Saturn is greatly enhanced by ion chemistry, and increased production due to auroral-induced ion chemistry may play a role at high latitudes, but finding a specific match between models and data remains a challenge.
It is interesting to note that, of all the hydrocarbon species observed by MIRI in 2022, none show the substantial chemical consequences predicted by neutral photochemistry (Moses et al., 2023) if the large influx of organic-rich ring material detected by Cassini during its Grand Finale in 2017 (Serigano et al., 2022) were vapourised/ablated upon entry into the equatorial stratosphere. Furthermore, no new nitriles were observed in 2022 (see below). We thus favour the hypothesis of Moses et al. (2023) that the ring material enters as small dust particles that do not ablate and therefore do not affect the stratospheric composition.
#### 5.4.4 Exogenic Species
Carbon dioxide (CO\({}_{2}\), first detected by ISO, Feuchtgruber et al., 1997) is detected at all latitudes by MIRI/MRS, with a relatively uniform abundance with latitude (\(0.60\pm 0.05\) ppb at 0.5 mbar, Fig. 17e) and hints of a slightly lower abundance (\(0.56\pm 0.03\) ppb) within the NPSV. The distribution is approximately consistent with the few measurements available from Cassini (Abbas et al., 2013), but is inconsistent with photochemical models that assume a globally constant flux of incoming oxygen species and predict CO\({}_{2}\) abundances that are greatest near the equator and tail off toward high latitudes (Moses & Greathouse, 2005). There are no strong latitudinal gradients that might imply a spatially-localised source (e.g., recent comets, Cavalie et al., 2010), or Enceladus plume and ring material entering at specific latitudes, but icy grain ablation remains too small a source to explain the measurements (Moses & Poppe, 2017). Indeed, the uniform CO\({}_{2}\) distribution derived here does not match the exogenic H\({}_{2}\)O distribution derived from Herschel observations, which showed enhancements at low latitudes (Cavalie et al., 2019). More work is required to robustly compare the CO\({}_{2}\) and H\({}_{2}\)O distributions to elucidate their sources.
Unless Saturn suffers a large-scale impact event, or the equatorial ring influx does provide excess nitrile production (Moses et al., 2023), HCN is not an expected photochemical product on Saturn, as the regions of photolytic destruction of CH\({}_{4}\) and NH\({}_{3}\) are separated by hundreds of kilometres in the vertical. Any detection of the 14.04 \(\mu\)m (\(\nu_{2}\)) HCN line is challenging due to blending with lines of C\({}_{2}\)H\({}_{2}\). We also note that the MRS wavelength calibration (Argyriou et al., 2023) and spectral resolution (Jones et al., 2023) create fitting artefacts in channel 3B, preventing a rigorous treatment of upper limits in this preliminary study. Nevertheless, we find that the MRS data support HCN abundances no larger than 1 ppb at \(p<1\) mbar, an improvement over the previous upper limit of 22 ppb at \(p<1\) mbar in the sub-millimetre from Herschel/SPIRE (L. N. Fletcher et al., 2012). We do not see any evidence of the \(\nu_{5}\) band of HC\({}_{3}\)N at 15.08 \(\mu\)m, again confirming an absence of nitriles related to ring influx (Moses et al., 2023).
## 6 Conclusions
This initial survey of JWST/MIRI observations of Saturn has revealed a wealth of new insights into the evolution of the seasonal atmosphere during northern summer (November 2022, \(L_{s}=150^{\circ}\)); demonstrated MIRI/MRS capabilities to observe extended, bright, rotating and moving planetary objects that are much larger than the fields-of-view; and provided a means to evaluate and mitigate challenges related to wavelength calibration, detector saturation, and instrumental artefacts for MIRI/MRS. Spatially-resolved 4.9-27.9 \(\mu\)m maps of Saturn (three tiles spanning from the equator to the north pole) have been inverted to study the zonal-mean temperatures, windshears, aerosols, and gaseous composition from the cloud-forming region of the troposphere into the mid-stratosphere. This includes the first maps of the transitional region of Saturn's spectrum between 5.1-6.9 \(\mu\)m, where both thermal emission and scattered sunlight shape the spectrum, which were inaccessible to the VIMS and CIRS instruments on Cassini.
Although the JWST data reduction pipeline continues to evolve, we have presented algorithms for correcting artefacts such as wavelength calibration offsets and partially saturated spectral regions, and for developing 'flat-field' corrections by exploiting multiple dithers on the same target (a schematic sketch of the dither-based flat-field idea follows the summary list below). As the MIRI/MRS spectral cubes required significant processing outside of the pipeline, we performed customised cleaning on each tile and dither independently, to prevent the blending of artefacts that, in the most extreme cases, can completely obscure the spatial structure of Saturn itself. With these artefacts removed, we summarise the initial survey of Saturn as follows:
1. **Saturn's banded structure:** Latitudinal gradients in temperatures and windshears show strong correlations with the locations of Saturn's mid-latitude eastward
and westward jets (Garcia-Melendo et al., 2011), with the contrasts between cool zones and warmer belts confirming the decay of the zonal winds with altitude via the thermal windshear equation. Conversely, gradients in reflectivity (and the derived aerosol structure from 4.9-7.3 \(\mu\)m) show similarities to albedo contrasts in observations acquired by Hubble in September 2022 (Simon et al., 2023), with a decrease in aerosols in distinct steps from equator to pole, with the highest cloud base and thickest tropospheric hazes within the broad reflective band equatorward of 15\({}^{\circ}\)N. A narrow equatorial band (\(<\)5\({}^{\circ}\)N) is dark at 5 \(\mu\)m, coinciding with the location where the narrow upper-tropospheric jet was previously detected (Garcia-Melendo et al., 2011). The 5-\(\mu\)m brightness has evolved with time, such that a pole-encircling band from 62\({}^{\circ}\)-78\({}^{\circ}\)N is now the brightest on the planet, coinciding with a dearth of aerosol opacity, whereas the polar domain (interior to the hexagon) is the darkest at 5 \(\mu\)m due to an absence of upper-tropospheric aerosols and an optically thick and compact cloud at higher pressures. We no longer see the 5-\(\mu\)m-bright, aerosol-depleted band near 35-40\({}^{\circ}\)N that had dominated the appearance of the northern hemisphere after the 2010-11 storm, consistent with a re-population of the band by Saturn's seasonal aerosols in the decade since the storm.
2. **North Polar Stratospheric Vortex (NPSV):** The seasonal stratospheric vortex that developed poleward of 78\({}^{\circ}\)N during northern spring (L. N. Fletcher, Orton, et al., 2018) remained present in 2022, with warmer temperatures than those measured in 2017, now approaching the same temperatures as those observed in the SPSV during southern summer (2004-05). Radiative models (e.g., Guerlet et al., 2014), combined with our Earth-based vantage point (Blake et al., 2022), suggest that the visibility of the warm NPSV will decline substantially in the next 1-2 years before autumn equinox. The sharp thermal gradient at the edge of the NPSV promotes negative windshear, and thus westward stratospheric winds entraining the NPSV for \(p<10\) mbar. MIRI/MRS did not detect evidence of the vertices of Saturn's hexagon, meaning that we will need to wait until the 2040s for our next infrared views of the hexagon. However, the bright North Polar Cyclone (NPC) was still visible, embedded within the NPSV. The NPSV was enriched in several stratospheric hydrocarbon species (ethane, acetylene, ethylene, benzene, diacetylene) due to a combination of polar subsidence over the summer pole, and potential ion-neutral chemistry at high latitudes.
3. **Cyclones and anticyclones:** Channel 1-short of MIRI/MRS offers the best opportunity to probe the fine-scale cloud banding, as well as discrete features. We observe contrasts associated with cyclones (bright and aerosol-free) and anticyclones (dark and cloudy) near 48\({}^{\circ}\)N, 10\({}^{\circ}\)N and 62\({}^{\circ}\)N, but these could not be identified in contemporaneous amateur observations, nor Hubble observations \(\sim 7\) weeks earlier. This strongly argues for near-simultaneous MIRI/MRS spectroscopy and NIRCAM (or Hubble) imaging in future observing programmes, alongside long-term records of variability from ground-based facilities.
4. **Saturn's Equatorial Stratospheric Oscillation:** Windshears derived from retrieved temperature gradients reveal diagonally-stacked shear zones that rise to higher altitudes with higher latitudes, connecting the stratosphere throughout the low-latitudes equatorward of 30\({}^{\circ}\)N. MIRI/MRS reveals warm and cool temperature anomalies at the equator (and off-equatorial anomalies at 13\({}^{\circ}\)N) that are consistent with the downward propagation of the oscillatory pattern over a decade of pressure in the five years since the 2017 Cassini observations. A warm equatorial anomaly centred near 0.7 mbar is responsible for the bright equatorial band of emission observed in CH\({}_{4}\) and C\({}_{2}\)H\({}_{2}\) emission, but this is opposite to the dark band observed from the ground one Saturnian year earlier (1993-1995, Orton et al., 2008; Blake et al., 2022), raising doubts about the semi-annual nature (i.e., 15-year period) of the equatorial oscillation. Thermal wind calculations imply the presence of an equatorial westward jet near 1-5 mbar in 2022 superimposed onto a broader region of eastward flow, but future
joint campaigns between JWST and ALMA are necessary to confirm the validity of the thermal winds derived here.
5. **Reversal of Saturn's interhemispheric stratospheric circulation:** MIRI/MRS observations in 2022 reveal cooler temperatures in the 10\({}^{\circ}\)-40\({}^{\circ}\)N domain compared to Cassini in 2017. This coincides with local minima in several hydrocarbon species in 2022 (notably C\({}_{2}\)H\({}_{2}\), C\({}_{2}\)H\({}_{6}\), C\({}_{4}\)H\({}_{2}\) and CH\({}_{3}\)), opposite to the local maxima detected by Cassini (Guerlet et al., 2009, 2010; Sylvestre et al., 2015). This adiabatic cooling and hydrocarbon-depleted air implies a transition from wintertime subsidence to summertime upwelling in the northern hemisphere, as part of the seasonal reversal of Saturn's interhemispheric stratospheric circulation (Friedson & Moses, 2012; Bardet et al., 2022).
6. **Stacked aerosol layers:** We present the first assessment of aerosol opacity in the 5.1-6.8 \(\mu\)m range, requiring two aerosol layers to reproduce the thermal emission and reflected sunlight components. The **upper aerosol layer**, possibly a photochemically-produced haze related to diphosphine (Ferris & Benson, 1980), is thickest, highest (near 200 mbar) and most extended at the equator, but deeper (300 mbar) and more compact at mid-latitudes, before becoming negligible poleward of 60\({}^{\circ}\)N. This layer is likely to be the same as that detected by Cassini/ISS (Roman et al., 2013), and coincides with a region of localised radiative heating detected by Cassini/CIRS (L. N. Fletcher et al., 2007). The **deeper aerosol layer**, possibly associated with condensates of NH\({}_{3}\) and other species, resides at 1-2 bars, being shallowest (1.2 bars) near 10\({}^{\circ}\)N and deepest (2.6 bars) and most compact in the polar domain, responsible for both the dark thermal emission at 5 \(\mu\)m and the absence of reflectivity at 5.2 \(\mu\)m. Aerosol fitting for MIRI/MRS is somewhat degenerate, and further constraints could be provided in future by NIRSpec observations of reflected sunlight.
7. **Tropospheric species:** Latitudinal variability of **phosphine** appears to be most confined to \(p<1\) bar, displaying an equatorial maximum (L. N. Fletcher, Orton, Teanby, & Irwin, 2009) and general decline in abundance towards the polar domain. Regions of higher aerosol opacity generally correspond to regions of elevated PH\({}_{3}\), suggesting this disequilibrium species is most abundant when aerosols shield the molecule from photolysis. **Ammonia** displays a strong equatorial enrichment \(<5^{\circ}\) of the equator (450-650 ppm), similar to the NH\({}_{3}\)-enriched column in Jupiter's equatorial zone (Achterberg et al., 2006; Li et al., 2017; de Pater et al., 2016) and suggesting similar dynamical processes at the equators of both gas giants. NH\({}_{3}\) also displays different latitudinal distributions above and below the condensed aerosols near 1.7 bars, suggestive of differing circulation patterns at different heights. Neither PH\({}_{3}\) nor NH\({}_{3}\) display consistent belt/zone contrasts at mid-latitudes, suggesting secondary circulation patterns associated with mid-latitude Ferrel cells may be weak. Tropospheric **water** is mapped for the first time in its condensation region (i.e., above the expected H\({}_{2}\)O cloud), with estimates at 3.3 bars varying from 200 ppb at the equator, to 10 ppb near 5\({}^{\circ}\)N, and a distinct step in the abundance near \(\sim\)40\({}^{\circ}\)N from 145 ppb at low latitudes to 80 ppb at higher latitudes.
8. **Stratospheric chemicals:** Short-lived hydrocarbons, such as **acetylene, diacetylene, methylacetylene, ethylene, and methyl**, tend to track the annual-average insolation (i.e., peaking at low latitudes, consistent with the predictions of seasonal photochemical models, Moses & Greathouse, 2005), whereas longer-lived species (**ethane, propane, and possibly benzene**) appear to be more influenced by long-term stratospheric circulation redistributing gases to higher latitudes. Acetylene has a steeper vertical gradient within the NPSV than elsewhere, and several species (ethane, ethylene, benzene, diacetylene) show strong enrichments within the NPSV. Methyl is mapped with latitude for the first time, showing a factor of \(\sim 5\) decrease from equator to pole. Ethylene is detected in the NPSV for the first time. The benzene abundance remains an order of magnitude smaller than the predictions of photochemical models. **Carbon dioxide** is detected at all latitudes with a relatively uniform distribution,
lacking strong latitudinal gradients that might imply a spatially-localised source of exogenic oxygen species.
9. **Absence of nitriles:** None of the hydrocarbon distributions support the substantial chemical consequences predicted by neutral photochemistry (Moses et al., 2023) if the large influx of ring material detected by Cassini in 2017 (Serigano et al., 2022) were vapourised upon entry into the equatorial stratosphere. Neither HCN nor HC\({}_{3}\)N, both predicted by nitrile chemistry related to ring influx, are detected by MIRI. This suggests that ring material influx is not strongly influencing Saturn's equatorial stratosphere.
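As promised above, the dither-based flat-field concept can be sketched as follows: because each dither places different detector pixels on (nearly) the same part of Saturn, pixel-to-pixel throughput errors can be estimated by comparing each aligned dither cube with the median over all dithers. This is a simplified illustration of the idea only, with assumed array shapes and function names; the published implementation is the custom pipeline of King et al. (2023) cited in the Open Research Section.

```python
import numpy as np

def synthetic_flats(aligned_cubes):
    """aligned_cubes: array (n_dither, n_wave, ny, nx) of spectral cubes
    already reprojected onto a common spatial grid. Returns one
    multiplicative flat per dither, normalised to a mean of unity."""
    cubes = np.asarray(aligned_cubes, dtype=float)
    reference = np.nanmedian(cubes, axis=0)   # dither-median estimate of the scene
    with np.errstate(divide="ignore", invalid="ignore"):
        ratios = cubes / reference            # per-dither, per-pixel gain estimates
    flats = np.nanmedian(ratios, axis=1)      # collapse along wavelength
    return flats / np.nanmean(flats, axis=(1, 2), keepdims=True)

# each cleaned cube is then cubes[i] / flats[i][None, :, :]
```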
We caution the reader that the MIRI/MRS calibration continues to evolve, and that refined cleaning techniques will enable future in-depth studies - in particular, removal of instrument artefacts from the 17-28 \(\mu\)m domain will permit studies of temperature and para-H\({}_{2}\) at longer wavelengths, as well as a more sensitive search for stratospheric emission lines throughout the MIRI spectrum. Future joint campaigns with other JWST instruments (NIRSpec and NIRCAM) would aid in breaking degeneracies in aerosol retrievals, and collaboration with ALMA would allow a direct comparison of stratospheric winds and thermal winds associated with equatorial and polar jets.
JWST observations of the Saturn system in November 2022 (\(L_{s}=150^{\circ}\)) have revealed the wealth of possibilities offered by IFU spectroscopy from space in the mid-infrared. But Saturn's atmosphere will continue to change with the onset of northern autumn (May 2025, \(L_{s}=180^{\circ}\)), and we hope that these Cycle-1 observations will mark the starting point for a long-term MIRI legacy programme to track the seasonal evolution of Saturn's circulation and chemistry through to the next southern summer solstice (April 2032, \(L_{s}=270^{\circ}\)), completing the seasonal assessment begun by Cassini.
## Open Research Section
Level-3 calibrated Saturn MIRI/MRS data from the standard pipeline are available directly from the MAST archive (MAST Archive, 2022) via [http://dx.doi.org/10.17909/wjpz-7383](http://dx.doi.org/10.17909/wjpz-7383). Hubble observations used for comparison were acquired by the Outer Planets Legacy Program (OPAL Archive, 2022) via [https://archive.stsci.edu/prepds/opal/](https://archive.stsci.edu/prepds/opal/), also available here: [http://dx.doi.org/10.17909/T9G593](http://dx.doi.org/10.17909/T9G593).
The NEMESIS radiative transfer and retrieval code (Irwin et al., 2008) used in this study is open-access and is available for download (Irwin, 2022) from GitHub or Zenodo ([https://doi.org/10.5281/zenodo.5816724](https://doi.org/10.5281/zenodo.5816724)).
The JWST calibration pipeline (Bushouse et al., 2023) is available at [https://github.com/spacetelescope/jwst](https://github.com/spacetelescope/jwst); this work used version 1.9.4.
The custom pipeline and data processing code (King et al., 2023) developed in this study is available at [https://github.com/JWSTGiantPlanets/pipelines/](https://github.com/JWSTGiantPlanets/pipelines/) ([https://doi.org/10.5281/zenodo.7891560](https://doi.org/10.5281/zenodo.7891560)).
The data products produced in this study (L. N. Fletcher, 2023), including synthetic flat fields, zonal average spectra and quick-look visualisations, are available at [https://github.com/JWSTGiantPlanets/saturn-atmosphere-miri](https://github.com/JWSTGiantPlanets/saturn-atmosphere-miri) ([https://doi.org/10.5281/zenodo.7891588](https://doi.org/10.5281/zenodo.7891588)).
## Acknowledgments
Fletcher, King, and Roman were supported by a European Research Council Consolidator Grant (under the European Union's Horizon 2020 research and innovation programme, grant agreement No 723890) at the University of Leicester. Harkett was supported by an STFC studentship; Melin was supported by an STFC James Webb Fellowship (ST/W001527/1).
Hammel and Milam acknowledge support from NASA JWST Interdisciplinary Scientist grant 21-SMDSS21-0013.
We wish to express our gratitude to the JWST support team for their patience and perseverance as we designed these MRS observations - in particular Beth Perriello, Bryan Holler, Misty Cracraft, Tony Roman, and John Stansberry for their aid in setting up the observations in APT, and David Law for his tireless support as we developed codes to interpret MIRI/MRS data. We are grateful to Conor Nixon, Imke de Pater, Patrick Irwin, Pablo Rodriguez-Ovalle and Thierry Fouchet for their helpful discussions during the development of this work. We thank amateur observers Chris Go, Trevor Barry, Anthony Wesley, and Tiziano Olivetti for their efforts to identify discrete features during the JWST MIRI observation epoch. We thank two reviewers, Glenn Orton and Bruno Bezard, for their thorough critiques of this article. This research used the ALICE High Performance Computing Facility at the University of Leicester.
This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. These observations are associated with program 1247 (PI: Fletcher). JWST observations were compared to data acquired from the NASA/ESA HST Space Telescope, associated with OPAL program (PI: Simon, GO13937), and archived by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
For the purpose of Open Access, the corresponding author has applied a CC-BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission.
|
2309.08719 | Final Sentential Forms | Let G be a context-free grammar with a total alphabet V, and let F be a final
language over an alphabet W such that W is a subset of V. A final sentential
form is any sentential form of G that, after omitting symbols from V - W,
belongs to F. The string resulting from the elimination of all nonterminals
from W in a final sentential form is in the language of G finalized by F if and
only if it contains only terminals.
The language of any context-free grammar finalized by a regular language is
context-free. On the other hand, it is demonstrated that L is a recursively
enumerable language if and only if there exists a propagating context-free
grammar G such that L equals the language of G finalized by {w#w^R | w is a
string over a binary alphabet}, where w^R is the reversal of w. | Tomáš Kožár, Zbyněk Křivka, Alexander Meduna | 2023-09-15T19:13:52Z | http://arxiv.org/abs/2309.08719v1 | # Final Sentential Forms
###### Abstract
Let \(G\) be a context-free grammar with a total alphabet \(V\), and let \(F\) be a _final language_ over an alphabet \(W\subseteq V\). A _final sentential form_ is any sentential form of \(G\) that, after omitting symbols from \(V-W\), belongs to \(F\). The string resulting from the elimination of all nonterminals from \(W\) in a final sentential form is in the _language of \(G\) finalized by \(F\)_ if and only if it contains only terminals.
The language of any context-free grammar finalized by a regular language is context-free. On the other hand, it is demonstrated that \(L\) is a recursively enumerable language if and only if there exists a propagating context-free grammar \(G\) such that \(L\) equals the language of \(G\) finalized by \(\{w\#w^{R}|\,w\in\{0,1\}^{*}\}\), where \(w^{R}\) is the reversal of \(w\).
## 1 Introduction
The present paper introduces and studies _final sentential forms_ of context-free grammars. These forms represent the sentential forms in which the sequences of prescribed symbols, possibly including nonterminals, belong to given _final languages_. If all the other symbols are terminals, these final forms are changed to the sentences of the generated languages by simply eliminating all nonterminals in them. Next, we sketch both a practical inspiration and a theoretical reason for introducing this new way of context-free language generation.
1. Indisputably, parsing represents a crucially important application area of ordinary context-free grammars (see Chapters 3 through 5 in [5]) as well as their modified versions, such as regulated grammars (see Section 20.3 in [7]). During the parsing process, the correctness of the source program syntax is often verified before all nonterminals are eliminated; nevertheless, most classically constructed parsers go on eliminating these nonterminals by using erasing rules until only terminals are derived. As a result, the entire parsing process is slowed down uselessly during this closing phase (for a simple, but straightforward illustration of this computational situation, see, for instance, Case Study 14/35 in [5] or Example 4.35 in [2]). Clearly, as the newly introduced way of language generation frees us from the necessity of this closing elimination of all nonterminals, the parsers that make use of it work faster.
2. From a theoretical viewpoint, in the present paper, we achieve a new representation for recursively enumerable languages based upon context-free languages. Admittedly, the theory of formal languages abounds with representations for recursively enumerable languages based upon operations over some context-free languages or their special cases (see Section 4.1.3 in [8]). Nonetheless, we believe this new representation is of some interest when compared with the previously demonstrated representations. Indeed, each of the already existing representations is demonstrated, in essence, by a proof that has the following general format. (i) First, given any recursively enumerable language \(L\), it represents \(L\) by a suitable language model \(G\), such as a phrase structure grammar in a normal form. (ii) Then, from \(G\), it derives both operations and context-free
languages involved in the representation in question. (iii) Finally, it shows that the representation made in this way from \(G\) holds true. What is important from our standpoint is that in a proof like this, the specific form of all the operations as well as the languages involved in the representation always depend on \(G\), which generates \(L\). As opposed to this, the new representation achieved in the present paper is much less dependent on \(L\) or any of its language models. More precisely, we demonstrate the existence of a unique constant language \(C\) defined as \(C=\{w\#w^{R}\,|\,w\in\{0,1\}^{*}\}\) and express any recursively enumerable language \(L\) by using \(C\) and a minimal linear language without any operation. Consequently, \(C\) always remains unchanged and, therefore, independent of \(L\) or its models. Considering this independency as well as the absence of any operations in the new representation, we believe this representation might be of some interest to formal language theory.
To give a more detailed insight into this study, we first informally recall the notion of an ordinary context-free grammar and its language (this paper assumes a familiarity with formal language theory). A context-free grammar \(G\) is based upon a grammatical alphabet \(V\) of symbols and a finite set of rules. The alphabet \(V\) is divided into two disjoint subalphabets--the alphabet of terminals \(T\) and the alphabet of nonterminals \(N\). Each rule has the form \(A\to x\), where \(A\) is a nonterminal and \(x\) is a string over \(V\). Starting from a special start nonterminal, \(G\) repeatedly rewrites strings according to its rules, and in this way, it generates its sentential forms. Sentential forms that consist only of terminal symbols are called sentences, and the set of all sentences represents the language generated by \(G\).
In this paper, we shorten the generating process sketched above by introducing a final language \(F\) over a subalphabet \(W\subseteq V\). A _final sentential form_ of \(G\) is any of the sentential forms in which the sequence of symbols from \(W\) belongs to \(F\). If, in this form, all the symbols from \(V-W\) are terminals, the string obtained by eliminating all nonterminals from \(N\cap W\) results in a sentence of the generated language \(L(G,F)\) finalized by \(F\).
Next, we illustrate the newly introduced concept of final sentential forms by a simple example in linguistic morphology, which studies word formation, such as inflection and compounding, in natural languages.
_Example 1_.: Consider an alphabet \(\Sigma\) of consonants and vowels. Suppose that a morphological study deals with a language \(L\) consisting of all possible words over \(\Sigma\) together with their consonant-vowel binary schemes in which every consonant and every vowel are represented by \(1\) and \(0\), respectively. Mathematically, \(L=\{w\#\sigma(w)\,|\,w\in\Sigma^{+}\}\), where \(\sigma\) is the homomorphism from \(\Sigma^{*}\) to \(\{0,1\}^{*}\) defined as \(\sigma(x)=1\) and \(\sigma(y)=0\) for every consonant \(x\) in \(\Sigma\) and every vowel \(y\) in \(\Sigma\), respectively. For instance, considering \(\Sigma\) as the English alphabet, \(the\#110\in L\) while \(the\#100\not\in L\). Define the context-free grammar \(G\) with the following rules.
* \(S\to A\#B\), \(B\to 0YB\), \(B\to 0Y\), \(B\to 1XB\), \(B\to 1X\),
* \(A\to aAY\), \(A\to aY\) for all vowels \(a\) in \(\Sigma\),
* \(A\to bAX\), \(A\to bX\) for all consonants \(b\) in \(\Sigma\),
where the uppercase symbols are nonterminals with \(S\) being the start nonterminal, and the other symbols are terminals. Set \(W=\{X,Y,\#\}\) and \(F=\{w\#w^{R}\,|\,w\in\{X,Y\}^{*}\}\). For instance, take this step-by-step derivation
\[S\Rightarrow A\#B\Rightarrow tAX\#B\Rightarrow thAXX\#B\Rightarrow theYXX\#B\] \[\Rightarrow theYXX\#1XB\Rightarrow theYXX\#1X1XB\Rightarrow theYXX\#1X1X0Y\]
In \(theYXX\#1X1X0Y\), \(YXX\#XXY\in F\), and apart from \(X,Y,\#\in W\), \(theYXX\#1X1X0Y\) contains only terminals. The removal of all \(X\)s and \(Y\)s in \(theYXX\#1X1X0Y\) results in \(the\#110\), which thus belongs to \(L(G,F)\). On the contrary,
\[S\Rightarrow^{*}theYXX\#1X1XB\Rightarrow theYXX\#1X1X0YB=\gamma\Rightarrow theYXX\#1X1X0Y0Y=\delta\]
Let \(T=\Sigma\cup\{0,1,\#\}\). Consider \(\gamma\). Although \(YXX\#XXY\in F\), \(the\#110B\notin L(G,F)\) since \(B\notin W\cup T\). On the other hand, considering \(\delta\), after omitting symbols from \(W-T\), we have \(the\#1100\in T^{*}\), but since \(YXX\#XXYY\notin F\), \(the\#1100\notin L(G,F)\).
Clearly, \(L(G,F)=L\).
As its main result, the present paper demonstrates that \(L\) is a recursively enumerable language if and only if \(L=L(G,\{w\#w^{R}\,|\,w\in\{0,1\}^{*}\})\), where \(G\) is a context-free grammar; observe that in this equivalence, the final language \(\{w\#w^{R}\,|\,w\in\{0,1\}^{*}\}\) remains constant independently of \(L\). On the other hand, the paper also proves that any \(L(G,F)\) is context-free if \(G\) is a context-free grammar and \(F\) is regular.
The rest of the paper is organized as follows. First, Section 2 gives all the necessary terminology and defines the new notions, informally sketched in this introduction. Then, Section 3 establishes the above-mentioned results and points out an open problem related to the present study.
## 2 Preliminaries and Definitions
This paper assumes that the reader is familiar with formal language theory (see [6]).
For a set \(Q\), \(card(Q)\) denotes the cardinality of \(Q\). For an alphabet \(V\), \(V^{*}\) represents the free monoid generated by \(V\) under the operation of concatenation. The unit of \(V^{*}\) is denoted by \(\varepsilon\). Set \(V^{+}=V^{*}-\{\varepsilon\}\); algebraically, \(V^{+}\) is thus the free semigroup generated by \(V\) under the operation of concatenation. For \(w\in V^{*}\), \(|w|\) and \(w^{R}\) denote the length of \(w\) and the reversal of \(w\), respectively. Let \(W\) be an alphabet and \(\omega\) be a homomorphism from \(V^{*}\) to \(W^{*}\) (see [6] for the definition of homomorphism); \(\omega\) is a _weak identity_ if \(\omega(a)\in\{a,\varepsilon\}\) for all \(a\in V\).
A _context-free grammar_ (CFG for short) is a quadruple \(G=(V,T,P,S)\), where \(V\) is an alphabet, \(T\subseteq V\), \(P\subseteq(V-T)\times V^{*}\) is finite, and \(S\in V-T\). Set \(N=V-T\). The components \(V,T,N,P\), and \(S\) are referred to as the total alphabet, the terminal alphabet, the nonterminal alphabet, the set of rules, and the start symbol of \(G\), respectively. Instead of \((A,x)\in P\), we write \(A\to x\in P\) throughout. For brevity, we often denote \(A\to x\) by a unique label \(p\) as \(p:A\to x\), and we briefly use \(p\) instead of \(A\to x\) under this denotation. For every \(p:A\to x\in P\), the _left-hand side of_ \(p\) is defined as \(lhs(p)=A\). The grammar \(G\) is _propagating_ if \(A\to x\in P\) implies \(x\in V^{+}\). The grammar \(G\) is _linear_ if no more than one nonterminal appears on the right-hand side of any rule in \(P\). Furthermore, a linear grammar \(G\) is _minimal_ (see page 76 in [9]) if \(N=\{S\}\) and \(S\to\#\in P\), \(\#\in T\), is the only rule with no nonterminal on the right-hand side, where it is assumed that \(\#\) does not occur in any other rule. In this paper, a minimal linear grammar \(G\) is called a _palindromial grammar_ if \(card(P)\geq 2\), and every rule of the form \(S\to xSy\), where \(x,y\in T^{*}\), satisfies \(x=y\) and \(x,y\in T\). For instance, \(H=(\{S,0,1,\#\},\{0,1,\#\},\{S\to 0S0,S\to 1S1,S\to\#\},S)\) is a palindromial grammar.
For every \(u,v\in V^{*}\) and \(p:A\to x\in P\), write \(uAv\Rightarrow uxv\,[p]\) or, simply, \(uAv\Rightarrow uxv\); \(\Rightarrow\) is called the _direct derivation_ relation over \(V^{*}\). For \(n\geq 0\), \(\Rightarrow^{n}\) denotes the \(n\)-th power of \(\Rightarrow\). Furthermore, \(\Rightarrow^{+}\) and \(\Rightarrow^{*}\) denote the transitive closure and the transitive-reflexive closure of \(\Rightarrow\), respectively. Let \(\phi(G)=\{w\in V^{*}\,|\,S\Rightarrow^{*}w\}\) denote the set of all _sentential forms_ of \(G\). The language of \(G\) is denoted by \(L(G)\) and defined as \(L(G)=T^{*}\cap\phi(G)\). For example, \(L(H)=\{w\#w^{R}\,|\,w\in\{0,1\}^{*}\}\), where \(H\) is defined as above.
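As a quick illustration of \(\Rightarrow\) and of \(L(G)\), the following sketch (ours, not part of the paper) enumerates all words of \(L(H)\) up to a given length by exhaustively applying the rules of the palindromial grammar \(H\).

```python
# Exhaustive derivation in H = ({S,0,1,#}, {0,1,#}, {S->0S0, S->1S1, S->#}, S).
RULES = ["0S0", "1S1", "#"]

def L_H(max_len):
    words, stack = set(), ["S"]
    while stack:
        form = stack.pop()
        i = form.find("S")
        if i < 0:
            words.add(form)
            continue
        for rhs in RULES:
            new = form[:i] + rhs + form[i + 1:]
            # the single S will become at least '#', so prune by final length
            if len(new.replace("S", "#")) <= max_len:
                stack.append(new)
    return sorted(words)

print(L_H(5))   # ['#', '0#0', '00#00', '01#10', '1#1', '10#01', '11#11']
```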
Let \(G=(V,T,P,S)\) be a CFG and \(W\subseteq V\). Define the weak identity \({}_{W}\omega\) from \(V^{*}\) to \(W^{*}\) as \({}_{W}\omega(X)=X\) for all \(X\in W\), and \({}_{W}\omega(X)=\varepsilon\) for all \(X\in V-W\). Let \(F\subseteq W^{*}\). Set
\[\phi(G,F) =\{x\,|\,x\in\phi(G),\,_{W}\omega(x)\in F\}\] \[L(G,F) =\{\,_{T}\omega(y)\,|\,y\in\phi(G,F),\,_{(N-W)}\omega(y)=\varepsilon\}.\]
\(\phi(G,F)\) and \(L(G,F)\) are referred to as the set of _sentential forms of \(G\) finalized by \(F\)_ and the _language of \(G\) finalized by \(F\)_, respectively. Members of \(\phi(G,F)\) are called _final sentential forms_. \(\mathbf{REG},\mathbf{PAL},\mathbf{LIN},\mathbf{CF}\), and \(\mathbf{RE}\) denote the families of regular, palindromial, linear, context-free, and recursively enumerable languages, respectively. Observe that
\[\mathbf{REG}\cap\mathbf{PAL}=\emptyset\text{ and }\mathbf{REG}\cup\mathbf{ PAL}\subset\mathbf{LIN}.\]
Set
\[\mathbf{CF}_{\mathbf{PAL}} =\{L(G,F)\,|\,G\text{ is a CFG, }F\in\mathbf{PAL}\}\] \[\mathbf{CF}_{\mathbf{REG}} =\{L(G,F)\,|\,G\text{ is a CFG, }F\in\mathbf{REG}\}\]
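For small instances, \(\phi(G,F)\) and \(L(G,F)\) can be explored by brute force directly from these definitions. The sketch below is ours; it assumes single-character symbols, with the rewritable nonterminals given as the keys of `rules`, and it is illustrated on the restriction of Example 1 to \(\Sigma=\{t,e\}\).

```python
# Brute-force phi(G,F) and L(G,F) directly from the definitions above.
def omega(s, keep):
    return "".join(c for c in s if c in keep)

def finalized_language(rules, start, T, W, in_F, max_len):
    N = set(rules)                        # nonterminals that can be rewritten
    words, seen, stack = set(), {start}, [start]
    while stack:
        form = stack.pop()
        for i, c in enumerate(form):
            if c not in N:
                continue
            for rhs in rules[c]:
                new = form[:i] + rhs + form[i + 1:]
                if len(new) > max_len or new in seen:
                    continue
                seen.add(new)
                stack.append(new)
                if all(x in W | T for x in new) and in_F(omega(new, W)):
                    words.add(omega(new, T))   # new is a final sentential form
    return words

# Example 1 restricted to Sigma = {t, e} (t a consonant, e a vowel):
rules = {"S": ["A#B"],
         "A": ["tAX", "tX", "eAY", "eY"],
         "B": ["1XB", "1X", "0YB", "0Y"]}
in_F = lambda s: s.count("#") == 1 and s.split("#")[1] == s.split("#")[0][::-1]
print(sorted(finalized_language(rules, "S", set("te01#"), set("XY#"), in_F, 9)))
# ['e#0', 'ee#00', 'et#01', 't#1', 'te#10', 'tt#11']
```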
_Example 2_.: Set \(I=\{i(x)\,|\,x\in\{0,1\}^{+}\}\), where \(i(x)\) denotes the integer represented by \(x\) in the standard way; for instance, \(i(011)=3\). Consider
\[L=\{u\#v\,|\,u,v\in\{0,1\}^{+},\,i(u)>i(v)\text{ and }|u|=|v|\}.\]
Next, we define a CFG \(G\) and \(F\in\mathbf{PAL}\) such that \(L=L(G,F)\). Let \(G=(V,T,P,S)\) be a context-free grammar. Set \(V=\{S,X,\overline{X},Y,\overline{Y},A,B,C,D,0,1,\#\}\), \(T=\{0,1,\#\}\), and set \(P\) as the set of the following rules
* \(S\to X\#\overline{X}\),
* \(X\to 1AX\), \(X\to 0BX\), \(X\to 1CY\), \(X\to 1C\),
* \(\overline{X}\to 1\overline{X}A\), \(\overline{X}\to 0\overline{X}B\), \(\overline{X}\to 0\overline{Y}C\), \(\overline{X}\to 0C\),
* \(Y\to\alpha DY\), \(Y\to\alpha D\), \(\overline{Y}\to\alpha\overline{Y}D\), \(\overline{Y}\to\alpha D\) for all \(\alpha\in\{0,1\}\).
Set \(W=\{A,B,C,D,\#\}\) and \(F=\{w\#w^{R}\,|\,w\in\{A,B,C,D\}^{*}\}\). Observe that \(F=L(H)\), where \(H=(\{S,A,B,C,D,\#\},\{A,B,C,D,\#\},\{S\to ASA,S\to BSB,S\to CSC,S\to DSD,S\to\#\},S)\) is a palindromial grammar. Therefore, \(F\in\mathbf{PAL}\). For instance, take this step-by-step derivation
\[S \Rightarrow X\#\overline{X}\Rightarrow 1AX\#\overline{X}\Rightarrow 1A0BX\#\overline{X}\Rightarrow 1A0B1CY\#\overline{X}\Rightarrow 1A0B1C0D\#\overline{X}\] \[\Rightarrow 1A0B1C0D\#1\overline{X}A\Rightarrow 1A0B1C0D\#10\overline{X}BA\Rightarrow 1A0B1C0D\#100\overline{Y}CBA\] \[\Rightarrow 1A0B1C0D\#1001DCBA\]
in \(G\). Notice that \({}_{W}\omega(1A0B1C0D\#1001DCBA)\in F\), and \({}_{T}\omega(1A0B1C0D\#1001DCBA)\in L(G,F)\). The reader is encouraged to verify that \(L=L(G,F)\).
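As a first step of that verification, the following sketch (ours) checks the sentential form obtained above mechanically: it computes the two projections and tests the two defining conditions of \(L\).

```python
# Checking the sentential form obtained above in Example 2.
form = "1A0B1C0D#1001DCBA"
W, T = set("ABCD#"), set("01#")
omega = lambda s, keep: "".join(c for c in s if c in keep)

w = omega(form, W)                     # 'ABCD#DCBA'
u, v = omega(form, T).split("#")       # u = '1010', v = '1001'
assert w.split("#")[1] == w.split("#")[0][::-1]    # _W-omega(form) is in F
assert len(u) == len(v) and int(u, 2) > int(v, 2)  # i(u) = 10 > 9 = i(v)
print(omega(form, T))                  # '1010#1001', a member of L(G,F)
```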
A _queue grammar_ (see [3]) is a sextuple, \(Q=(V,T,U,D,s,P)\), where \(V\) and \(U\) are alphabets satisfying \(V\cap U=\emptyset\), \(T\subseteq V\), \(D\subseteq U\), \(s\in(V-T)(U-D)\), and \(P\subseteq(V\times(U-D))\times(V^{*}\times U)\) is a finite relation such that for every \(a\in V\), there exists an element \((a,b,z,c)\in P\). If \(u,v\in V^{*}U\) such that \(u=arb;v=rzc;a\in V;r,z\in V^{*};b,c\in U\); and \((a,b,z,c)\in P\), then \(u\Rightarrow v\)\([(a,b,z,c)]\) in \(Q\) or, simply, \(u\Rightarrow v\). In the standard manner, extend \(\Rightarrow\) to \(\Rightarrow^{n}\), where \(n\geq 0\); then, based on \(\Rightarrow^{n}\), define \(\Rightarrow^{+}\) and \(\Rightarrow^{*}\).
The language of \(Q\), \(L(Q)\), is defined as \(L(Q)=\{w\in T^{*}\,|\,s\Rightarrow^{*}\,wf\) where \(f\in D\}\). A _left-extended queue grammar_ is a sextuple, \(Q=(V,T,U,D,s,P)\), where \(V,T,U,D\), and \(s\) have the same meaning as in a queue grammar, and \(P\subseteq(V\times(U-D))\times(V^{*}\times U)\) is a finite relation (as opposed to an ordinary queue grammar, this definition does not require that for every \(a\in V\), there exists an element \((a,b,z,c)\in P\)). Furthermore, assume that \(\#\notin V\cup U\). If \(u,v\in V^{*}\{\#\}V^{*}U\) so that \(u=w\#arb\); \(v=wa\#rzc\); \(a\in V\); \(r,z,w\in V^{*}\); \(b,c\in U\); and \((a,b,z,c)\in P\), then \(u\Rightarrow v[(a,b,z,c)]\) in \(Q\) or, simply, \(u\Rightarrow v\). In the standard manner, extend \(\Rightarrow\) to \(\Rightarrow^{n}\), where \(n\geq 0\); then, based on \(\Rightarrow^{n}\), define \(\Rightarrow^{+}\) and \(\Rightarrow^{*}\). The language of \(Q\), \(L(Q)\), is defined as \(L(Q)=\{v\in T^{*}\,|\,\#s\Rightarrow^{*}\,w\#vf\) for some \(w\in V^{*}\) and \(f\in D\}\). Less formally, during every step of a derivation, a left-extended queue grammar shifts the rewritten symbol over \(\#\); in this way, it records the derivation history, which plays a crucial role in the proof of Lemma 5 in the next section.
A _deterministic finite automaton_ (DFA for short) is a quintuple \(M=(Q,\Sigma,R,s,F)\), where \(Q\) is a finite _set of states_, \(\Sigma\) is an _alphabet of input symbols_, \(Q\cap\Sigma=\emptyset\), \(s\in Q\) is a special state called the _start state_, \(F\subseteq Q\) is a _set of final states_ in \(M\), and \(R\) is a total function from \(Q\times\Sigma\) to \(Q\). Instead of \(R(q,a)=p\), we write \(qa\to p\), where \(q,p\in Q\) and \(a\in\Sigma\); \(R\) is referred to as the _set of rules_ in \(M\). For any \(x\in\Sigma^{*}\) and \(qa\to p\in R\), we write \(qax\Rightarrow px\). The _language of_\(M\), \(L(M)\), is defined as \(L(M)=\{w\,|\,w\in\Sigma^{*},\,sw\Rightarrow^{*}\,f\), \(f\in F\}\), where \(\Rightarrow^{*}\) denotes the reflexive-transitive closure of \(\Rightarrow\). Recall that DFAs characterize **REG** (see page 29 in [6]).
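Since the construction in the next section simulates a DFA run over the \(W\)-symbols of a sentential form, we record a direct transcription of this definition; the code and the automaton for \((01)^{*}\) are our own toy choices.

```python
# A DFA M = (Q, Sigma, R, s, F) with R a total function from Q x Sigma to Q.
def accepts(R, s, F, w):
    q = s
    for a in w:
        q = R[(q, a)]          # the rule qa -> p
    return q in F

# Toy example: the language (01)* over Sigma = {0, 1}; 'd' is a sink state.
R = {("p", "0"): "q", ("q", "1"): "p", ("p", "1"): "d",
     ("q", "0"): "d", ("d", "0"): "d", ("d", "1"): "d"}
print(accepts(R, "p", {"p"}, "0101"), accepts(R, "p", {"p"}, "010"))  # True False
```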
## 3 Results
In this section, we show that every language generated by a context-free grammar finalized by a regular language is context-free (see Theorem 2). On the other hand, we prove that every recursively enumerable language can be generated by a propagating context-free grammar finalized by \(\{w\#w^{R}\,|\,w\in\{0,1\}^{*}\}\) (see Theorem 9).
**Lemma 1**.: Let \(G=(V,T,P,S)\) be any CFG and \(F\in\textbf{REG}\). Then, \(L(G,F)\in\textbf{CF}\).
**Proof.** Let \(G=(V,T,P,S)\) be any CFG and \(F\in\textbf{REG}\). Let \(F=L(M)\), where \(M=(Q,W,R,q_{s},Q_{F})\) is a DFA.
_Construction._ Introduce \(U=\{\langle paq\rangle\,|\,p,q\in Q\), \(a\in V\}\cup\{\langle q_{s}SQ_{F}\rangle\}\). From \(G\) and \(M\), construct a new CFG \(H\) such that \(L(H)=L(G,F)\) in the following way. Set
\[H=(\overline{V},T,\overline{P},\langle q_{s}SQ_{F}\rangle)\]
The components of \(H\) are constructed as follows. Set \(\overline{V}=V\cup U\) and initialize \(\overline{P}\) to \(\emptyset\). Construct \(\overline{P}\) as follows:
0. Add \(\langle q_{s}SQ_{F}\rangle\rightarrow\langle q_{s}Sq_{f}\rangle\) for all \(q_{f}\in Q_{F}\).
1. Let \(A\to y_{0}X_{1}y_{1}X_{2}\ldots X_{n}y_{n}\in P\), where \(A\in V-T,y_{i}\in(V-W)^{*}\) and \(X_{j}\in V\), \(0\leq i\leq n\), \(1\leq j\leq n\), for some \(n\geq 1\); then, add \(\langle q_{1}Aq_{n+1}\rangle\to y_{0}\langle q_{1}X_{1}q_{2}\rangle y_{1} \langle q_{2}X_{2}q_{3}\rangle\ldots\langle q_{n}X_{n}q_{n+1}\rangle y_{n}\) to \(\overline{P}\), for all \(q_{1},q_{2},\ldots,q_{n+1}\in Q\).
2. Let \(A\rightarrow\alpha\in P\), where \(A\in V-(T\cup W),\alpha\in(V-W)^{*}\); then, add \(A\rightarrow\alpha\) to \(\overline{P}\).
3. Let \(\langle paq\rangle\in U\), where \(a\in W\cap T,pa\to q\in R\); then, add \(\langle paq\rangle\to a\) to \(\overline{P}\).
4. Let \(\langle pBq\rangle\in U\), where \(pB\to q\in R,B\in W\cap(V-T)\); then, add \(\langle pBq\rangle\to\varepsilon\) to \(\overline{P}\).
To prove \(L(G,F)=L(H)\), we first prove \(L(H)\subseteq L(G,F)\); then, we establish \(L(G,F)\subseteq L(H)\). To demonstrate \(L(H)\subseteq L(G,F)\), we first make three observations, (i) through (iii), concerning every derivation of the form \(\langle q_{s}Sq_{f}\rangle\Rightarrow^{*}y\) with \(y\in T^{*}\).
(i) By using rules constructed in (1) and (2), \(H\) makes a derivation of the form
\[\langle q_{s}Sq_{f}\rangle\Rightarrow^{*}x_{0}\langle q_{1}Z_{1}q_{2}\rangle x _{1}\ldots\langle q_{n}Z_{n}q_{n+1}\rangle x_{n}\]
where \(x_{i}\in(T-W)^{*}\), \(0\leq i\leq n\), \(\langle q_{j}Z_{j}q_{j+1}\rangle\in U\), \(Z_{j}\in W\), \(1\leq j\leq n\), \(q_{1}=q_{s}\), \(q_{n+1}=q_{f}\), \(q_{1},\ldots,q_{n+1}\in Q\), \(q_{f}\in Q_{F}\).
(ii) If
\[\langle q_{s}Sq_{f}\rangle\Rightarrow^{*}x_{0}\langle q_{1}Z_{1}q_{2}\rangle x _{1}\ldots\langle q_{n}Z_{n}q_{n+1}\rangle x_{n}\]
in \(H\), then
\[S\Rightarrow^{*}x_{0}Z_{1}x_{1}\ldots Z_{n}x_{n}\]
in \(G\), where all the symbols have the same meaning as in (i).
(iii) Let \(H\) make
\[x_{0}\langle q_{1}Z_{1}q_{2}\rangle x_{1}\ldots\langle q_{n}Z_{n}q_{n+1} \rangle x_{n}\Rightarrow^{*}y\]
by using rules constructed in (3) and (4), where \(y\in T^{*}\), and all the other symbols have the same meaning as in (i). Then, for all \(1\leq j\leq n,q_{j}Z_{j}\to q_{j+1}\in R\), \(y=x_{0}U_{1}x_{1}\ldots U_{n}x_{n}\), where \(U_{j}=_{T}\omega(Z_{j})\). As \(q_{j}Z_{j}\to q_{j+1}\in R\), \(1\leq j\leq n\), \(q_{1}=q_{s}\) and \(q_{n+1}=q_{f}\), \(q_{f}\in Q_{F}\), we have \(Z_{1}\ldots Z_{n}\in L(M)\).
Based on (i) through (iii), we are now ready to prove \(L(H)\subseteq L(G,F)\). Let \(y\in L(H)\). Thus, \(\langle q_{s}SQ_{F}\rangle\Rightarrow^{*}y\), \(y\in T^{*}\) in \(H\). As \(H\) is an ordinary CFG, we can always rearrange the applications of rules during \(\langle q_{s}SQ_{F}\rangle\Rightarrow^{*}y\) in such a way that
\[\begin{array}{llll}\langle q_{s}SQ_{F}\rangle&\Rightarrow&\langle q_{s}Sq_{f}\rangle&(\alpha)\\ &\Rightarrow^{*}&x_{0}\langle q_{1}Z_{1}q_{2}\rangle x_{1}\ldots\langle q_{n}Z_{n}q_{n+1}\rangle x_{n}&(\beta)\\ &\Rightarrow^{*}&y&(\gamma)\end{array}\]
so that during (\(\alpha\)), only a rule from (0) is used, during (\(\beta\)), only rules from (1) and (2) are used, and during (\(\gamma\)), only rules from (3) and (4) are used. Recall that \(Z_{1}Z_{2}\ldots Z_{n}\in F\) (see (iii)). Consequently, \({}_{W}\omega(x_{0}Z_{1}x_{1}\ldots Z_{n}x_{n})\in F\). From (3), (4), (ii), and (iii), it follows that
\[S\Rightarrow^{*}x_{0}Z_{1}x_{1}\ldots x_{n-1}Z_{n}x_{n}\mbox{ in }G.\]
Thus, as \(L(M)=F\), we have \(y\in L(G,F)\), so \(L(H)\subseteq L(G,F)\).
To prove \(L(G,F)\subseteq L(H)\), take any \(y\in L(G,F)\). Thus,
\[S \Rightarrow^{*}x_{0}Z_{1}x_{1}\ldots x_{n-1}Z_{n}x_{n}\text{ in $G$, and}\] \[y =\,_{T}\omega(x_{0}Z_{1}x_{1}\ldots x_{n-1}Z_{n}x_{n})\text{ with $Z_{1}\ldots Z_{n}\in F$}\]
where \(x_{i}\in(T-W)^{*},0\leq i\leq n,Z_{j}\in W,1\leq j\leq n\). As \(Z_{1}\ldots Z_{n}\in F\), we have \(q_{1}Z_{1}\to q_{2}\),..., \(q_{n}Z_{n}\to q_{n+1}\in R\), \(q_{1},\ldots,q_{n+1}\in Q\), \(q_{1}=q_{s}\), \(q_{n+1}=q_{f}\), \(q_{f}\in Q_{F}\). Consequently, from (0) through (4) of the Construction, we see that
\[\langle q_{s}SQ_{F}\rangle \Rightarrow\langle q_{s}Sq_{f}\rangle\] \[\Rightarrow^{*}x_{0}\langle q_{1}Z_{1}q_{2}\rangle x_{1}\ldots\langle q_{n}Z_{n}q_{n+1}\rangle x_{n}\] \[\Rightarrow^{*}x_{0}U_{1}x_{1}\ldots U_{n}x_{n}\]
where \(U_{j}=\,_{T}\omega(Z_{j})\), \(1\leq j\leq n\). Hence, \(y\in L(H)\), so \(L(G,F)\subseteq L(H)\).
Thus, \(L(G,F)=L(H)\).
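The construction of \(H\) is entirely effective. The sketch below (ours) implements steps (0) through (4) for grammars over single-character symbols, encoding a bracketed nonterminal \(\langle paq\rangle\) as the tuple `(p, a, q)`; the encoding of \(R\) as a dictionary and the toy instance at the end are our own choices, and the produced rule set can be fed to any CFG enumerator.

```python
# Steps (0)-(4) of the Construction: from a CFG (rules over single-character
# symbols), the subset W, and a DFA M = (Q, W, R, qs, QF), build the rules of
# H.  A bracketed nonterminal <p a q> is encoded as the tuple (p, a, q).
from itertools import product

def build_H(rules, T, W, Q, R, qs, QF, start="S"):
    S_new = (qs, start, "F")                    # encodes <qs S QF>
    P = [(S_new, [(qs, start, qf)]) for qf in QF]            # (0)
    for A, rhss in rules.items():                            # (1)
        for rhs in rhss:
            for mask in product([0, 1], repeat=len(rhs)):
                # choose the bracketed positions X1,...,Xn; the unbracketed
                # segments y0,...,yn must avoid W, so W-symbols are bracketed
                if any(c in W and not m for c, m in zip(rhs, mask)):
                    continue
                n = sum(mask)
                if n == 0:
                    continue                                 # handled by (2)
                for st in product(Q, repeat=n + 1):
                    out, j = [], 0
                    for c, m in zip(rhs, mask):
                        if m:
                            out.append((st[j], c, st[j + 1])); j += 1
                        else:
                            out.append(c)
                    P.append(((st[0], A, st[n]), out))
    for A, rhss in rules.items():                            # (2)
        if A not in W:
            P += [(A, list(r)) for r in rhss if all(c not in W for c in r)]
    for (p, a), q in R.items():                              # (3) and (4)
        P.append(((p, a, q), [a] if a in T else []))
    return S_new, P

# Toy instance: G: S -> 0S1 | (empty), W = {0}, and F = L(M) = {0}.
rules = {"S": ["0S1", ""]}
R = {("s0", "0"): "s1", ("s1", "0"): "sd", ("sd", "0"): "sd"}
S_new, P = build_H(rules, {"0", "1"}, {"0"}, {"s0", "s1", "sd"}, R, "s0", {"s1"})
print(len(P), "rules; L(H) is then {01}")
```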
**Theorem 2**.: \(\mathbf{CF_{REG}}=\mathbf{CF}\)_._
Proof.: Clearly, \(\mathbf{CF}\subseteq\mathbf{CF_{REG}}\). From Lemma 1, \(\mathbf{CF_{REG}}\subseteq\mathbf{CF}\). Thus, Theorem 2 holds true.
Now, we prove that by using the constant palindromial language \(\{w\#w^{R}\,|\,w\in\{0,1\}^{*}\}\) to finalize a propagating context-free grammar, we can represent any recursively enumerable language.
**Lemma 3**.: Let \(L\in\mathbf{RE}\). Then, there exists a left-extended queue grammar \(Q\) satisfying \(L(Q)=L\).
Proof.: See _Lemma_ 1 in [4].
**Lemma 4**.: Let \(H\) be a left-extended queue grammar. Then, there exists a left-extended queue grammar, \(Q=(V,T,U,D,s,R)\), such that \(L(H)=L(Q)\) and every \((a,b,x,c)\in R\) satisfies \(a\in V-T\), \(b\in U-D\), \(x\in(V-T)^{*}\cup T^{*}\), and \(c\in U\).
Proof.: See _Lemma_ 2 in [4].
**Lemma 5**.: Let \(Q=(V,T,U,D,s,R)\) be a left-extended queue grammar. Then, \(L(Q)=L(G,\{w\#w^{R}\,|\,w\in\{0,1\}^{*}\})\), where \(G\) is a CFG.
Proof.: Without any loss of generality, assume that \(Q\) satisfies the properties described in Lemma 4 and that \(\{0,1\}\cap(V\cup U)=\emptyset\). For some positive integer \(n\), define an injection, \(\iota\), from \(\Psi\) to \(\{0,1\}^{n}-\{1^{n}\}\), where \(\Psi=\{ab\,|\,(a,b,x,c)\in R\), \(a\in V-T\), \(b\in U-D\), \(x\in(V-T)^{*}\cup T^{*}\), \(c\in U\}\), so that \(\iota\) becomes an injective homomorphism when its domain is extended to \(\Psi^{*}\); after this extension, \(\iota\) thus represents an injective homomorphism from \(\Psi^{*}\) to \((\{0,1\}^{n}-\{1^{n}\})^{*}\) (a proof that such an injection necessarily exists is simple and left to the reader). Based on \(\iota\), define the substitution \(\nu\) from \(V\) to \(\{0,1\}^{n}-\{1^{n}\}\) as \(\nu(a)=\{\iota(aq)\,|\,q\in U\}\) for every \(a\in V\). Extend the domain of \(\nu\) to \(V^{*}\). Furthermore, define the substitution \(\mu\) from \(U\) to \(\{0,1\}^{n}-\{1^{n}\}\) as \(\mu(q)=\{\iota(aq)^{R}\,|\,a\in V\}\) for every \(q\in U\). Extend the domain of \(\mu\) to \(U^{*}\). Set \(J=\{\langle p,i\rangle\,|\,p\in U-D\) and \(i\in\{1,2\}\}\).
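Such an injection \(\iota\) can be realized by a fixed-length block code, as in the following sketch (ours; `psi` is toy data): since every block has the same length \(n\) and avoids \(1^{n}\), the homomorphic extension to \(\Psi^{*}\) is injective.

```python
# A concrete iota: give the i-th pair of Psi the n-bit expansion of i, with n
# chosen so that the code 1^n is never used; fixed block length n makes the
# homomorphic extension to Psi* injective (unique decodability).
def make_iota(psi):
    n = 1
    while (1 << n) - 1 < len(psi):         # need |Psi| codes among 2^n - 1
        n += 1
    return {ab: format(i, "0%db" % n) for i, ab in enumerate(sorted(psi))}, n

psi = {"aq", "br", "cq"}                   # toy pairs ab with (a,b,x,c) in R
iota, n = make_iota(psi)
word = iota["aq"] + iota["br"] + iota["aq"]            # iota(aq br aq)
blocks = [word[i:i + n] for i in range(0, len(word), n)]
print(iota, blocks)   # {'aq': '00', 'br': '01', 'cq': '10'} and the decoding
```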
_Construction._ Next, we introduce a CFG \(G\) so that \(L(Q)=L(G,\{w\#w^{R}\,|\,w\in\{0,1\}^{*}\})\). Let \(G=(\overline{V},T,P,S)\), where \(\overline{V}=\{S\}\cup J\cup\{0,1,\#\}\cup T\). Construct \(P\) in the following way. Initially, set \(P=\emptyset\); then, perform the following steps 1 through 5.
1. if \((a,q,y,p)\in R\), where \(a\in V-T\), \(p,q\in U-D\), \(y\in(V-T)^{*}\) and \(aq=s\), then add \(S\to u\langle p,1\rangle v\) to \(P\), for all \(u\in\nu(y)\) and \(v\in\mu(p)\);
2. if \((a,q,y,p)\in R\), where \(a\in V-T\), \(p,q\in U-D\) and \(y\in(V-T)^{*}\), then add \(\langle q,1\rangle\to u\langle p,1\rangle v\) to \(P\), for all \(u\in\nu(y)\) and \(v\in\mu(p)\);
3. for every \(q\in U-D\), add \(\langle q,1\rangle\to\langle q,2\rangle\) to \(P\);
4. if \((a,q,y,p)\in R\), where \(a\in V-T\), \(p,q\in U-D\), \(y\in T^{*}\), then add \(\langle q,2\rangle\to y\langle p,2\rangle v\) to \(P\), for all \(v\in\mu(p)\);
5. if \((a,q,y,p)\in R\), where \(a\in V-T\), \(q\in U-D\), \(y\in T^{*}\), and \(p\in D\), then add \(\langle q,2\rangle\to y\#\) to \(P\).
Set \(W=\{0,1,\#\}\) and \(\Omega=\{xy\#z\in\phi(G)\,|\,x\in\{0,1\}^{+},\,y\in T^{*},\,z=x^{R}\}\).
**Claim 6.** Every \(h\in\Omega\) is generated by \(G\) in this way
\[\begin{array}{ll}&S\\ \Rightarrow&g_{1}\langle q_{1},1\rangle t_{1}\Rightarrow g_{2}\langle q_{2},1 \rangle t_{2}\Rightarrow\cdots\Rightarrow g_{k}\langle q_{k},1\rangle t_{k} \Rightarrow g_{k}\langle q_{k},2\rangle t_{k}\\ \Rightarrow&g_{k}y_{1}\langle q_{k+1},2\rangle t_{k+1}\Rightarrow g_{k}y_{1} y_{2}\langle q_{k+2},2\rangle t_{k+2}\Rightarrow\cdots\Rightarrow g_{k}y_{1}y_{2} \ldots y_{m-1}\langle q_{k+m-1},2\rangle t_{k+m-1}\\ \Rightarrow&g_{k}y_{1}y_{2}\ldots y_{m-1}y_{m}\#t_{k+m}\end{array}\]
in \(G\), where \(k,m\geq 1\); \(q_{1},\ldots,q_{k+m}\in U-D\); \(y_{1},\ldots,y_{m}\in T^{*}\); \(t_{i}\in\mu(q_{i}\ldots q_{1})\) for \(i=1,\ldots,k+m\); \(g_{j}\in\nu(d_{1}\ldots d_{j})\) with \(d_{1},\ldots,d_{j}\in(V-T)^{*}\) for \(j=1,\ldots,k\); \(d_{1}\ldots d_{k}=a_{1}\ldots a_{k+m}\) with \(a_{1}\),..., \(a_{k+m}\in V-T\) (that is, \(g_{k}\in\nu(a_{1}\ldots a_{k+m})\) with \(g_{k}=(t_{k+m})^{R}\)); \(h=g_{k}y_{1}y_{2}\ldots y_{m-1}y_{m}\#t_{k+m}\).
**Proof.** Examine the construction of \(P\). Observe that every derivation begins with an application of a rule having \(S\) on its left-hand side. Set \(1\)-\(J=\{\langle p,1\rangle\,|\,p\in U\},2\)-\(J=\{\langle p,2\rangle\,|\,p\in U\},1\)-\(P=\{p\,|\,p\in P\) and \(lhs(p)\in 1\)-\(J\},2\)-\(P=\{p\,|\,p\in P\) and \(lhs(p)\in 2\)-\(J\}\). Observe that in every successful derivation of \(h\), all applications of rules from \(1\)-\(P\) precede the applications of rules from \(2\)-\(P\). Thus, the generation of \(h\) can be expressed as
\[\begin{array}{ll}&S\\ \Rightarrow&g_{1}\langle q_{1},1\rangle t_{1}\Rightarrow g_{2}\langle q_{2},1 \rangle t_{2}\Rightarrow\cdots\Rightarrow g_{k}\langle q_{k},1\rangle t_{k} \Rightarrow g_{k}\langle q_{k},2\rangle t_{k}\\ \Rightarrow&g_{k}y_{1}\langle q_{k+1},2\rangle t_{k+1}\Rightarrow g_{k}y_{1} y_{2}\langle q_{k+2},2\rangle t_{k+2}\Rightarrow\cdots\Rightarrow g_{k}y_{1}y_{2} \ldots y_{m-1}\langle q_{k+m-1},2\rangle t_{k+m-1}\\ \Rightarrow&g_{k}y_{1}y_{2}\ldots y_{m-1}y_{m}\#t_{k+m}\end{array}\]
where all the involved symbols have the meaning stated in Claim 6.
**Claim 7.** Every \(h\in L(Q)\) is generated by \(Q\) in this way
\[\begin{array}{lll} &\#a_{0}q_{0}&\\ \Rightarrow&a_{0}\#x_{0}q_{1}&[(a_{0},q_{0},z_{0},q_{1})]\\ \Rightarrow&a_{0}a_{1}\#x_{1}q_{2}&[(a_{1},q_{1},z_{1},q_{2})]\\ &\vdots&\\ \Rightarrow&a_{0}a_{1}\ldots a_{k}\#x_{k}q_{k+1}&[(a_{k},q_{k},z_{k},q_{k+1})]\\ \Rightarrow&a_{0}a_{1}\ldots a_{k}a_{k+1}\#x_{k+1}y_{1}q_{k+2}&[(a_{k+1},q_{k+1},y_{1},q_{k+2})]\\ &\vdots&\\ \Rightarrow&a_{0}a_{1}\ldots a_{k}a_{k+1}\ldots a_{k+m-1}\#x_{k+m-1}y_{1}\ldots y_{m-1}q_{k+m}&[(a_{k+m-1},q_{k+m-1},y_{m-1},q_{k+m})]\\ \Rightarrow&a_{0}a_{1}\ldots a_{k}a_{k+1}\ldots a_{k+m}\#y_{1}\ldots y_{m}q_{k+m+1}&[(a_{k+m},q_{k+m},y_{m},q_{k+m+1})]\end{array}\]

where \(k,m\geq 1\); \(a_{0}q_{0}=s\); \(a_{1},\ldots,a_{k+m}\in V-T\); \(q_{1},\ldots,q_{k+m}\in U-D\); \(q_{k+m+1}\in D\); \(z_{0},\ldots,z_{k}\in(V-T)^{*}\); \(x_{0},\ldots,x_{k+m-1}\in(V-T)^{*}\); \(y_{1},\ldots,y_{m}\in T^{*}\); and \(h=y_{1}\ldots y_{m}\).

Proof.: Recall that \(Q\) satisfies the properties described in Lemma 4, so every rule of \(Q\) enqueues either a string over \(V-T\) or a string over \(T\), while only symbols from \(V-T\) are ever dequeued. Consequently, in every successful derivation of \(Q\), all steps that enqueue nonterminal strings precede all steps that enqueue terminal strings, which yields the above form.
**Claim 8**.: \(L(G,\{w\#w^{R}\,|\,w\in\{0,1\}^{*}\})=L(Q)\)_._
Proof.: To prove that \(L(G,\{w\#w^{R}\,|\,w\in\{0,1\}^{*}\})\subseteq L(Q)\), take any \(h\in\Omega\) generated in the way described in Claim 6. From \({}_{W}\omega(h)\in\{w\#w^{R}\,|\,w\in\{0,1\}^{*}\}\) with \(W=\{0,1,\#\}\), it follows that \(h=xy\#z\) with \(z=x^{R}\), where \(x=g_{k}\), \(y=y_{1}\ldots y_{m}\), \(z=t_{k+m}\). By Claim 6 and the construction of \(P\), \(R\) contains \((a_{0},q_{0},z_{0},q_{1})\),..., \((a_{k},q_{k},z_{k},q_{k+1})\), \((a_{k+1},q_{k+1},y_{1},q_{k+2})\),..., \((a_{k+m-1}\), \(q_{k+m-1}\), \(y_{m-1}\), \(q_{k+m})\), \((a_{k+m}\), \(q_{k+m}\), \(y_{m}\), \(q_{k+m+1})\), where \(z_{1}\),..., \(z_{k}\in(V-T)^{*}\), and \(y_{1}\),..., \(y_{m}\in T^{*}\). Then, \(Q\) generates \({}_{T}\omega(h)\) in the way described in Claim 7. Thus, \({}_{T}\omega(h)\in L(Q)\).
To prove \(L(Q)\subseteq L(G,\{w\#w^{R}\,|\,w\in\{0,1\}^{*}\})\), take any \(h\in L(Q)\). Recall that \(h\) is generated in the way described in Claim 7. Consider the rules used in this generation. Furthermore, consider the definition of \(\nu\) and \(\mu\). Based on this consideration, observe that from the construction of \(P\), it follows that \(S\Rightarrow^{*}oh\#\overline{o}\) in \(G\) for some \(o,\overline{o}\in\{0,1\}^{+}\) with \(\overline{o}=o^{R}\). Thus, \({}_{W}\omega(oh\#\overline{o})\in\{w\#w^{R}\,|\,w\in\{0,1\}^{*}\}\), so consequently, \(h\in L(G,\{w\#w^{R}\,|\,w\in\{0,1\}^{*}\})\).
Claims 6 through 8 imply that Lemma 5 holds true.
**Theorem 9**.: A language \(L\in\mathbf{RE}\) if and only if \(L=L(G,\{w\#w^{R}\,|\,w\in\{0,1\}^{*}\})\), where \(G\) is a propagating CFG.
Proof.: This theorem follows from Lemmas 3 through 5.
**Corollary 10**.: \(\mathbf{RE}=\mathbf{CF_{PAL}}\)_._
Consider \(\{w\#w^{R}\,|\,w\in\{0,1\}^{*}\}\) without \(\#\), that is, \(\{ww^{R}\,|\,w\in\{0,1\}^{*}\}\). On the one hand, this language is out of \(\mathbf{PAL}\) because the central symbol \(\#\) does not occur in it. On the other hand, it is worth pointing out that Theorem 9 can be based upon this purely binary language as well.
**Corollary 11**.: A language \(L\in\mathbf{RE}\) if and only if \(L=L(G,\{ww^{R}\,|\,w\in\{0,1\}^{*}\})\), where \(G\) is a propagating CFG.
Proof.: Prove this corollary by analogy with the way Theorem 9 is demonstrated.
Before closing this paper, we point out an open problem. As its main result, the paper has demonstrated that every recursively enumerable language can be generated by a propagating context-free grammar \(G\) finalized by \(\{w\#w^{R}\,|\,w\in\{0,1\}^{*}\}\) (see Theorem 9). Can this result be established with \(G\) having a limited number of nonterminals and/or rules?
## Acknowledgement
This work was supported by Brno University of Technology grant FIT-S-23-8209.
|
2309.10156 | Large normalizers of ${\mathbb Z}^{d}$-odometers systems and realization
on substitutive subshifts | For a ${\mathbb Z}^{d}$-topological dynamical system $(X, T, {\mathbb
Z}^{d})$, an isomorphism is a self-homeomorphism $\phi : X\to X$ such that
for some matrix $M\in {\rm GL}(d,{\mathbb Z})$ and any ${n}\in {\mathbb
Z}^{d}$, $\phi\circ T^{{n}}=T^{M{n}}\circ \phi$, where $T^{n}$ denote the
self-homeomorphism of $X$ given by the action of ${n}\in {\mathbb Z}^d$. The
collection of all the isomorphisms forms a group that is the normalizer of the
set of transformations $T^{n}$. In the one-dimensional case, isomorphisms
correspond to the notion of flip conjugacy of dynamical systems and by this
fact are also called reversing symmetries. These isomorphisms are not well
understood even for classical systems. We present a description of them for
odometers and more precisely for constant-base ${\mathbb Z}^{2}$-odometers,
which is surprisingly not simple. We deduce a complete description of the
isomorphisms of some minimal ${\mathbb Z}^{d}$-substitutive subshifts. This
enables us to provide the first example known of a minimal zero-entropy
subshift with the largest possible normalizer group. | Christopher Cabezas, Samuel Petite | 2023-09-18T21:10:45Z | http://arxiv.org/abs/2309.10156v1 | # Large normalizers of \(\mathbb{Z}^{d}\)-odometers systems and realization on substitutive subshifts
###### Abstract.
For a \(\mathbb{Z}^{d}\)-topological dynamical system \((X,T,\mathbb{Z}^{d})\), an _isomorphism_ is a self-homeomorphism \(\phi:X\to X\) such that for some matrix \(M\in GL(d,\mathbb{Z})\) and any \(\mathbf{n}\in\mathbb{Z}^{d}\), \(\phi\circ T^{\mathbf{n}}=T^{M\mathbf{n}}\circ\phi\), where \(T^{\mathbf{n}}\) denote the self-homeomorphism of \(X\) given by the action of \(\mathbf{n}\in\mathbb{Z}^{d}\). The collection of all the isomorphisms forms a group that is the normalizer of the set of transformations \(T^{\mathbf{n}}\). In the one-dimensional case, isomorphisms correspond to the notion of _flip conjugacy_ of dynamical systems and by this fact are also called _reversing symmetries_.
These isomorphisms are not well understood even for classical systems. We present a description of them for odometers and more precisely for constant-base \(\mathbb{Z}^{2}\)-odometers, which is surprisingly not simple. We deduce a complete description of the isomorphisms of some minimal \(\mathbb{Z}^{d}\)-substitutive subshifts. This enables us to provide the first example known of a minimal zero-entropy subshift with the largest possible normalizer group.
Key words and phrases: multidimensional substitutive subshift, odometer, normalizer, automorphism.

2020 Mathematics Subject Classification: Primary: 37B10; Secondary: 52C23, 37B52, 20H15, 20E18.

The authors acknowledge the financial support of ANR project IZES ANR-22-CE40-0011 and ECOS project ACEDic C21E04.
## 1. Introduction
More surprisingly, low complexity is not enough to ensure the amenability of the normalizer group in dimension \(d>1\). This phenomenon differs from what happens in dimension one [11, 12, 13, 14, 15, 16], where the amenability of the automorphism group (a finite-index subgroup of the normalizer group) is shown for a large class of zero-entropy subshifts and any Toeplitz subshift.
This article is organized as follows: Section 2 is devoted to the background on topological and symbolic dynamics. In particular, relations between the normalizer of a system with the one of its maximal equicontinuous factor are exhibited. The next section concerns the study of the normalizers of odometers in order to describe them (Theorem 3.3). Finally, we construct examples of multidimensional subshifts with an odometer as their maximal equicontinuous factors in Section 4. We characterize their infinite linear representation groups (Theorem 4.1) by using our study of normalizers of odometers and the relations with their extensions.
## 2. Definitions and basic properties
### Topological dynamical systems
We recall that a _topological dynamical system_ \((X,T,\mathbb{Z}^{d})\) is given by a continuous left-action \(T\colon\mathbb{Z}^{d}\times X\to X\) on a compact metric space \(X\). This provides a family of self-homeomorphisms of \(X\): \(\{T^{\boldsymbol{n}}\colon\boldsymbol{n}\in\mathbb{Z}^{d}\}\), also denoted by \(\langle T\rangle\), such that \(T^{\boldsymbol{m}}\circ T^{\boldsymbol{n}}=T^{\boldsymbol{m}+\boldsymbol{n}}\) for any \(\boldsymbol{m},\boldsymbol{n}\in\mathbb{Z}^{d}\). In particular, the homeomorphisms \(T^{\boldsymbol{n}}\) commute with each other. The _orbit_ of a point \(x\in X\) is the set \(\mathcal{O}(x,T)=\{T^{\boldsymbol{n}}(x)\colon\boldsymbol{n}\in\mathbb{Z}^{d}\}\). We will be mainly concerned with topological dynamical systems that are _minimal_, i.e., where any orbit is dense. In this case, there is no topological way to classify the orbits.
An important type of topological dynamical systems are the equicontinuous ones. A topological dynamical system \((X,T,\mathbb{Z}^{d})\) is said to be _equicontinuous_ if the set of maps \(\{T^{\boldsymbol{n}}\colon\boldsymbol{n}\in\mathbb{Z}^{d}\}\) forms an equicontinuous family of homeomorphisms. The equicontinuous systems are, in some sense, the simplest dynamical systems.
In the following, we define the endomorphisms and isomorphisms of a topological dynamical system, which are the central objects of study of this article. Isomorphisms represent internal symmetries of a given system that do not commute with the action, such as rotations and reflections.
**Definition 2.1**.: Let \((X,T,\mathbb{Z}^{d})\), \((Y,S,\mathbb{Z}^{d})\) be two topological dynamical systems and \(M\in\operatorname{GL}(d,\mathbb{Z})\). An _\(M\)-epimorphism_ is a continuous surjective map \(\phi:X\to Y\) that for any \(\boldsymbol{n}\in\mathbb{Z}^{d}\) satisfies \(\phi\circ T^{\boldsymbol{n}}=S^{M\boldsymbol{n}}\circ\phi\). When \((X,T,\mathbb{Z}^{d})=(Y,S,\mathbb{Z}^{d})\), it is called an _\(M\)-endomorphism_. Moreover, if \(\phi\) is invertible, it is called an _\(M\)-isomorphism_.
We simply call a \(\operatorname{GL}(d,\mathbb{Z})\)-_endomorphism_ (or \(\operatorname{GL}(d,\mathbb{Z})\)-_isomorphism_) any \(M\)-endomorphism (resp. \(M\)-isomorphism) for some \(M\in\operatorname{GL}(d,\mathbb{Z})\).
In other terms, the \(\operatorname{GL}(d,\mathbb{Z})\)-isomorphisms are conjugacies of \(\mathbb{Z}^{d}\)-actions, up to a \(\operatorname{GL}(d,\mathbb{Z})\)-transformation, i.e., an orbit equivalence with a constant orbit cocycle. In the special case where \(M\) is the identity matrix \(\operatorname{Id}\), the \(\operatorname{Id}\)-endomorphisms and \(\operatorname{Id}\)-isomorphisms are called _endomorphisms_ (or _self-factor maps_) and _automorphisms_ (or _self-conjugacies_), respectively.
When the action is _aperiodic_, i.e., the stabilizer of any point is trivial, or equivalently \(T^{\boldsymbol{n}}x=x\) only for \(\boldsymbol{n}=\boldsymbol{0}\), the matrix \(M\) associated with a \(\operatorname{GL}(d,\mathbb{Z})\)-endomorphism \(\Phi\) is unique: if \(\Phi\) is both an \(M_{1}\)- and an \(M_{2}\)-endomorphism, then \(M_{1}=M_{2}\). In the following, we fix the notation that we will use throughout this article:
* \(N_{M}(X,T)\), with \(M\in\operatorname{GL}(d,\mathbb{Z})\), denotes the set of all \(M\)-endomorphisms of \((X,T)\);
* \(N(X,T)\) denotes the set of all the \(\operatorname{GL}(d,\mathbb{Z})\)-endomorphisms of the dynamical system \((X,T,\mathbb{Z}^{d})\), i.e. \[N(X,T)=\bigcup_{M\in\operatorname{GL}(d,\mathbb{Z})}N_{M}(X,T).\] The set of \(\operatorname{GL}(d,\mathbb{Z})\)-isomorphisms is denoted by \(N^{*}(X,T)\). In algebraic terms, this set is a group and, when the action is aperiodic, corresponds to the normalizer of the group action \(\langle T\rangle\) in the group of self-homeomorphisms \(\operatorname{Homeo}(X)\) of \(X\), that is, the set of homeomorphisms \(\phi\colon X\to X\) such that \(\phi\circ T^{\boldsymbol{n}}\circ\phi^{-1}\in\{T^{\boldsymbol{m}}:\boldsymbol {m}\in\mathbb{Z}^{d}\}\) for any \(\boldsymbol{n}\in\mathbb{Z}^{d}\);
* \(\operatorname{End}(X,T)\) and \(\operatorname{Aut}(X,T)\) denote, respectively, the sets of all endomorphisms and automorphisms of \((X,T,\mathbb{Z}^{d})\).
* We define the _linear representation semigroup_\(\vec{N}(X,T)\) as the semigroup of all matrices \(M\in\operatorname{GL}(d,\mathbb{Z})\) with \(N_{M}(X,T)\neq\emptyset\).
Notice that for \(\phi\in N_{M_{1}}(X,T)\) and \(\psi\in N_{M_{2}}(X,T)\), the composition \(\phi\circ\psi\) belongs to \(N_{M_{1}M_{2}}(X,T)\); hence the sets \(N_{M}(X,T)\) are not semigroups (except when \(M\) is the identity matrix). Moreover, if \(\phi\) is an \(M\)-isomorphism, then its inverse is an \(M^{-1}\)-isomorphism.
Concerning the linear representation semigroup \(\vec{N}(X,T)\), it is straightforward to check that the isomorphism class of \(\vec{N}(X,T)\) is invariant under conjugacy. However, it is not necessarily a group, since the existence of an \(M\)-endomorphism associated with a matrix \(M\in\operatorname{GL}(d,\mathbb{Z})\) does not necessarily imply the existence of an \(M^{-1}\)-endomorphism. Nevertheless, we have the following result when a dynamical system is coalescent. Recall that a topological dynamical system \((X,T,\mathbb{Z}^{d})\) is _coalescent_ when any endomorphism of \((X,T,\mathbb{Z}^{d})\) is invertible.
**Proposition 2.2**.: _Let \((X,T,\mathbb{Z}^{d})\) be a coalescent system. If the linear representation semigroup \(\vec{N}(X,T)\) is a group, then any \(\operatorname{\textit{GL}}(d,\mathbb{Z})\)-endomorphism in \(N(X,T)\) is invertible, i.e., \(N(X,T)=N^{*}(X,T)\)._
Equicontinuous systems are examples of coalescent systems [1].
Proof.: Let \(\phi,\psi\) be, respectively, an \(M\)- and an \(M^{-1}\)-endomorphism of \((X,T,\mathbb{Z}^{d})\). Then \(\phi\circ\psi\) is an endomorphism of \((X,T,\mathbb{Z}^{d})\). Since \((X,T,\mathbb{Z}^{d})\) is coalescent, \(\phi\circ\psi\) is invertible. It follows that \(\phi\) and \(\psi\) are invertible maps.
A particular case is when the linear representation semigroup of a coalescent system is finite. In this case, it is always a group, so any \(\operatorname{GL}(d,\mathbb{Z})\)-endomorphism is invertible.
The groups \(\langle T\rangle\) and \(\operatorname{Aut}(X,T)\) are normal subgroups of \(N^{*}(X,T)\) (the group of isomorphisms). More precisely, for aperiodic systems the following exact sequence holds:
\[1\to\qquad\operatorname{Aut}(X,T)\quad\to\quad N^{*}(X,T)\qquad\stackrel{{ j}}{{\to}}\quad\vec{N^{*}}(X,T)\quad\to\quad 1, \tag{1}\]
where the first map is the canonical injection and \(j(\phi)=M\) whenever \(\phi\in N_{M}(X,T)\).
A _factor map_\(\pi:(X,T,\mathbb{Z}^{d})\to(Y,S,\mathbb{Z}^{d})\) between two topological dynamical systems \((X,T,\mathbb{Z}^{d})\) and \((Y,S,\mathbb{Z}^{d})\) is a continuous onto map commuting with the
action, i.e., \(\pi\circ T^{\boldsymbol{n}}=S^{\boldsymbol{n}}\circ\pi\) for any \(\boldsymbol{n}\in\mathbb{Z}^{d}\). The system \((Y,S,\mathbb{Z}^{d})\) is said to be a _factor_ of \((X,T,\mathbb{Z}^{d})\), and \((X,T,\mathbb{Z}^{d})\) is an _extension_ of \((Y,S,\mathbb{Z}^{d})\). If \(|\pi^{-1}(\{y\})|\leq K<\infty\) for all \(y\in Y\), we say that \(\pi\) is _finite-to-1_. Sometimes there exists a \(G_{\delta}\)-dense subset \(Y_{0}\subseteq Y\) where any \(y\in Y_{0}\) satisfies \(|\pi^{-1}(\{y\})|=1\). In this case, the factor map \(\pi\) is said to be _almost 1-to-1_. We recall that when the system \((Y,S,\mathbb{Z}^{d})\) is minimal, the existence of one point with only one preimage is enough to ensure that the map is almost 1-to-1.
For every topological dynamical system, there exists at least one equicontinuous factor, which is the system given by one point. Furthermore, any topological dynamical system admits a _maximal equicontinuous factor_, i.e., a factor \(\pi_{eq}:(X,T,\mathbb{Z}^{d})\to(X_{eq},T_{eq},\mathbb{Z}^{d})\) where \((X_{eq},T_{eq},\mathbb{Z}^{d})\) is an equicontinuous system, satisfying the following universal property: for any other equicontinuous factor \(\pi:(X,T,\mathbb{Z}^{d})\to(Y,S,\mathbb{Z}^{d})\) there exists a factor map \(\phi:(X_{eq},T_{eq},\mathbb{Z}^{d})\to(Y,S,\mathbb{Z}^{d})\) such that \(\pi=\phi\circ\pi_{eq}\)[1]. Also, in the particular case where \(\pi:(X,T,\mathbb{Z}^{d})\to(Y,S,\mathbb{Z}^{d})\) is an almost 1-to-1 factor on an equicontinuous system \((Y,S,\mathbb{Z}^{d})\), this factor is the maximal equicontinuous factor of \((X,T,\mathbb{Z}^{d})\). Typical examples of this case are the odometer systems (see the next section), which are almost 1-to-1 factors of particular symbolic systems [9, 10, 17].
A factor map \(\pi:(X,T,\mathbb{Z}^{d})\to(Y,S,\mathbb{Z}^{d})\) is _compatible_ if any endomorphism \(\phi\in\operatorname{End}(X,T)\) preserves the \(\pi\)-fibers, i.e., \(\pi(\phi(x))=\pi(\phi(y))\) for any \(x,y\in X\), such that \(\pi(x)=\pi(y)\). With the same spirit, we say that a factor \(\pi\) is _compatible with \(\text{GL}(d,\mathbb{Z})\)-endomorphisms_ if for any \(\text{GL}(d,\mathbb{Z})\)-endomorphism \(\phi\in N(X,T)\), \(\pi(\phi(x))=\pi(\phi(y))\) for any \(x,y\in X\), such that \(\pi(x)=\pi(y)\).
The compatibility property allows us to relate \(\text{GL}(d,\mathbb{Z})\)-endomorphisms between factor systems. The next lemma follows the ideas from [11, Theorem 3.3] but in a larger context.
**Lemma 2.3**.: _Let \((X,T,\mathbb{Z}^{d})\), \((Y,S,\mathbb{Z}^{d})\) be two minimal systems, such that \(\pi:(X,T,\mathbb{Z}^{d})\to(Y,S,\mathbb{Z}^{d})\) is a compatible factor. Then, there is a semigroup homomorphism \(\hat{\pi}:\operatorname{End}(X,T)\to\operatorname{End}(Y,S)\) such that_
1. \(\hat{\pi}(\phi)(\pi(x))=\pi(\phi(x))\) _for all_ \(\phi\in\operatorname{End}(X,T)\) _and_ \(x\in X\)_._
2. \(\hat{\pi}(\operatorname{Aut}(X,T))\leqslant\operatorname{Aut}(Y,S)\)_._
3. _For all_ \(\psi\in\operatorname{End}(Y,S)\)_,_ \(|\hat{\pi}^{-1}(\{\psi\})|\leq\min_{y\in Y}|\pi^{-1}(y)|\)_._
_Moreover, if \(\pi\) is compatible with \(\text{GL}(d,\mathbb{Z})\)-endomorphisms, there is an extension of \(\hat{\pi}:N(X,T)\to N(Y,S)\) defined as in Item 1. for all \(\phi\in N(X,T)\), satisfying the following properties_
i. \(\hat{\pi}(N_{M}(X,T))\subseteq N_{M}(Y,S)\)_, for any_ \(M\in GL(d,\mathbb{Z})\)_._
ii. _For any_ \(M\in\text{GL}(d,\mathbb{Z})\)_, the map_ \(\hat{\pi}:N_{M}(X,T)\to N_{M}(Y,S)\) _is at most_ \(\min_{y\in Y}|\pi^{-1}(\{y\})|\)_-to-1._
iii. _For any_ \(\phi\in\hat{\pi}^{-1}(N^{*}(Y,S))\)_, the cardinality of the_ \(\pi\)_-fiber is non decreasing under the_ \(\text{GL}(d,\mathbb{Z})\)_-isomorphism_ \(\hat{\pi}(\phi)^{-1}\)_. In other terms, for any integer_ \(n\geq 1\)_, the map_ \(\hat{\pi}(\phi)\) _satisfies_ \[\{y\in Y\colon|\pi^{-1}(\{y\})|\geq n\}\subset\hat{\pi}(\phi)\left(\{y\in Y \colon|\pi^{-1}(\{y\})|\geq n\}\right).\]
Proof.: Set \(\phi\in\operatorname{End}(X,T)\). By definition, the map \(\hat{\pi}(\phi):Y\to Y\) given by \(\hat{\pi}(\phi)(\pi(x))=\pi(\phi(x))\) is well defined and is surjective by minimality of \((Y,S,\mathbb{Z}^{d})\). So \(\hat{\pi}(\phi)\) is an endomorphism of \((Y,S,\mathbb{Z}^{d})\). Moreover, if \(\phi\) is an automorphism of
\((X,T,\mathbb{Z}^{d})\), then \(\hat{\pi}(\phi)\) is invertible. Indeed, \(\hat{\pi}(\phi)\circ\hat{\pi}(\phi^{-1})\circ\pi=\pi\circ\phi\circ\phi^{-1}=\pi\), so we conclude that \(\hat{\pi}(\phi)\circ\hat{\pi}(\phi^{-1})=\operatorname{id}_{Y}\).
To prove Item 3., fix any \(\psi\in\operatorname{End}(Y,S)\) and suppose that \(\min_{y\in Y}|\pi^{-1}(\{y\})|=c<\infty\) (if not, then there is nothing to prove). Let \(x_{0}\in X\) and \(y_{0}\in Y\) be such that \(|\pi^{-1}(\{y_{0}\})|=c\), and \(y_{0}=\psi(\pi(x_{0}))\). Assume there exist \(c+1\) endomorphisms \(\phi_{0},\ldots,\phi_{c}\) of \((X,T,\mathbb{Z}^{d})\) in \(\hat{\pi}^{-1}(\{\psi\})\). The compatibility then implies that \(y_{0}=\psi(\pi(x_{0}))=\pi(\phi_{0}(x_{0}))=\cdots=\pi(\phi_{c}(x_{0}))\). So, by the pigeonhole principle, there must exist two different indices \(0\leq i,j\leq c\) such that \(\phi_{i}(x_{0})=\phi_{j}(x_{0})\). The minimality of \((X,T,\mathbb{Z}^{d})\) then gives that \(\phi_{i}=\phi_{j}\).
The proofs concerning the items i and ii on \(\operatorname{GL}(d,\mathbb{Z})\)-endomorphisms use similar arguments and are left to the reader.
To prove Item iii, we consider any \(y\in Y\) with at least \(n\) preimages and choose \(n\) distinct ones \(x_{1},\ldots,x_{n}\in X\). Since \(\phi\) is onto, there are \(x_{1}^{\prime},\ldots,x_{n}^{\prime}\in X\) such that \(\phi(x_{i}^{\prime})=x_{i}\). Notice that \(\hat{\pi}(\phi)(\pi(x_{i}^{\prime}))=\pi(\phi(x_{i}^{\prime}))=\pi(x_{i})=y\). It follows that \(\hat{\pi}(\phi)(\pi(x_{i}^{\prime}))=\hat{\pi}(\phi)(\pi(x_{j}^{\prime}))\) for any indices \(i,j=1,\ldots,n\). Since \(\hat{\pi}(\phi)\) is invertible, all the elements \(x_{i}^{\prime}\) belong to the same \(\pi\)-fiber, over some \(z\in Y\). Thus \(z\) admits at least \(n\) preimages by \(\pi\) and satisfies \(\hat{\pi}(\phi)(z)=y\). The claim follows.
It is known that factor maps between equicontinuous systems are compatible [1], but as we will see in the next section, they are not necessarily compatible with \(\operatorname{GL}(d,\mathbb{Z})\)-endomorphisms (see Remark 3.5). Nevertheless, the maximal equicontinuous factor is an example of a factor compatible with \(\operatorname{GL}(d,\mathbb{Z})\)-endomorphisms as proved in [5, Theorem 5 and Corollary 3]. This can also be seen by using the universal property of the maximal equicontinuous factor.
**Lemma 2.4**.: _[_5_, Corollary 3]_ _Let \((X,T,\mathbb{Z}^{d})\) be a minimal topological dynamical system and let \(\pi_{eq}:(X,T,\mathbb{Z}^{d})\to(X_{eq},T_{eq},\mathbb{Z}^{d})\) denote its maximal equicontinuous factor. Then \(\pi_{eq}\) is compatible with \(\operatorname{GL}(d,\mathbb{Z})\)-endomorphisms._
Lemma 2.3 and Lemma 2.4 illustrate that to study \(\operatorname{GL}(d,\mathbb{Z})\)-endomorphisms, a first step is to understand the equicontinuous systems. This will be done in Section 2.2 for the class of minimal equicontinuous Cantor systems: the odometer systems.
### \(\mathbb{Z}^{d}\)-Odometer systems
Odometer systems are the equicontinuous minimal Cantor systems. Hence, they are the maximal equicontinuous factor for a large family of symbolic systems, such as Toeplitz flows and some substitutive subshifts. We refer to [9] for the study of odometer systems and \(\mathbb{Z}^{d}\)-Toeplitz sequences. We use the same notations.
In this section, we briefly recall the basic definitions of odometers. Subsequently, we describe the linear representation semigroup of odometer systems (Lemma 2.6), which we then use to completely characterize it for constant-base \(\mathbb{Z}^{2}\)-odometer systems (Theorem 3.3). It is worth noting that the exploration of \(\operatorname{GL}(2,\mathbb{Z})\)-endomorphisms between \(\mathbb{Z}^{2}\)-odometer systems was initiated in [18] and later extended to higher dimensions in [22].
#### 2.2.1. Basic definitions
Let \(Z_{0}\geqslant Z_{1}\geqslant\ldots\geqslant Z_{n}\geqslant Z_{n+1}\geqslant\ldots\) be a nested sequence of finite-index subgroups of \(\mathbb{Z}^{d}\) such that \(\bigcap\limits_{n\geq 0}Z_{n}=\{\mathbf{0}\}\) and let \(\alpha_{n}:\mathbb{Z}^{d}/Z_{n+1}\to\mathbb{Z}^{d}/Z_{n}\) be the function induced by the inclusion map. We consider the
inverse limit of these groups
\[\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}=\lim_{\leftarrow n}(\mathbb{Z}^{d}/Z_{n },\alpha_{n}),\]
i.e., \(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}\) is the subset of the product \(\prod\limits_{n\geq 0}\mathbb{Z}^{d}/Z_{n}\) consisting of the elements \(\overleftarrow{g}=(g_{n})_{n\geq 0}\) such that \(\alpha_{n}(g_{n+1})=g_{n}\) (mod \(Z_{n}\)) for all \(n\geq 0\). The odometer \(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}\) is a compact \(0\)-dimensional topological group, whose topology is spanned by the cylinder sets
\[[\boldsymbol{a}]_{n}=\left\{\overleftarrow{g}\in\overleftarrow{\mathbb{Z}^{d }}_{(Z_{n})}:\boldsymbol{g}_{n}=\boldsymbol{a}\right\},\]
with \(\boldsymbol{a}\in\mathbb{Z}^{d}/Z_{n}\), and \(n\geq 0\). Now, consider the group homomorphism \(\kappa_{(Z_{n})}:\mathbb{Z}^{d}\to\prod\limits_{n\geq 0}\mathbb{Z}^{d}/Z_{n}\) defined for \(\boldsymbol{n}\in\mathbb{Z}^{d}\) as
\[\kappa_{(Z_{n})}(\boldsymbol{n})=[\boldsymbol{n}\ \text{mod}\ Z_{n}]_{n\geq 0}.\]
The image of \(\mathbb{Z}^{d}\) under \(\kappa_{(Z_{n})}\) is dense in \(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}\), allowing us to define the \(\mathbb{Z}^{d}\)-action \(\boldsymbol{n}+\overleftarrow{g}=\kappa_{(Z_{n})}(\boldsymbol{n})+ \overleftarrow{g}\), where \(\boldsymbol{n}\in\mathbb{Z}^{d}\) and \(\overleftarrow{g}\in\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}\). This action is well-defined and continuous. The resulting topological dynamical system \((\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})},+,\mathbb{Z}^{d})\) is called a \(\mathbb{Z}^{d}\)_-odometer system_; for the rest of this article, we denote it simply by its phase space \(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}\). Similarly, its set of automorphisms, endomorphisms, and linear representation semigroup will be denoted as \(\operatorname{Aut}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})\), \(N(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})\), and \(\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})\), respectively.
Notice that the "return times" of the action to a cylinder set \([\boldsymbol{a}]_{n}\) is a finite-index subgroup of \(\mathbb{Z}^{d}\), or more precisely:
\[\forall\overleftarrow{g}\in[\boldsymbol{a}]_{n},\quad\{\boldsymbol{n}\in \mathbb{Z}^{d}:\boldsymbol{n}+\overleftarrow{g}\in[\boldsymbol{a}]_{n}\}=Z_{ n}. \tag{2}\]
This observation is the basis for showing that an odometer system is a minimal equicontinuous system. It also shows that the action is aperiodic, since the intersection of all the \(Z_{n}\) is trivial.
We will be particularly concerned with a special case of odometers: namely the _constant-base_ ones. In these systems, \(Z_{n}=L^{n}(\mathbb{Z}^{d})\) for each \(n\geq 0\), where \(L\in\mathcal{M}(d,\mathbb{Z})\) is an expansion matrix. Recall that an integer matrix \(L\in\mathcal{M}(d,\mathbb{Z})\) is an _expansion_ if the modulus of each of its eigenvalues is greater than \(1\). To simplify notation, we denote the constant-base odometer \(\overleftarrow{\mathbb{Z}^{d}}_{(L^{n}(\mathbb{Z}^{d}))}\) as \(\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})}\).
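The following sketch (ours) models a constant-base odometer truncated at a finite depth, with the toy choice \(L=\operatorname{diag}(2,3)\); it illustrates both the action by addition of \(\kappa(\boldsymbol{n})\) and the fact that the subgroups \(Z_{n}\) separate points.

```python
# A depth-truncated model of the constant-base odometer Z^2_(L^n): a point is
# the list of its coordinates v_k mod L^k Z^2 for k < depth, and n in Z^2 acts
# by adding kappa(n) coordinatewise.  L = diag(2,3) is a toy expansion matrix.
import numpy as np

L = np.array([[2, 0], [0, 3]])

def in_lattice(x, A):
    """x in A Z^2 <=> adj(A) x = 0 mod det(A); here det(A) = 6^k > 0."""
    det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
    adj = np.array([[A[1, 1], -A[0, 1]], [-A[1, 0], A[0, 0]]])
    return all(c % det == 0 for c in adj @ x)

def add(point, n):
    """The action: translate every coordinate of the point by n."""
    return [v + np.asarray(n) for v in point]

def equal(p, q):
    return all(in_lattice(p[k] - q[k], np.linalg.matrix_power(L, k))
               for k in range(len(p)))

zero = [np.zeros(2, dtype=int) for _ in range(4)]
print(equal(add(zero, (8, 27)), zero))   # True: (8,27) lies in Z_3 = L^3 Z^2
print(equal(add(zero, (4, 9)), zero))    # False: detected at coordinate k = 3
```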
The next result characterizes the factor odometer systems of a fixed odometer system.
**Lemma 2.5**.: _[_9_, Lemma 1]_ _Let \(\overleftarrow{\mathbb{Z}^{d}}_{(Z^{j}_{n})}\) be two odometer systems \((j=1,2)\). There exists a factor map \(\pi:\overleftarrow{\mathbb{Z}^{d}}_{(Z^{1}_{n})}\to\overleftarrow{\mathbb{Z}^ {d}}_{(Z^{2}_{n})}\) if and only if for every \(Z^{2}_{n}\) there exists some \(Z^{1}_{m}\) such that \(Z^{1}_{m}\leqslant Z^{2}_{n}\)._
#### 2.2.2. Normalizer condition
The proof of Lemma 2.5 can be modified to provide a characterization for the matrices \(M\in GL(d,\mathbb{Z})\) defining a \(\operatorname{GL}(d,\mathbb{Z})\)-_epimorphism_ between two odometer systems \(\phi:\overleftarrow{\mathbb{Z}^{d}}_{(Z^{1}_{n})}\to\overleftarrow{\mathbb{ Z}^{d}}_{(Z^{2}_{n})}\).
**Lemma 2.6**.: _Set \(M\in GL(d,\mathbb{Z})\). There exists a continuous map \(\phi:\overleftarrow{\mathbb{Z}^{d}}_{(Z^{1}_{n})}\to\overleftarrow{\mathbb{Z}^{ d}}_{(Z^{2}_{n})}\), such that_
\[\forall\boldsymbol{n}\in\mathbb{Z}^{d},\ \overleftarrow{g}\in\overleftarrow{\mathbb{Z}^{d}}_{(Z^{1}_{n})},\quad\phi(\boldsymbol{n}+\overleftarrow{g})=M\boldsymbol{n}+\phi(\overleftarrow{g}),\]
_if and only if_
(Epimorphism Condition) \[\forall n\in\mathbb{N},\exists m(n)\in\mathbb{N}\text{ s.t. }MZ^{1}_{m(n)}\leqslant Z^{2}_{n}.\]
It follows from Lemma 2.6 that a matrix \(M\in GL(d,\mathbb{Z})\) belongs to the linear representation semigroup \(\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})\) of an odometer system if and only if it satisfies the following condition, which we call the _Normalizer condition_
(NC 1) \[\forall n\in\mathbb{N},\exists m(n)\in\mathbb{N}\text{ s.t. }MZ_{m(n)}\leqslant Z_{n}.\]
Proof.: For the sufficiency, assume that \(M\in GL(d,\mathbb{Z})\) satisfies (Epimorphism Condition). Since the sequences \(\{Z^{i}_{n}\}_{n>0}\), \(i=1,2\) are decreasing, we may assume that \(m(n)\leq m(n+1)\) for all \(n\in\mathbb{N}\). Thus, we have homomorphisms \(\phi^{M}_{m(n)}:\mathbb{Z}^{d}/Z^{1}_{m(n)}\to\mathbb{Z}^{d}/Z^{2}_{n}\), given by \(\phi^{M}_{m(n)}(\boldsymbol{m}\ (\text{mod}\ Z^{1}_{m(n)}))=M\boldsymbol{m}\ (\text{mod}\ Z^{2}_{n})\). To finish the proof, we only have to remark that \(\phi^{M}:\overleftarrow{\mathbb{Z}^{d}}_{(Z^{1}_{n})}\to\overleftarrow{ \mathbb{Z}^{d}}_{(Z^{2}_{n})}\) defined as \(\phi^{M}((\boldsymbol{g}_{n})_{n\in\mathbb{N}})=(\phi^{M}_{m(n)}(\boldsymbol{g}_{m(n)}))_{n\in\mathbb{N}}\) is an \(M\)-epimorphism.
We now prove the necessity. Let \(\phi:\overleftarrow{\mathbb{Z}^{d}}_{(Z^{1}_{n})}\to\overleftarrow{\mathbb{Z}^ {d}}_{(Z^{2}_{n})}\) be an \(M\)-epimorphism. By continuity, for any \(n\in\mathbb{N}\) and \(\boldsymbol{g}\in\mathbb{Z}^{d}/Z^{2}_{n}\), there exist \(m\in\mathbb{N}\) and \(\boldsymbol{f}\in\mathbb{Z}^{d}/Z^{1}_{m}\) such that \([\boldsymbol{f}]_{m}\subseteq\phi^{-1}([\boldsymbol{g}]_{n})\). Set \(\boldsymbol{h}\in Z^{1}_{m}\). For all \(\overleftarrow{f}\in[\boldsymbol{f}]_{m}\), we have by (2) that \(\boldsymbol{h}+\overleftarrow{f}\in[\boldsymbol{f}]_{m}\), which implies that \(\phi(\boldsymbol{h}+\overleftarrow{f})=M\boldsymbol{h}+\phi(\overleftarrow{f}) \in[\boldsymbol{g}]_{n}\). Since \(\phi(\overleftarrow{f})\) is in \([\boldsymbol{g}]_{n}\), the set \(\left\{\boldsymbol{m}\in\mathbb{Z}^{d}\colon\boldsymbol{m}+\phi(\overleftarrow{f})\in[\boldsymbol{g}]_{n}\right\}\) is equal to \(Z^{2}_{n}\) (by (2)), which implies that \(M\boldsymbol{h}\in Z^{2}_{n}\).
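For constant-base odometers, the condition of Lemma 2.6 is decidable by exact integer linear algebra; the following sketch (ours) searches for the least witness \(m(n)\), and shows in passing that the coordinate swap satisfies (NC 1) for \(L=2\,\mathrm{Id}\) but not for \(L=\operatorname{diag}(2,3)\).

```python
# (NC 1) for a constant-base Z^2-odometer: M Z_m <= Z_n with Z_k = L^k Z^2
# holds iff L^{-n} M L^m is an integer matrix, iff adj(L^n) M L^m = 0 mod
# det(L^n).  We search for the least witness m = m(n); det(L^n) > 0 here.
import numpy as np

def adj_det(A):
    det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
    adj = np.array([[A[1, 1], -A[0, 1]], [-A[1, 0], A[0, 0]]])
    return adj, det

def m_of_n(M, L, n, m_max=60):
    adj, det = adj_det(np.linalg.matrix_power(L, n))
    for m in range(m_max):
        if np.all((adj @ M @ np.linalg.matrix_power(L, m)) % det == 0):
            return m
    return None                       # no witness found up to m_max

M = np.array([[0, 1], [1, 0]])        # the coordinate swap
L1, L2 = np.array([[2, 0], [0, 2]]), np.array([[2, 0], [0, 3]])
print([m_of_n(M, L1, n) for n in range(4)])   # [0, 1, 2, 3]: swap satisfies NC 1
print([m_of_n(M, L2, n) for n in range(4)])   # [0, None, None, None]: it fails
```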
Notice that since the sequences \(\{Z^{i}_{n}\}\), \(i=1,2\), are decreasing, once the (Epimorphism Condition) holds for some \(m(n)\), it also holds for every larger \(m\in\mathbb{N}\). This remark implies that the set of matrices \(M\) (not necessarily invertible) satisfying the condition (NC 1) is stable under product and sum, so it is a ring. By applying this remark we get the following result
**Corollary 2.7**.: _The semigroup \(N(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})\) of GL\((d,\mathbb{Z})\)-endomorphisms of an odometer \(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}\) is a group._
_In particular any GL\((d,\mathbb{Z})\)-endomorphism is invertible and the linear representation semigroup \(\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})\) is a group._
Proof.: Recall that odometer systems are equicontinuous and, hence, are coalescent [1]. Thus, from Proposition 2.2, to show that any GL\((d,\mathbb{Z})\)-endomorphism of an odometer system is invertible, it is enough to show that \(\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})\) is a group.
Since \(\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})\) is a semigroup, we only have to prove that any element \(M\in\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})\) admits an inverse inside. Since the set of matrices satisfying (Epimorphism Condition) is a ring, any integer polynomial of \(M\) also satisfies (Epimorphism Condition). Now, the Cayley-Hamilton theorem implies that
\(M^{d}=\sum\limits_{k=0}^{d-1}b_{k}M^{k}\), where \(b_{k}\in\mathbb{Z}\) are the coefficients of the characteristic polynomial of \(M\). Notice that \(b_{0}=(-1)^{d}\det(M)\), so that \(b_{0}\in\{-1,1\}\). Multiplying by \(M^{-1}\), we conclude that \(M^{-1}\) can be written as an integer polynomial on \(M\). Hence, \(M^{-1}\) satisfies (Epimorphism Condition), and by Lemma 2.6, we conclude that \(M^{-1}\in\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})\).
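The Cayley-Hamilton step of this proof can be made explicit; the sketch below (ours) does so for \(d=2\), where the characteristic polynomial gives \(M^{-1}=\det(M)^{-1}(\operatorname{tr}(M)\,\mathrm{Id}-M)\), an integer polynomial in \(M\) because \(\det(M)=\pm 1\).

```python
# Cayley-Hamilton for d = 2: M^2 = tr(M) M - det(M) Id, hence
# M^{-1} = det(M)^{-1} (tr(M) Id - M); since det(M) = +-1, this is an
# integer polynomial in M, so M^{-1} also satisfies (Epimorphism Condition).
import numpy as np

M = np.array([[2, 1], [1, 1]])                  # det = 1, so M is in GL(2,Z)
tr, det = int(M.trace()), round(np.linalg.det(M))
M_inv = det * (tr * np.eye(2, dtype=int) - M)   # det^{-1} = det for det = +-1
print(M_inv.tolist())                           # [[1, -1], [-1, 2]]
print((M @ M_inv).tolist())                     # [[1, 0], [0, 1]]
```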
Recall that the automorphisms of the odometer system are well known: they are the translations on it [1].
**Lemma 2.8**.: _For any odometer system we have that_
\[\operatorname{Aut}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})=\{\overleftarrow {g}\in\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}\mapsto\overleftarrow{g}+ \overleftarrow{h}\in\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}:\overleftarrow{h }\in\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}\}.\]
_In particular \(\operatorname{Aut}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})\) is an abelian group isomorphic to \(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}\)._
As a direct consequence of Corollary 2.7 we get the following algebraic structure of the normalizer group of odometer systems.
**Corollary 2.9**.: _The normalizer group \(N(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})\) of an odometer system is isomorphic to a semidirect product between the odometer system \(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}\) and the linear representation group \(\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})\)._
Proof.: Recall from Lemma 2.6 that for each \(M\in\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})\), one can associate an \(M\)-isomorphism \(\phi^{M}\) of \(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}\) defined for any \((\boldsymbol{g}_{m(n)}\ (\mathrm{mod}\ Z_{m(n)}))_{n\in\mathbb{N}}\in\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}\) by
\[\phi^{M}((\boldsymbol{g}_{m(n)}\ (\mathrm{mod}\ Z_{m(n)}))_{n\in\mathbb{N}})=(M \boldsymbol{g}_{n}\ (\mathrm{mod}\ Z_{n}))_{n\in\mathbb{N}}.\]
Notice that the set \(\{\phi^{M}\colon M\in\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})\}\) is a group and defines a group homomorphism \(h:\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})\to N(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})\) such that \(j\circ h\) is the identity on \(\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})\), so the exact sequence (1) is split exact.
So, to study the normalizer group of an odometer system, we just have to determine its linear representation group. Actually, all these results lead us to the following question:
**Question 2.10**.: _Give a characterization of the groups of the form \(\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})\) for any odometer \(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}\)._
We do not answer this question, but we provide necessary conditions for specific odometers: the universal and the constant-base ones, in Sections 3.1 and 3.2 respectively.
### Symbolic dynamics
We recall here classical definitions and fix the notation for multidimensional subshifts.
#### 2.3.1. Basic definitions
Let \(\mathcal{A}\) be a finite alphabet and \(d\geq 1\) be an integer. We define a topology on \(\mathcal{A}^{\mathbb{Z}^{d}}\) by endowing \(\mathcal{A}\) with the discrete topology and considering in \(\mathcal{A}^{\mathbb{Z}^{d}}\) the product topology, which is generated by cylinders. Since \(\mathcal{A}\) is finite, \(\mathcal{A}^{\mathbb{Z}^{d}}\) is a metrizable compact space. In this space, the group \(\mathbb{Z}^{d}\) acts by translations (or shifts), defined for every \(\boldsymbol{n}\in\mathbb{Z}^{d}\) as
\[S^{\boldsymbol{n}}(x)_{\boldsymbol{k}}=x_{\boldsymbol{n}+\boldsymbol{k}},\ x=(x_{ \boldsymbol{k}})_{\boldsymbol{k}}\in\mathcal{A}^{\mathbb{Z}^{d}},\ \boldsymbol{n},\boldsymbol{k}\in\mathbb{Z}^{d}.\]
Let \(P\subseteq\mathbb{Z}^{d}\) be a finite subset. A _pattern_ is an element \(\mathfrak{p}\in\mathcal{A}^{P}\). We say that \(P\) is the _support_ of \(\mathfrak{p}\), and we denote \(P=\operatorname{supp}(\mathfrak{p})\). A pattern _occurs in_\(x\in\mathcal{A}^{\mathbb{Z}^{d}}\), if there exists \(\boldsymbol{n}\in\mathbb{Z}^{d}\) such that \(\mathfrak{p}=x|_{\boldsymbol{n}+P}\) (identifying \(\boldsymbol{n}+P\) with \(P\) by translation). In this case, we denote it \(\mathfrak{p}\sqsubseteq x\) and we call this \(\boldsymbol{n}\) an _occurrence in_\(x\) of \(\mathfrak{p}\).
A _subshift_\((X,S,\mathbb{Z}^{d})\) is given by a closed subset \(X\subseteq\mathcal{A}^{\mathbb{Z}^{d}}\) which is invariant by the \(\mathbb{Z}^{d}\)-action. A subshift defines a _language_. For a finite subset \(P\Subset\mathbb{Z}^{d}\) we define
\[\mathcal{L}_{P}(X)=\{\mathfrak{p}\in\mathcal{A}^{P}:\exists x\in X,\ \mathfrak{p} \sqsubseteq x\}.\]
The _language_ of a subshift \(X\) is defined as
\[\mathcal{L}(X)=\bigcup_{P\Subset\mathbb{Z}^{d}}\mathcal{L}_{P}(X).\]
Let \((X,S,\mathbb{Z}^{d})\) be a subshift and \(x\in X\). We say that \(\boldsymbol{p}\in\mathbb{Z}^{d}\) is a _period_ of \(x\) if for all \(\boldsymbol{n}\in\mathbb{Z}^{d}\), \(x_{\boldsymbol{n}+\boldsymbol{p}}=x_{\boldsymbol{n}}\). The subshift \((X,S,\mathbb{Z}^{d})\) is said to be _aperiodic_ if there are no nontrivial periods.
Let \(\mathcal{B}\) be another finite alphabet and \(Y\subseteq\mathcal{B}^{\mathbb{Z}^{d}}\) be a subshift. For \(P\Subset\mathbb{Z}^{d}\), we define a _\(P\)-block map_ as a map of the form \(\Phi:\mathcal{L}_{P}(X)\to\mathcal{B}\). This induces a factor map \(\phi:X\to Y\) given by
\[\phi(x)_{\boldsymbol{n}}=\Phi(x|_{\boldsymbol{n}+P}).\]
The map \(\phi\) is called the _sliding block code_ induced by \(\Phi\), and \(P\) is the support of the map \(\phi\). In most cases we may assume that the support of a sliding block code is a ball of the form \(B(\boldsymbol{0},r)\), where \(r\) is a positive integer. We define the _radius_, denoted \(r(\phi)\), as the smallest positive integer \(r\) for which a \(B(\boldsymbol{0},r)\)-block map inducing \(\phi\) can be defined. The next theorem characterizes the factor maps between two subshifts.
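For illustration, the following Python sketch applies a sliding block code to a finite sample of a \(2\)-dimensional configuration. The configuration `x` and the block map `Phi` are illustrative assumptions (here the ball \(B(\boldsymbol{0},r)\) is taken in the maximum norm), and only cells whose full window lies in the sample are recoded.

```python
# Minimal sketch of a 2-dimensional sliding block code (d = 2).
# phi(x)_n = Phi(x|_{n + B(0, r)}) is evaluated on a finite dict sample.

def sliding_block_code(x, r, Phi):
    ball = [(a, b) for a in range(-r, r + 1) for b in range(-r, r + 1)]
    y = {}
    for (i, j) in x:
        window = {}
        for (a, b) in ball:
            if (i + a, j + b) not in x:   # window leaves the finite sample
                break
            window[(a, b)] = x[(i + a, j + b)]
        else:
            y[(i, j)] = Phi(window)       # complete window: recode the cell
    return y

def Phi(window):
    # toy radius-1 block map (an assumption): majority letter on the window
    letters = list(window.values())
    return max(set(letters), key=letters.count)

x = {(i, j): (i + j) % 2 for i in range(5) for j in range(5)}
print(sliding_block_code(x, 1, Phi))
```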
**Theorem 2.11** (Curtis-Hedlund-Lyndon).: _Let \((X,S,\mathbb{Z}^{d})\) and \((Y,S,\mathbb{Z}^{d})\) be two subshifts. A map \(\phi:(X,S,\mathbb{Z}^{d})\to(Y,S,\mathbb{Z}^{d})\) is a factor map if and only if there exists a \(B(\boldsymbol{0},r)\)-block map \(\Phi:\mathcal{L}_{B(\boldsymbol{0},r)}(X)\to\mathcal{L}_{1}(Y)\), such that \(\phi(x)_{\boldsymbol{n}}=\Phi(x|_{\boldsymbol{n}+B(\boldsymbol{0},r)})\), for all \(\boldsymbol{n}\in\mathbb{Z}^{d}\) and \(x\in X\)._
For \(\operatorname{GL}(d,\mathbb{Z})\)-epimorphisms there is a similar characterization. See [8, Theorem 2.7] for a proof.
**Theorem 2.12** (Curtis-Hedlund-Lyndon theorem for \(\operatorname{GL}(d,\mathbb{Z})\)-epimorphisms).: _Let \((X,S,\mathbb{Z}^{d})\) and \((Y,S,\mathbb{Z}^{d})\) be two subshifts and \(M\in\operatorname{GL}(d,\mathbb{Z})\). A map \(\phi:(X,S,\mathbb{Z}^{d})\to(Y,S,\mathbb{Z}^{d})\) is an \(M\)-epimorphism if and only if there exists a \(B(\boldsymbol{0},r)\)-block map \(\Phi:\mathcal{L}_{B(\boldsymbol{0},r)}(X)\to\mathcal{L}_{1}(Y)\), such that \(\phi(x)_{\boldsymbol{n}}=\Phi(x|_{M^{-1}\boldsymbol{n}+B(\boldsymbol{0},r)})\), for all \(\boldsymbol{n}\in\mathbb{Z}^{d}\) and \(x\in X\)._
This means that, for any \(\operatorname{GL}(d,\mathbb{Z})\)-epimorphism \(\phi\), we can define a _radius_ (also denoted by \(r(\phi)\)) as the infimum of the \(r\in\mathbb{N}\) such that a \(B(\boldsymbol{0},r)\)-block map inducing \(\phi\) can be defined. In the case \(r(\phi)=0\), we say that \(\phi\) is induced by a _letter-to-letter map_.
#### 2.3.2. Substitutive subshifts
We provide a brief overview of multidimensional substitutive subshifts of constant-shape that will be used throughout this article. We refer to [8] for basic properties on this topic, whose notation we follow. Let \(L\in\mathcal{M}_{d}(\mathbb{Z})\) be an integer expansion matrix, \(F\subseteq\mathbb{Z}^{d}\) be a fundamental domain of \(L(\mathbb{Z}^{d})\) in \(\mathbb{Z}^{d}\), i.e., a set of representatives of the classes of \(\mathbb{Z}^{d}/L(\mathbb{Z}^{d})\) (with \(\mathbf{0}\in F\)), and \(\mathcal{A}\) be a finite alphabet. A _constant-shape substitution_ is a map \(\zeta:\mathcal{A}\to\mathcal{A}^{F}\). We say that \(F\) is the _support_ of the substitution. Since any element \(\boldsymbol{n}\in\mathbb{Z}^{d}\) can be expressed uniquely as \(\boldsymbol{n}=L(\boldsymbol{j})+\boldsymbol{f}\), with \(\boldsymbol{j}\in\mathbb{Z}^{d}\) and \(\boldsymbol{f}\in F\), the substitution extends to \(\mathcal{A}^{\mathbb{Z}^{d}}\) as
\[\zeta(x)_{L(\boldsymbol{j})+\boldsymbol{f}}=\zeta(x_{\boldsymbol{j}})_{ \boldsymbol{f}}.\]
For any \(n>0\), we define the \(n\)-th iteration of the substitution \(\zeta^{n}:\mathcal{A}\to\mathcal{A}^{F_{n}}\) by induction: \(\zeta^{n+1}=\zeta\circ\zeta^{n}\), where the supports of these substitutions satisfy the recurrence \(F_{n+1}=L(F_{n})+F_{1}\) for all \(n\geq 1\). We will always assume that the sequence of supports \((F_{n})_{n>0}\) is _Følner_, i.e., for all \(\boldsymbol{n}\in\mathbb{Z}^{d}\) we have that
\[\lim_{n\to\infty}\frac{|F_{n}\triangle(F_{n}+\boldsymbol{n})|}{|F_{n}|}=0.\]
The supports do not need to cover the whole space. Nevertheless, up to adding a finite set and taking its images under powers of the expansion matrix \(L\), they cover the space. This property is explained in the following proposition. It is similar to the notion of remainder in numeration theory and will be technically useful.
**Proposition 2.13**.: _[_8_, Proposition 2.10]_ _Let \(\zeta\) be a constant-shape substitution. Then, the set \(K_{\zeta}=\bigcup\limits_{m>0}((id-L^{m})^{-1}(F_{m})\cap\mathbb{Z}^{d})\) is finite and satisfies_
\[\bigcup_{n\geq 0}L^{n}(K_{\zeta})+F_{n}=\mathbb{Z}^{d},\]
_using the notation \(F_{0}=\{0\}\)._
The _language_ of a substitution is the set of all patterns that occur in \(\zeta^{n}(a)\), for some \(n>0\), \(a\in\mathcal{A}\), i.e.,
\[\mathcal{L}_{\zeta}=\{\mathtt{p}\colon\mathtt{p}\sqsubseteq\zeta^{n}(a),\ \text{for some}\ n>0,\ a\in\mathcal{A}\}.\]
A substitution \(\zeta\) is called _primitive_ if there exists a positive integer \(n>0\), such that for every \(a,b\in\mathcal{A}\), \(b\) occurs in \(\zeta^{n}(a)\). If \(\zeta\) is a primitive constant-shape substitution, the existence of _periodic points_ is well-known, i.e., there exists at least one point \(x_{0}\in X_{\zeta}\) such that \(\zeta^{p}(x_{0})=x_{0}\) for some \(p>0\). In the primitive case, the subshift is preserved by replacing the substitution with a power of it; that is, \(X_{\zeta^{n}}\) is equal to \(X_{\zeta}\) for all \(n>0\). Consequently, we may assume the existence of at least one fixed point. In other words, there exists a point \(x\in X_{\zeta}\) such that \(x=\zeta(x)\). As in the one-dimensional case, it is important to note that the number of periodic points (or, if we consider proper iterations, the number of fixed points) is finite.
The substitutive subshift \((X_{\zeta},S,\mathbb{Z}^{d})\) is the topological dynamical system, where \(X_{\zeta}\) is the set of all sequences \(x\in\mathcal{A}^{\mathbb{Z}^{d}}\) such that every pattern occurring in \(x\) is in \(\mathcal{L}_{\zeta}\). When the substitutive subshift \((X_{\zeta},S,\mathbb{Z}^{d})\) is aperiodic, the substitution satisfies a combinatorial property called _recognizability_ [8, 28].
**Definition 2.14**.: Let \(\zeta\) be a primitive substitution and \(x\in X_{\zeta}\) be a fixed point. We say that \(\zeta\) is _recognizable on \(x\)_ if there exists some constant \(R>0\) such that for
all \(\boldsymbol{i},\boldsymbol{j}\in\mathbb{Z}^{d}\),
\[x|_{B(L_{\zeta}(\boldsymbol{i}),R)\cap\mathbb{Z}^{d}}=x|_{B(\boldsymbol{j},R) \cap\mathbb{Z}^{d}}\implies(\exists\boldsymbol{k}\in\mathbb{Z}^{d})((\boldsymbol {j}=L_{\zeta}(\boldsymbol{k}))\wedge(x_{\boldsymbol{i}}=x_{\boldsymbol{k}})).\]
The recognizability property implies some topological and combinatorial properties of the substitutive subshift that we summarize in the following:
* The substitutive subshift \((X_{\zeta},S,\mathbb{Z}^{d})\) is aperiodic.
* For any \(n>0\), the map \(\zeta^{n}:X_{\zeta}\to\zeta^{n}(X_{\zeta})\) is a homeomorphism.
* For any \(n>0\), every \(x\in X_{\zeta}\) can be written in a unique way \(x=S^{\boldsymbol{f}}\zeta^{n}(x_{1})\) with \(\boldsymbol{f}\in F_{n}\) and \(x_{1}\in X_{\zeta}\).
It follows that the map \(\pi_{n}\colon X_{\zeta}\to F_{n}\) by \(\pi_{n}(x)=\boldsymbol{f}\) when \(x\in S^{\boldsymbol{f}}\zeta^{n}(X_{\zeta})\) is well-defined, continuous and can be extended to a factor map \(\pi\colon(X_{\zeta},S,\mathbb{Z}^{d})\to(\overleftarrow{\mathbb{Z}^{d}}_{(L^{ n})},\boldsymbol{+},\mathbb{Z}^{d})\) defined as \(\pi(x)=(\pi_{n}(x))_{n}\)[8].
## 3. Description of the linear representation group of odometer systems
In this section, we describe the linear representation group and its elements for several odometers, specifically the universal and constant-base \(\mathbb{Z}^{2}\)-odometer systems. It is worth noting that we did not find a similar result in the existing literature. One of our main tools involves the characterization (NC 1) for matrices in the linear representation group of an odometer system \(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}\).
We begin by presenting an equivalent formulation of (NC 1) in terms of arithmetical equations. For any given \(n\in\mathbb{N}\), let \(L_{n}\in\mathcal{M}(d,\mathbb{Z})\) be a matrix such that \(L_{n}(\mathbb{Z}^{d})=Z_{n}\). It is important to note that this matrix is unique, up to composition with a matrix in \(\operatorname{GL}(d,\mathbb{Z})\). Then, the condition (NC 1) is equivalent to: for all \(n\in\mathbb{N}\), there exists \(m_{M}(n)\in\mathbb{N}\) such that \(L_{n}^{-1}ML_{m_{M}(n)}\) is an endomorphism of \(\mathbb{Z}^{d}\). Since \(\det(L)L^{-1}=\operatorname{adj}(L)\), where \(\operatorname{adj}(L)\) is the _adjugate matrix_ of \(L\), we can express (NC 1) equivalently as:
(NC 2) \[\forall n\in\mathbb{N},\exists m_{M}(n)\in\mathbb{N},\ \operatorname{adj}(L_{n}) ML_{m_{M}(n)}\equiv 0\ (\operatorname{mod}\ \det(L_{n})).\]
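Condition (NC 2) can be tested numerically for small ranges of \(n\) and \(m\). The following Python sketch does so in the constant-base case \(Z_{n}=L^{n}(\mathbb{Z}^{2})\) treated below; the search bounds and the sample matrices are assumptions, and only a success within the bounds is a certificate for the tested values of \(n\).

```python
# Finite numerical check of (NC 2) for Z_n = L^n(Z^2).
# N and M_MAX are assumed search bounds: a failure within them is inconclusive.

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def power(A, n):
    R = [[1, 0], [0, 1]]
    for _ in range(n):
        R = mul(R, A)
    return R

def adj(A):  # adjugate of a 2x2 integer matrix
    return [[A[1][1], -A[0][1]], [-A[1][0], A[0][0]]]

def nc2(L, M, N=6, M_MAX=40):
    for n in range(1, N + 1):
        Ln = power(L, n)
        d = abs(Ln[0][0] * Ln[1][1] - Ln[0][1] * Ln[1][0])  # |det(L^n)|
        A = mul(adj(Ln), M)
        if not any(all(x % d == 0 for row in mul(A, power(L, m)) for x in row)
                   for m in range(M_MAX)):
            return False
    return True

L1 = [[2, -1], [1, 5]]
M = [[2, 1], [-1, -1]]   # commutes with L1 (cf. Example 3.4 below)
print(nc2(L1, M))        # True: here m = n already works
```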
### The universal \(\mathbb{Z}^{d}\)-odometer case
Let \((\Gamma_{n})_{n\in\mathbb{N}}\) be an enumeration of all finite-index subgroups of \(\mathbb{Z}^{d}\). We define the _universal d-dimensional odometer system_ as follows: Start with \(\Lambda_{0}=\Gamma_{0}\), and for any \(n\geq 1\) set \(\Lambda_{n}=\Lambda_{n-1}\cap\Gamma_{n}\). Since the intersection of finite-index subgroups remains a finite-index subgroup, we can define the universal \(d\)-dimensional odometer as \(\overleftarrow{\mathbb{Z}^{d}}_{(\Lambda_{n})}\). This odometer is universal in the sense that, by Lemma 2.5, any odometer system is a topological factor of the universal odometer. For example, the universal 1-dimensional odometer is equal to \(\overleftarrow{\mathbb{Z}}_{(n!\mathbb{Z})}\). With respect to its linear representation group, (NC 2) leads to the following result.
**Proposition 3.1**.: _The linear representation group of the \(d\)-dimensional universal odometer is equal to \(\operatorname{GL}(d,\mathbb{Z})\)._
Proof.: Consider \(L_{n}\in\mathcal{M}(d,\mathbb{Z})\) such that \(L_{n}(\mathbb{Z}^{d})=\Lambda_{n}\). A matrix \(M\in\operatorname{GL}(d,\mathbb{Z})\) is in \(\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(\Lambda_{n})})\) if and only if \(M\) satisfies (NC 2). Now, for any \(n\in\mathbb{N}\), we can choose \(m(n)\in\mathbb{N}\) large enough such that \(\Lambda_{m(n)}\leqslant\det(L_{n})\mathbb{Z}^{d}\). This implies that \(\operatorname{adj}(L_{n})ML_{m(n)}\equiv 0\ (\operatorname{mod}\ \det(L_{n}))\) for any matrix \(M\in\operatorname{GL}(d,\mathbb{Z})\). We then conclude that \(\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(\Lambda_{n})})=\operatorname{GL}(d, \mathbb{Z})\).
### The constant-base \(\mathbb{Z}^{2}\)-odometer case
We will be mainly interested in \(\text{GL}(d,\mathbb{Z})\)-endomorphisms of constant-base odometers, i.e., when \(Z_{n}=L^{n}(\mathbb{Z}^{d})\) for each \(n\in\mathbb{N}\) and for some expansion matrix \(L\). In this case we get the following direct corollary of Lemma 2.6 and the condition (NC 2).
**Corollary 3.2**.: _Let \(L\in\mathcal{M}(d,\mathbb{Z})\) be an expansion matrix._
1. _A matrix_ \(M\in\text{GL}(d,\mathbb{Z})\) _is in_ \(\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})})\)_, if and only if_
* (NC 3) \(\forall n\in\mathbb{N},\exists m(n)\in\mathbb{N},\ \operatorname{adj}(L^{n})ML^{m(n)}\equiv 0\ (\operatorname{mod}\ \det(L^{n}))\)_._
2. _If_ \(M\in\text{GL}(d,\mathbb{Z})\) _commutes with some power of the expansion matrix_ \(L\)_, then_ \(M\) _is in the linear representation semigroup_ \(\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})})\)_._
3. _For any_ \(M\in\text{GL}(d,\mathbb{Z})\) _we have that_ \(\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(ML^{n}M^{-1})})=M\vec{N}(\overleftarrow {\mathbb{Z}^{d}}_{(L^{n})})M^{-1}\)_._
In the next theorem, we present the structure of the linear representation group of constant-base \(\mathbb{Z}^{2}\)-odometer systems based on computable arithmetical conditions on the expansion matrix \(L\). Within this family, we obtain a bifurcation phenomenon at the level of the linear representation group, depending on arithmetic relations between the coefficients of the matrix \(L\). To describe the different cases, we introduce some additional notations. For any positive integer \(n>1\), the _radical \(\operatorname{rad}(n)\) of \(n\)_ is defined as the product of the distinct prime numbers that divide \(n\). If \(n<-1\), we define \(\operatorname{rad}(n)\) as \(\operatorname{rad}(-n)\). The _centralizer \(\operatorname{Cent}_{\operatorname{GL}(2,\mathbb{Z})}(L)\) of a matrix \(L\) in \(\operatorname{GL}(2,\mathbb{Z})\)_ is defined as the subgroup consisting of all matrices in \(\operatorname{GL}(2,\mathbb{Z})\) commuting with \(L\). Recall that, as established in Corollary 3.2, the centralizer \(\operatorname{Cent}_{\operatorname{GL}(2,\mathbb{Z})}(L)\) is always a subgroup of \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})\).
**Theorem 3.3**.: _Let \(L\in\mathcal{M}(2,\mathbb{Z})\) be an integer expansion matrix._
1. _If_ \(\text{rad}(\det(L))\) _divides_ \(\text{trace}(L)\)_, then the linear representation group_ \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})\) _is equal to_ \(\text{GL}(2,\mathbb{Z})\)_._
2. _Otherwise:_ (a) _If the spectrum of the matrix_ \(L\) _is disjoint from the integers, then the linear representation group_ \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})\) _is the centralizer_ \(\operatorname{Cent}_{\operatorname{GL}(2,\mathbb{Z})}(L)\)_. Moreover, if the spectrum of_ \(L\) _is disjoint from the real line, then_ \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})\) _is a finite group._ (b) _When the spectrum of_ \(L\) _contains an integer value, the linear representation group_ \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})\) _is finite or virtually_ \(\mathbb{Z}\)_. More precisely, under explicit arithmetical properties of_ \(L\)_,_ \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})\) _is isomorphic to_ \(\mathbb{Z}/2\mathbb{Z}\) _or_ \((\mathbb{Z}/2\mathbb{Z})^{2}\)_, or its abelianization is finite and its commutator subgroup is cyclic._
Along the proof, the group structure of \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})\) is specified in terms of arithmetical properties of the coefficients of \(L\).
The following examples illustrate the different cases of Theorem 3.3 according to the expansion matrix \(L\).
**Example 3.4** (Different results for Theorem 3.3).:
1. As we will see in the proof of Theorem 3.3, the case (1) can be easily generalized to higher dimensions in the following way: If \(\operatorname{rad}(\det(L))\) divides every non-leading coefficient of the characteristic polynomial of \(L\), then the linear representation semigroup \(\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})})\) is equal to \(\operatorname{GL}(d,\mathbb{Z})\). In particular, if \(L=pM\), with \(p\in\mathbb{Z}\) and \(M\in\operatorname{GL}(d,\mathbb{Z})\), then the linear representation group \(\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})})\) is \(\operatorname{GL}(d,\mathbb{Z})\).
2. The matrix \(L_{1}=\begin{pmatrix}2&-1\\ 1&5\end{pmatrix}\) illustrates the case (2)(a): \(\operatorname{trace}(L_{1})=7\), \(\det(L_{1})=11\), and \(L_{1}\) has real eigenvalues (which are equal to \(7/2\pm\sqrt{5}/2\)). The matrices in \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L_{1}^{n})})\) are the ones commuting with \(L_{1}\), which is an infinite group containing \(\begin{pmatrix}2&1\\ -1&-1\end{pmatrix}\).
3. The matrix \(L_{2}=\begin{pmatrix}2&-1\\ 1&3\end{pmatrix}\) also illustrates the case (2)(a) but with a spectrum disjoint from the real line: \(\operatorname{trace}(L_{2})=5\) and \(\det(L_{2})=7\), and \(L_{2}\) has complex eigenvalues \(5/2\pm i\sqrt{3}/2\). The linear representation semigroup \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L_{2}^{n})})\) is equal to \(\operatorname{Cent}_{\operatorname{GL}(2,\mathbb{Z})}(L_{2})\), which corresponds to the following set, as can also be confirmed by the brute-force sketch after this example: \[\left\{\begin{array}{cc}\begin{pmatrix}1&1\\ -1&0\end{pmatrix},&\begin{pmatrix}-1&-1\\ 1&0\end{pmatrix},&\begin{pmatrix}0&-1\\ 1&1\end{pmatrix}\\ \begin{pmatrix}0&1\\ -1&-1\end{pmatrix},&\begin{pmatrix}1&0\\ 0&1\end{pmatrix},&\begin{pmatrix}-1&0\\ 0&-1\end{pmatrix}\end{array}\right\}.\]
4. The matrix \(L_{3}=\begin{pmatrix}6&1\\ 0&2\end{pmatrix}\) illustrates the case (2)(b). This is an upper triangular matrix which is not diagonalizable by \(\operatorname{GL}(2,\mathbb{Z})\). It will be proved that \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L_{3}^{n})})\) is conjugate to \(\left\{\begin{pmatrix}m_{11}&m_{12}\\ 0&m_{22}\end{pmatrix}:|m_{11}m_{22}|=1,m_{12}\in\mathbb{Z}\right\}\) via the matrix \(\begin{pmatrix}1&0\\ 4&1\end{pmatrix}\), so it is virtually \(\mathbb{Z}\). It can be directly checked that the linear representation group \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})\) associated with a matrix \(L\) diagonalizable by \(\operatorname{GL}(2,\mathbb{Z})\) also has the same group structure, being isomorphic to a set of invertible upper triangular matrices.
5. The matrix \(L_{4}=\begin{pmatrix}3&1\\ 0&5\end{pmatrix}\) also concerns the case (2)(b). This matrix has eigenvalues \(3\) and \(5\). Along the proof of Theorem 3.3 it will be shown that a matrix \(M\) is in \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L_{4}^{n})})\) if and only if \(M\) commutes with \(L_{4}\), so \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L_{4}^{n})})\) is isomorphic to \((\mathbb{Z}/2\mathbb{Z})^{2}\).
6. The last example illustrating the case (2)(b) is the matrix \(L_{5}=\begin{pmatrix}2&1\\ 0&3\end{pmatrix}\) with eigenvalues \(2\) and \(3\). It will also be shown that \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L_{5}^{n})})=\operatorname{Cent}_{\operatorname{GL}(2,\mathbb{Z})}(L_{5})\), which is isomorphic to \((\mathbb{Z}/2\mathbb{Z})^{2}\).
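For illustration, the following Python sketch confirms the finite centralizer in item (3) by brute force. The entry bound \(\{-1,0,1\}\) is an assumption that happens to capture this particular group, since all six matrices listed above have entries in this range.

```python
from itertools import product

# Brute-force search for Cent_{GL(2,Z)}(L_2) among matrices with
# entries in {-1, 0, 1}; cf. Example 3.4 (3).

L2 = [[2, -1], [1, 3]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

cent = []
for a, b, c, d in product(range(-1, 2), repeat=4):
    M = [[a, b], [c, d]]
    if abs(a * d - b * c) == 1 and mul(M, L2) == mul(L2, M):
        cent.append(M)

print(len(cent))   # expected: 6, the order-6 group generated by (1 1; -1 0)
for M in cent:
    print(M)
```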
**Remark 3.5**.: Note that Theorem 3.3 implies that a factor map between equicontinuous systems is not necessarily compatible with \(\operatorname{GL}(d,\mathbb{Z})\)-endomorphisms. Consider \(X\) as the universal \(\mathbb{Z}^{2}\)-odometer, and set \(Y=\overleftarrow{\mathbb{Z}^{2}}_{(L_{1}^{n})}\), where \(L_{1}\) is the matrix of Example 3.4. Hence \((Y,+,\mathbb{Z}^{2})\) is an equicontinuous factor of \((X,+,\mathbb{Z}^{2})\), while \(\vec{N}(X)=\operatorname{GL}(2,\mathbb{Z})\) by Proposition 3.1 and \(\vec{N}(Y)=\operatorname{Cent}_{\operatorname{GL}(2,\mathbb{Z})}(L_{1})\) by Theorem 3.3, so not every \(\operatorname{GL}(2,\mathbb{Z})\)-endomorphism of \(X\) can be pushed to one of \(Y\).
### Proof of Theorem 3.3
In this subsection, we prove Theorem 3.3. We decompose the proof according to the spectral properties of the expansion matrix \(L\), and we obtain more precise results than the ones stated in Theorem 3.3. We start with the case where the expansion matrix has integer eigenvalues.
From now on, an integer expansion matrix \(L\) will be denoted as \(L=\begin{pmatrix}p&q\\ r&s\end{pmatrix}\), its powers as \(L^{n}=\begin{pmatrix}p(n)&q(n)\\ r(n)&s(n)\end{pmatrix}\) and a matrix \(M\) in \(\operatorname{GL}(2,\mathbb{Z})\) as \(M=\begin{pmatrix}m_{11}&m_{12}\\ m_{21}&m_{22}\end{pmatrix}\).
#### 3.3.1. The triangular case
We now consider the case where \(L\) is a triangular matrix. We focus only on the upper triangular case, i.e., \(q\neq 0\) and \(r=0\). The lower triangular case can be deduced from this one, thanks to Corollary 3.2, via conjugation with the matrix \(\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\). For all \(n\in\mathbb{N}\) we have \(L^{n}=\begin{pmatrix}p^{n}&q(n)\\ 0&s^{n}\end{pmatrix},\) where \(q(n)=q(p^{n}-s^{n})/(p-s)=q\sum\limits_{i=0}^{n-1}p^{i}s^{n-1-i}\). Since \(\det(L)=ps\) and \(\operatorname{trace}(L)=p+s\), \(\operatorname{rad}(\det(L))\) divides \(\operatorname{trace}(L)\) if and only if \(\operatorname{rad}(p)=\operatorname{rad}(s)\). In this case we get a more precise result about the linear representation group than the one mentioned in Theorem 3.3.
**Proposition 3.6**.: _Let \(L\in\mathcal{M}(2,\mathbb{Z})\) be an expansion upper triangular matrix such that \(\operatorname{rad}(\det(L))\) does not divide \(\operatorname{trace}(L)\). Then, we have one of the following:_
1. _If_ \(\operatorname{rad}(p)\) _does not divide_ \(s\) _and_ \(\operatorname{rad}(s)\) _divides_ \(p\)_, then a matrix_ \(M\in GL(2,\mathbb{Z})\) _is in_ \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{L^{n}})\) _if and only if_ \((p-s)^{2}m_{12}=m_{21}q^{2}+(p-s)(m_{11}-m_{22})q\)_. Moreover,_ \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})\) _is virtually_ \(\mathbb{Z}\)_._
2. _Assume that_ \(\operatorname{rad}(p)\) _divides_ \(s\) _and_ \(\operatorname{rad}(s)\) _does not divide_ \(p\)_. Then_ \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})\) _is virtually_ \(\mathbb{Z}\)_._
3. _If_ \(\operatorname{rad}(p)\) _does not divide_ \(s\) _and_ \(\operatorname{rad}(s)\) _does not divide_ \(p\)_, we have two cases:_ * _If_ \(2q\in(p-s)\mathbb{Z}\)_, then_ \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})\) _is isomorphic to_ \(\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/2\mathbb{Z}\)_._ * _Otherwise,_ \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})\) _is isomorphic to_ \(\mathbb{Z}/2\mathbb{Z}\)_._
Proof.: Let \(M\) be in \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})\). Define the matrix \(\overline{M}=(p-s)M-(m_{11}-m_{22})L-(p\cdot m_{22}-m_{11}\cdot s)\mathrm{id}_{ \mathbb{R}^{2}}\). Then, \(\overline{M}\) satisfies (NC 3). Moreover, note that \(\overline{M}\) has the form \(\overline{M}=\begin{pmatrix}0&\overline{m_{12}}\\ \overline{m_{21}}&0\end{pmatrix}\), where \(\overline{m_{12}}=(p-s)m_{12}-(m_{11}-m_{22})q\) and \(\overline{m_{21}}=(p-s)m_{21}\), with \(\overline{m_{12}},\overline{m_{21}}\in\mathbb{Z}\). Now, (NC 3) implies that for all \(n,m>0\),
\[\begin{pmatrix}-\overline{m_{21}}p^{m}q(n)&\overline{m_{12}}s^{n+m}-\overline{m_{21}}q(m)q(n)\\ \overline{m_{21}}p^{n+m}&\overline{m_{21}}p^{n}q(m)\end{pmatrix}\equiv\begin{pmatrix}0&0\\ 0&0\end{pmatrix}\ (\text{mod }p^{n}s^{n}). \tag{3}\]
Suppose that \(\operatorname{rad}(s)\) does not divide \(p\). Then, there exists a prime number \(t\) dividing \(s\) such that for all \(n>0\) and \(m>0\), \(p^{m}\) is an invertible element in \(\mathbb{Z}/t^{n}\mathbb{Z}\). Hence, \(\overline{m}_{21}\equiv 0\ (\text{mod }t^{n})\) for any \(n>0\), which implies that \(\overline{m}_{21}=0\), so \(m_{21}=0\). Now, by (3), we get that
\[\forall n\geq 0,\exists m\geq 0,\quad\overline{m}_{12}s^{m}\equiv 0\ (\text{mod }p^{n}). \tag{4}\]
There are two cases:
* If \(\mathrm{rad}(p)\) does not divide \(s\), then (4) implies that \(\overline{m}_{12}=0\). We conclude that \(\overline{M}=\begin{pmatrix}0&0\\ 0&0\end{pmatrix}\), i.e., \((p-s)M=(m_{11}-m_{22})L+(p\cdot m_{22}-m_{11}\cdot s)\mathrm{id}_{\mathbb{R}^{ 2}}\). Since \(m_{21}=0\), then \(M\) has the form \[M=\begin{pmatrix}m_{11}&m_{12}(m_{11},m_{22})\\ 0&m_{22}\end{pmatrix},\] where \(m_{12}(m_{11},m_{22})\) satisfies \((p-s)m_{12}(m_{11},m_{22})=(m_{11}-m_{22})q\).
* Note that \(m_{11}=m_{22}\) if and only if \(m_{12}=0\).
* If \(m_{11}\neq m_{22}\), then \(m_{11}-m_{22}\in\{-2,2\}\), so \((p-s)m_{12}=\pm 2q\). Since \(M\) has integer coefficients, this necessarily implies that \(2q\in(p-s)\mathbb{Z}\). If this condition is satisfied, then \(M\) has the form \[M=\begin{pmatrix}m_{11}&\frac{(m_{11}-m_{22})q}{p-s}\\ 0&m_{22}\end{pmatrix}.\] It is not difficult to see that \(M^{2}\) is the identity matrix. We conclude that \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})\) is isomorphic to \(\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/2\mathbb{Z}\). If \(2q\notin(p-s)\mathbb{Z}\), then \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})\) is isomorphic to \(\mathbb{Z}/2\mathbb{Z}\).
* If \(\mathrm{rad}(p)\) divides \(s\), then any \(\overline{m}_{12}\in\mathbb{Z}\) satisfies (4). Thus, any matrix \(M=\begin{pmatrix}m_{11}&m_{12}\\ 0&m_{22}\end{pmatrix}\) with \(|m_{11}m_{22}|=1\) satisfies (NC 3).
Finally, if \(\mathrm{rad}(s)\) divides \(p\), then for any \(n>0\) and any \(m\) large enough \(s^{n}\) divides \(p^{m}\) and \(q(m)\). Let \(t\) be a prime number dividing \(p\) that does not divide \(s\). Then, by (3) we obtain that
\[(p-s)^{2}\overline{m}_{12}s^{n+m}\equiv\overline{m}_{21}q^{2}s^{n+m}\ ( \mathrm{mod}\ t^{n}). \tag{5}\]
Since \(t\) does not divide \(s\), for any \(n,m>0\), \(s^{n+m}\) is an invertible element in \(\mathbb{Z}/t^{n}\mathbb{Z}\). So (5) is reduced to
\[\forall n,\ (p-s)^{2}\overline{m}_{12}\equiv\overline{m}_{21}q^{2}\ (\mathrm{mod}\ t^{n}). \tag{6}\]
This implies that \((p-s)^{2}\overline{m}_{12}=\overline{m}_{21}q^{2}\). Thus, we get that
\[(p-s)^{2}m_{12}=m_{21}q^{2}+(p-s)(m_{11}-m_{22})q. \tag{7}\]
This implies that if \(M\in\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})\), then \(M\) is in \(\operatorname{span}_{\mathbb{Q}}\left\{L,\mathrm{id},\begin{pmatrix}0&1\\ (p-s)^{2}/q^{2}&0\end{pmatrix}\right\}\). We now separate into two cases:
* If \((p-s)\) divides \(q\), we write \(q=k(p-s)\) for some \(k\in\mathbb{Z}\). By (7), we have that \[m_{12}=m_{21}k^{2}+k(m_{11}-m_{22}).\] Since \(|\det(M)|=1\) and \(\det(M)=(m_{11}+m_{21}\cdot k)(m_{22}-m_{21}\cdot k)\), we get that \(|m_{11}+m_{21}\cdot k|=1\) and \(|m_{22}-m_{21}\cdot k|=1\). We can parameterize the matrices in \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})\) as follows: \[\left\{\begin{array}{ccc}\begin{pmatrix}1-m\cdot k&-mk^{2}\\ m&1+m\cdot k\end{pmatrix},\begin{pmatrix}1-m\cdot k&2k-mk^{2}\\ m&m\cdot k-1\end{pmatrix}\\ \begin{pmatrix}-1-m\cdot k&-2k-mk^{2}\\ m&1+m\cdot k\end{pmatrix},\begin{pmatrix}-1-m\cdot k&-mk^{2}\\ m&-1+m\cdot k\end{pmatrix}&:m\in\mathbb{Z}\end{array}\right\}.\]
Note that this group is virtually \(\mathbb{Z}\), since the quotient by \(\left\langle\begin{pmatrix}1-k&-k^{2}\\ 1&1+k\end{pmatrix}\right\rangle\) is finite.
* If \((p-s)\) does not divide \(q\), we will find a matrix \(P\in\operatorname{GL}(2,\mathbb{Z})\) such that \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})\) is conjugate to the group of matrices \(\left\{\begin{pmatrix}m_{11}&m_{12}\\ 0&m_{22}\end{pmatrix}:m_{11},m_{12},m_{22}\in\mathbb{Z},|m_{11}m_{22}|=1\right\}\), and we conclude that \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})\) is virtually \(\mathbb{Z}\). Indeed, set \(c=\gcd(p-s,q)\) and \(g=(p-s)/c\), \(h=q/c\). Since \(\gcd(g,h)=1\), Bézout's lemma implies the existence of two numbers \(e,f\in\mathbb{Z}\) such that \(eh-gf=1\). A standard computation shows that \(P=\begin{pmatrix}e&f\\ g&h\end{pmatrix}\) is such a matrix, as made explicit in the sketch below.
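The construction of the conjugating matrix \(P\) is effective. The following Python sketch builds \(P\) from Bézout coefficients; the sample values of \(p,q,s\) are illustrative assumptions.

```python
from math import gcd

# Build P = (e f; g h) with c = gcd(p - s, q), g = (p - s)/c, h = q/c,
# and e, f chosen so that e*h - g*f = 1.  Sample p, q, s are assumptions.

def bezout(a, b):
    """Extended Euclid: returns (u, v) with u*a + v*b = gcd(a, b)."""
    if b == 0:
        return (1, 0)
    u, v = bezout(b, a % b)
    return (v, u - (a // b) * v)

p, q, s = 7, 4, 2
c = gcd(p - s, q)
g, h = (p - s) // c, q // c          # gcd(g, h) = 1
u, v = bezout(h, g)                  # u*h + v*g = 1
e, f = u, -v                         # hence e*h - g*f = 1
P = [[e, f], [g, h]]
print(P, "det =", e * h - f * g)     # det(P) = 1, so P lies in GL(2, Z)
```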
#### 3.3.2. The general case
We are ready to prove Theorem 3.3.
Proof of Theorem 3.3.: We continue to use the notations introduced in Section 3.3. Since we already proved the triangular case, we assume that the coefficients of the expansion matrix \(L\) satisfy \(q\cdot r\neq 0\).
It will be useful to note that the Cayley-Hamilton theorem implies that
\[L^{2}=\operatorname{trace}(L)L-\det(L)\mathrm{id}_{\mathbb{R}^{2}}. \tag{8}\]
First assume that \(\operatorname{rad}(\det(L))\) divides \(\operatorname{trace}(L)\). By (8), we can conclude that \(L^{2}\equiv 0\ (\operatorname{mod}\ \operatorname{rad}(\det(L)))\). Hence, for all \(n\in\mathbb{N}\) there exists \(m(n)\in\mathbb{N}\) large enough such that \(L^{m(n)}\equiv 0\ (\operatorname{mod}\ \det(L)^{n})\). Therefore, any matrix in \(\operatorname{GL}(2,\mathbb{Z})\) satisfies (NC 3) and we deduce that \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})=\operatorname{GL}(2,\mathbb{Z})\).
Now we deal with the case when \(\operatorname{rad}(\det(L))\) does not divide \(\operatorname{trace}(L)\). In dimension \(2\), this implies that \(L\) is diagonalizable. Let \(M=\begin{pmatrix}m_{11}&m_{12}\\ m_{21}&m_{22}\end{pmatrix}\) be in \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})\), so that it satisfies (NC 3). Define the matrix \(\overline{M}=rM-m_{21}L-(r\cdot m_{11}-p\cdot m_{21})\mathrm{id}_{\mathbb{R}^{2}}\). The matrix \(\overline{M}\) also satisfies (NC 3) and has the form \(\overline{M}=\begin{pmatrix}0&\overline{m_{12}}\\ 0&\overline{m_{22}}\end{pmatrix}\), with \(\overline{m}_{12},\overline{m}_{22}\in\mathbb{Z}\).
Suppose first that \(L\) has integer eigenvalues \(t_{1},t_{2}\in\mathbb{Z}\), i.e., we can write
\[(eh-fg)L=P\begin{pmatrix}t_{1}&0\\ 0&t_{2}\end{pmatrix}\mathrm{adj}(P),\quad\text{ for some integer matrix }P=\begin{pmatrix}e&f\\ g&h\end{pmatrix}.\]
If \(|eh-fg|=1\), then we can use Proposition 3.6 with Corollary 3.2 to conclude that \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})\) is conjugate (via \(P\) in \(\operatorname{GL}(2,\mathbb{Z})\)) to the linear representation group \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(t_{1}^{n}\mathbb{Z}\times t_{2}^{n}\mathbb{Z})})\). The same conclusion holds when \(L\) is conjugate (via a \(\operatorname{GL}(2,\mathbb{Z})\)-matrix) to a triangular matrix. We then assume that \(|eh-fg|>1\), \(e,f,g,h\in\mathbb{Z}\), and \(\gcd(e,g)=\gcd(f,h)=1\). For any \(n>0\), the coefficients of \(L^{n}\) are given by:
\[p(n)=\frac{eht_{1}^{n}-fgt_{2}^{n}}{eh-fg}\quad q(n)=\frac{ef(t_{2}^{n}-t_{1}^{n})}{eh-fg}\]
\[r(n)=\frac{gh(t_{1}^{n}-t_{2}^{n})}{eh-fg}\quad\ s(n)=\frac{eht_{2}^{n}-fgt_{1 }^{n}}{eh-fg}.\]
So, (NC 3) can be rewritten as:
\[gh(t_{1}^{m}-t_{2}^{m})[\overline{m}_{12}(eht_{2}^{n}-fgt_{1}^{n})- \overline{m}_{22}ef(t_{2}^{n}-t_{1}^{n})] \equiv 0\ (\text{mod}\ t_{1}^{n}t_{2}^{n})\] \[gh(t_{1}^{m}-t_{2}^{m})[\overline{m}_{22}(eht_{1}^{n}-fgt_{2}^{n} )-\overline{m}_{12}gh(t_{1}^{n}-t_{2}^{n})] \equiv 0\ (\text{mod}\ t_{1}^{n}t_{2}^{n})\] \[(eht_{2}^{m}-fgt_{1}^{m})[\overline{m}_{12}(eht_{2}^{n}-fgt_{1}^{n })-\overline{m}_{22}ef(t_{2}^{n}-t_{1}^{n})] \equiv 0\ (\text{mod}\ t_{1}^{n}t_{2}^{n})\] \[(eht_{2}^{m}-fgt_{1}^{m})[\overline{m}_{22}(eht_{1}^{n}-fgt_{2}^{n })-\overline{m}_{12}gh(t_{1}^{n}-t_{2}^{n})] \equiv 0\ (\text{mod}\ t_{1}^{n}t_{2}^{n}). \tag{9}\]
Since \(\text{rad}(\det(L))=\text{rad}(t_{1}t_{2})\) does not divide \(\text{trace}(L)=t_{1}+t_{2}\), one of the following three cases holds:
**Case 1**.: Suppose that \(\text{rad}(t_{1})\) divides \(t_{2}\), but there exists a prime number \(t\) dividing \(t_{2}\) that does not divide \(t_{1}\). Then (9) can be reduced to
\[fgt_{1}^{m+n}[\overline{m}_{22}\cdot eh-\overline{m}_{12}\cdot gh]\equiv 0\ (\text{mod}\ t^{n}). \tag{10}\]
Since \(t\) does not divide \(t_{1}\), for any \(n,m\geq 0\), \(t_{1}^{n+m}\) is an invertible element in \(\mathbb{Z}/t^{n}\mathbb{Z}\). We can also choose \(n\) large enough so that \(t^{n}\) does not divide any of the coefficients \(e,f,g,h\in\mathbb{Z}\). We then conclude that \(\overline{m}_{12}g=\overline{m}_{22}e\). This implies that
\[\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})\subseteq\left\{aL+b \text{id}+c\begin{pmatrix}0&e\\ 0&g\end{pmatrix},a,b,c\in\frac{1}{r}\mathbb{Z}\right\}\cap\text{GL}(2,\mathbb{ Z}).\]
Since \(P^{-1}\begin{pmatrix}0&e\\ 0&g\end{pmatrix}P=\begin{pmatrix}g&h\\ 0&0\end{pmatrix}\), the set \(P^{-1}\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})P\) is a subgroup of unimodular upper triangular matrices in \(G\), where
\[G=\left\{\begin{pmatrix}a&b\\ 0&c\end{pmatrix},\quad a,b,c\in\frac{1}{r}\mathbb{Z},|ac|=1\right\}.\]
Notice that any commutator of \(G\) is of the form \(\begin{pmatrix}1&b\\ 0&1\end{pmatrix}\). So, the derived subgroup \(G^{\prime}\) (generated by the commutators) is isomorphic to \(\mathbb{Z}\). Moreover, the abelianization \(G/G^{\prime}\) of \(G\) is finite. Therefore, the abelianization of \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})\) is finite, and its derived subgroup is isomorphic to a subgroup (possibly trivial) of \(\mathbb{Z}\). Conversely, a direct computation shows that the matrix \(P^{-1}\begin{pmatrix}1&\det(P)\\ 0&1\end{pmatrix}P\) satisfies (NC 3), proving that the derived subgroup of \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})\) is nontrivial.
**Case 2**.: The case where \(\text{rad}(t_{2})\) divides \(t_{1}\) but \(\text{rad}(t_{1})\) does not divide \(t_{2}\) is symmetric to the former one.
**Case 3**.: Neither \(\text{rad}(t_{1})\) divides \(t_{2}\) nor \(\text{rad}(t_{2})\) divides \(t_{1}\). The computations from the two former cases provide that
\[\overline{m}_{12}g=\overline{m}_{22}e\quad\text{and}\quad\overline{m}_{12}h=\overline{m}_{22}f.\]
Since \(eh-fg\neq 0\), this implies that \(\overline{m}_{12}=0\) and \(\overline{m}_{22}=0\), so \(\overline{M}=0\). We conclude that \(M\) commutes with \(L\), i.e., the linear representation group \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})\) is equal to \(\text{Cent}_{\text{GL}(2,\mathbb{Z})}(L)\), which can be isomorphic to \(\mathbb{Z}/2\mathbb{Z}\) or \((\mathbb{Z}/2\mathbb{Z})^{2}\).
Now we suppose that \(L\) does not have integer eigenvalues. A direct induction on (8) gives, for any \(n>0\), that \(L^{n}\equiv\text{trace}(L)^{n-1}L\ (\text{mod}\ \det(L))\). Since \(\text{rad}(\det(L))\) does not divide \(\text{trace}(L)\), there exists a prime number \(t\) dividing \(\det(L)\) that does not divide \(\text{trace}(L)\); in particular, \(t\) cannot divide both \(p\) and \(s\). Without loss of generality (up to a conjugation with \(\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\)) we may assume that \(t\) does not divide \(s\). As \(s(n)\equiv\operatorname{trace}(L)^{n-1}s\ (\bmod\ \det(L))\), then, for all \(n>0\) and \(m>0\), \(s(m)\) is an invertible element in \(\mathbb{Z}/t^{n}\mathbb{Z}\). Hence, (NC 3) implies that
\[\overline{m_{12}}s(n)-\overline{m_{22}}q(n)\equiv 0\ (\bmod\ t^{n}) \tag{11}\]
\[-\overline{m_{12}}r(n)+\overline{m_{22}}p(n)\equiv 0\ (\bmod\ t^{n}), \tag{12}\]
which is equivalent to
\[\operatorname{adj}(L^{n})\begin{pmatrix}\overline{m_{12}}\\ \overline{m_{22}}\end{pmatrix}\equiv\begin{pmatrix}0\\ 0\end{pmatrix}\ (\bmod\ t^{n}). \tag{13}\]
Consider the set \(E=\left\{\begin{pmatrix}\overline{m_{12}}\\ \overline{m_{22}}\end{pmatrix}\in\mathbb{Z}^{2}\colon\text{satisfying (13) for every }n\in\mathbb{N}\right\}\). One checks that, for any vector in \(E\), the congruences (11) and (12) in fact hold modulo \(\det(L)^{n}\), i.e., there exist integer sequences
\((k_{n}^{1})_{n>0},(k_{n}^{2})_{n>0}\subseteq\mathbb{Z}\) such that \(\det(L)^{n}k_{n}^{1}=\overline{m_{12}}s(n)-\overline{m_{22}}q(n)\) and \(\det(L)^{n}k_{n}^{2}=-\overline{m_{12}}r(n)+\overline{m_{22}}p(n)\), i.e.,
\[\begin{array}{ll}k_{n}^{1}&=\overline{m_{12}}\frac{s(n)}{\det(L)^{n}}- \overline{m_{22}}\frac{q(n)}{\det(L)^{n}}\\ \\ k_{n}^{2}&=-\overline{m_{12}}\frac{r(n)}{\det(L)^{n}}+\overline{m_{22}}\frac{p (n)}{\det(L)^{n}}.\end{array} \tag{16}\]
Since \(L\) is an expansion matrix, \(L^{-1}\) is a contraction, so we have that
\[\lim_{n\to\infty}\frac{p(n)}{\det(L)^{n}}=\lim_{n\to\infty}\frac{q(n)}{\det(L)^{n}}=\lim_{n\to\infty}\frac{r(n)}{\det(L)^{n}}=\lim_{n\to\infty}\frac{s(n)}{\det(L)^{n}}=0.\]
This implies that for all \(n\in\mathbb{N}\) large enough, \(k_{n}^{1}=k_{n}^{2}=0\). Since \(\operatorname{adj}(L^{n})\) is invertible over \(\mathbb{Q}\), we conclude that \(\overline{m_{12}}=\overline{m_{22}}=0\), i.e., \(\overline{M}=0\), so \(M\) commutes with \(L\) and \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})=\operatorname{Cent}_{\operatorname{GL}(2,\mathbb{Z})}(L)\). Finally, if the spectrum of \(L\) is disjoint from the real line, the eigenvalues of any matrix in \(\operatorname{Cent}_{\operatorname{GL}(2,\mathbb{Z})}(L)\) are algebraic integers all of whose conjugates have modulus one, hence roots of unity, so this centralizer is a finite group.
Theorem 3.3 implies that the linear representation group of constant-base \(\mathbb{Z}^{2}\)-odometer systems is computable. But the techniques developed in this article may not be directly applicable to higher dimensions. This raises the following question:
**Question 3.8**.: _Regarding the linear representation group of higher dimensional constant-base odometer systems, are its elements computable? Is its group structure computable?_
By "computable elements", we mean that if there exists an algorithm to decide whether a matrix \(M\) belongs to the linear representation group or not. The second question involves finding an algorithm to determine the linear representation group, up to isomorphism, as a function of the base matrix \(L\).
## 4. Minimal subshifts with infinite linear representation group
In this section, we present minimal substitutive subshifts with infinite linear representation groups, thereby providing a positive answer to a question posed in [2]. Their normalizer groups are described explicitly. We prove the following result.
**Theorem 4.1**.: _For any expansion matrix \(L\in\mathcal{M}(d,\mathbb{Z})\) with \(|\det L|\geq 3\), there exists an aperiodic minimal substitutive \(\mathbb{Z}^{d}\)-subshift \(X\) with expansion matrix \(L\) such that_
* _It is coalescent._
* _It is an almost 1-to-1 extension of_ \(\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})}\)_._
* _Its automorphisms are reduced to the shift transformations._
* _Its linear representation semigroup_ \(\vec{N}(X,S)\) _is equal to_ \[\{M\in\bigcup_{k\geq 0}\bigcap_{n\geq k}L^{n}\text{GL}(d,\mathbb{Z})L^{-n}: \exists n_{0},L^{-n}ML^{n}=L^{-p}ML^{p}\;(\text{mod}\;L(\mathbb{Z}^{d})), \forall n,p\geq n_{0}\}.\]
* _Its normalizer group is a semidirect product of_ \(\mathbb{Z}^{d}\) _with_ \(\vec{N}(X,S)\)_._
_The isomorphisms are explicit._
In particular, when \(L\) is proportional to the identity, the former result provides an example of a minimal subshift with a linear representation semigroup equal to \(\text{GL}(d,\mathbb{Z})\).
To describe these explicit examples, we will briefly introduce some notions coming from (aperiodic) tiling theory. Most of the references come from [3]. From a
tiling perspective, the _half-hex inflation_ is a well-known inflation rule analogous to a symbolic substitution (for more properties of this tiling substitution, see [3, Section 6.4]). The tiles consist of six regular half-hexagons, each being the image of a single one under a 6-fold rotation. The inflation rule, up to rotation, is described in Fig. 1.
In tiling terminology, it is an _edge-to-edge inflation_, which means that each inflated tile is precisely dissected into copies of the tiles, and the vertices of any tile only intersect the vertices of the adjacent tiles. This inflation defines an aperiodic tiling of the plane (see [3, Example 6.4]). Since the largest edge of any half-hex can only meet the largest edge of an adjacent half-hex, two half-hexes always join along their largest edges to form a regular hexagon. By applying this procedure, the half-hex tiling can be decomposed into three hexagons, each distinguished by a single diagonal line, as shown in Fig. 2 (see [3]).
Using these full hexagons, we can define a _pseudo inflation_ (using the vocabulary of [3]), which is conjugate to the half-hex tiling, as in Fig. 3.
From this pseudo inflation, we construct a tiling substitution with only the four shaded hexagons in Fig. 3. In this tiling substitution, there is an invariant discrete lattice \(\Lambda\subseteq\mathbb{R}^{2}\) generated by the centers of these hexagons, using the vectors \(\boldsymbol{u}\) and \(\boldsymbol{v}\) as depicted in Fig. 3. The discrete translation \(\Lambda\)-subaction is conjugate to the substitutive subshift associated with the following constant-shape substitution, called the _half-hex substitution_ \(\zeta_{hh}\), with expansion matrix \(L_{hh}=2\cdot\mathrm{id}_{\mathbb{Z}^{2}}\) and support \(F_{1}^{hh}=\{(0,0),(1,0),(0,1),(1,-1)\}\).
Figure 1. Tile-substitution of the half-hex tiling.
Figure 3. New tile-substitution conjugate to the half-hex tiling, with a discrete 2-dimensional translation-invariant subaction in \(\mathbb{R}^{2}\).
Figure 2. The three tiles as a new alphabet for the half-hex tiling.
We recall that the notations and the notions we will use are summarized in Section 2.3.2. A straightforward computation, based on [20, Theorem 4.8], reveals that the extreme points of the convex hull of \(F_{n}^{hh}\) are \(\{(0,0),(0,2^{n}-1),(2^{n}-1,0),(2^{n}-1,1-2^{n})\}\). Since \(F_{n}^{hh}\) is a fundamental domain of \(2^{n}\mathbb{Z}^{2}\), it has cardinality \(4^{n}\). Furthermore, \(F_{n}^{hh}\subseteq\operatorname{conv}(F_{n}^{hh})\cap\mathbb{Z}^{2}\), where \(\operatorname{conv}(F_{n}^{hh})\) denotes the convex hull of \(F_{n}^{hh}\). Actually, the Pick formula provides that the cardinalities of \(\operatorname{conv}(F_{n}^{hh})\cap\mathbb{Z}^{2}\) and \(F_{n}^{hh}\) are the same, so \(F_{n}^{hh}=\operatorname{conv}(F_{n}^{hh})\cap\mathbb{Z}^{2}\). It then follows that \((F_{n}^{hh})_{n\geq 0}\) is a Følner sequence.
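These combinatorial facts are easy to verify by machine. The following Python sketch generates the supports \(F_{n}^{hh}\) through the recurrence \(F_{n+1}=L(F_{n})+F_{1}\) of Section 2.3.2 and checks the cardinality \(4^{n}\) and the four extreme points listed above for the first few values of \(n\).

```python
# Supports of the half-hex substitution: L = 2*id and
# F_1 = {(0,0), (1,0), (0,1), (1,-1)}.

F1 = {(0, 0), (1, 0), (0, 1), (1, -1)}

def next_support(Fn):
    # F_{n+1} = L(F_n) + F_1 with L = 2*id
    return {(2 * x + a, 2 * y + b) for (x, y) in Fn for (a, b) in F1}

Fn = F1
for n in range(1, 4):
    assert len(Fn) == 4 ** n                       # fundamental domain of (2^n Z)^2
    for pt in [(0, 0), (0, 2**n - 1), (2**n - 1, 0), (2**n - 1, 1 - 2**n)]:
        assert pt in Fn                            # stated extreme points
    Fn = next_support(Fn)
print("supports F_1, F_2, F_3 verified")
```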
Inspired by the half-hex substitution, we consider an integer expansion matrix \(L\in\mathcal{M}(d,\mathbb{Z})\) with \(|\det(L)|\geq 3\), a fundamental domain \(F_{1}\) of \(L(\mathbb{Z}^{d})\) in \(\mathbb{Z}^{d}\), and set the finite alphabet \(\mathcal{A}=F_{1}\setminus\{\mathbf{0}\}\). We define the substitution \(\sigma_{L}\colon\mathcal{A}\to\mathcal{A}^{F_{1}}\) as follows:
\[\forall a\in\mathcal{A},\quad\sigma_{L}(a)_{\boldsymbol{f}}=\left\{\begin{array} []{ll}a&\text{ when }\boldsymbol{f}=\mathbf{0},\\ \boldsymbol{f}&\text{ when }\boldsymbol{f}\neq\mathbf{0}.\end{array}\right. \tag{17}\]
Under the hypothesis that the sequence of supports \((F_{n})_{n>0}\) is a Følner sequence, we get the substitutive subshift \((X_{\sigma_{L}},S,\mathbb{Z}^{d})\). It is important to notice that all the patterns \(\sigma_{L}(a)\) coincide except at the origin, where the letter is uniquely determined.
For computational purposes, we introduce the map
\[\tau\colon\boldsymbol{n}\in\mathbb{Z}^{d}\setminus\{\mathbf{0}\}\mapsto \boldsymbol{f}\in F_{1}\setminus\{\mathbf{0}\}, \tag{18}\]
where \(\boldsymbol{n}=L^{p+1}(\boldsymbol{z})+L^{p}(\boldsymbol{f})\) with \(\boldsymbol{z}\in\mathbb{Z}^{d}\), \(\boldsymbol{f}\in F_{1}\setminus\{\mathbf{0}\}\) and \(p\) is the smallest integer such that \(\boldsymbol{n}\not\in L^{p+1}(\mathbb{Z}^{d})\). The value \(p\) serves as a multidimensional \(L\)-adic valuation of \(\boldsymbol{n}\). The motivation for introducing this map is the next formula, which enables one to compute the value of a \(\sigma_{L}\)-fixed point \(\bar{x}\) at a position from the knowledge of this position alone. More precisely, it is straightforward to check that
\[\forall\boldsymbol{n}\in\mathbb{Z}^{d}\setminus\{\mathbf{0}\},\quad\bar{x}_{\boldsymbol{n}}=\tau(\boldsymbol{n}). \tag{19}\]
This property is typical of automatic sequences. As a consequence, \(\sigma_{L}\) has exactly \(|\mathcal{A}|=|\det L|-1\) fixed points in \(X_{\sigma_{L}}\), and they all coincide except at the origin. Moreover, we have the following standard recognizability property.
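Formula (19) makes the fixed points effectively computable. The following Python sketch implements \(\tau\) for the half-hex data \(L=2\cdot\mathrm{id}\) and \(F_{1}=\{(0,0),(1,0),(0,1),(1,-1)\}\): it strips the \(L\)-divisible part of a position and returns its first nonzero \(L\)-adic digit, which by (19) is the letter of a fixed point at that position.

```python
# tau from (18) for L = 2*id and F_1 = {(0,0), (1,0), (0,1), (1,-1)}.

F1 = [(0, 0), (1, 0), (0, 1), (1, -1)]

def digit(n):
    """Unique f in F_1 with n = f (mod L(Z^2)), here mod 2Z^2."""
    for f in F1:
        if (n[0] - f[0]) % 2 == 0 and (n[1] - f[1]) % 2 == 0:
            return f

def tau(n):
    assert n != (0, 0)
    while True:
        f = digit(n)
        if f != (0, 0):
            return f                    # first nonzero L-adic digit
        n = (n[0] // 2, n[1] // 2)      # n = L(z): pass to z (exact division)

print(tau((4, 2)), tau((6, -2)), tau((5, 0)))   # (0, 1) (1, -1) (1, 0)
```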
**Lemma 4.2**.: _Let \(\overline{x}\) be a fixed point of \(\sigma_{L}\), then for any integer \(n>0\) and any \(\boldsymbol{a},\boldsymbol{b}\in\mathbb{Z}^{d}\setminus\{0\}\),_
\[\overline{x}_{\boldsymbol{a}+F_{n}}=\overline{x}_{\boldsymbol{b}+F_{n}}\implies \boldsymbol{a}\equiv\boldsymbol{b}\ (\text{mod }L^{n}(\mathbb{Z}^{d})).\]
_In particular, if the sequence of supports of the iterations \(\sigma_{L}^{n}\) is Følner, then the substitution \(\sigma_{L}\) is recognizable on any fixed point \(\overline{x}\) of \(\sigma_{L}\), so \(\sigma_{L}\) is aperiodic._
Figure 4. The first two supports of the half-hex substitution.
Proof.: We prove the claim by induction on \(n>0\). We start with the base case \(n=1\).
Suppose \(\boldsymbol{a}\notin L(\mathbb{Z}^{d})\), i.e., \(\boldsymbol{a}=L(\boldsymbol{c})+\boldsymbol{g}\) with \(\boldsymbol{g}\in F_{1}\setminus\{\boldsymbol{0}\}\). If \(\boldsymbol{b}\notin L(\mathbb{Z}^{d})\), then \(\boldsymbol{b}=L(\boldsymbol{d})+\boldsymbol{h}\) with \(\boldsymbol{h}\in F_{1}\setminus\{\boldsymbol{0}\}\). Since \(\overline{x}\) is a fixed point of \(\sigma_{L}\), we have that \(\overline{x}_{\boldsymbol{a}}=\sigma_{L}(\overline{x}_{\boldsymbol{c}})_{\boldsymbol{g}}=\boldsymbol{g}\) and \(\overline{x}_{\boldsymbol{b}}=\sigma_{L}(\overline{x}_{\boldsymbol{d}})_{\boldsymbol{h}}=\boldsymbol{h}\), so \(\boldsymbol{g}=\boldsymbol{h}\), which implies that \(\boldsymbol{a}\equiv\boldsymbol{b}\) (mod \(L(\mathbb{Z}^{d})\)). If \(\boldsymbol{b}\in L(\mathbb{Z}^{d})\), then for any \(\boldsymbol{f}\in F_{1}\setminus\{\boldsymbol{0}\}\) we have that \(\overline{x}_{\boldsymbol{b}+\boldsymbol{f}}=\boldsymbol{f}=\overline{x}_{\boldsymbol{a}+\boldsymbol{f}}\). We consider \(\boldsymbol{f}\in F_{1}\setminus\{\boldsymbol{0}\}\) such that \(\boldsymbol{a}+\boldsymbol{f}\notin L(\mathbb{Z}^{d})\), i.e., \(\boldsymbol{a}+\boldsymbol{f}=L(\boldsymbol{e})+\boldsymbol{h}\) with \(\boldsymbol{h}\in F_{1}\setminus\{\boldsymbol{0}\}\), so \(\overline{x}_{\boldsymbol{a}+\boldsymbol{f}}=\boldsymbol{h}\) and \(\boldsymbol{h}=\boldsymbol{f}\), i.e., \(\boldsymbol{a}\in L(\mathbb{Z}^{d})\), which is a contradiction.
Now, suppose there exists some \(n>0\) such that \(\overline{x}_{\boldsymbol{a}+F_{n}}=\overline{x}_{\boldsymbol{b}+F_{n}}\implies \boldsymbol{a}\equiv\boldsymbol{b}\) (mod \(L^{n}(\mathbb{Z}^{d})\)). Let \(\boldsymbol{a},\boldsymbol{b}\in\mathbb{Z}^{d}\) be such that
\[\overline{x}_{\boldsymbol{a}+F_{n+1}}=\overline{x}_{\boldsymbol{b}+F_{n+1}}.\]
Since \(F_{n}\subseteq F_{n+1}\), by the induction hypothesis we have that \(\boldsymbol{a}\equiv\boldsymbol{b}\) (mod \(L^{n}(\mathbb{Z}^{d})\)). We recall that \(F_{n+1}=F_{n}+L^{n}(F_{1})\), so we write
\[\boldsymbol{a}=L^{n+1}(\boldsymbol{c})+\boldsymbol{f}+L^{n}(\boldsymbol{g}), \quad\boldsymbol{b}=L^{n+1}(\boldsymbol{d})+\boldsymbol{f}+L^{n}(\boldsymbol{h})\]
for some \(\boldsymbol{f}\in F_{n}\), \(\boldsymbol{g},\boldsymbol{h}\in F_{1}\) and \(\boldsymbol{c},\boldsymbol{d}\in\mathbb{Z}^{d}\). We prove that \(\boldsymbol{g}=\boldsymbol{h}\). If \(\boldsymbol{f}=\boldsymbol{0}\) we can use a similar argument as for the case \(n=1\) to conclude that \(\boldsymbol{g}=\boldsymbol{h}\). Suppose then \(\boldsymbol{f}\neq\boldsymbol{0}\). We consider \(\boldsymbol{j}\in F_{n+1}\) such that \(\boldsymbol{f}\equiv-\boldsymbol{j}\) (mod \(L^{n}(\mathbb{Z}^{d})\)), so
\[\boldsymbol{a}+\boldsymbol{j}=L^{n+1}(\boldsymbol{c}_{1})+L^{n}(\boldsymbol{g })\quad\boldsymbol{b}+\boldsymbol{j}=L^{n+1}(\boldsymbol{d}_{1})+L^{n}( \boldsymbol{h}),\]
for some \(\boldsymbol{c}_{1},\boldsymbol{d}_{1}\in\mathbb{Z}^{d}\). Since \(\overline{x}\) is a fixed point of \(\sigma_{L}\), we get that
\[\overline{x}_{\boldsymbol{a}+\boldsymbol{j}}=\sigma_{L}^{n}\big(\sigma_{L}(\overline{x}_{\boldsymbol{c}_{1}})_{\boldsymbol{g}}\big)_{\boldsymbol{0}}=\boldsymbol{g},\qquad\overline{x}_{\boldsymbol{b}+\boldsymbol{j}}=\sigma_{L}^{n}\big(\sigma_{L}(\overline{x}_{\boldsymbol{d}_{1}})_{\boldsymbol{h}}\big)_{\boldsymbol{0}}=\boldsymbol{h}.\]
Recall that \(\overline{x}_{\boldsymbol{a}+\boldsymbol{j}}=\overline{x}_{\boldsymbol{b}+ \boldsymbol{j}}\), hence \(\boldsymbol{g}=\boldsymbol{h}\) which implies that \(\boldsymbol{a}\equiv\boldsymbol{b}\) (mod \(L^{n+1}(\mathbb{Z}^{d})\)).
Recall that the map \(\pi:(X_{\sigma_{L}},S,\mathbb{Z}^{d})\to(\overleftarrow{\mathbb{Z}^{d}}_{(L^{n} )},+,\mathbb{Z}^{d})\) is defined in Section 2.3.2.
**Proposition 4.3**.: _If the sequence of supports of the iterations \(\sigma_{L}^{n}\) is Følner, then \(\sigma_{L}\) is an aperiodic, primitive constant-shape substitution and the factor map \(\pi:(X_{\sigma_{L}},S,\mathbb{Z}^{d})\to(\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})},+,\mathbb{Z}^{d})\) is almost 1-to-1._
_More precisely, we have_
\[|\pi^{-1}(\{\overleftarrow{g}\})|=\left\{\begin{array}{ll}|\mathcal{A}|&\text{ if }\overleftarrow{g}\in\mathcal{O}(\overleftarrow{0},+),\\ 1&\text{ otherwise.}\end{array}\right.\]
In particular, the subshift \(X_{\sigma_{L}}\) is a substitutive Toeplitz subshift and its maximal equicontinuous factor is \(\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})}\). As an explicit example, the substitutive subshift \(X^{hh}\) associated with the half-hex substitution \(\zeta_{hh}\) is an almost 1-to-1 extension of the constant-base odometer \(\overleftarrow{\mathbb{Z}^{2}}_{(2^{n}\mathbb{Z}^{2})}\).
Proof.: Since \(\tau\) is a bijection, \(\sigma_{L}\) is a primitive substitution. The aperiodicity follows from the recognizability given by Lemma 4.2.
Now we study the fibers \(\pi^{-1}(\{\overleftarrow{g}\})\) for \(\overleftarrow{g}=(g_{n})_{n}\in\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})}\). Suppose \(|\pi^{-1}(\{\overleftarrow{g}\})|\geq 2\) and let \(x_{1},x_{2}\in\pi^{-1}(\{\overleftarrow{g}\})\), i.e., for any \(n>0\) there exist \(y_{1}^{(n)},y_{2}^{(n)}\in X_{\sigma_{L}}\) such that \(x_{i}=S^{g_{n}}\sigma_{L}^{n}(y_{i}^{(n)})\), for \(i\in\{1,2\}\). Let \(\boldsymbol{a}\in\mathbb{Z}^{d}\) be such that \((x_{1})_{\boldsymbol{a}}\neq(x_{2})_{\boldsymbol{a}}\). This implies that \(\sigma_{L}^{n}(y_{1}^{(n)})_{\boldsymbol{a}+g_{n}}\neq\sigma_{L}^{n}(y_{2}^{(n)})_{\boldsymbol{a}+g_{n}}\). For every \(n>0\)
we write \(\boldsymbol{a}+g_{n}=L^{n}(\boldsymbol{b}_{n})+\boldsymbol{f}_{n}\), with \(\boldsymbol{b}_{n}\in\mathbb{Z}^{d}\) and \(\boldsymbol{f}_{n}\in F_{n}\). Since for \(i\in\{1,2\}\) we have that \(\sigma_{L}^{n}(y_{i}^{(n)})_{\boldsymbol{a}+g_{n}}=\sigma_{L}^{n}\big((y_{i}^{(n)})_{\boldsymbol{b}_{n}}\big)_{\boldsymbol{f}_{n}}\) and these letters are different, then for any \(n>0\), \(\boldsymbol{f}_{n}\) must be \(\boldsymbol{0}\in F_{n}\). This implies that for every \(n>0\)
\[g_{n}\equiv-\boldsymbol{a}\ (\text{mod}\ L^{n}(\mathbb{Z}^{d})).\]
Hence \(\overleftarrow{g}=\kappa_{(L^{n})}(-\boldsymbol{a})\), i.e., \(\overleftarrow{g}\in\mathcal{O}(\overleftarrow{0},+)\). It follows that \(x_{1}\) and \(x_{2}\) are in the orbit of two fixed points of \(\sigma_{L}\). In this case, \(\pi^{-1}(\{\overleftarrow{g}\})\) has cardinality \(|\mathcal{A}|\) and its elements only differ in the coordinate \(-\kappa_{(L^{n})}^{-1}(\overleftarrow{g})\). If \(\overleftarrow{g}\) is not in \(\mathcal{O}(\overleftarrow{0},+)\), then \(\pi^{-1}(\{\overleftarrow{g}\})\) has cardinality \(1\). We conclude that the factor map \(\pi:(X_{\sigma_{L}},S,\mathbb{Z}^{d})\to(\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})},+,\mathbb{Z}^{d})\) is almost \(1\)-to-\(1\).
As a consequence of Proposition 4.3, we get the following property on the \(\text{GL}(d,\mathbb{Z})\)-endomorphisms of \(X_{\sigma_{L}}\).
**Corollary 4.4**.: _Assume the sequence of supports of the iterations \(\sigma_{L}^{n}\) is Følner. Then any \(\text{GL}(d,\mathbb{Z})\)-endomorphism \(\phi\in N(X_{\sigma_{L}},S)\) maps a \(\sigma_{L}\)-fixed point onto a shifted copy of a \(\sigma_{L}\)-fixed point._
Proof.: Since the factor map \(\pi:X_{\sigma_{L}}\to\overleftarrow{\mathbb{Z}^{d}(L^{n})}\) is almost \(1\)-to-\(1\) (Proposition 4.3), then the odometer system \(\overleftarrow{\mathbb{Z}^{d}(L^{n})}\) is the maximal equicontinuous factor of the substitutive subshift \((X_{\sigma_{L}},S,\mathbb{Z}^{d})\). So there exists a semigroup homomorphism \(\hat{\pi}:N(X_{\sigma_{L}},S)\to N(\overleftarrow{\mathbb{Z}^{d}(L^{n})})\) which is injective (Lemma 2.3). Recall that any equicontinuous system is coalescent, so any endomorphism is invertible and by Item iii of Lemma 2.3, any endomorphism \(\phi\) satisfies
\[\left\{\overleftarrow{g}\in\overleftarrow{\mathbb{Z}^{d}(L^{n})}\colon|\pi^{-1 }(\{\overleftarrow{g}\})|=|\mathcal{A}|\right\}\subseteq\hat{\pi}(\phi)\left( \left\{\overleftarrow{g}\in\overleftarrow{\mathbb{Z}^{d}(L^{n})}\colon|\pi^{-1 }(\{\overleftarrow{g}\})|=|\mathcal{A}|\right\}\right).\]
Since \(\left\{\overleftarrow{g}\in\overleftarrow{\mathbb{Z}^{d}(L^{n})}\colon|\pi^{-1}(\{\overleftarrow{g}\})|=|\mathcal{A}|\right\}\) is the orbit \(\mathcal{O}(\overleftarrow{0},+)=\kappa_{(L^{n})}(\mathbb{Z}^{d})\), it implies that \(\phi\) maps the \(\pi\)-fiber of the orbit \(\mathcal{O}(\overleftarrow{0},+)\) onto itself. This \(\pi\)-fiber consists of the orbits of \(\sigma_{L}\)-fixed points. It follows that, up to composing \(\phi\) with a shift map, the image of a \(\sigma_{L}\)-fixed point \(\bar{x}\) under \(\phi\) is also a \(\sigma_{L}\)-fixed point.
We also characterize the \(\text{GL}(d,\mathbb{Z})\)-endomorphisms of the substitutive subshift \((X_{\sigma_{L}},S,\mathbb{Z}^{d})\) by the following results. The first one concerns endomorphisms and automorphisms.
**Lemma 4.5**.: _Let \(\sigma_{L}\) be defined as in (17) and assume the sequence of supports of the iterations \(\sigma_{L}^{n}\) is Følner. Then the subshift \((X_{\sigma_{L}},S,\mathbb{Z}^{d})\) satisfies_
* _it is coalescent,_
* _the automorphism group_ \(\operatorname{Aut}(X_{\sigma_{L}},S)\) _is trivial, i.e., it consists only of the shift transformations_ \(S^{\boldsymbol{n}}\)_,_ \(\boldsymbol{n}\in\mathbb{Z}^{d}\)_._
Proof.: First we prove that \(\text{End}(X_{\sigma_{L}},S)=\langle S\rangle\). We keep the notations of Lemma 2.3. Set \(\phi\in\text{End}(X_{\sigma_{L}},S)\). According to Corollary 4.4, \(\hat{\pi}(\phi)\) is an endomorphism of the odometer, which means it is a translation, as proven in Lemma 2.8. Moreover, since it preserves the \(\overleftarrow{0}\)-orbit, \(\hat{\pi}(\phi)\) is a translation by some element \(\kappa_{(L^{n})}(\mathbf{n})\) with \(\mathbf{n}\in\mathbb{Z}^{d}\). By definition of \(\hat{\pi}\), this translation is \(\hat{\pi}(S^{\mathbf{n}})\), thus equal
to \(\hat{\pi}(\phi)\). We conclude, by the injectivity of \(\hat{\pi}\), that \(\operatorname{End}(X_{\sigma_{L}},S)=\langle S\rangle\). As a consequence, \((X_{\sigma_{L}},S,\mathbb{Z}^{d})\) is a coalescent system.
Now, we characterize the elements of the linear representation semigroup. For this, we introduce the following notations. Let \(N_{L}\) be the group
\[\{M\in\bigcup_{k\geq 0}\bigcap_{n\geq k}L^{n}\mathrm{GL}(d,\mathbb{Z})L^{-n}: \exists n_{0},L^{-n}ML^{n}=L^{-p}ML^{p}\;(\mathrm{mod}\;L(\mathbb{Z}^{d})), \forall n,p\geq n_{0}\}.\]
The crucial properties of an element \(M\) of \(N_{L}\) are the following: for each integer \(n>0\), there is a \(\mathbb{Z}^{d}\)-automorphism \(M_{n}\) such that \(L^{n}M_{n}=ML^{n}\). Moreover, each automorphism \(M_{n}\) permutes the \(L(\mathbb{Z}^{d})\)-cosets. With an abuse of notation, we will denote these permutations on \(F_{1}\setminus\{\mathbf{0}\}\) by \(M_{n}\;(\mathrm{mod}\;L(\mathbb{Z}^{d}))\). These permutations are ultimately all the same, for large enough \(n\). Linked with the computation of the digits of fixed points, we have the following relation
\[\tau\circ M(\boldsymbol{n})=M_{p}\circ\tau(\boldsymbol{n})\;(\mathrm{mod}\;L( \mathbb{Z}^{d})), \tag{20}\]
where \(p\) is the smallest integer such that \(\boldsymbol{n}\not\in L^{p+1}(\mathbb{Z}^{d})\).
**Lemma 4.6**.: _Assume that the sequence of supports of the iterations \(\sigma_{L}^{n}\) is Følner. Then the linear representation semigroup \(\vec{N}(X_{\sigma_{L}},S)\) is the linear group \(N_{L}\)._
Proof.: We first show that \(\vec{N}(X_{\sigma_{L}},S)\leqslant N_{L}\). Set \(M\in\vec{N}(X_{\sigma_{L}},S)\), and let \(\phi\in N(X_{\sigma_{L}},S)\) be an \(M\)-endomorphism with radius \(r(\phi)\). Up to composing \(\phi\) with a shift, we can assume that it preserves the set of \(\sigma_{L}\)-fixed points (Corollary 4.4).
Since \(\pi\) is compatible with \(\mathrm{GL}(d,\mathbb{Z})\)-endomorphisms (Lemma 2.4), Lemma 2.3 provides that \(M\in\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})})\). This set is a group (Corollary 2.7), so \(M^{-1}\) also belongs to \(\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})})\), i.e., for any \(n>0\), there exists \(m>0\) such that \(L^{-n}M^{-1}L^{m}\) is an endomorphism of \(\mathbb{Z}^{d}\) (see Section 3). We define \(m(n)=\min\{m>0:L^{-n}M^{-1}L^{m}\) is an endomorphism of \(\mathbb{Z}^{d}\}\). Since the determinant of a \(\mathbb{Z}^{d}\)-endomorphism is an integer, we have that \(m(n)\geq n\).
We will show that actually \(m(n)=n\) for any large enough \(n\), so that \(M^{-1}\) belongs to \(\bigcup_{k>0}\bigcap_{n\geq k}L^{n}\mathrm{GL}(d,\mathbb{Z})L^{-n}\). Since this set is stable by taking the inverse, this is also true for \(M\).
We prove the claim by contradiction, i.e., we assume there is an infinite set \(S\) of integers \(j\) such that \(m(j)>j\). Choose an integer \(n\) large enough so that the ball of radius \(r(\phi)\) centered at the origin is included in \(L^{n}(K_{\sigma_{L}})+F_{n}\), see Proposition 2.13. For such \(n\), \(L^{j}(\mathbb{Z}^{d})\cap(L^{n+1}(K_{\sigma_{L}})+F_{n})=\{\mathbf{0}\}\), for any \(j\) large enough. Since the sequence \((m(j))_{j\in S}\) goes to infinity, one can moreover assume that \(m(j+1)>m(j)\). With this convention, the group \(L^{-j}M^{-1}L^{m(j)}(\mathbb{Z}^{d})\) cannot be a subset of \(L(\mathbb{Z}^{d})\), since otherwise this would imply \(m(j)\geq m(j+1)\). So there is some \(\bar{\boldsymbol{g}}\in\mathbb{Z}^{d}\) such that \(L^{-j}M^{-1}L^{m(j)}(\bar{\boldsymbol{g}})\neq 0\;(\mathrm{mod}\;L(\mathbb{Z}^{d}))\). Moreover, the set \(L^{-m(j)-1}ML^{j+1}(\mathbb{Z}^{d})\setminus\mathbb{Z}^{d}\) is not empty, since otherwise the matrix \(L^{-m(j)-1}ML^{j+1}\) would have integer coefficients, which is impossible since its determinant \(\det L^{j-m(j)}\) is not an integer. This provides an element \(\boldsymbol{h}_{0}\in L^{-m(j)-1}ML^{j+1}(\mathbb{Z}^{d})\setminus\mathbb{Z}^{d}\). Set \(\boldsymbol{h}_{1}=L(\boldsymbol{h}_{0})\); we then have \(\boldsymbol{h}_{1}\not\in L(\mathbb{Z}^{d})\) and \(L^{m(j)}(\boldsymbol{h}_{1})=ML^{j+1}(\boldsymbol{h}_{2})\) for some element \(\boldsymbol{h}_{2}\in\mathbb{Z}^{d}\). These elements will enable us to derive a contradiction.
Set \(\boldsymbol{g}_{1}=L^{m(j)}(\bar{\boldsymbol{g}})\) and \(\boldsymbol{g}_{2}=\boldsymbol{g}_{1}+ML^{j+1}\boldsymbol{h}_{2}\). By construction, such elements satisfy
\[M^{-1}\boldsymbol{g}_{1}=L^{j}(L^{-j}M^{-1}L^{m(j)}(\bar{\boldsymbol{g}}))=M^{ -1}\boldsymbol{g}_{2}\;(\mathrm{mod}\;L^{j}(\mathbb{Z}^{d})),\]
so that
\[\tau(M^{-1}\boldsymbol{g}_{1})=\tau(M^{-1}\boldsymbol{g}_{2})=L^{-j}M^{-1}L^{m(j)} \bar{\boldsymbol{g}}\ (\mathrm{mod}\ L(\mathbb{Z}^{d})).\]
Moreover the same relation and the very choice of \(j\) give for any \(\boldsymbol{k}\in K_{\sigma_{L}}\), \(\boldsymbol{f}\in F_{n}\)
\[\tau(M^{-1}\boldsymbol{g}_{1}+L^{n+1}(\boldsymbol{k})+\boldsymbol{f})=\tau(M^ {-1}\boldsymbol{g}_{2}+L^{n+1}(\boldsymbol{k})+\boldsymbol{f}). \tag{21}\]
On the other way, the very choice of \(\boldsymbol{h}_{2}\) implies that
\[\tau(\boldsymbol{g}_{2})=\tau(\bar{\boldsymbol{g}}+\boldsymbol{h}_{1})\quad\text{whereas}\quad\tau(\boldsymbol{g}_{1})=\tau(\bar{\boldsymbol{g}}).\]
Since \(\boldsymbol{h}_{1}\not\in L(\mathbb{Z}^{d})\), we get
\[\tau(\boldsymbol{g}_{1})\neq\tau(\boldsymbol{g}_{2}). \tag{22}\]
Consider \(\overline{x}\in X_{\sigma_{L}}\) a fixed point of \(\sigma_{L}\). The relation (21) implies that
\[\bar{x}_{|M^{-1}\boldsymbol{g}_{1}+L^{n+1}(K_{\sigma_{L}})+F_{n}}=\bar{x}_{|M^{-1}\boldsymbol{g}_{2}+L^{n+1}(K_{\sigma_{L}})+F_{n}}.\]
By the choice of \(n\in\mathbb{N}\) and the Curtis-Hedlund-Lyndon theorem (Theorem 2.12), we also have that \(\phi(\bar{x})_{\boldsymbol{g}_{1}}=\phi(\bar{x})_{\boldsymbol{g}_{2}}\). Since \(\phi\) preserves the set of \(\sigma_{L}\)-fixed points, we get by (19), \(\phi(\bar{x})_{\boldsymbol{g}_{1}}=\tau(\boldsymbol{g}_{1})\) and \(\phi(\bar{x})_{\boldsymbol{g}_{2}}=\tau(\boldsymbol{g}_{2})\), contradicting (22). So \(m(n)=n\) for any large enough \(n\).
We still have to show that \(L^{-p}ML^{p}\ (\mathrm{mod}\ L(\mathbb{Z}^{d}))\) is independent of \(p\), for any large enough integer \(p\). Let \(K_{\sigma_{L}}\) be the finite set provided by Proposition 2.13 and let \(n\) be large enough so that \(L^{n}(K_{\sigma_{L}})+F_{n}\) contains the ball \(B_{r(\phi)}(\boldsymbol{0})\). Consider \(\boldsymbol{f}\in F_{1}\setminus\{\boldsymbol{0}\}\) and integers \(p,q>n\). We claim that
\[\bar{x}_{|L^{p}(\boldsymbol{f})+L^{n}(K_{\sigma_{L}})+F_{n}}=\bar{x}_{|L^{q}( \boldsymbol{f})+L^{n}(K_{\sigma_{L}})+F_{n}}. \tag{23}\]
Indeed, by the equality (19) and since \(p>n\) we get the following for any \(\boldsymbol{k}\in K_{\sigma_{L}}\), \(\boldsymbol{f}_{n}\in F_{n}\)
\[\bar{x}_{L^{p}(\boldsymbol{f})+L^{n}(\boldsymbol{k})+\boldsymbol{f}_{n}}=\begin{cases}\tau(\boldsymbol{f}_{n})&\text{ if }\boldsymbol{f}_{n}\in F_{n}\setminus\{\boldsymbol{0}\}\\ \tau(\boldsymbol{k})&\text{ if }\boldsymbol{k}\neq\boldsymbol{0}\wedge\boldsymbol{f}_{n}=\boldsymbol{0}\\ \tau(\boldsymbol{f})&\text{ otherwise.}\end{cases}\]
In particular, notice that \(\bar{x}_{|L^{p}(\boldsymbol{f})+L^{n}(K_{\sigma_{L}})+F_{n}}\) is independent of \(p\) and so the equality (23) follows.
Moreover, the equality (23) implies, by Curtis-Hedlund-Lyndon theorem (Theorem 2.12), that \(\phi(\bar{x})_{ML^{p}\boldsymbol{f}}=\phi(\bar{x})_{ML^{q}\boldsymbol{f}}\). Recall that \(M_{k}\) denotes \(L^{-k}ML^{k}\) for any integer \(k\geq 0\). Since \(\phi(\bar{x})\) is also fixed by \(\sigma_{L}\), the same computation as before provides \(\phi(\bar{x})_{ML^{p}\boldsymbol{f}}=\tau(ML^{p}\boldsymbol{f})=M_{p} \boldsymbol{f}\ (\mathrm{mod}\ L(\mathbb{Z}^{d}))\), by (20). Similarly, we also have that \(\phi(\bar{x})_{ML^{q}\boldsymbol{f}}=M_{q}\boldsymbol{f}\ (\mathrm{mod}\ L(\mathbb{Z}^{d}))\). Hence \(M_{p}\boldsymbol{f}=M_{q}\boldsymbol{f}(\mathrm{mod}\ L(\mathbb{Z}^{d}))\) for any \(p,q>n\) and \(\boldsymbol{f}\in F_{1}\setminus\{0\}\). It follows that \(M\) is in \(N_{L}\).
We now show the converse inclusion, that is, \(N_{L}\leqslant\vec{N}(X_{\sigma_{L}},S)\), so that the two sets are actually equal.
Recall that for a matrix \(M\in N_{L}\), all the matrices \(M_{p}=L^{-p}ML^{p}\) permute the \(L(\mathbb{Z}^{d})\)-cosets, so they define an isomorphism on \(F_{1}\setminus\{\boldsymbol{0}\}\). Moreover, this isomorphism is independent of \(p\), for any \(p\) no smaller than some \(n_{0}\in\mathbb{N}\). The recognizability property of \(\sigma_{L}\) enables us to define the truncation of an "\(L\)-adic" valuation as a local map. More precisely, from Lemma 4.2, we can define the local map \(v\colon\mathcal{L}_{F_{n_{0}}}(X_{\sigma_{L}})\to\{0,1,\ldots,n_{0}\}\) by
\[v(\bar{x}_{|\boldsymbol{n}+F_{n_{0}}})=\min\{n_{0},q\},\quad\text{where }q\text{ is such that }\boldsymbol{n}\in L^{q}(\mathbb{Z}^{d})\setminus L^{q+1}(\mathbb{Z}^{d}).\]
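To make the local map concrete, here is a minimal Python sketch (all names are ours) for the special case \(L=2\cdot\mathrm{Id}\) on \(\mathbb{Z}^{2}\), which is the half-hex expansion. For transparency, the sketch computes \(v\) directly from the coordinate \(\boldsymbol{n}\); the content of Lemma 4.2 is precisely that the same value can instead be read off from the finite pattern \(\bar{x}_{|\boldsymbol{n}+F_{n_{0}}}\).

```python
def val2(t):
    """2-adic valuation of a nonzero integer t."""
    q = 0
    while t % 2 == 0:
        t //= 2
        q += 1
    return q

def v_trunc(nvec, n0):
    """Truncated valuation min(n0, q), where nvec lies in L^q(Z^2) but not
    in L^(q+1)(Z^2) for L = 2*Id, i.e. q is the minimum 2-adic valuation
    of the nonzero coordinates of nvec."""
    x, y = nvec
    if x == 0 and y == 0:
        return n0  # (0, 0) lies in every L^q(Z^2); the truncation caps it
    q = min(val2(t) for t in (x, y) if t != 0)
    return min(n0, q)

assert v_trunc((4, 6), 10) == 1  # (4, 6) = 2*(2, 3), and (2, 3) is not in 2Z^2
assert v_trunc((8, 16), 2) == 2  # true valuation is 3, truncated at n0 = 2
```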
We then set the map \(\phi_{M}:X_{\sigma_{L}}\to\phi_{M}(X_{\sigma_{L}})\) induced by the local map
\[\phi_{M}(x)_{\boldsymbol{n}}=M_{v(x_{|M^{-1}\boldsymbol{n}+F_{n_{0}}})}x_{M^{-1} \boldsymbol{n}}\ (\text{mod}\ L(\mathbb{Z}^{d})),\quad\text{for any}\ x\in X_{\sigma_{L}}, \boldsymbol{n}\in\mathbb{Z}^{d}.\]
Notice that \(\phi_{M}\) is an \(M\)-epimorphism onto the subshift \(\phi_{M}(X_{\sigma_{L}})\) by the Curtis-Hedlund-Lyndon theorem (Theorem 2.12). Actually, the two subshifts are the same, \(\phi_{M}(X_{\sigma_{L}})=X_{\sigma_{L}}\), so that \(\phi_{M}\) is an \(M\)-endomorphism. To prove it, it is enough to show that \(\phi_{M}\) maps a \(\sigma_{L}\)-fixed point \(\overline{x}\in X_{\sigma_{L}}\) to another fixed point of \(\sigma_{L}\) within \(X_{\sigma_{L}}\), so that \(\phi_{M}(X_{\sigma_{L}})\cap X_{\sigma_{L}}\neq\emptyset\). The minimality of the subshift \(X_{\sigma_{L}}\) then enables us to conclude that \(\phi_{M}\) is an \(M\)-endomorphism.
Indeed, Equation (19) provides for any \(\boldsymbol{n}\neq\boldsymbol{0}\in\mathbb{Z}^{d}\) that
\[\phi_{M}(\bar{x})_{M\boldsymbol{n}} =M_{v(\bar{x}_{|\boldsymbol{n}+F_{n_{0}}})}\bar{x}_{\boldsymbol{n}}\ (\text{mod}\ L(\mathbb{Z}^{d}))\] \[=M_{v(\bar{x}_{|\boldsymbol{n}+F_{n_{0}}})}\tau(\boldsymbol{n})\ (\text{mod}\ L(\mathbb{Z}^{d}))\] \[=\tau(M\boldsymbol{n})\ \text{by relation (20)}.\]
So \(\phi_{M}(\bar{x})\) is fixed by \(\sigma_{L}\), and the claim follows, i.e., \(\phi_{M}\) is an \(M\)-endomorphism of \(X_{\sigma_{L}}\). Hence \(\vec{N}(X_{\sigma_{L}},S)=N_{L}\), and it is a group. Proposition 2.2 ensures then that \(N(X_{\sigma_{L}},S)\) is a group.
**Lemma 4.7**.: _Assume the sequence of supports of the iterations \(\sigma_{L}^{n}\) is Folner, then the normalizer semigroup \(N(X_{\sigma_{L}},S)\) is isomorphic to a semidirect product between \(\mathbb{Z}^{d}\) and the linear group \(N_{L}\)._
Proof.: From the preceding Lemma 4.6 and Lemma 4.5, Proposition 2.2 ensures that \(N(X_{\sigma_{L}},S,\mathbb{Z}^{d})\) is a group. We have to prove that the map
\[M\in N_{L}\mapsto\phi_{M}\in\vec{N}(X_{\sigma_{L}},S)\]
is a group embedding. This will show that the exact sequence (1) splits and \(N(X_{\sigma_{L}},S,\mathbb{Z}^{d})\) is a semidirect product between \(\mathbb{Z}^{d}\) and the linear group \(N_{L}\). To prove it is a group morphism, the only nontrivial point to check is the composition relation \(\phi_{MM^{\prime}}=\phi_{M}\circ\phi_{M^{\prime}}\) for any \(M,M^{\prime}\in N_{L}\). Since the maps \(\phi_{MM^{\prime}}\) and \(\phi_{M}\circ\phi_{M^{\prime}}\) have the same linear part, the closed set \(\{x\in X_{\sigma_{L}}:\phi_{MM^{\prime}}(x)=\phi_{M}\circ\phi_{M^{\prime}}(x)\}\) is \(S\)-invariant. By minimality, we only have to prove it is nonempty. We will show it contains any \(\sigma_{L}\)-fixed point \(\bar{x}\). Since in the previous part, we have shown that the fixed points are preserved under the maps \(\phi_{M}\), we only have to check that the images under the two maps have the same \(\boldsymbol{0}\) coordinate.
Let \(n_{0}\) be the integer associated with \(M\) such that the transformations \(M_{p}\) coincide mod \(L(\mathbb{Z}^{d})\) for \(p\geq n_{0}\). Define \(n_{0}^{\prime}\) similarly for \(M^{\prime}\). It is direct to check that the integer \(\max(n_{0},n_{0}^{\prime})\) plays a similar role for \(MM^{\prime}\). By definition we have \(\phi_{M^{\prime}}(\bar{x})_{\boldsymbol{0}}=M^{\prime}_{n_{0}^{\prime}}\bar{x}_{\boldsymbol{0}}\ (\text{mod}\ L(\mathbb{Z}^{d}))\) and \(\phi_{M}\circ\phi_{M^{\prime}}(\bar{x})_{\boldsymbol{0}}=M_{n_{0}}M^{\prime}_{n_{0}^{\prime}}\bar{x}_{\boldsymbol{0}}\ (\text{mod}\ L(\mathbb{Z}^{d}))\). Now, a direct computation gives that \((MM^{\prime})_{\max(n_{0},n_{0}^{\prime})}=M_{n_{0}}M^{\prime}_{n_{0}^{\prime}}\ (\text{mod}\ L(\mathbb{Z}^{d}))\) and this shows the claim, i.e., the two images have the same \(\boldsymbol{0}\) coordinate.
To prove that the morphism is injective, consider a matrix \(M\) in its kernel, i.e., such that \(\phi_{M}=\text{Id}\). Composing this relation with the shift map \(S^{\boldsymbol{z}}\), with \(\boldsymbol{z}\in\mathbb{Z}^{d}\), and since \(\phi_{M}\) is an \(M\)-endomorphism, we get that \(S^{M\boldsymbol{z}}=S^{\boldsymbol{z}}\) for any \(\boldsymbol{z}\in\mathbb{Z}^{d}\). By aperiodicity of the subshift, \(M\) has to be the identity matrix.
Theorem 4.1 summarizes all the former results. In particular, if the expansion matrix \(L\) is a multiple of the identity, then \(\vec{N}(X_{\sigma_{L}},S)=\operatorname{Cent}_{\operatorname{GL}(d,\mathbb{Z})}(L)=\operatorname{GL}(d,\mathbb{Z})\). As a consequence, we get the following direct corollary:
**Corollary 4.8**.: _The normalizer semigroup of the half-hex substitution \(N(X_{hh},S)\) is a group and it is isomorphic to a semidirect product between \(\mathbb{Z}^{2}\) and \(\operatorname{GL}(2,\mathbb{Z})\). Moreover, its automorphism group \(\operatorname{Aut}(X_{hh},S)\) is trivial._
This implies that the half-hex substitutive subshift is a minimal subshift with an infinite linear representation group. In fact, since \(\vec{N}(X_{hh},S)=\operatorname{GL}(2,\mathbb{Z})\), its linear representation group is the largest possible. As another example, consider the matrix \(L_{6}=\begin{pmatrix}2&0\\ 0&4\end{pmatrix}\). By Theorem 3.3, we have that \(\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L_{6}^{n})})=\operatorname{GL}(2,\mathbb{Z})\), but Theorem 4.1 and a standard analysis provide that \(\vec{N}(X_{\sigma_{L_{6}}},S)\) is the set of matrices \(\left\{\begin{pmatrix}a&2b\\ 0&d\end{pmatrix}:a,d\in\{-1,1\},b\in\mathbb{Z}\right\}\). In particular, \(\vec{N}(X_{\sigma_{L_{6}}},S)\) is virtually \(\mathbb{Z}\): its quotient by the group generated by the matrix \(\begin{pmatrix}1&2\\ 0&1\end{pmatrix}\) is finite.
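The \(L_{6}\) example lends itself to a quick numerical check. The following sketch (ours, using NumPy) verifies that an upper-triangular matrix of the stated form stays in \(\mathrm{GL}(2,\mathbb{Z})\) under conjugation by \(L_{6}^{n}\), while a rotation does not; for the upper-triangular matrix, the off-diagonal entry of \(L_{6}^{-n}ML_{6}^{n}\) is \(2b\cdot 2^{n}\), so the induced permutations of the \(L_{6}(\mathbb{Z}^{2})\)-cosets also stabilize, as required by the definition of \(N_{L}\).

```python
import numpy as np

L6 = np.array([[2.0, 0.0], [0.0, 4.0]])

def conjugate(M, n):
    """Return L6^{-n} M L6^{n}."""
    Ln = np.linalg.matrix_power(L6, n)
    return np.linalg.inv(Ln) @ M @ Ln

def in_GL2Z(A, tol=1e-9):
    """Integer entries and determinant +-1, up to floating-point tolerance."""
    return (np.abs(A - np.rint(A)).max() < tol
            and abs(abs(np.linalg.det(A)) - 1.0) < tol)

M_good = np.array([[1.0, 2.0], [0.0, 1.0]])  # of the form (a, 2b; 0, d)
M_bad = np.array([[0.0, -1.0], [1.0, 0.0]])  # a rotation in GL(2, Z)

for n in range(1, 8):
    assert in_GL2Z(conjugate(M_good, n))     # stays integral with det 1
    assert not in_GL2Z(conjugate(M_bad, n))  # acquires the entry 2^{-n}
```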
It is then natural to ask which groups \(\vec{N}(X_{\sigma_{L}},S)\) arise as \(L\) ranges over all matrices, and in particular whether every subgroup of \(\operatorname{GL}(2,\mathbb{Z})\) can be realized in this way. This question can be very difficult because it requires precise control of the combinatorics. A more tractable approach could be to realize linear representation groups of specific odometers (see Question 2.10). With this, we may expect an answer to the following:
**Question 4.9**.: _Does there exist for any odometer system \((\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})\) an almost 1-to-1 Toeplitz extension \((X,S,\mathbb{Z}^{d})\) such that \(\vec{N}(X,S)=\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})\)?_
Together with Question 2.10, this would enable us to enrich the family of examples with a large linear representation group.
In the more restrictive class of subshifts given by the finite data of a constant-shape substitution, it is natural to ask whether the elements of the normalizer are computable. There is a body of evidence indicating that their automorphisms can be described by an algorithm. But, as illustrated by the characterization in Theorem 4.1, nothing is clear concerning the elements of the linear representation group. Related to Question 3.8, we ask the following:
**Question 4.10**.: _Regarding the linear representation group for substitutive constant-shape subshifts, are its elements computable? Is its group structure computable?_
Here we mean "computable elements" in the sense that there is an algorithm deciding whether or not a matrix \(M\) belongs to the linear representation group. The second question is to find an algorithm for determining the linear representation group, up to isomorphism, as a function of the substitution.
|
2309.09407 | A bijection for tuples of commuting permutations and a log-concavity
conjecture | Let $A(\ell,n,k)$ denote the number of $\ell$-tuples of commuting
permutations of $n$ elements whose permutation action results in exactly $k$
orbits or connected components. We provide a new proof of an explicit formula
for $A(\ell,n,k)$ which is essentially due to Bryan and Fulman, in their work
on orbifold higher equivariant Euler characteristics. Our proof is
self-contained, elementary, and relies on the construction of an explicit
bijection, in order to perform the $\ell+1\rightarrow \ell$ reduction. We also
investigate a conjecture by the first author, regarding the log-concavity of
$A(\ell,n,k)$ with respect to $k$. The conjecture generalizes a previous one by
Heim and Neuhauser related to the Nekrasov-Okounkov formula. | Abdelmalek Abdesselam, Pedro Brunialti, Tristan Doan, Philip Velie | 2023-09-18T00:12:26Z | http://arxiv.org/abs/2309.09407v3 | # A bijection for tuples of commuting permutations and a log-concavity conjecture
###### Abstract.
Let \(A(p,n,k)\) denote the number of \(p\)-tuples of commuting permutations of \(n\) elements whose permutation action results in exactly \(k\) orbits or connected components. We provide a new proof of an explicit formula for \(A(p,n,k)\) which is essentially due to Bryan and Fulman, in their work on orbifold higher equivariant Euler characteristics. Our proof is self-contained, elementary, and relies on the construction of an explicit bijection, in order to perform the \(p+1\to p\) reduction. We also investigate a conjecture by the first author, regarding the log-concavity of \(A(p,n,k)\) with respect to \(k\). The conjecture generalizes a previous one by Heim and Neuhauser related to the Nekrasov-Okounkov formula.
## 1. Introduction
For \(n\geq 0\), let us denote by \([n]\) the finite set \(\{1,\ldots,n\}\), and by \(\mathfrak{S}_{n}\) the symmetric group of permutations of \([n]\). For \(p\geq 0\), we consider the set of ordered \(p\)-tuples of commuting permutations
\[\mathscr{C}_{p,n}:=\left\{\ (\sigma_{1},\ldots,\sigma_{p})\in(\mathfrak{S}_{n} )^{p}\ |\ \forall i,j,\ \sigma_{i}\sigma_{j}=\sigma_{j}\sigma_{i}\ \right\}\.\]
For a tuple \((\sigma_{1},\ldots,\sigma_{p})\) of (non-necessarily commuting) permutations, let \(\langle\sigma_{1},\ldots,\sigma_{p}\rangle\) be the subgroup they generate inside \(\mathfrak{S}_{n}\). The obvious action of \(\mathfrak{S}_{n}\) on \([n]\) restricts to an action of \(\langle\sigma_{1},\ldots,\sigma_{p}\rangle\) with a number of orbits which we will denote by \(\kappa(\sigma_{1},\ldots,\sigma_{p})\). For \(0\leq k\leq n\), we let \(\mathscr{C}_{p,n,k}\) be the subset of \(\mathscr{C}_{p,n}\) made of tuples for which \(\kappa(\sigma_{1},\ldots,\sigma_{p})=k\). We finally define our main object of study
\[A(p,n,k):=|\mathscr{C}_{p,n,k}|\,\]
where, as usual, \(|\cdot|\) denotes the cardinality of finite sets. Our main result is a new proof of the following theorem giving an explicit, albeit complicated, formula for the \(A(p,n,k)\).
**Theorem 1.1**.: _We have_
\[A(p,n,k)=\frac{n!}{k!}\times\sum_{n_{1},\ldots,n_{k}\geq 1}\mathbb{1}\{n_{1}+\cdots+n_{k}=n\}\times\prod_{i=1}^{k}\left[\frac{B(p,n_{i})}{n_{i}}\right]\,\]
_where \(\mathbb{1}\{\cdots\}\) denotes the indicator function of the condition between braces, and \(B(p,\cdot)\) is the multiplicative function (in the number theory sense, i.e., \(B(p,ab)=B(p,a)B(p,b)\) when \(a,b\) are coprime) which satisfies_
\[B(p,q^{m})=\frac{(q^{p}-1)(q^{p+1}-1)\cdots(q^{p+m-1}-1)}{(q-1)(q^{2}-1)\cdots (q^{m}-1)}\,\]
_when \(m\geq 0\) and \(q\) is a prime number._
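Theorem 1.1 is easy to stress-test numerically. The following self-contained Python sketch (all names are ours) enumerates commuting tuples by brute force and compares the orbit statistics with the formula; here \(B(p,\cdot)\) is computed through the recursion \(B(p+1,n)=\sum_{s\mid n}s\,B(p,s)\), \(B(1,n)=1\), which follows from the flag definition (2) below by summing over the last flag entry.

```python
from itertools import permutations, product
from math import factorial
from fractions import Fraction

def compose(s, t):
    # (s o t)(i) = s[t[i]]
    return tuple(s[t[i]] for i in range(len(t)))

def num_orbits(tup, n):
    # orbits of <tup> = connected components of the union of the
    # functional graphs i -> sigma(i), found by union-find
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for s in tup:
        for i in range(n):
            parent[find(i)] = find(s[i])
    return len({find(i) for i in range(n)})

def A_bruteforce(p, n):
    """counts[k] = number of commuting p-tuples of S_n with k orbits."""
    counts = [0] * (n + 1)
    perms = list(permutations(range(n)))
    for tup in product(perms, repeat=p):
        if all(compose(a, b) == compose(b, a) for a in tup for b in tup):
            counts[num_orbits(tup, n)] += 1
    return counts

def B(p, n):
    # B(1, n) = 1 and B(p+1, n) = sum_{s | n} s * B(p, s)
    if p == 1:
        return 1
    return sum(s * B(p - 1, s) for s in range(1, n + 1) if n % s == 0)

def compositions(m, j):
    # ordered tuples of j positive integers summing to m
    if j == 1:
        yield (m,)
        return
    for first in range(1, m - j + 2):
        for rest in compositions(m - first, j - 1):
            yield (first,) + rest

def A_formula(p, n, k):
    total = Fraction(0)
    for c in compositions(n, k):
        term = Fraction(1)
        for ni in c:
            term *= Fraction(B(p, ni), ni)
        total += term
    result = Fraction(factorial(n), factorial(k)) * total
    assert result.denominator == 1
    return result.numerator

for p, n in [(1, 4), (2, 4), (3, 3)]:
    brute = A_bruteforce(p, n)
    assert all(brute[k] == A_formula(p, n, k) for k in range(1, n + 1))
```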
Our motivation for considering the previous theorem is the following log-concavity conjecture by the first author.
**Conjecture 1.1**.: _(Abdesselam [2]) For all \(p\geq 1\), all \(n\geq 3\), and for all \(k\) such that \(2\leq k\leq n-1\),_
\[A(p,n,k)^{2}\geq A(p,n,k-1)\ A(p,n,k+1)\.\]
The case \(p=1\), included for esthetic coherence, is not conjectural. Since \(A(1,n,k)=c(n,k)\), the unsigned Stirling number of the first kind, the stated log-concavity property is a well known fact (see, e.g., [1] and references therein). The case \(p=2\) is a conjecture by Heim and Neuhauser [8] related to the Nekrasov-Okounkov formula [13, 16], as will be explained in SS3. The case "\(p=\infty\)" is proved in [2]. The form in which Theorem 1.1 is stated is the one needed for the proof given in [2], and we did not see this precise formulation in the literature. However, we do not claim Theorem 1.1 is new. Indeed, it follows easily from the following identity by Bryan and Fulman [5]
\[\sum_{n=0}^{\infty}\sum_{k=0}^{n}\frac{1}{n!}\ A(p,n,k)\ x^{k}u^{n}=\prod_{d_{ 1},\ldots,d_{p-1}=1}^{\infty}(1-u^{d_{1}\cdots d_{p-1}})^{-x\,d_{1}^{p-2}d_{2 }^{p-3}\cdots d_{p-2}}\, \tag{1}\]
which holds in the ring of formal power series \(\mathbb{C}[[x,u]]\). To see how Theorem 1.1 can be derived from (1), first (re)define, for \(p\geq 1\) and \(n\geq 1\),
\[B(p,n):=\sum_{s_{1}|s_{2}|\cdots|s_{p-1}|n}s_{1}\cdots s_{p-1}\, \tag{2}\]
where the sum is over tuples of integers \(s_{1},\ldots,s_{p-1}\geq 1\) which form an "arithmetic flag", namely, such that \(s_{1}\) divides \(s_{2}\), \(s_{2}\) divides \(s_{3}\),..., \(s_{p-1}\) divides \(n\). In particular, \(B(1,n)=1\), and \(B(2,n)=\sigma(n)\), the divisor sum from number theory. Since the divisor lattice factorizes over the primes, it is clear from the alternative definition (2) that \(B(p,\cdot)\) is a multiplicative function, in the number theory sense. Its computation reduces to the prime power case. If \(q\) is a prime and \(m\geq 0\), then we have
\[B(p,q^{m}) = \sum_{0\leq m_{1}\leq\cdots\leq m_{p-1}\leq m}\ q^{m_{1}+\cdots+m _{p-1}}\] \[= \sum_{\lambda\subset(m)^{p-1}}q^{|\lambda|}\] \[= \left[\begin{array}{c}m+p-1\\ m\end{array}\right]_{q}\] \[= \frac{(q^{p}-1)(q^{p+1}-1)\cdots(q^{p+m-1}-1)}{(q-1)(q^{2}-1) \cdots(q^{m}-1)}\.\]
Here, we changed variables to the integer partition \(\lambda=(m_{p-1},m_{p-2},\ldots,m_{1})\) with weight \(|\lambda|\) and whose shape is contained in the rectangular partition \((m)^{p-1}\) with \(p-1\) parts equal to \(m\). Finally, we used the well known formula for the sum over \(\lambda\) as a Gaussian polynomial or \(q\)-binomial coefficient (see, e.g., [15, Prop. 1.7.3]). This shows the equivalence between (2) and the definition given in Theorem 1.1. By changing variables from \(s_{1},\ldots,s_{p-1}\) to \(d_{1},\ldots,d_{p}\) given by
\[d_{1}=s_{1}\,\ d_{2}=\frac{s_{2}}{s_{1}}\,\ldots,\ d_{p-1}=\frac{s_{p-1}}{s _{p-2}}\,\ d_{p}=\frac{n}{s_{p-1}}\,\]
we can also write
\[B(p,n)=\sum_{d_{1}\cdots d_{p}=n}d_{1}^{p-1}d_{2}^{p-2}\cdots d_{p-1}\,\]
as a multiple Dirichlet convolution of power functions (see, e.g., [12] where the connection to \(q\)-binomial coefficients was also noted).
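As a quick consistency check (ours), take \(p=2\) and \(n=4\): the flags are the divisors \(s_{1}\mid 4\), so (2) gives \(B(2,4)=1+2+4=7=\sigma(4)\), while the prime-power formula of Theorem 1.1 with \(q=2\), \(m=2\) gives \(\frac{(2^{2}-1)(2^{3}-1)}{(2-1)(2^{2}-1)}=\frac{3\cdot 7}{3}=7\), as expected.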
We then have the following easy formal power series computations
\[\sum_{n=1}^{\infty}\frac{B(p,n)}{n}\ u^{n} = \sum_{d_{1},\ldots,d_{p}\geq 1}\frac{d_{1}^{p-1}d_{2}^{p-2}\cdots d _{p-1}}{d_{1}\cdots d_{p}}\times u^{d_{1}\cdots d_{p}}\] \[= \sum_{m\geq 1}B(p-1,m)\times\sum_{d_{p}\geq 1}\frac{(u^{m})^{d_{p} }}{d_{p}}\] \[= \sum_{m\geq 1}B(p-1,m)\times\left(-\log(1-u^{m})\right)\,\]
where we introduced the new summation index \(m:=d_{1}\cdots d_{p-1}\). Multiplying by \(x\), and taking exponentials gives
\[\exp\left(x\sum_{n=1}^{\infty}\frac{B(p,n)}{n}\ u^{n}\right)=\prod_{m=1}^{ \infty}(1-u^{m})^{-xB(p-1,m)}\,\]
which is the right-hand side of (1) when collecting factors according to \(m:=d_{1}\cdots d_{p-1}\). We have thus shown that (1) can be rewritten as
\[\sum_{n=0}^{\infty}\sum_{k=0}^{n}\frac{1}{n!}\ A(p,n,k)\ x^{k}u^{n}=\exp\left( x\sum_{n=1}^{\infty}\frac{B(p,n)}{n}\ u^{n}\right)\.\]
Extracting coefficients of monomials in \(x\) and \(u\), on both sides, immediately yields Theorem 1.1. In the article [5], \(x\) is assumed to be the Euler characteristic of a manifold. However, their proof of (1) holds if \(x\) merely is a formal variable. Their work was aiming at generalizing the "stringy" orbifold Euler characteristic [7, 3], from sums over pairs of commuting permutations, to commuting tuples of arbitrary length \(p\). Another motivation for their work was the study by Hopkins, Kuhn, and Ravenel [10] of a hierarchy of cohomology theories where the \(p\)-th level seemed to crucially involve \(p\)-tuples of commuting elements of a finite group such as \(\mathfrak{S}_{n}\). The group-theoretic proof by Bryan and Fulman involved a delicate analysis of conjugacy classes in wreath products. Another proof one can find in the literature is the algebraic one by White [17]. It uses the remark that \(\mathscr{C}_{p,n}\) is in bijection with \(\operatorname{Hom}(\mathbb{Z}^{p},\mathfrak{S}_{n})\), namely, the set of group homomorphisms from the additive group \(\mathbb{Z}^{p}\) to the symmetric group \(\mathfrak{S}_{n}\), i.e., \(\mathbb{Z}^{p}\) actions on a set of \(n\) elements. The proof by White also uses the fact that \(B(p,n)\) is the number of subgroups of \(\mathbb{Z}^{p}\) of index \(n\) (a remark by Stanley already mentioned in [5]) and the main part of the argument is the computation of this number using Hermite normal forms, i.e., Gaussian elimination over the integers. Note that \(B(p,n)\) is a well-studied quantity, see, e.g., [11, Ch. 15] as well as the article by Solomon [14] where work on \(B(p,n)\) is traced back to the time of Hermite and Eisenstein. Also note that a proof of the \(x=1\) evaluation of the \(p=3\) case of (1) was also given in [4]. Our proof, given in the next section, is elementary and in the spirit of bijective enumerative combinatorics. In Lemma 2.1, we
reduce the \(A(p,n,k)\) to the \(k=1\) case of transitive actions, via a polymer gas representation, in the language of statistical mechanics, or the exponential formula in enumerative combinatorics, often mentioned as the general slogan "sums over all objects are exponentials of sums over connected objects". The main argument is a reduction of \(A(p+1,n,1)\) to the computation of \(A(p,n,1)\). We condition the sum over tuples \((\sigma_{1},\ldots,\sigma_{p+1})\), first on the number \(r\) of orbits for the sub-tuple \((\sigma_{1},\ldots,\sigma_{p})\) and then on the set partition \(X=\{X_{1},\ldots,X_{r}\}\) of \([n]\) given by that orbit decomposition. With \(r\) and \(X\) fixed, we then construct a bijection
\[(\sigma_{1},\ldots,\sigma_{p+1})\longmapsto(\widetilde{\sigma},\gamma,\tau,z) \tag{3}\]
where \(\widetilde{\sigma}\) is a transitive \(p\)-tuple of commuting permutations on the subset \(X_{1}\) containing the element \(1\in[n]\). By \(\gamma\) we denote a permutation of \([r]\) which is such that \(\gamma(1)=1\). The \(\tau\) is a certain collection of bijective maps between blocks \(X_{i}\). Finally, the crucial ingredient is \(z\), which is an element of \(X_{1}\). One can intuitively understand our proof as counting possibly flat or degenerate discrete \((p+1)\)-dimensional tori with \(n\) points. As is familiar in topology, one can build such a torus by gluing both ends of a cylinder. However, we are allowed to perform a twist when doing this gluing and this is determined by \(z\). Namely, \((\sigma_{p+1})^{r}\), the "Poincaré return map" to \(X_{1}\), does not necessarily fix \(1\) but may send it to some \(z\neq 1\). We remark that it is possible to explicitly iterate the bijection involved in the \(p+1\) to \(p\) reduction, but given the complexity of the resulting recursive combinatorial data, we will refrain from doing this here.
## 2. Proofs
We first take care of the reduction to the transitive action case.
**Lemma 2.1**.: _We have_
\[A(p,n,k)=\frac{n!}{k!}\times\sum_{n_{1},\ldots,n_{k}\geq 1}\mathbb{1}\{n_{1}+\cdots+n_{k}=n\}\times\prod_{i=1}^{k}\left(\frac{A(p,n_{i},1)}{n_{i}!}\right)\.\]
**Proof:** For a tuple \((\sigma_{1},\ldots,\sigma_{p})\) in \(\mathscr{C}_{p,n,k}\), let \(\Pi(\sigma_{1},\ldots,\sigma_{p})\) denote the unordered set partition of \([n]\) given by the orbits of the action of the subgroup \(\langle\sigma_{1},\ldots,\sigma_{p}\rangle\). We condition the sum over tuples in \(\mathscr{C}_{p,n,k}\), according to this set partition. We also sum over orderings of the blocks of that partition (with \(k\) blocks), and compensate for this overcounting by dividing by \(k!\). This gives
\[A(p,n,k)=\frac{1}{k!}\times\sum_{(X_{1},\ldots,X_{k})}\ \sum_{(\sigma_{1},\ldots,\sigma_{p})\in\mathscr{C}_{p,n,k}}\mathbb{1}\left\{\Pi(\sigma_{1},\ldots,\sigma_{p})=\{X_{1},\ldots,X_{k}\}\right\}\,\]
where the sum is over ordered tuples of subsets \((X_{1},\ldots,X_{k})\), where the \(X_{i}\) are nonempty, pairwise disjoint, and together have union equal to \([n]\). For \(1\leq i\leq k\) and \(1\leq j\leq p\), we let \(\sigma_{j}^{(i)}\) be the restriction and corestriction of \(\sigma_{j}\) to the subset \(X_{i}\), which must be stable by \(\sigma_{j}\). For fixed \(X_{1},\ldots,X_{k}\), the sum over tuples \((\sigma_{1},\ldots,\sigma_{p})\) clearly amounts to summing independently over the tuples \((\sigma_{1}^{(i)},\ldots,\sigma_{p}^{(i)})\) in each \(X_{i}\), \(1\leq i\leq k\). The tuple \((\sigma_{1}^{(i)},\ldots,\sigma_{p}^{(i)})\) is made of commuting permutations of \(X_{i}\) whose action on the latter must be transitive. The
number of such tuples only depends on the size \(|X_{i}|\) of the set \(X_{i}\), and not its location within \([n]\). As a result, we have
\[A(p,n,k) = \frac{1}{k!}\times\sum_{(X_{1},\ldots,X_{k})}A(p,|X_{1}|,1)\cdots A(p,|X_{k}|,1)\] \[= \frac{1}{k!}\times\sum_{n_{1},\ldots,n_{k}\geq 1}\mathbb{1}\{n_{1}+\cdots+n_{k}=n\}\times\frac{n!}{n_{1}!\cdots n_{k}!}\times\prod_{i=1}^{k}A(p,n_{i},1)\,\]
where the multinomial coefficient accounts for the number of tuples of disjoint sets \((X_{1},\ldots,X_{k})\) with fixed cardinalities \(n_{1},\ldots,n_{k}\).
We now move on to the main part of the proof, i.e., the \(p+1\) to \(p\) reduction and showing that
\[A(p+1,n,1)=\sum_{rs=n}A(p,s,1)\times\frac{n!}{r!\times s!^{r}}\times(r-1)! \times s!^{r-1}\times s\, \tag{4}\]
where the sum is over pairs of integers \(r,s\geq 1\) whose product is \(n\). Let \((\sigma_{1},\ldots,\sigma_{p+1})\in\mathscr{C}_{p+1,n,1}\) denote a \((p+1)\)-tuple of commuting permutations being counted on the left-hand side. We let \(X=\{X_{1},\ldots,X_{r}\}:=\Pi(\sigma_{1},\ldots,\sigma_{p})\) be the set of orbits determined by the first \(p\) permutations. For a fixed set partition \(X\) of \([n]\), define \(\mathscr{C}_{p+1,n,1}^{X}\subset\mathscr{C}_{p+1,n,1}\) as the set of \((p+1)\)-tuples which produce the given \(X\) by the above definition. We organize the count by conditioning on \(X\), i.e., writing
\[A(p+1,n,1)=\sum_{X}\left|\mathscr{C}_{p+1,n,1}^{X}\right|\,\]
and then computing the terms in the last sum by constructing an explicit bijection between \(\mathscr{C}_{p+1,n,1}^{X}\) and a set of combinatorial data whose cardinality is easy to derive. We will use an automatic numbering of the blocks of \(X\) by ordering them according to their minimal element, with respect to the ordered set \([n]\). We let \(X_{1}\) be the block containing the element \(1\in[n]\), and number the other blocks so that
\[1=\min X_{1}<\min X_{2}<\cdots<\min X_{r}\.\]
**Lemma 2.2**.: _Let \(f\) be an element of \(\langle\sigma_{p+1}\rangle\), i.e., a power of \(\sigma_{p+1}\), and let \(\alpha,\beta\in[r]\). If \(\exists x\in X_{\alpha}\), \(f(x)\in X_{\beta}\), then \(\forall y\in X_{\alpha}\), \(f(y)\in X_{\beta}\)._
**Proof:** Since such \(y\) is in the same \(\langle\sigma_{1},\ldots,\sigma_{p}\rangle\)-orbit as \(x\), there exists a permutation \(g\in\langle\sigma_{1},\ldots,\sigma_{p}\rangle\), such that \(y=g(x)\). Since \(\sigma_{1},\ldots,\sigma_{p+1}\) commute, then \(g\) must commute with \(f\), and therefore \(f(y)=f(g(x))=g(f(x))\). This shows that \(f(y)\) is in the same \(\langle\sigma_{1},\ldots,\sigma_{p}\rangle\)-orbit as \(f(x)\), namely, \(X_{\beta}\).
The last lemma allows us, from an \(f\in\langle\sigma_{p+1}\rangle\), to construct a map \(\widehat{f}:[r]\to[r]\) defined by \(\widehat{f}(\alpha)=\beta\), whenever \(\exists x\in X_{\alpha}\), \(f(x)\in X_{\beta}\). This construction satisfies \(\widehat{\mathrm{Id}}=\mathrm{Id}\), and \(\widehat{f\circ g}=\widehat{f}\circ\widehat{g}\), namely, it gives a group homomorphism from \(\langle\sigma_{p+1}\rangle\) to \(\mathfrak{S}_{r}\). We apply this to \(f=\sigma_{p+1}\) and consider the cycle decomposition of the permutation \(\widehat{\sigma_{p+1}}\), and focus on the cycle containing the element \(1\in[r]\), namely \((\alpha_{1}\ \alpha_{2}\ \cdots\ \alpha_{t})\), with \(\alpha_{1}=1\). We clearly have
\[\sigma_{p+1}(X_{1})\subset X_{\alpha_{2}}\,\ \sigma_{p+1}(X_{\alpha_{2}})\subset X _{\alpha_{3}}\,\ \cdots\,\ \sigma_{p+1}(X_{\alpha_{t-1}})\subset X_{\alpha_{t}}\,\ \sigma_{p+1}(X_{\alpha_{t}})\subset X_{1}\.\]
Hence \(X_{1}\cup X_{\alpha_{2}}\cup\cdots\cup X_{\alpha_{t}}\) is stable by \(\sigma_{p+1}\), in addition to being stable by \(\langle\sigma_{1},\ldots,\sigma_{p}\rangle\), since each of the \(X\) blocks is. Given that the \((p+1)\)-tuple of permutations \((\sigma_{1},\ldots,\sigma_{p+1})\)
is assumed to act transitively, this can only happen if the previous union of \(X\) blocks is all of \([n]\), i.e., if \(t=r\). For notational convenience, we define the permutation \(\gamma\in\mathfrak{S}_{r}\), by letting \(\gamma(i)=\alpha_{i}\) for all \(i\in[r]\). In particular, \(\gamma(1)=1\), by construction. We now have,
\[\sigma_{p+1}(X_{1})\subset X_{\gamma(2)}\,\ \sigma_{p+1}(X_{\gamma(2)})\subset X _{\gamma(3)}\,\ \cdots\,\ \sigma_{p+1}(X_{\gamma(r-1)})\subset X_{\gamma(r)}\,\ \sigma_{p+1}(X_{\gamma(r)})\subset X_{1}. \tag{5}\]
Since \(\sigma_{p+1}\) is injective, it follows that
\[|X_{1}|\leq|X_{\gamma(2)}|\leq\cdots\leq|X_{\gamma(r)}|\leq|X_{1}|\,\]
and, therefore, all the \(X\) blocks must have the same cardinality, say \(s\), so that \(n=rs\), namely, \(r\) must divide \(n\). The above argument also produces bijective maps
\[\tau_{i}:X_{\gamma(i)}\longrightarrow X_{\gamma(i+1)}\,\]
for \(1\leq i\leq r-1\), obtained by restriction (and corestriction) of \(\sigma_{p+1}\). We collect them into a tuple \(\tau=(\tau_{1},\ldots,\tau_{r-1})\). We now define the \(p\)-tuple of permutations of the first block \(X_{1}\) given by \(\widetilde{\sigma}=(\widetilde{\sigma}_{1},\ldots,\widetilde{\sigma}_{p})\) where, for all \(j\in[p]\), \(\widetilde{\sigma}_{j}\) is obtained from \(\sigma_{j}\) by restricting it to the subset \(X_{1}\). It is easy to see that \(\widetilde{\sigma}\) is a \(p\)-tuple of commuting permutations of the set \(X_{1}\), which altogether act transitively on it. Finally, we define the element \(z=(\sigma_{p+1})^{r}(1)\) of the block \(X_{1}\). This concludes the definition of the map mentioned in (3) which to a tuple \((\sigma_{1},\ldots,\sigma_{p+1})\in\mathscr{C}_{p+1,n,1}\) associates the data \((\widetilde{\sigma},\gamma,\tau,z)\). Once we establish that this construction is bijective, the reduction formula (4) will follow easily. Indeed, after identification of \(X_{1}\) with \([s]\), we see that there are \(A(p,s,1)\) possible choices for \(\widetilde{\sigma}\). Deciding on the permutation \(\gamma\), which fixes \(1\), results in \((r-1)!\) choices. The number of possibilities for the bijective maps in \(\tau\) accounts for a factor \(s!^{r-1}\), and there are \(s\) possibilities for \(z\). Summing over the unordered set partition \(X\) can be done with the multinomial coefficient \(n!/s!^{r}\) for ordered set partitions and correcting for the overcounting by dividing by \(r!\), as in the proof of Lemma 2.1. All that remains in order to complete the proof of (4) is to show our map (3) is indeed bijective.
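The forward direction of this construction is concrete enough to implement directly. The following Python sketch (ours) represents permutations of \(\{0,\ldots,n-1\}\) as tuples, with \(0\) playing the role of the element \(1\); it assumes, without checking, that the input is a commuting tuple acting transitively, and it returns \((\widetilde{\sigma},\gamma,\tau,z)\).

```python
def orbit_blocks(sigmas, n):
    """Orbits of <sigmas> acting on range(n), ordered by minimal element."""
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for s in sigmas:
        for i in range(n):
            parent[find(i)] = find(s[i])
    blocks = {}
    for i in range(n):
        blocks.setdefault(find(i), []).append(i)
    return sorted((sorted(b) for b in blocks.values()), key=min)

def forward_map(tup, n):
    """(sigma_1, ..., sigma_{p+1}) -> (sigma_tilde, gamma, tau, z)."""
    *first_p, last = tup
    X = orbit_blocks(first_p, n)
    r = len(X)
    which = {v: i for i, b in enumerate(X) for v in b}
    # induced permutation of block indices, well defined by Lemma 2.2
    hat = [which[last[b[0]]] for b in X]
    gamma = [0]            # the cycle of hat through block 0;
    while len(gamma) < r:  # transitivity forces this cycle to have length r
        gamma.append(hat[gamma[-1]])
    tau = [{v: last[v] for v in X[gamma[i]]} for i in range(r - 1)]
    sigma_tilde = [{v: s[v] for v in X[0]} for s in first_p]
    z = 0
    for _ in range(r):     # z = sigma_{p+1}^r(0)
        z = last[z]
    return sigma_tilde, gamma, tau, z

# Example with p = 1, n = 4: sigma_1 = c^2 for the 4-cycle c, sigma_2 = c.
# The orbits of sigma_1 are {0, 2} and {1, 3}, so r = 2 and z = c^2(0) = 2.
c = (1, 2, 3, 0)
print(forward_map(((2, 3, 0, 1), c), 4))
```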
**Injectivity:** We will show how the tuple \((\sigma_{1},\ldots,\sigma_{p+1})\) is determined by the data \((\widetilde{\sigma},\gamma,\tau,z)\), and the a priori knowledge of the fixed partition \(X\). By construction, for all \(j\), \(1\leq j\leq p\), the restriction of \(\sigma_{j}\) to \(X_{1}\) must be
\[\sigma_{j}|_{X_{1}}=\widetilde{\sigma}_{j}. \tag{6}\]
Strictly speaking, there is also a change of codomain involved (from \(X_{1}\) to \([n]\)), but we ignored this and will continue to do this for the next similar statements. We must also have, for all \(i\), \(1\leq i\leq r-1\),
\[\sigma_{p+1}|_{X_{\gamma(i)}}=\tau_{i}. \tag{7}\]
From the commutation relation \(\sigma_{j}\circ(\sigma_{p+1})^{i}=(\sigma_{p+1})^{i}\circ\sigma_{j}\), restricted to \(X_{1}\), we deduce that for all \(i\), \(2\leq i\leq r\), we must have
\[\sigma_{j}\circ\tau_{i-1}\circ\cdots\circ\tau_{1}=\tau_{i-1}\circ\cdots\circ \tau_{1}\circ\widetilde{\sigma}_{j}\]
i.e.,
\[\sigma_{j}|_{X_{\gamma(i)}}=\tau_{i-1}\circ\cdots\circ\tau_{1}\circ\widetilde{ \sigma}_{j}\circ\tau_{1}^{-1}\circ\cdots\circ\tau_{i-1}^{-1}. \tag{8}\]
Hence \(\sigma_{1},\ldots,\sigma_{p}\) are known, while \(\sigma_{p+1}\) is almost entirely determined. We are only missing the restriction of \(\sigma_{p+1}\) on the last block \(X_{\gamma(r)}\). Since \(z\) is in the orbit \(X_{1}\) of the element \(1\) for
the action of \(\sigma_{1},\ldots,\sigma_{p}\), or equivalently \(\widetilde{\sigma}_{1},\ldots,\widetilde{\sigma}_{p}\), there exists \(g\in\langle\widetilde{\sigma}_{1},\ldots,\widetilde{\sigma}_{p}\rangle\), such that \(g(1)=z\). We claim that we must have
\[\sigma_{p+1}|_{X_{\gamma(r)}}=g\circ\tau_{1}^{-1}\circ\cdots\circ\tau_{r-1}^{-1 }. \tag{9}\]
Indeed, let \(x\in X_{\gamma(r)}\), then \(x=(\sigma_{p+1})^{r-1}(y)\) for some \(y\in X_{1}\). Again, by transitivity on \(X_{1}\), there exists \(h\in\langle\sigma_{1},\ldots,\sigma_{p}\rangle\) such that \(y=h(1)\). As a consequence of the Abelian property of the group \(\langle\sigma_{1},\ldots,\sigma_{p+1}\rangle\), we must have
\[\sigma_{p+1}(x) = (\sigma_{p+1})^{r}\circ h(1)\] \[= h\circ(\sigma_{p+1})^{r}(1)\] \[= h(z)\] \[= h(g(1))\] \[= g(h(1))\] \[= g(y)\] \[= g\circ\tau_{1}^{-1}\circ\cdots\circ\tau_{r-1}^{-1}(x)\.\]
We now have recovered the restrictions of all \(p+1\) permutations \(\sigma_{j}\) on all blocks \(X_{i}\) of the decomposition of \([n]\), from the output of our map, which shows that it is injective.
**Surjectivity:** We start from the data \((\widetilde{\sigma},\gamma,\tau,z)\) and construct \((\sigma_{1},\ldots,\sigma_{p+1})\in\mathscr{C}^{X}_{p+1,n,1}\) which maps to it. This time, we use the equations (6), (7), (8), (9) as definitions of \(\sigma_{1},\ldots,\sigma_{p+1}\) as maps \([n]\to[n]\). The use of (9) requires some care, namely showing the uniqueness of \(g\). Let \(\widetilde{H}:=\langle\widetilde{\sigma}_{1},\ldots,\widetilde{\sigma}_{p}\rangle\). The hypothesis on the tuple \(\widetilde{\sigma}\) is that it is made of \(p\) commuting permutations of the set \(X_{1}\), such that the permutation action of \(\widetilde{H}\) on \(X_{1}\) is transitive. Suppose \(g_{1}(1)=g_{2}(1)=z\) for some \(g_{1},g_{2}\in\widetilde{H}\). If \(x\in X_{1}\), then \(\exists h\in\widetilde{H}\), \(h(1)=x\). By the Abelian property of \(\widetilde{H}\), we have
\[g_{i}(x)=g_{i}\circ h(1)=h\circ g_{i}(1)=h(z)\,\]
for \(i=1\) as well as \(i=2\), and thus \(g_{1}(x)=g_{2}(x)\). Since \(x\) is arbitrary, we have \(g_{1}=g_{2}\). This justifies the use of (9) as a definition of a map. We now have constructed the maps \(\sigma_{1},\ldots,\sigma_{p+1}\). It is immediate, from (6) and (8), that \(\sigma_{1},\ldots,\sigma_{p}\) are bijective within each \(X_{\gamma(i)}\), \(1\leq i\leq r\), and therefore over all of \([n]\). One easily checks also the commutation relations \(\sigma_{j}\circ\sigma_{\ell}=\sigma_{\ell}\circ\sigma_{j}\), \(1\leq j,\ell\leq p\), on each \(X\) block, and therefore on \([n]\). From (7), we see that \(\sigma_{p+1}\) is injective on each \(X_{\gamma(i)}\), \(1\leq i\leq r-1\), and the images of these restrictions are disjoint because \(\gamma\) is a permutation. From (9), it holds that \(\sigma_{p+1}|_{X_{\gamma(r)}}:X_{\gamma(r)}\to X_{1}\) is bijective. As a result, \(\sigma_{p+1}:[n]\to[n]\) is bijective. From (7) and (8), we also obtain
\[\sigma_{j}\circ\sigma_{p+1}|_{X_{\gamma(i)}}=\tau_{i}\circ\cdots\circ\tau_{1} \circ\widetilde{\sigma}_{j}\circ\tau_{1}^{-1}\circ\cdots\circ\tau_{i-1}^{-1}= \sigma_{p+1}\circ\sigma_{j}|_{X_{\gamma(i)}}\,\]
for all \(i,j\) such that \(1\leq j\leq p\) and \(1\leq i\leq r-1\). Finally, for all \(j\), \(1\leq j\leq p\), the restrictions of \(\sigma_{j}\circ\sigma_{p+1}\) and \(\sigma_{p+1}\circ\sigma_{j}\) on \(X_{\gamma(r)}\) coincide, because \(g\) and \(\widetilde{\sigma}_{j}\) must commute. We have now checked that \((\sigma_{1},\ldots,\sigma_{p+1})\) is a commuting tuple of permutations of \([n]\). The corresponding action is transitive because (5) holds by construction and \(\widetilde{\sigma}\) is assumed to act transitively on \(X_{1}\). Checking that the produced tuple \((\sigma_{1},\ldots,\sigma_{p+1})\in\mathscr{C}^{X}_{p+1,n,1}\) indeed maps to \((\widetilde{\sigma},\gamma,\tau,z)\) is straightforward. Therefore, our map is surjective.
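Before moving on, here is a quick sanity check of (4) (the verification is ours): for \(p=1\), plugging \(A(1,s,1)=(s-1)!\) into the right-hand side gives

\[A(2,n,1)=\sum_{rs=n}(s-1)!\times\frac{n!}{r!\times s!^{r}}\times(r-1)!\times s!^{r-1}\times s=\sum_{r\mid n}\frac{n!}{r}=(n-1)!\ \sigma(n)\,\]

since \(\sum_{r\mid n}1/r=\sigma(n)/n\). This agrees with \(A(2,n,1)=(n-1)!\,B(2,n)=(n-1)!\,\sigma(n)\), as established just below.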
In order to finish the proof of Theorem 1.1, we define \(C(p,n):=\frac{A(p,n,1)}{(n-1)!}\). Since \(A(1,n,1)=(n-1)!\) counts cyclic permutations of \(n\) elements, we have \(C(1,n)=1=B(1,n)\). The now-established recursion (4) implies that \(C\) satisfies
\[C(p+1,n)=\sum_{rs=n}s\ C(p,s)\.\]
By a trivial induction on \(p\), \(C(p,n)\) must coincide with \(B(p,n)\) defined, e.g., in (2). We plug \(A(p,n,1)=(n-1)!\times B(p,n)\) in the result of Lemma 2.1, and Theorem 1.1 follows.
## 3. On conjecture 1.1
As mentioned in the introduction, the case \(p=1\) of Conjecture 1.1 is well established. The opposite extreme "\(p=\infty\)" is settled in the companion article [2]. Let us now focus on the \(p=2\) case, and relate it to an already large body of literature, in particular, the work of Heim, Neuhauser, and many others. Since, for \(p=2\), \(B(p-1,m)=B(1,m)=1\), the Bryan-Fulman identity (1) simply reads
\[\sum_{n=0}^{\infty}\sum_{k=0}^{n}\frac{1}{n!}A(2,n,k)x^{k}u^{n}=\prod_{m=1}^{ \infty}(1-u^{m})^{-x}\.\]
On the other hand, the so-called D'Arcais polynomials \(P_{n}(x)\) are defined [6] by the generating function identity
\[\prod_{m=1}^{\infty}(1-u^{m})^{-x}=\sum_{n=0}^{\infty}P_{n}(x)u^{n}\.\]
The D'Arcais polynomials can therefore be expressed in terms of commuting pairs of permutations
\[P_{n}(x)=\frac{1}{n!}\sum_{k=0}^{n}A(2,n,k)\ x^{k}. \tag{10}\]
We are not aware of the commuting permutation interpretation (10) of D'Arcais polynomials having been used in the number theory literature reviewed, e.g., in [8], and we hope it could be of help in this area. If one shifts the variable \(x\) by one, one gets the standard formulation of the Nekrasov-Okounkov formula [13, 16]
\[\prod_{m=1}^{\infty}(1-u^{m})^{-x-1}=\sum_{n=0}^{\infty}Q_{n}(x)u^{n}\]
where
\[Q_{n}(x)=\sum_{\lambda\vdash n}\prod_{\square\in\lambda}\left(1+\frac{x}{h( \square)^{2}}\right)\.\]
Namely, the sum is over integer partitions \(\lambda\) of \(n\). The product is over cells in the usual Ferrers-Young diagram of the partition \(\lambda\), and \(h(\square)\) denotes the hook length number of that cell. Clearly \(Q_{n}(x)=P_{n}(x+1)\) and therefore, the log-concavity of (the coefficients of) the polynomial \(P_{n}\) would imply that of \(Q_{n}\), as well as the unimodality of the latter, which was conjectured by Heim and Neuhauser as well as Amdeberhan (see [8] and references therein). As a strengthening of this unimodality conjecture, the log-concavity of the \(P_{n}(x)\)'s, i.e., the \(p=2\) case of Conjecture 1.1, was stated as Challenge 3 in [8]. The authors also reported on
checking this numerically for all \(n\leq 1500\). For recent progress towards such log-concavity properties in the \(p=2\) case, see [9, 18].
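In the same spirit, the \(p=2\) case can be checked in a few lines of Python (a sketch; all names are ours) via the classical recurrence \(nP_{n}(x)=x\sum_{k=1}^{n}\sigma(k)P_{n-k}(x)\), obtained by logarithmically differentiating the product defining the D'Arcais polynomials.

```python
from fractions import Fraction

def sigma_div(k):
    """Sum of the divisors of k."""
    return sum(d for d in range(1, k + 1) if k % d == 0)

def darcais(N):
    """Coefficient lists of P_0, ..., P_N from n*P_n = x*sum sigma(k)*P_{n-k}."""
    P = [[Fraction(1)]]  # P_0 = 1
    for n in range(1, N + 1):
        coeffs = [Fraction(0)] * (n + 1)
        for k in range(1, n + 1):
            for j, c in enumerate(P[n - k]):
                coeffs[j + 1] += Fraction(sigma_div(k), n) * c
        P.append(coeffs)
    return P

def log_concave(a):
    return all(a[i] ** 2 >= a[i - 1] * a[i + 1] for i in range(1, len(a) - 1))

P = darcais(40)
assert all(log_concave(P[n]) for n in range(3, 41))
```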
Using Mathematica, we checked that Conjecture 1.1 is true for \(p=3,4,5\) for all \(n\leq 100\). One can also test the conjecture by considering the dilute polymer gas regime, in the terminology of statistical mechanics, i.e., when \(k\) is close to \(n\) and most orbits are singletons, as in the next proposition.
**Proposition 3.1**.: _The inequality in Conjecture 1.1 holds for all \(p\geq 1\), and \(n\geq 3\), when \(k=n-1\)._
**Proof:** Let
\[\Delta(p,n):=A(p,n,n-1)^{2}-A(p,n,n)\ A(p,n,n-2)\.\]
From Theorem 1.1, we easily deduce
\[A(p,n,n) = 1\] \[A(p,n,n-1) = \binom{n}{2}\ (2^{p}-1)\] \[A(p,n,n-2) = \binom{n}{3}\ (3^{p}-1)+\binom{n}{4}\ 3(2^{p}-1)^{2}\.\]
Therefore
\[\Delta(p,n)=\left[\binom{n}{2}^{2}-3\binom{n}{4}\right](2^{p}-1)^{2}-\binom{ n}{3}\ (3^{p}-1)\.\]
As mentioned before, the conjecture is known for \(p=1\), so now we focus on \(p\geq 2\). If \(p\geq 3\), then \(2\left(\frac{1}{2}\right)^{p}+\left(\frac{3}{4}\right)^{p}\leq\frac{43}{64}\), the \(p=3\) value. Therefore, for \(p\geq 3\), we have \(4^{p}\geq 2\times 2^{p}+3^{p}\) which implies
\[4^{p}-2\times 2^{p}+1\geq 3^{p}-1\.\]
The last inequality being also true for \(p=2\), we have that for all \(p\geq 2\), the inequality \((2^{p}-1)^{2}\geq 3^{p}-1\) holds. Hence
\[\Delta(p,n) \geq \left[\binom{n}{2}^{2}-3\binom{n}{4}-\binom{n}{3}\right](2^{p}-1 )^{2}\] \[= \frac{1}{24}n(n-1)(3n^{2}+5n-10)(2^{p}-1)^{2}\.\]
Since \(n\geq 3\) implies \(3n^{2}+5n-10\geq 32>0\), we have \(\Delta(p,n)>0\). \(\Box\)
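Both ingredients of this proof are mechanical and can be confirmed symbolically, e.g. with SymPy (a sketch; ours):

```python
import sympy as sp

n = sp.symbols('n')
b = lambda k: sp.expand_func(sp.binomial(n, k))

# the binomial identity used in the lower bound on Delta(p, n)
lhs = b(2) ** 2 - 3 * b(4) - b(3)
rhs = sp.Rational(1, 24) * n * (n - 1) * (3 * n ** 2 + 5 * n - 10)
assert sp.simplify(lhs - rhs) == 0

# the inequality (2^p - 1)^2 >= 3^p - 1 for small p >= 2
assert all((2 ** p - 1) ** 2 >= 3 ** p - 1 for p in range(2, 50))
```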
**Acknowledgements:** The first author thanks Ken Ono for introducing him to the Nekrasov-Okounkov formula, and the unimodality conjecture of Amdeberhan, Heim and Neuhauser.
|
2302.00037 | Differentially-Private Hierarchical Clustering with Provable
Approximation Guarantees | Hierarchical Clustering is a popular unsupervised machine learning method
with decades of history and numerous applications. We initiate the study of
differentially private approximation algorithms for hierarchical clustering
under the rigorous framework introduced by (Dasgupta, 2016). We show strong
lower bounds for the problem: that any $\epsilon$-DP algorithm must exhibit
$O(|V|^2/ \epsilon)$-additive error for an input dataset $V$. Then, we exhibit
a polynomial-time approximation algorithm with $O(|V|^{2.5}/
\epsilon)$-additive error, and an exponential-time algorithm that meets the
lower bound. To overcome the lower bound, we focus on the stochastic block
model, a popular model of graphs, and, with a separation assumption on the
blocks, propose a private $1+o(1)$ approximation algorithm which also recovers
the blocks exactly. Finally, we perform an empirical study of our algorithms
and validate their performance. | Jacob Imola, Alessandro Epasto, Mohammad Mahdian, Vincent Cohen-Addad, Vahab Mirrokni | 2023-01-31T19:14:30Z | http://arxiv.org/abs/2302.00037v2 | # Differentially-Private Hierarchical Clustering with Provable Approximation Guarantees
###### Abstract
Hierarchical Clustering is a popular unsupervised machine learning method with decades of history and numerous applications. We initiate the study of _differentially private_ approximation algorithms for hierarchical clustering under the rigorous framework introduced by Dasgupta (2016). We show strong lower bounds for the problem: that any \(\epsilon\)-DP algorithm must exhibit \(O(|V|^{2}/\epsilon)\)-additive error for an input dataset \(V\). Then, we exhibit a polynomial-time approximation algorithm with \(O(|V|^{2.5}/\epsilon)\)-additive error, and an exponential-time algorithm that meets the lower bound. To overcome the lower bound, we focus on the stochastic block model, a popular model of graphs, and, with a separation assumption on the blocks, propose a private \(1+o(1)\) approximation algorithm which also recovers the blocks exactly. Finally, we perform an empirical study of our algorithms and validate their performance.
## 1 Introduction
Hierarchical Clustering is a staple of unsupervised machine learning with more than 60 years of history (Ward Jr, 1963). Contrary to _flat_ clustering methods (such as \(k\)-means, Jain (2010)), which provide a single partitioning of the data, _hierarchical_ clustering algorithms produce a recursive refining of the partitions into increasingly fine-grained clusters. The clustering process can be described by a tree (or dendrogram), and the objective of the tree is to cluster the most similar items in the lowest possible clusters, while separating dissimilar items as high as possible.
The versatility of such methods is apparent from the widespread use of hierarchical clustering in disparate areas of science, such as social networks analysis (Leskovec et al., 2014, Mann et al., 2008), bioinformatics (Diez et al., 2015), phylogenetics (Sneath and Sokal, 1962, Jardine and Sibson, 1968), gene expression analysis (Eisen et al., 1998), text classification (Steinbach et al., 2000) and finance (Tumminello et al., 2010). Popular hierarchical clustering methods (such as linkage (Jain, 2010)) are commonly available in standard scientific computing packages (Virtanen et al., 2020) as well as large-scale production systems (Bateni et al., 2017, Dhulipala et al., 2022).
Despite the fact that many of these applications involve private and sensitive user data, all research on hierarchical clustering (with few exceptions (Kolluri et al., 2021, Xiao et al., 2014) discussed later) has ignored the problem of defining _privacy-preserving_ algorithms. In particular, to the best of our knowledge, no work has provided _differentially-private (DP)_(Dwork et al., 2014) algorithms for hierarchical clustering with provable approximation guarantees.
In this work, we seek to address this limitation by advancing the study of differentially-private approximation algorithms for hierarchical clustering under the rigorous optimization framework introduced by Dasgupta (2016). This celebrated framework introduces an objective function for hierarchical clustering (see Section 3 for a formal definition) formalizing the goal of clustering similar items lower in the tree.
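Dasgupta's objective is only defined formally in Section 3; for orientation, the standard definition assigns to a hierarchy \(T\) over a weighted similarity graph \(G=(V,E,w)\) the cost \(\mathrm{cost}(T)=\sum_{(i,j)\in E}w_{ij}\,|\mathrm{leaves}(T[i\vee j])|\), where \(T[i\vee j]\) is the subtree rooted at the least common ancestor of \(i\) and \(j\). A minimal, non-private Python sketch (the tree representation and all names are ours):

```python
def dasgupta_cost(tree, edges):
    """tree: nested tuples whose leaves are vertex labels;
    edges: dict {(i, j): w_ij}. Returns sum of w_ij * |leaves(lca(i, j))|."""
    def leaves(t):
        return {t} if not isinstance(t, tuple) else set().union(*map(leaves, t))
    subtree_leaves = []
    def walk(t):
        subtree_leaves.append(leaves(t))
        if isinstance(t, tuple):
            for child in t:
                walk(child)
    walk(tree)
    cost = 0
    for (i, j), w in edges.items():
        # the LCA subtree is the smallest subtree containing both i and j
        cost += w * min(len(s) for s in subtree_leaves if i in s and j in s)
    return cost

# Balanced tree over a unit-weight clique on {0, 1, 2, 3}:
T = ((0, 1), (2, 3))
edges = {(i, j): 1 for i in range(4) for j in range(i + 1, 4)}
print(dasgupta_cost(T, edges))  # 1*2 + 1*2 + 4*4 = 20
```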
Our algorithms are edge-level _Differentially Private (DP)_ on an input similarity graph, which is relevant when edges of the input graph represents sensitive user information. Designing an edge-level DP algorithm requires proving that the algorithm is insensitive to changes to a single edge of the similarity graph. As we shall see, this is especially challenging for hierarchical clustering. In fact, commonly-used hierarchical clustering algorithms (such as linkage-based ones (Jain, 2010)) are _deterministically_ sensitive to a single edge, thus leaking directly the input edges. Moreover, as we show, strong inapproximability bounds exist for Dasgupta's objective under differential privacy, highlighting the technical difficulty of the problem.
**Main contributions.** First, we show in Section 4 that every edge-level \(\epsilon\)-DP algorithm (even one with exponential time) for Dasgupta's objective must incur \(\Omega(|V|^{2}/\epsilon)\) additive error. This prevents defining private algorithms with meaningful approximation guarantees for _arbitrary_ sparse graphs.
Second, on the positive side, we provide the first polynomial time, edge-level DP approximation algorithm for Dasgupta's objective, with \(O(|V|^{2.5}/\epsilon)\) additive error and multiplicative error matching that of the best non-private algorithm (Agarwal et al., 2022). This algorithm is based on recent advances in private cut sparsifiers (Elias et al., 2020). Moreover, we show an (exponential time) algorithm with \(O(|V|^{2}\log n/\epsilon)\) additive error, almost matching the lower bound.
Third, given the strong lower bounds, in Section 6 we focus on a popular model of graphs with a planted hierarchical clustering based on the _Stochastic Block Model (SBM)_(Cohen-Addad et al., 2017). For such graphs, we present a private \(1+o(1)\) approximation algorithm recovering almost exactly the hierarchy on the blocks. Our algorithm uses, as a black-box, any reconstruction algorithm for the stochastic block model.
Fourth, we introduce a practical and efficient DP SBM community reconstruction algorithm (Section 6). This algorithm is based on perturbation theory of graph spectra combined with dimensionality reduction to avoid adding high noise in the Gaussian mechanism. Combined with our clustering algorithm, this results in the first private approximation algorithm for hierarchical clustering in the HSBM model.
Finally, we show in Section 7 that this algorithm can be efficiently implemented and works well in practice.
## 2 Related Work
Our work spans the areas of differential privacy, hierarchical clustering, and community detection in the stochastic block model. For a complete discussion, see Appendix A.
**Graph algorithms under DP.** Differential privacy (Dwork et al., 2006) has recently become the gold standard of privacy. We refer to Dwork et al. (2014) for a survey. Relevant to this work is the area of differential privacy in graphs. Definitions based on edge-level (Epasto et al., 2022; Elias et al., 2020) and node-level (Kasiviswanathan et al., 2013) privacy have been proposed. The most related work is that on graph cut approximation (Elias et al., 2020; Arora and Upadhyay, 2019), as well as that of private correlation clustering (Bun et al., 2021; Cohen-Addad et al., 2022).
**Hierarchical Clustering.** Until recently, most work on hierarchical clustering was heuristic in nature, with the most well-known being the linkage-based methods (Jain, 2010; Bateni et al., 2017). Dasgupta (2016) introduced a combinatorial objective for hierarchical clustering which we study in this paper. Since then, many authors have designed algorithms for variants of the problem with no privacy (Cohen-Addad et al., 2017, 2019; Charikar and Chatziafratis, 2017; Moseley and Wang, 2017; Agarwal et al., 2022; Chatziafratis et al., 2020).
Limited work has been devoted to DP hierarchical clustering algorithms. One paper (Xiao et al., 2014) initiated private hierarchical clustering via MCMC methods, which are not guaranteed to run in polynomial time. Follow-up
work (Kolluri et al., 2021) shows that sampling from the Boltzmann distribution (essentially the exponential mechanism (McSherry and Talwar, 2007) in DP) produces an approximation to the maximization version of Dasgupta's function, which is a different problem formulation. Again, this algorithm is not provably polynomial time.
**Private flat clustering.** Contrary to hierarchical clustering, the area of private _flat_ clustering on metric spaces has received considerable attention. Most work in this area has focused on improving the privacy-approximation trade-off (Ghazi et al., 2020; Balcan et al., 2017) and on efficiency (Hegde et al., 2021; Cohen-Addad et al., 2022b, a).
**Stochastic block models.** The Stochastic Block Model (SBM) is a classic model for random graphs with planted partitions which has received significant attention in the literature (Guedon and Vershynin, 2016; Montanari and Sen, 2016; Moitra et al., 2016; Fei and Chen, 2020; Ding et al., 2022; Liu and Moitra, 2022). For our work, we focus on a variant which has nested ground-truth communities arranged in a hierarchical fashion. This model has received attention for hierarchical clustering (Cohen-Addad et al., 2017).
The study of private algorithms for SBMs is instead very recent. One of the only results known for private (non-hierarchical) SBMs is the work of Seif et al. (2022) which provides quasi-polynomial time community detection algorithms for some regimes of the model. Finally, concurrently to our work, the manuscript of Chen et al. (2023) provides strong approximation guarantees using semi-definite programming for recovering SBM communities.
No results are known for approximating hierarchical clustering on HSBMs. For this reason, in Section 6 we design a hierarchical clustering algorithm (Algorithm 1) which uses community detection as a black-box. Moreover, we show a novel algorithm for hierarchical SBMs (Algorithm 2), independent of Chen et al. (2023), which is of practical interest because, unlike solving a complex semidefinite program, it does not have a large polynomial run-time.
## 3 Preliminaries
Our results involve the key concepts of hierarchical clustering and differential privacy. We define these two concepts in the next sections.
### Hierarchical Clustering
Hierarchical clustering seeks to produce a tree clustering a set \(V\) of \(n\) items by their similarity. It takes as input an undirected graph \(G=(V,E,w)\), where \(E\subseteq V\times V\) is the set of edges and \(w:V\times V\rightarrow\mathbb{R}^{+}\) is a weight function indicating similarity; i.e., a higher \(w(u,v)\) indicates that \(u,v\) are more similar. We extend the function \(w\) by setting \(w(u,v)=0\) whenever \((u,v)\notin E\).
A hierarchical clustering (HC) of \(G\) is a tree \(T\) whose leaves are \(V\). The tree can be viewed as a sequence of merges of subtrees of \(T\), with the final merge being the root node. A good hierarchical clustering merges more similar items closer to the bottom of the tree. The cost function \(\omega_{G}(T)\) of Dasgupta (2016) captures this intuition. We have
\[\omega_{G}(T)=\sum_{(u,v)\in V^{2}}w(u,v)|\text{leaves}(T[u\wedge v])|, \tag{1}\]
where \(T[u\wedge v]\) indicates the smallest subtree containing \(u,v\) in \(T\) and \(|\text{leaves}(T[u\wedge v])|\) indicates the number of leaves in this subtree. This cost function charges a tree \(T\) for each edge based on the similarity \(w(u,v)\) and how many leaves are in the subtree in which it is merged.
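For concreteness, here is a minimal Python sketch of Eq. (1); it assumes a binary tree represented as nested tuples of leaf labels and the graph as a dictionary of edge weights, and all names are illustrative rather than taken from any released codebase.

```python
# A minimal sketch of Dasgupta's cost (Eq. 1), assuming a binary tree is
# given as a nested tuple of leaves, e.g. ((0, 1), (2, 3)), and the
# similarity graph as a dict mapping sorted vertex pairs to weights.
import itertools

def leaves(tree):
    """Return the set of leaf labels under a (sub)tree."""
    if not isinstance(tree, tuple):
        return {tree}
    return set().union(*(leaves(child) for child in tree))

def dasgupta_cost(tree, w):
    """Sum over edges of w(u, v) * |leaves(T[u ^ v])|."""
    if not isinstance(tree, tuple):
        return 0.0
    n_leaves = len(leaves(tree))
    cost = 0.0
    # Edges whose endpoints fall under different children are merged at this
    # node, so their lowest-common-ancestor subtree has n_leaves leaves.
    for left, right in itertools.combinations(tree, 2):
        for u in leaves(left):
            for v in leaves(right):
                cost += w.get((min(u, v), max(u, v)), 0.0) * n_leaves
    return cost + sum(dasgupta_cost(child, w) for child in tree)

# Example: a 4-cycle clustered as ((0, 1), (2, 3)).
w = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (0, 3): 1.0}
print(dasgupta_cost(((0, 1), (2, 3)), w))  # 1*2 + 1*2 + 2*4 = 12
```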
**Additional Notation.** We let \(\omega_{G}^{*}=\min_{T}\omega_{G}(T)\) denote the best possible cost attained by any tree \(T\). We write \(w(A,B)=\sum_{a\in A,b\in B}w(a,b)\) and we say \(w(G)=w(V,V)\). Let \(\mathcal{A}(G)\) be a hierarchical clustering algorithm. We say \(\mathcal{A}\) is an \((a_{n},b_{n})\)-approximation if
\[\mathbb{E}[\omega_{G}(\mathcal{A}(G))]\leq a_{n}\omega_{G}^{*}+b_{n}, \tag{2}\]
where the expectation is over the random coins of \(\mathcal{A}\).
### Differential Privacy
For hierarchical clustering we use the notion of graph privacy known as edge differential privacy. Intuitively, a private algorithm must behave similarly when the total weight of the edges in \(G\) is altered by up to \(1\). Specifically, we say \(G=(V,E,w)\) and \(G^{\prime}=(V,E^{\prime},w^{\prime})\) are _adjacent graphs_ if \(\sum_{u,v\in V}|w(u,v)-w^{\prime}(u,v)|\leq 1\). For weighted graphs, we may generalize this to a difference by any constant, but we consider the above notion of adjacency for simplicity. This notion has many real-world applications, such as when the graph is a social network and the edges between users encode relationships between them (Epasto et al., 2022). The definition of edge-DP is as follows:
**Definition 1**.: _An algorithm \(\mathcal{A}:\mathcal{G}\rightarrow\mathcal{Y}\) satisfies \((\epsilon,\delta)\)-edge DP if, for any \(G=(V,E,w),G^{\prime}=(V,E^{\prime},w^{\prime})\) that are adjacent, and any set of trees \(\mathcal{T}\),_
\[\Pr[\mathcal{A}(G^{\prime})\in\mathcal{T}]\leq e^{\epsilon}\Pr[\mathcal{A}(G) \in\mathcal{T}]+\delta.\]
Edge DP states that, given any output of \(\mathcal{A}\), it is essentially impossible to tell whether \(G\) or an adjacent \(G^{\prime}\) was used as input. This gives plausible deniability to each edge.
## 4 Lower Bounds
We show that for the objective function considered, there is an unavoidable lower bound on the cost achievable by any differentially private algorithm. Our theorem applies a packing-style argument (Hardt and Talwar, 2010), in which we construct a large family \(\mathcal{F}\) of graphs such that no tree can cluster more than one graph in \(\mathcal{F}\) well. However, a DP algorithm \(\mathcal{A}\) is forced to place probability mass on all trees. This limits its utility, as significant mass must be placed on trees which do not cluster the input graph well. Formally, we prove the following theorem:
**Theorem 1**.: _For any \(\epsilon\leq\frac{1}{20}\) and \(n\) sufficiently large, let \(\mathcal{A}(G)\) be a hierarchical clustering algorithm which satisfies \(\epsilon\)-edge differential privacy. Then, there is a weighted graph \(G\) with \(\omega_{G}^{*}\leq O(\frac{n}{\epsilon})\) such that_
\[\mathbb{E}[\omega_{G}(\mathcal{A}(G))]\geq\Omega(\frac{n^{2}}{\epsilon}).\]
We prove this theorem in Section 4.1; we discuss its implications here. Since there exists a graph such that \(\omega_{G}^{*}\leq O(\frac{n}{\epsilon})\), yet \(\mathbb{E}[\omega_{G}(\mathcal{A}(G))]\geq\Omega(\frac{n^{2}}{\epsilon})\), no differentially private algorithm \(\mathcal{A}\) can be a \((O(n^{\alpha}),O(\frac{n^{2\alpha}}{\epsilon}))\)-approximation to hierarchical clustering for any \(\alpha<1\). It is possible for \(\mathcal{A}\) to be a \((1,O(\frac{n^{2}}{\epsilon}))\)-approximation; in this case, for graphs with total weight \(W\), it is easy to see that \(\omega_{G}^{*}\leq O(nW)\) and can be as small as \(O(W)\). Thus, for the guarantee to be meaningful, \(W\) must be much larger than \(\frac{n}{\epsilon}\), meaning that \(G\) cannot be too sparse.
### Proof of Theorem 1
To construct our lower bound, we consider the family of graphs \(\mathcal{P}(n,5)\) consisting of \(\frac{n}{5}\) cycles of size \(5\). We observe the following facts:
* Each \(G\in\mathcal{P}(n,5)\) has \(n\) edges. Thus, any \(G_{1},G_{2}\in\mathcal{P}(n,5)\) differ in at most \(2n\) edges.
* For any \(G\in\mathcal{P}(n,5)\), any binary tree which splits the graph into its cycles before splitting any edges in the cycles incurs a cost of at most \(\frac{n}{5}W_{5}\), where \(W_{5}=\omega_{C_{5}}^{*}\leq 18\).
It will be convenient to define a _balanced cut_ of \(G\) to be any partition \((A,B)\) of \(V\) such that \(\frac{n}{3}\leq|A|,|B|\leq\frac{2n}{3}\). Any hierarchical clustering \(T\) can be mapped to a balanced cut on \(G\) in the following way:
**Definition 2**.: _For a binary tree \(T\) whose leaves are \(V\), let the sequence \(N_{0},N_{1},\ldots,N_{r}\) denote a recursive sequence of internal nodes such that \(N_{0}\) is the root node, and \(N_{i}\) is the child of \(N_{i-1}\) with more leaves in its subtree. Finally, \(N_{r}\) is the first node in the sequence with fewer than \(\frac{2n}{3}\) leaves in its subtree. Then, the balanced cut \((A,B)\) of \(T\) is the partition \((\text{leaves}(N_{r}),V\setminus\text{leaves}(N_{r}))\)._
It is easy to see that \((A,B)\) is indeed a balanced cut of \(G\), and for any edge \((u,v)\) crossing \((A,B)\), we have \(|\text{leaves}(T[u\wedge v])|\geq\frac{2n}{3}\).
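Definition 2 translates directly into a short procedure; the following sketch reuses the `leaves` helper from the earlier sketch, assumes the same nested-tuple tree representation, and assumes \(n\geq 2\).

```python
# A sketch of the balanced cut of Definition 2: descend from the root into
# the child with more leaves until a subtree with fewer than 2n/3 leaves is
# reached. Reuses leaves() from the Dasgupta-cost sketch; assumes n >= 2.
def balanced_cut(tree):
    all_leaves = leaves(tree)
    n = len(all_leaves)
    node = tree
    while len(leaves(node)) >= 2 * n / 3:
        # The while condition guarantees node is an internal (tuple) node.
        node = max(node, key=lambda child: len(leaves(child)))
    a = leaves(node)
    return a, all_leaves - a  # (A, B) with n/3 <= |A|, |B| <= 2n/3
```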
Our class \(\mathcal{C}\) of graphs is a subset of \(\mathcal{P}(n,5)\) for which no tree clusters more than one element of \(\mathcal{C}\) well. We characterize a condition for which a tree \(T\) definitely does not cluster \(G\in\mathcal{P}(n,5)\) well:
**Definition 3**.: _For a binary tree \(T\), let \((A,B)\) be its balanced cut. We say \((A,B)\) misses a cycle \(C\subseteq G\) if at least one vertex of \(C\) lies in \(A\) and at least one vertex lies in \(B\)._
Now, we show that if \(T\) misses many cycles in its balanced cut, it must incur high cost.
**Lemma 1**.: _For a graph \(G\in\mathcal{P}(n,5)\), let \(T\) be a HC with balanced cut \((A,B)\), and suppose that \((A,B)\) misses at least \(\alpha\frac{n}{5}\) of the cycles in \(G\), for \(0<\alpha\leq 1\). Then,_
\[\omega_{G}(T)\geq\frac{4\alpha}{15}n^{2}.\]
_Proof:_ From the given information, we have that \(w(A,B)\geq 2\alpha\frac{n}{5}\), as a missed cycle implies at least two edges are cut. Thus,
\[\omega_{G}(T) \geq\sum_{u\in A,v\in B}w(u,v)|\text{leaves}(T[u\wedge v])|\] \[\geq\tfrac{2n}{3}w(A,B)\geq\tfrac{4\alpha}{15}n^{2}.\qed\]
We generate graphs from \(\mathcal{P}(n,5)\) at random, showing that the probability that there exists a balanced cut \((A,B)\) which misses few cycles in both \(G_{1}\) and \(G_{2}\) is exponentially small. This allows us to generate a large family of graphs such that no balanced cut misses few cycles in more than one graph, and results in the following lemma. In what follows, let \(\mathcal{B}(G,r)=\{T\in\mathcal{T}_{n}:\omega_{G}(T)<r\}\).
**Lemma 2**.: _For \(n\) sufficiently large, there exists a family \(\mathcal{F}\subseteq\mathcal{P}(n,5)\) of size \(2^{0.2n}\) such that \(\mathcal{B}(G,r)\cap\mathcal{B}(G^{\prime},r)=\emptyset\) for any \(G,G^{\prime}\in\mathcal{F}\) with \(r=\frac{n^{2}}{400}\)._
The proof of this lemma appears in Appendix B. Thus, no tree can cluster more than one of our random graphs well, and we can apply the packing argument to obtain Theorem 1. We prove it as follows.
_Proof of Theorem 1:_ Let \(\mathcal{F}\) be the set of graphs guaranteed by Lemma 2. We have \(|\mathcal{F}|=2^{0.2n}\). Let \(\mathcal{F}_{W}\) contain the same graphs as \(\mathcal{F}\), but with each edge weighted by a positive integer \(W\) satisfying \(0.02\leq\epsilon W<0.07\). Any two \(G,G^{\prime}\in\mathcal{F}_{W}\) differ in total weight at most \(2nW\), and applying group privacy, an algorithm \(A\) which satisfies \(\epsilon\)-DP satisfies \(2nW\epsilon\)-DP on the graphs in \(\mathcal{F}_{W}\).
Now, suppose \(A\) satisfies \(\mathbb{E}[\omega_{G}(A(G))]<\frac{W}{800}n^{2}\) for every \(G\in\mathcal{F}_{W}\). By Markov's inequality, this implies \(\Pr[A(G)\in\mathcal{B}(G,\frac{W}{400}n^{2})]\geq\frac{1}{2}\) for all \(G\in\mathcal{F}_{W}\). However, we know these balls are disjoint because of the disjointness property of \(\mathcal{F}\). Furthermore, we have that \(\Pr[A(G)\in\mathcal{B}(G^{\prime},\frac{W}{400}n^{2})]\geq e^{-2nW\epsilon} \frac{1}{2}>2^{-0.2n}\) for all \(G^{\prime}\in\mathcal{F}_{W}\). Hence
\[1 \geq\sum_{G^{\prime}\in\mathcal{F}_{W}}\Pr[A(G)\in\mathcal{B}(G^{ \prime},\frac{W}{400}n^{2})]\] \[>2^{0.2n}2^{-0.2n}=1.\]
This is a contradiction, and thus the algorithm \(A\) must have error higher than \(\frac{W}{800}n^{2}\geq\Omega(\frac{n^{2}}{\epsilon})\) on some graph.
## 5 Algorithms for Private Hierarchical Clustering
In this section, we design private algorithms for hierarchical clustering which work on any input graph. In Section 5.1, we propose a polynomial time \((\alpha,O(\frac{n^{2.5}}{\epsilon}))\)-approximation algorithm, where \(\alpha\) is the best approximation ratio of a black-box, _non-private_ hierarchical clustering algorithm. Then, in Section 5.2, we show that the exponential mechanism is a \((1,O(\frac{n^{2}\log n}{\epsilon}))\)-approximation algorithm, implying our lower bound is tight up to a logarithmic factor. The proofs of the results in this section appear in Appendix C.2
### Polynomial-Time Algorithm
Our algorithm makes use of a recent result which releases a sanitized, synthetic graph \(G^{\prime}\) that approximates the cuts in the private graph \(G\) (Elias et al., 2020; Arora and Upadhyay, 2019). Via post-processing, it is then possible to run a non-private, black-box clustering algorithm. We are able to relate the cost in \(G^{\prime}\) to that of \(G\) by reducing the cost \(\omega_{G}(T)\) to a sum of cuts. We start by defining the notion of \(G^{\prime}\) approximating the cuts in \(G\).
**Definition 4**.: _For a given graph \(G=(V,E,w)\), we say \(G^{\prime}=(V,E^{\prime},w^{\prime})\) is an \((\alpha_{n},\beta_{n})\)-approximation to cut queries in \(G\) if for all \(S\subseteq V\), we have_
\[(1-\alpha_{n})w(S,\overline{S})-\beta_{n}\min\{|S|,n-|S|\}\\ \leq w^{\prime}(S,\overline{S})\leq(1+\alpha_{n})w(S,\overline{S })+\beta_{n}\min\{|S|,n-|S|\}.\]
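Definition 4 can be sanity-checked empirically by sampling random cuts, as in the sketch below (assuming dense, symmetric numpy weight matrices); note that sampling can refute but never certify the guarantee, since the definition ranges over all \(2^{|V|}\) subsets.

```python
# A sketch of empirically checking Definition 4 on random cuts, assuming
# dense symmetric numpy weight matrices w (private) and w_prime (sanitized).
import numpy as np

def cut_weight(w, mask):
    # Total weight of edges crossing the cut (S, V \ S).
    return w[mask][:, ~mask].sum()

def check_cut_approx(w, w_prime, alpha, beta, trials=1000, seed=0):
    rng = np.random.default_rng(seed)
    n = w.shape[0]
    for _ in range(trials):
        mask = rng.random(n) < 0.5          # random subset S
        if mask.all() or (~mask).all():
            continue
        s = min(mask.sum(), n - mask.sum())  # min{|S|, n - |S|}
        c, c_prime = cut_weight(w, mask), cut_weight(w_prime, mask)
        if not ((1 - alpha) * c - beta * s
                <= c_prime <= (1 + alpha) * c + beta * s):
            return False  # a sampled cut violates the approximation
    return True
```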
As alluded to earlier, prior work shows that it is possible to release an \((\tilde{O}(\frac{1}{\epsilon\sqrt{n}}),\tilde{O}(\frac{\sqrt{n}}{\epsilon}))\)-approximation to cut queries while satisfying differential privacy. Using this result, we are able to run any black-box hierarchical clustering algorithm, and by the post-processing property of DP, the final clustering \(T^{\prime}\) still satisfies privacy. Even though \(T^{\prime}\) is computed by viewing only \(G^{\prime}\), we are able to relate \(\omega_{G}(T^{\prime})\) to \(\omega_{G}^{*}\) using the fact that \(G^{\prime}\) approximates the cuts in \(G\), together with a decomposition of \(\omega_{G^{\prime}}(T^{\prime})\) into a sum of cuts. This idea recently appeared in Agarwal et al. [2022], and is a critical component of our theorem. In the end, we obtain the following:
**Theorem 2**.: _Given an \((a_{n},0)\)-approximation to the cost objective of hierarchical clustering, there exists an \((\epsilon,\delta)\)-DP algorithm which, with probability at least \(0.8\), is a \(\left((1+o(1))a_{n},\;O\!\left(n^{2.5}\,\frac{\log^{2}n\,\log^{2}\frac{1}{\epsilon}}{\epsilon}\right)\right)\)-approximation algorithm to the cost objective._
Plugging in the state-of-the-art \(O(\sqrt{\log n})\)-approximation hierarchical clustering algorithm of Charikar and Chatziafratis [2017], we obtain a \(((1+o(1))\sqrt{\log n},\tilde{O}(\frac{n^{2.5}}{\epsilon}))\)-approximation. In a graph with total edge weight \(W\), we have \(W\leq\omega_{G}(T)\leq nW\), and thus the approximation is meaningful when \(W>\frac{n^{1.5}}{\epsilon}\); that is, the graph can have an average degree as low as \(\frac{\sqrt{n}}{\epsilon}\).
### Exponential Mechanism
We consider an algorithm based on the well-known exponential mechanism (McSherry and Talwar, 2007). This algorithm takes exponential time, but achieves a stronger guarantee that is nearly tight with our lower bound, showing that the lower bound cannot be improved significantly from an information-theoretic point of view.
The exponential mechanism \(M:\mathcal{X}\rightarrow\mathcal{Y}\) releases an element from \(\mathcal{Y}\) with probability proportional to
\[\Pr[M(X)=Y]\propto e^{\epsilon u_{X}(Y)/(2S)},\]
where \(u_{X}(Y)\) is a utility function and \(S=\max_{X\sim X^{\prime},Y}|u_{X}(Y)-u_{X^{\prime}}(Y)|\), with the maximum taken over adjacent inputs \(X,X^{\prime}\), is the sensitivity of the utility function. This ubiquitous mechanism satisfies \((\epsilon,0)\)-DP.
In our setting, we use the utility function \(u_{G}(T)=-\omega_{G}(T)\). The sensitivity is bounded in the following fact.
**Fact 1**.: _For two adjacent input graphs \(G=(V,E,w)\) and \(G^{\prime}=(V,E,w^{\prime})\), we have for all trees \(T\) that \(|\omega_{G}(T)-\omega_{G^{\prime}}(T)|\leq n\)._
_Proof:_ We can write the difference as
\[|\omega_{G}(T)-\omega_{G^{\prime}}(T)| =\left|\sum_{(u,v)\in V^{2}}(w(u,v)-w^{\prime}(u,v))|\text{leaves}(T[u\wedge v])|\right|\] \[\leq\sum_{(u,v)\in V^{2}}|w(u,v)-w^{\prime}(u,v)|\cdot|\text{leaves}(T[u\wedge v])|\] \[\leq n\sum_{(u,v)\in V^{2}}|w(u,v)-w^{\prime}(u,v)|\leq n.\qed\]
Having controlled the sensitivity, we can apply utility results for the exponential mechanism.
**Lemma 3**.: _There exists an \((\epsilon,0)\)-DP, \((1,O(\frac{n^{2}\log n}{\epsilon}))\)-approximation algorithm for hierarchical clustering._
Thus, the exponential mechanism improves on the cost, and shows that private hierarchical clustering can be done on graphs with average degree \(O(\frac{n}{\epsilon})\).
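For intuition, the following sketch instantiates the mechanism of this section over an explicitly enumerated candidate set of trees, reusing `dasgupta_cost` from the earlier sketch; the actual mechanism ranges over all trees on \(n\) leaves, which is exactly what makes it exponential time.

```python
# A sketch of the exponential mechanism of Section 5.2 over an explicit
# candidate set of trees. Utility is -omega_G(T) and the sensitivity is n
# (Fact 1); dasgupta_cost() is the sketch given after Eq. (1).
import numpy as np

def exponential_mechanism_hc(candidate_trees, w, n, eps, seed=0):
    rng = np.random.default_rng(seed)
    utilities = np.array([-dasgupta_cost(t, w) for t in candidate_trees])
    logits = eps * utilities / (2 * n)   # exp(eps * u / (2 S)) with S = n
    # Subtract the max before exponentiating for numerical stability.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return candidate_trees[rng.choice(len(candidate_trees), p=probs)]
```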
## 6 Private Hierarchical Clustering in the Stochastic Block Model
In this section, we propose a hierarchical clustering algorithm designed for input graphs generated from the hierarchical stochastic block model (HSBM), a graph model with planted communities arranged in a hierarchical structure. We define this model in Section 6.1. Next, in Section 6.2, we outline \(\mathsf{DPClusterHSBM}\), a lightweight private hierarchical clustering algorithm in the HSBM, which takes the blocks as input. This black-box design enables any DP community (block) detection algorithm to be used as a sub-routine. Finally, in Section 6.3, we propose a practical, private community detection algorithm which is the first to work in the general HSBM.
### Hierarchical Stochastic Block Model of Graphs
In this section, we consider unweighted graphs \((V,E)\) where each edge has weight \(1\). Observe that differential privacy (Definition 1) then corresponds to adding or removing an edge from \(G\). In the HSBM [Cohen-Addad et al., 2017], there is a partition of \(V\) into blocks \(B_{1},B_{2},\ldots,B_{k}\) such that two items in the same block have the same set of edge probabilities, items in different blocks are less likely to be connected, and these probabilities follow a hierarchical structure.
The edge probabilities are specified by a tree \(P\) with leaves \(B=B_{1},\ldots,B_{k}\), internal nodes \(N\), and a function \(f:N\cup B\rightarrow[0,1]\). To capture the decreasing probability of edges, \(f\) must satisfy \(f(n_{1})<f(n_{2})\) whenever \(n_{1}\) is an ancestor of \(n_{2}\) in \(P\). Formally, we have [Cohen-Addad et al., 2017]
**Definition 5**.: _Let \(B=B_{1},\ldots,B_{k}\); \(P\) be a tree with leaves in \(B\) and internal nodes \(N\); and \(f:N\cup B\rightarrow[0,1]\) be a function satisfying that \(f(n_{1})<f(n_{2})\) whenever \(n_{1}\) is an ancestor of \(n_{2}\) in \(P\). We refer to the triplet \((B,P,f)\) as a ground-truth tree. Then, \(\operatorname{HSBM}(B,P,f)\) is a distribution over graphs \(G\) whose edges are drawn independently, such that for \(u,v\in V\), we have_
\[\Pr[(u,v)\in G]=f(LCA_{P}(B_{u},B_{v})),\]
_where \(LCA_{P}\) denotes the least common ancestor of the blocks \(B_{u},B_{v}\) containing \(u,v\) in \(P\)._
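Sampling from this model is straightforward; the sketch below covers the special flat case used later in Corollary 1 (probability \(p\) within a block and \(q\) across blocks), with names that are illustrative. The general HSBM would replace the \(p/q\) choice with \(f\) evaluated at the blocks' least common ancestor in \(P\).

```python
# A sketch of sampling an adjacency matrix from the flat special case of
# HSBM(B, P, f): probability p within a block, q across blocks.
import numpy as np

def sample_flat_hsbm(block_sizes, p, q, seed=0):
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(len(block_sizes)), block_sizes)
    same = labels[:, None] == labels[None, :]
    prob = np.where(same, p, q)
    # Draw each undirected edge once, then symmetrize; diagonal stays zero.
    upper = np.triu(rng.random(prob.shape) < prob, k=1)
    return (upper | upper.T).astype(int), labels

adj, labels = sample_flat_hsbm([50, 30, 20], p=0.6, q=0.1)
```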
Due to the randomness of the graph \(G\), it would be unreasonable to expect to be able to recover the exact \((B,P,f)\) from \(G\). Our algorithms will recover an approximate ground-truth tree, according to the following definition:
**Definition 6**.: _(From Cohen-Addad et al. [2017]): Let \((B,P,f)\) be a ground-truth tree, and let \((B,T,f^{\prime})\) be another ground-truth tree with the same set of blocks. We say \((B,T,f^{\prime})\) is a \(\gamma\) approximate ground-truth tree if for all \(u,v\in B\), \(\gamma^{-1}f(LCA_{P}(u,v))\leq f^{\prime}(LCA_{T}(u,v))\leq\gamma f(LCA_{P}(u,v))\)._
For \(\gamma\approx 1\), an approximate ground-truth tree means that \(\text{HSBM}(B,P,f)\) and \(\text{HSBM}(B,T,f^{\prime})\) are essentially the same distribution.
### Producing a DP HC given the communities
Given the blocks of an HSBM, we now propose \(\mathsf{DPClusterHSBM}\), a lightweight, private algorithm for returning a \(1+o(1)\)-approximation to the Dasgupta cost. Our algorithm uses some ideas from the non-private algorithm proposed in Cohen-Addad et al. (2017, 2019).
\(\mathsf{DPClusterHSBM}\) takes in \(G\) generated from \(\text{HSBM}(B,P,f)\), as well as the blocks \(B\). To produce an approximate ground-truth tree, it considers similarities \(sim(B_{i},B_{j})=\frac{w_{G}(B_{i},B_{j})}{|B_{i}||B_{j}|}\) for every pair of blocks. It then performs a process similar to single linkage: until all blocks are merged, it greedily merges the two groups with the highest similarity, and considers the similarity between this new group and any other group to be the maximum similarity of any pair of blocks between the groups. Privacy comes from the addition of Laplace noise in the similarity calculation, which is the only place in which the private graph \(G\) is used. \(\mathsf{DPClusterHSBM}\) appears as Algorithm 1.
\(\mathsf{DPClusterHSBM}\) accesses the graph via the initial similarities \(sim(B_{i},B_{j})\). By observing the sensitivity \(\max_{B_{i},B_{j}}|w_{G^{\prime}}(B_{i},B_{j})-w_{G}(B_{i},B_{j})|\) is at most \(1\), we are able to prove its privacy. We also use the fact that adding an edge can only affect \(sim(B_{i},B_{j})\) for just one choice of \(B_{i},B_{j}\).
**Theorem 3**.: \(\mathsf{DPClusterHSBM}\) _satisfies \(\epsilon\)-edge DP in the parameter \(G\)._
Proof.: Observe the algorithm can be viewed as a post-processing of the set \(\mathcal{B}=\{sim(B_{i},B_{j})+\mathcal{L}_{ij}:i,j\in[k]\}\) where \(\mathcal{L}_{ij}\sim Lap(\frac{1}{\epsilon})\) i.i.d. Suppose an edge is added between \(B_{i},B_{j}\). Then, \(sim(B_{i},B_{j})+\mathcal{L}_{ij}\) is protected by \(\epsilon\)-edge DP by the Laplace mechanism, observing that the sensitivity of \(w_{G}(B_{i},B_{j})\) is \(1\). The other quantities in \(\mathcal{B}\) are unchanged in distribution, so \(\mathcal{B}\) itself satisfies \(\epsilon\)-edge DP.
We stress that, crucially, Algorithm 1 and all our algorithms are DP for any input graph \(G\), even if the graphs do not come from the HSBM model. We will use the input distribution assumptions only in the utility proofs.
We are also able to show a utility guarantee that \(\mathsf{DPClusterHSBM}\) is a \((1+o(1),0)\)-approximation to the cost objective. In order to prove this, we need to assume that the blocks in the HSBM are sufficiently large (at least \(n^{2/3}\)) and that the edge probabilities are at least \(\frac{\log n}{\sqrt{n}}\). These assumptions are necessary to ensure concentration of the graph cuts between blocks, so that an accurate approximate tree may be formed. We also require that \(\epsilon\geq\frac{1}{\sqrt{n}}\); this is an extremely light assumption, and it still permits us to use a small, constant value of \(\epsilon\) to guarantee strong privacy. Formally,
**Theorem 4**.: _For \(\epsilon\geq\frac{1}{\sqrt{n}}\) and a graph \(G\) drawn from \(\text{HSBM}(B,P,f)\) such that \(|B_{i}|\geq n^{2/3}\) and \(f\geq\frac{\log n}{\sqrt{n}}\), with probability \(1-\frac{2}{n}\), the tree \(T\) outputted by \(\mathsf{DPClusterHSBM}\) satisfies \(\omega_{G}(T)\leq(1+o(1))\,\omega_{G}^{*}\)._
In fact, we show a stronger result that the tuple \((B,T,f^{\prime})\) returned by \(\mathsf{DPClusterHSBM}\) is a \(1+o(1)\)-approximate ground-truth tree for \(\text{HSBM}(B,P,f)\). By a result from Cohen-Addad et al. (2019), this implies it achieves the approximation guarantee. We defer the proof to Appendix D.1.
### DP Community Detection in the HSBM
We now develop a DP method of identifying the blocks \(B\) of a graph drawn from the HSBM. Combined with our clustering algorithm \(\mathsf{DPClusterHSBM}\), this forms an end-to-end algorithm for hierarchical clustering in the HSBM in which the communities are not known.
In order to describe our algorithm, \(\mathsf{DPCommunity}\), we introduce some notation. For a model \(\text{HSBM}(B,P,f)\), we associate an \(n\times n\) expectation matrix \(A\) whose entry \((i,j)\) is the probability that edge \((i,j)\) appears in \(G\). We then let \(\hat{A}\) be a randomized rounding of \(A\) to \(\{0,1\}\), which is simply the adjacency matrix of \(G\). \(\mathsf{DPCommunity}\) recovers communities when they are separated in the sense defined by
\[\Delta=\min_{u\in B_{i},v\in B_{j};i\neq j}\|A_{u}-A_{v}\|_{2},\]
where \(A_{u}\) is the \(u\)th column of \(A\). Next, we let \(\sigma_{1}(A),\ldots,\sigma_{n}(A)\) denote the singular values of \(A\) in order of decreasing magnitude. Finally, we let \(\Pi_{A}^{(k)}\) denote the projection onto the span of the top \(k\) left singular vectors of \(A\); formally, if the columns of \(U_{k}\) are the top \(k\) left singular vectors of \(A\), then \(\Pi_{A}^{(k)}=U_{k}U_{k}^{T}\).
\(\mathsf{DPCommunity}\) is given the adjacency matrix \(\hat{A}\) of a graph drawn from \(\operatorname{HSBM}(B,P,f)\), as well as \(k\), the number of blocks. In practice, \(k\) may be treated as a hyperparameter to be optimized. \(\mathsf{DPCommunity}\) uses the spectral method (McSherry, 2001; Vu, 2014) to cluster the columns of \(\hat{A}\). These results show that the columns of \(F=\Pi_{\hat{A}}^{(k)}(\hat{A})\) form a clustering of the points into their original blocks. To make this private, we use stability results of the SVD to compute (an upper bound on) the sensitivity \(\Gamma\) of \(F\), and add noise \(N\) via the Gaussian mechanism. Since \(N,F\) are both \(n\times n\) matrices, the \(l_{2}\) error introduced by \(N\) grows with \(\sqrt{n}\), which is large. Our final observation is that, since only the distances in \(F\) matter, we may project \(F\) to \(\log(n)\)-dimensional space using Johnson-Lindenstrauss (Johnson and Lindenstrauss, 1984), and then add Gaussian noise whose error grows with \(\sqrt{\log n}\). \(\mathsf{DPCommunity}\) is shown in Algorithm 2.
```
Input: \(G=(V,E)\) drawn from the HSBM; blocks \(B_{1},\ldots,B_{k}\) partitioning \(V\).
Output: Tree \(T\).
for \(i=1\) to \(k\) do
    \(T_{i}\leftarrow\) a random HC with leaves \(B_{i}\)
end for
\(sim(B_{i},B_{j})\leftarrow\frac{w_{G}(B_{i},B_{j})+\mathcal{L}_{ij}}{|B_{i}||B_{j}|}\), where \(\mathcal{L}_{ij}\sim Lap(\frac{1}{\epsilon})\)
\(\mathcal{C}=\{B_{1},\ldots,B_{k}\}\)
\(T=forest(T_{1},\ldots,T_{k})\)
while \(|\mathcal{C}|>1\) do
    \(A_{1},A_{2}=\operatorname*{arg\,max}_{A_{1},A_{2}\in\mathcal{C}}sim(A_{1},A_{2})\)
    Merge \(A_{1},A_{2}\) in \(T\); \(C=A_{1}\cup A_{2}\)
    \(f^{\prime}(C)=sim(A_{1},A_{2})\)
    \(\mathcal{C}=(\mathcal{C}\setminus\{A_{1},A_{2}\})\cup\{C\}\)
    for \(S\in\mathcal{C}\setminus\{C\}\) do
        \(sim(S,C)\leftarrow\max_{B_{i}\in S,B_{j}\in C}sim(B_{i},B_{j})\)
    end for
end while
Return: \((B,T,f^{\prime})\).
```
**Algorithm 1**: \(\mathsf{DPClusterHSBM}\), a hierarchical clustering algorithm in the HSBM.
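The following is a direct Python transcription of Algorithm 1, as a sketch: the noised block similarities are the only access to the private graph, and everything afterwards is post-processing. For brevity, the leaves of the returned tree are block indices (each would be expanded by a random hierarchy over that block's vertices in the full algorithm), and the values \(f^{\prime}\) are omitted.

```python
# A sketch of Algorithm 1 (DPClusterHSBM). Returns a nested tuple whose
# leaves are block indices; names are illustrative.
import itertools
import numpy as np

def dp_cluster_hsbm(adj, blocks, eps, seed=0):
    """adj: 0/1 adjacency matrix; blocks: list of vertex-index arrays."""
    rng = np.random.default_rng(seed)
    k = len(blocks)
    # Noised pairwise block similarities: the only access to the graph.
    sim = {}
    for i, j in itertools.combinations(range(k), 2):
        w_ij = adj[np.ix_(blocks[i], blocks[j])].sum()
        sim[(i, j)] = (w_ij + rng.laplace(scale=1.0 / eps)) / (
            len(blocks[i]) * len(blocks[j]))
    # Greedy merging; group similarity is the max over cross block pairs.
    groups = [frozenset([i]) for i in range(k)]
    trees = {g: i for i, g in enumerate(groups)}
    def group_sim(a, b):
        return max(sim[tuple(sorted((i, j)))] for i in a for j in b)
    while len(groups) > 1:
        a, b = max(itertools.combinations(groups, 2),
                   key=lambda pair: group_sim(*pair))
        merged = a | b
        trees[merged] = (trees[a], trees[b])
        groups = [g for g in groups if g not in (a, b)] + [merged]
    return trees[groups[0]]
```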
There are two important remarks about \(\mathsf{DPCommunity}\). First, to ensure an accurate, private upper bound on \(\Gamma\), we need the mild assumption that the spectral gap \(\sigma_{k}(\hat{A})-\sigma_{k+1}(\hat{A})\) is not too small; if it is, the algorithm returns \(\bot\). For most choices of parameters in the SBM, the spectral gap is always much larger than needed; the check only ensures privacy even for input graphs not drawn from the SBM. Second, for ease of theoretical analysis, \(\hat{A}\) is split into two parts, and one part is projected onto the top \(k\) singular vectors of the other. This removes probabilistic dependence between variables, but the high-level ideas are the same.
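The computational core described above can be sketched as follows; all names and the free `noise_std` parameter are illustrative, and this is not Algorithm 2 itself: the actual algorithm additionally splits \(\hat{A}\) into two parts, privately checks the spectral gap, and calibrates the Gaussian noise to the sensitivity bound \(\Gamma\).

```python
# A simplified sketch of the spectral steps behind DPCommunity: rank-k
# projection, Johnson-Lindenstrauss reduction, and Gaussian noise.
import numpy as np

def spectral_embed_with_noise(adj, k, noise_std, seed=0):
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    # Rank-k spectral step: project columns onto the top-k left singular
    # vectors, i.e. F = Pi_A^(k)(A-hat).
    u, _, _ = np.linalg.svd(adj.astype(float))
    f = (u[:, :k] @ u[:, :k].T) @ adj
    # JL step: reduce rows to O(log n) dims so the Gaussian noise needed
    # for privacy grows with sqrt(log n) rather than sqrt(n).
    d = max(1, int(np.ceil(np.log2(n))))
    proj = rng.normal(size=(d, n)) / np.sqrt(d)
    f_low = proj @ f
    # Gaussian mechanism; in the real algorithm noise_std is calibrated to
    # a privately computed sensitivity bound Gamma.
    noisy = f_low + rng.normal(scale=noise_std, size=f_low.shape)
    return noisy.T  # one d-dimensional embedding row per vertex
```

The rows of the returned embedding can then be clustered (e.g., with any flat clustering method) into the \(k\) communities.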
We now analyze privacy and utility. Full proofs of the results in this section appear in Appendix 6. Our privacy analysis involves analyzing the release of the singular values \(\sigma_{1},\sigma_{k},\sigma_{k+1}\), and \(\tilde{F}\). The bulk of this analysis comes from analyzing the sensitivity of \(\tilde{F}\), which uses the accuracy of the Johnson-Lindenstrauss transform and spectral perturbation bounds.
**Theorem 5**.: _(Privacy): For \(\epsilon<1\), Algorithm 2 satisfies \((\epsilon,\delta)\)-DP with respect to a change of one edge in \(\hat{A}\)._
To prove the utility of \(\mathsf{DPCommunity}\), we prove that recovery is possible provided that \(\Delta\) is larger than some threshold depending on \(\epsilon\), the singular values of \(A\), the minimum edge probability, and the minimum block size, along with other mild assumptions on \(k\) and the block sizes. These assumptions are necessary, as there will be too little data for concentration otherwise. Formally,
**Theorem 6**.: _(Utility): Let \(\hat{A}\) be drawn from \(\operatorname{HSBM}(B,P,f)\), \(\tau=\max f(x)\), and \(s=\min_{i=1}^{k}|B_{i}|\). There is a universal constant \(C\) such that if \(\tau\geq C\frac{\log n}{n}\), \(s\geq C\sqrt{n\log n}\), \(k<n^{1/4}\), \(\delta<\frac{1}{n}\), \(\sigma_{k}(A)\geq C\max\{\sqrt{n\tau},\frac{1}{\epsilon}\ln\frac{4}{\delta}\}\), and_
\[\Delta>C\max\left\{\frac{k(\ln\frac{1}{\epsilon})^{3/2}}{\epsilon}\frac{ \sigma_{1}(A)}{\sigma_{k}(A)},\sqrt{\frac{n\tau}{s}}+\sqrt{k\tau\log n}+\frac {\sqrt{nk\tau}}{\sigma_{k}}\right\},\]
_then with probability at least \(1-3n^{-1}\), DPCommunity returns a set of points \(\tilde{F}=\{f_{i}:i\in Z_{2}\}\) such that_
\[\|f_{i}-f_{j}\|_{2} \leq\tfrac{2\Delta}{5} \text{if }\exists u.\ i,j\in B_{u}\] \[\|f_{i}-f_{j}\|_{2} \geq\tfrac{4\Delta}{5} \text{otherwise}.\]
Thus, if the assumptions are met, then \(\tilde{F}\) consists of \(k\) well-separated clusters which indicate the communities of each point in the sampled set \(Z_{2}\subset V\). In order to cluster all of \(V\), we can simply divide the privacy budget into \(\log n\) parts, run \(\mathsf{DPCommunity}\) \(\log n\) times, and merge the clusters.
To illustrate our theorem in a simple example, consider the HSBM with \(k\) equal-sized blocks, and let \(f(x)=p\) when \(x\) is the parent of a leaf in \(P\), and \(f(x)=q\) otherwise, with \(p\geq q\). This corresponds to probability \(p\) of an edge within a block and probability \(q\) of an edge between any two blocks. In this case, we obtain the following.
**Corollary 1**.: _In the above HSBM, DPCommunity recovers the exact communities when \(\delta\leq\frac{1}{n}\), \(k<n^{1/4}\), and \(\sqrt{p}-\sqrt{q}\geq\Omega(\frac{k\ln\frac{1}{\delta}}{\sqrt{\epsilon n^{1/4} }})\)._
Compared to previous work on the SBM with privacy, our algorithm requires a stronger assumption on \(\sqrt{p}-\sqrt{q}\) (Seif et al. (2022); Chen et al. (2023) require \(\sqrt{p}-\sqrt{q}\geq\sqrt{\frac{k}{\epsilon n}}\)). However, previous work either uses semi-definite programming or does not run in polynomial time, whereas \(\mathsf{DPCommunity}\) is a practical application of the significantly more efficient Singular Value Decomposition. Furthermore, our algorithm works in the fully-general HSBM, whereas previous work has no analogue of Theorem 6.
## 7 Experiments
The purpose of this section is to evaluate Algorithm 1, designed for the HSBM model. First, we outline our methods, and then we discuss our results.
**Experimental Setup.** We generated synthetic graphs from the HSBM model and compared the performance of \(\mathsf{DPClusterHSBM}\) to several baseline algorithms. We ran all algorithms at \(\epsilon\in\{0.5,1.0,2.0\}\), as well as with no privacy.
**Datasets.** We generated graphs from \(\mathrm{HSBM}(B,P,f)\) with \(k\in\{4,8\}\) blocks, with block sizes chosen proportional to \(\{1,\gamma,\ldots,\gamma^{k-1}\}\), where \(\gamma^{k-1}=3\). This has the effect of creating differently-sized blocks. We selected \(P\) to be a balanced tree over the blocks, and \(f\) to increase uniformly in the interval \([0.1,0.9]\) as the tree is descended.
**Algorithms.** Our approach was to use \(\mathsf{DPCommunity}\) to identify communities, then use \(\mathsf{DPClusterHSBM}\) to produce a hierarchical clustering. We refer to this method simply as \(\mathsf{DPClusterHSBM}\) in this section. In our empirical implementation, we made some changes to \(\mathsf{DPCommunity}\) for practicality. This does not affect the privacy guarantees, but it simplifies the algorithm. In particular, we privately release \(\tilde{A}_{1}\) using the Laplace mechanism, and compute \(\Pi_{\tilde{A}_{1}}(\tilde{A}_{2})\) without the Johnson-Lindenstrauss projection. We are then able to add Gaussian noise tailored to the sensitivity of \(\Pi_{\tilde{A}_{1}}\), rather than to \(\Gamma\), which proved to be a loose upper bound in practice.
For our baselines, we considered a naive private approach in which we release \(A\) using the Laplace mechanism and truncate the values to be non-negative, forming a sanitized, weighted graph. Then, we ran single, complete, and average linkage, and recorded the best of these methods. We refer collectively to these baselines as \(\mathsf{Linkage}\). Second, we formed a tree by recursively partitioning the graph along its (approximately) sparsest cut. As shown in Charikar and Chatziafratis (2017), this is a \((O(\sqrt{\log n}),0)\)-approximation in the _sanitized_ graph. We refer to this baseline as \(\mathsf{SparseCut}\).
**Metrics.** For each graph, clustering algorithm, and value of \(\epsilon\), we computed \(\omega_{G}(T)\), averaged over 5 runs.
### Results
Our results appear in Figure 1. In addition to the cost for each algorithm, we included the cost of a random tree. The data had low variance: for each of the 5 runs used to compute each bar, the values were within 0.5% of each other.
The cost of \(\mathsf{Linkage}\) was much higher than that of the other two algorithms; even with \(\epsilon=2\), \(\mathsf{Linkage}\) did not offer more than a 10% reduction in cost over the random tree. Thus, the rest of our discussion focuses on \(\mathsf{DPClusterHSBM}\) and \(\mathsf{SparseCut}\).
The cost of \(\mathsf{DPClusterHSBM}\) is lower than \(\mathsf{SparseCut}\), particularly when \(\epsilon=0.5\). In this case, when \(k=4\) (resp. 8), \(\mathsf{DPClusterHSBM}\) offered a 14.4% (resp. 14.2%) reduction in cost over the random tree, whereas \(\mathsf{SparseCut}\) offered an 11.5% (resp. 10.3%) reduction. Thus, \(\mathsf{DPClusterHSBM}\) offers up to 38% more reduction in cost than \(\mathsf{SparseCut}\), over the cost of a random tree. Even when \(\epsilon=0.5\), the cost of \(\mathsf{DPClusterHSBM}\) is just 5.8% (resp. 9.6%) higher than the cost of the best tree with no privacy.
For \(\epsilon=1,2\), the costs of \(\mathsf{SparseCut}\) and \(\mathsf{DPClusterHSBM}\) fall to within 1% of each other, though \(\mathsf{DPClusterHSBM}\) consistently outperforms \(\mathsf{SparseCut}\) for all values of \(\epsilon\). Moreover, notice that for \(\epsilon=2\), the costs of both algorithms are within 1% of the non-private tree, indicating that for higher \(\epsilon\) the cost of privacy becomes negligible.
## 8 Conclusion
We have considered hierarchical clustering under differential privacy in Dasgupta's cost framework. While strong lower bounds exist for the problem, we have proposed algorithms with nearly matching approximation guarantees. Furthermore, we showed that the lower bounds can be overcome in the SBM, and that nearly optimal trees can be found in this setting using efficient methods. For future work, one could consider private hierarchical clustering in a less structured model than the HSBM, in hopes of overcoming the lower bound there as well.
Figure 1: Cost for HSBM graphs with 2048 nodes and \(k\) clusters. |
2309.14330 | Noise-in, Bias-out: Balanced and Real-time MoCap Solving | Real-time optical Motion Capture (MoCap) systems have not benefited from the advances in modern data-driven modeling. In this work we apply machine learning to solve noisy unstructured marker estimates in real-time and deliver robust marker-based MoCap even when using sparse affordable sensors. To achieve this we focus on a number of challenges related to model training, namely the sourcing of training data and their long-tailed distribution. Leveraging representation learning we design a technique for imbalanced regression that requires no additional data or labels and improves the performance of our model in rare and challenging poses. By relying on a unified representation, we show that training such a model is not bound to high-end MoCap training data acquisition, and exploit the advances in marker-less MoCap to acquire the necessary data. Finally, we take a step towards richer and affordable MoCap by adapting a body model-based inverse kinematics solution to account for measurement and inference uncertainty, further improving performance and robustness. Project page: https://moverseai.github.io/noise-tail | Georgios Albanis, Nikolaos Zioulis, Spyridon Thermos, Anargyros Chatzitofis, Kostas Kolomvatsos | 2023-09-25T17:55:24Z | http://arxiv.org/abs/2309.14330v1
###### Abstract
Real-time optical Motion Capture (MoCap) systems have not benefited from the advances in modern data-driven modeling. In this work we apply machine learning to solve noisy unstructured marker estimates in real-time and deliver robust marker-based MoCap even when using sparse affordable sensors. To achieve this we focus on a number of challenges related to model training, namely the sourcing of training data and their long-tailed distribution. Leveraging representation learning we design a technique for imbalanced regression that requires no additional data or labels and improves the performance of our model in rare and challenging poses. By relying on a unified representation, we show that training such a model is not bound to high-end MoCap training data acquisition, and exploit the advances in marker-less MoCap to acquire the necessary data. Finally, we take a step towards richer and affordable MoCap by adapting a body model-based inverse kinematics solution to account for measurement and inference uncertainty, further improving performance and robustness. Project page: moverseai.github.io/noise-tail.
## 1 Introduction
Human Motion Capture (MoCap) technology has benefited from the last decade's data-driven breakthroughs mostly due to significant research on human-centric visual understanding that focuses on unencumbered capture using raw color inputs. The gold standard of MoCap technology - referred to as "optical" - still uses markers attached to the body, often through suits, for robust and accurate captures, and has received little attention in the literature. These scarce works [25, 21, 20, 14, 29, 13] mainly focus on processing (raw) archival MoCap data for direct marker labeling [21, 20] or labeling through regression [25], and solving the skeleton's joints [14, 13] or transforms [29], with [13] also addressing the case of commodity sensor captures and the noise levels associated with it.
As even high-end systems produce output with varying noise levels, be it information-related (swaps, occlusions, and ghosting) or measurement-related (jitter, positional shifts), these works exploit the plain nature of the raw marker representation to add synthetic noise during training. Still, for data-driven systems, the variability of marker placements poses another challenge that needs to be addressed. Some works [13, 29] address this implicitly,
relying on the learning process, while others [14] address this quasi-explicitly, considering the layouts as input to the model. Another way to overcome this involves fitting the raw data to a parametric model after manually [25, 44, 49] or automatically [21, 20] labeling and/or annotating correspondences, standardizing the underlying representation.
In this work, we explore the next logical step stemming from prior work, bridging standardized representations and consumer-grade sensing, and delivering real-time data-driven MoCap that is robust to tracking errors. Most works [20, 14, 21, 29, 13] leverage high-end MoCap to acquire training data, a process that is expensive, laborious, and difficult to scale; the exception is [25], which used data acquired with low-cost sensors but nonetheless applied the model to a high-end capturing system.
Instead, by relying on a standardized representation using a parametric human body model, we benefit from modern markerless capture technology, greatly increasing data acquisition rates at a fraction of the costs and labor. Still, there are certain challenges that need to be addressed, such as the distribution of MoCap data and the input optical sensing noise.
The nature of human motion, albeit high-dimensional, instills a significant level of data redundancy in MoCap datasets. Indeed, standing still or walking poses dominate most captures and affect the training data distribution in two ways: first, by introducing bias in the learning process, and second, by further skewing the long-tailed distribution. The latter is an important problem [67] that data-driven methods need to overcome, as rare poses exist not only due to their reduced appearance frequency, but also due to biomechanical limitations of the captured subjects in fast movements, body balancing, and striking challenging poses. Prior work crucially neglects this, resorting to uniform temporal downsampling, which only reduces the number of samples, not the redundancy or the long-tailed distribution.
Another typical assumption is that the raw marker data are of relatively high quality, an assumption most common in labeling works [20, 21], which solve using the raw positions. Even though synthetic noise is added during training, this is mostly to regularize training, as the noisy nature of the inputs is not taken into account post-labeling. Those works that directly infer solved estimates [13, 14, 29] rely solely on the model's capacity to simultaneously denoise the inputs and solve for the joints' positions. Nonetheless, even the models' outputs are uncertain, a situation that is increasingly magnified when the raw marker input is affected by higher noise levels, as is common when relying on consumer-grade sensors. This lack of solutions that increase noise robustness hinders the adoption of more accessible sensing options.
To that end, we present techniques to address MoCap dataset challenges as well as noisy inputs, resulting in a MoCap framework that \(\bullet\)_does not_ necessarily require data from high-end MoCap systems, \(\bullet\)_does not_ require additional data to boost long-tail performance, and \(\bullet\)_does not_ require specialised hardware. More specifically we:
* Leverage representation learning to jointly oversample and perform utility-based regression, addressing the redundancy and long-tailed MoCap data distribution.
* Introduce a noise-aware body shape and pose solver that models the measurement uncertainty region during optimization.
* Demonstrate a real-time inference capable and artifact-free MoCap solving model, running at \(60Hz\) on a system comprising just 3 consumer-grade sensors.
* Harness a human parametric representation to cold-start data-driven optical MoCap models using data through markerless acquisition methods.
## 2 Related Work
### MoCap Solving
Solving the joints' positions or transforms from marker data is a cascade of numerous (sometimes optional) steps. The markers need to be labeled, ghost markers need to be removed, occluded markers should be predicted, and then an articulated body structure needs to be fit to the observed marker data. Various works address errors at different stages of MoCap solving, with contemporary ones relying on smoothness and bone-related (angles, offsets and lengths) constraints [27, 66, 31, 6, 18, 53, 73]. Recent approaches started resorting to existing data for initialization [69] or marker cleaning [5]. MoSh [44] moved one step further and, instead of relying on plain skeletal structures, employed a parametric human body model to solve labeled marker data and estimate pose articulation and joint positions, even accounting for marker layout inconsistencies and/or soft tissue motion.
Nonetheless the advent of modern - deep - data-driven technologies have stimulated new approaches for MoCap solving. A label-via-regression approach was employed in [25] where a deep model was used to regress marker positions and then perform maximum assignment matching for labeling the input. Labeling was also formulated as permutation learning problem [21], albeit with constraints on the input, which were then relaxed in [20] by adding a ghost category. However, labeling assumes that the raw data are of a certain quality as the raw measurements are then used to solve for the joints' transforms or extra processing steps are required to denoise the input.
Consequently, end-to-end data-driven approaches that can simultaneously denoise and solve have been a parallel line of research. While end-to-end cleaning and solving is possible using solely a single feed-forward network [29], the process naturally benefits from using two cascaded
autoencoders [62], the first operating on marker data and cleaning them for the subsequent joint regressor. The staging from markers to joints was also shown to be important from a performance perspective in [13] which trained a convolutional network with coupled noisy and clean data captures to address noisy inputs. Recently, graph convolutional models were employed in [14] allowing for the explicit encoding of marker layout and skeleton hierarchy, two crucial factors of variation that were only implicitly handled in prior end-to-end solvers.
### MoCap Data
Learning to solve MoCap marker data requires supervision provided by collecting data using professional high-end MoCap systems [29, 20, 14, 13]. SOMA [20] standardized the representation using the AMASS dataset [49] which, in turn, relied on an extension of MoSh [44] to fit a parametric human body model to markers. All other works suffer from inconsistent marker layouts, a problem that was either addressed implicitly [29, 13] or quasi-explicitly [14] by using the layouts as inputs. Marker data can be (re-)synthesized in different layouts when higher-level information is available (_e.g._ marker-to-joint offsets, meshes) [29, 20]. Yet, it has also been shown that fitting a synthetic hand model to depth data acquired by consumer-grade sensors can produce usable training data [25] for deploying a model to a high-end marker capturing system for data-driven MoCap. Compared to [25], we experimentally demonstrate this feasibility and even extend it to noisy inputs at run-time, something not considered in [25] as it relied on a high-end system for live capture.
Statistical parametric models [45, 61, 58, 59, 85, 87, 4, 88] are more expressive alternatives than the skinned mesh [83] used in [25] as, apart from realistic shape variations, deformation corrective factors can also be employed. They have been used to synthesize standardized training data before [82, 28, 38] but crucially rely on preceding high-end MoCap acquisition. We also explore this path using multi-view markerless capture [33, 15, 92] to produce parametric model fits and synthesize marker positions as a solution to the cold-start problem of data-driven MoCap solving. Even though such models can be fit to marker data as done in AMASS [49] and Fit3D [19], the potential of acquiring them using less expensive capture solutions is very important, as long as it is feasible to train high-quality models.
Still, one also needs to take into account the nature of human performance data and their collection processes. As seen in AMASS [49] and Fit3D [19], both contain significant redundancies and suffer from the long-tail distribution effect. Rare poses are challenging for regression models to predict, mainly stemming from the combined effect of the selected estimators and stochastic optimization with mini-batches. Various solutions have been surfacing in the literature, some tailored to the nature of the problem [67], leveraging a prototype classifier branch to initialize the learned iterative refinement, and others adapting works from imbalanced classification to the regression domain. Traditional approaches fall into either the re-sampling or re-weighting category, with the former focusing on balancing the frequency of samples and the latter on properly adjusting the parameter optimization process. Re-sampling strategies involve common sample under-sampling [79], rare sample over-sampling by synthesizing new samples via interpolation [81], re-sampling after perturbing with noise [9], and hybrid approaches that simultaneously under- and over-sample [10]. Yet interpolating high-dimensional samples like human poses is non-trivial, as is even defining which rare samples need to be re-sampled.
Utility-based [80] - or otherwise, cost-sensitive - regression assigns different weights - or relevance - to different samples. Defining a utility function is also essential to re-sampling strategies for regression [79]. Recent approaches employ kernel density estimation [74], adapt evaluation metrics as losses [72], or resort to label/feature smoothing and binning [89]. Another family of methods that are now explored can be categorized as contrastive, with [22] regularizing training to enforce feature and output space proximity. BalancedMSE [64] is also a contrastive-like objective that employs intra-batch minimum error sample classification using a cross-entropy term that corresponds to an L2 error from a likelihood perspective. However, most approaches rely on stratified binning of the output space using distance measures that lose significance in higher dimensions. Further, binning can only be used with specific networks/architectures (proper feature representations for classifying bins or feature-based losses). It has not been shown to be applicable in high-performing dense networks relying on heatmap representations. Instead, we introduce a novel technique that can jointly over-sample and assign higher relevance to rare samples by leveraging representation learning and its synthesis and auto-encoding traits.
## 3 Approach
The MoCap representation we use is a parametric human body model \(\mathcal{B}\). Different variants exist, all data-driven, some relying on stochastic representations [87], others on explicit ones [45, 58], with a notable exception using an artist-made one [88], and all typically employing linear blend skinning [34] and pose corrective factors [45, 87] to overcome its artifacts. Generally, we consider it as a function \((\mathbf{v},\mathbf{f})=\mathcal{B}(\boldsymbol{\beta},\boldsymbol{\theta}, \mathbf{T})\), where \((\mathbf{v},\mathbf{f})\) are the vertices \(\mathbf{v}\in\mathbb{R}^{V\times 3}\) and faces \(\mathbf{f}\in\mathbb{N}^{F\times 3}\) of a triangular mesh surface that is defined by \(S\) blendshape coefficients \(\boldsymbol{\beta}\in\mathbb{R}^{S}\), articulated by \(P\) pose parameters \(\boldsymbol{\theta}\in\mathbb{SO}(3)^{P}\), and globally positioned by the transform \(\mathbf{T}\in\mathbb{SE}(3)\). Using linear functions \(r\) expressed as matrices \(\mathbf{R}\), it is possible to extract \(L\) different body landmarks \(\mathbf{\ell}:=r(\mathbf{v})=\mathbf{R}\times\mathbf{v}\), with \(\mathbf{\ell}\in\mathbb{R}^{L\times 3}\) and \(\mathbf{R}\in\mathbb{R}^{L\times V}\). This way, surface points \(\mathbf{\ell}^{v}\) can be extracted using delta (vertex picking) or barycentric (triangle interpolation) functions, and joints \(\mathbf{\ell}^{j}\) using weighted average functions. Since markers are extruded by the marker radius \(d\), they correspond to \(\mathbf{\ell}^{m}=\mathbf{\ell}^{v}+d(\mathbf{R}\times\mathbf{n})\), with \(\mathbf{n}\) being the vertices' normals.
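As a sketch, landmark and marker extraction amounts to two matrix products; the snippet below assumes numpy arrays and illustrative names, and the renormalization of the interpolated normals is a practical choice on top of the formula above, not part of it.

```python
# A sketch of the landmark/marker extraction above, assuming v (V x 3)
# vertices, nrm (V x 3) vertex normals, and a landmark regressor R (L x V)
# whose rows hold delta, barycentric, or joint-averaging weights.
import numpy as np

def extract_markers(v, nrm, R, d):
    surface = R @ v       # l^v = R x v
    normals = R @ nrm     # interpolated normals, R x n
    # Renormalize, since interpolated normals need not be unit length
    # (an assumption; the formula in the text uses R x n directly).
    normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    return surface + d * normals  # l^m = l^v + d (R x n)
```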
Following prior art [20], the input data are the parameters of a body model that synthesize markers, which, due to their synthetic nature, can be augmented and corrupted with artifacts and noise [29, 14, 13]. Fig. 2 illustrates our model's training framework, which is explained in the following, starting with the technique addressing the redundancy and long-tailed nature of the data (Sec. 3.1), the marker denoising and joint solving model's design choices (Sec. 3.2), and finally the noise-aware body parameter solver (Sec. 3.3).
### Balancing Regression
Relevance functions drive utility regression and guide the re-/over-/inter-sample selection/generation [10, 79, 80, 81]. Instead of defining relevance or sample selection based on an explicit formula or set of rules, we employ representation learning to learn it from the data. Autoencoding synthesis models [41, 65] jointly learn a reconstruction model as well as a generative sampler:
\[\mathbf{\theta}^{\ddagger}=\mathcal{G}(\mathcal{E}(\mathbf{\theta})),\qquad\mathbf{\theta }^{\star}=\mathcal{G}(\mathcal{S}(\cdot)), \tag{1}\]
with varying constraints on the input \(\mathbf{\theta}\) and latent \(\mathbf{z}=\mathcal{E}(\mathbf{\theta}),\mathbf{z}\in\mathbb{R}^{Z}\) spaces. An encoder \(\mathcal{E}(\mathbf{\theta})\) maps input \(\mathbf{\theta}\) to a latent space \(\mathbf{z}\) which gets reconstructed to \(\mathbf{\theta}^{\ddagger}\) by a generator \(\mathcal{G}(\mathbf{z})\). Using a sampling function \(\mathcal{S}\) to sample the latent space it is also possible to generate novel output samples \(\mathbf{\theta}^{\star}\). We exploit the hybrid nature of such models to design a novel imbalanced regression solution that simultaneously over-samples the distribution at the tail and adjusts the optimization by re-weighting rarer samples. Our solution is based on a deep Variational AutoEncoder (VAE) [41].
**Relevance via Reconstructability.** Autoencoding models are expected to reflect the bias of their training data, with redundant/rare samples being easier/harder to properly reconstruct respectively. This bias in reconstructability can be used to assign relevance to each sample as those more challenging to reconstruct properly are more likely to be tail samples. We define a relevance function \(\rho\) (see Fig. 2 re-weighting) using a reconstruction error \(\epsilon\):
\[\rho(\theta)=1+\exp(\epsilon/\sigma),\quad\epsilon=\sqrt{\frac{1}{J}\sum_{i=1}^{J}||\bar{\ell}_{i}^{j}-\bar{\ell}_{i}^{\ddagger}||_{2}^{2}}, \tag{2}\]
with \(\bar{(\cdot)}\) denoting unit normalization using the input joints' bounding box diagonal, \(\epsilon\) the normalized RMSE over the reconstructed and original joints, and \(\sigma\) a scaling factor controlling the relevance \(\rho\). Using landmark positions we can preserve interpretable semantics in \(\rho\) and \(\sigma\), as they are unidirectionally interchangeable (linear mapping) with the pose \(\boldsymbol{\theta}\) given fixed shape \(\boldsymbol{\beta}\). Fig. 3 shows exemplary poses as scored by our relevance function.
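For concreteness, the following is a minimal sketch of how the relevance of Eq. (2) can be computed, assuming the reconstructed joints come from the generative autoencoder above; function names and array shapes are illustrative, not part of any released code.

```python
# Sketch of the reconstructability-based relevance (Eq. 2).
import numpy as np

def relevance(joints_gt: np.ndarray, joints_rec: np.ndarray, sigma: float = 1.0) -> float:
    """rho(theta) = 1 + exp(eps / sigma), with eps the normalized RMSE between
    the original joints and their autoencoder reconstruction. Inputs: (J, 3)."""
    # Unit-normalize both joint sets by the input joints' bounding-box diagonal.
    diag = np.linalg.norm(joints_gt.max(axis=0) - joints_gt.min(axis=0))
    j_gt, j_rec = joints_gt / diag, joints_rec / diag
    eps = np.sqrt(np.mean(np.sum((j_gt - j_rec) ** 2, axis=1)))  # normalized RMSE
    return 1.0 + np.exp(eps / sigma)
```

Rare poses reconstruct poorly, so their \(\epsilon\), and hence their \(\rho\), is larger; \(\sigma\) keeps the exponent in a useful range.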
**Balance via Controlled Synthesis.** Even though the tail samples are not reconstructed faithfully, the generative
Figure 2: Overview of the balanced and real-time MoCap solving training model. Starting from an existing data corpus (_bottom left_), a set of encoded tail anchor poses \(\mathcal{A}\) are selected (Sec. 3.1 - _top left_) and randomly blended via \(\mathcal{S}\) and a generator \(\mathcal{G}\). This oversamples the tail, adding extra synthetic rare samples during training. A UNet model (Sec. 3.2 - _bottom middle_) receives two orthographic depth map renders (\(xy\) and \(yz\) planes) of augmented and corrupted marker 3D positions \(\mathbf{\ell}_{in}^{\star}\) extracted from the body’s \(\mathcal{B}\) surface, producing \(2\) orthogonal heatmaps which are marginally fused along the \(y\) coordinate, yielding 3D positions \(\tilde{\mathbf{\ell}}_{est}\) (Sec. 3.2 - _bottom right_). The loss for each batch item is re-weighted by its relevance \(\rho\), computed after calculating the joint reconstruction error of its pose’s \(\mathbf{\theta}\) generative autoencoder reconstruction (Sec. 3.1 - _top right_).
and disentangling nature of modern synthesis models shapes manifolds that map inputs to the underlying factors of data variation, effectively mapping similar poses to nearby latent codes which can be traversed across the latent space dimensions. Based on this, we define a controlled sampling scheme for synthesizing new tail samples (see Fig. 2 oversampling). Using the relevance function from Eq. (2), it is possible to identify tail samples \(\boldsymbol{\theta}^{\dagger}\) via statistical thresholding that serve as anchor latent codes \(\mathcal{A}=\{\,\mathbf{z}^{\dagger}\mid\mathbf{z}^{\dagger}=\mathcal{E}( \boldsymbol{\theta}^{\dagger})\,\}\). This process adapts to the training data distribution instead of risking a mismatch via empirical manual picking when using a purely generative model (_e.g_. [78]). We then sample using the following function:
\[\mathcal{S}_{i,j}(\cdot)=\varsigma(\mathcal{N}(\mathbf{a}_{i},\mathbf{s}), \mathcal{N}(\mathbf{a}_{j},\mathbf{s}),b),\quad\mathbf{a}_{i,j}\in_{R}\mathcal{ A}. \tag{3}\]
Specifically, we sample from a normal distribution centered around two random anchors \(i\) and \(j\), \(i\neq j\), from \(\mathcal{A}\) using a standard deviation \(\mathbf{s}\), and blend them using spherical linear interpolation [70] \(\varsigma\) with a uniformly sampled blending factor \(b\sim\mathcal{U}(0,B)\), \(B\in[0,1]\). Non-linear interpolation between samples avoids dead manifold regions, as not all directions lead to meaningful samples [35, 37], and increases our samples' plausibility [86], as illustrated in Fig. 4.
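A minimal sketch of the sampler of Eq. (3) follows, with `anchors`, `std`, and `b_max` standing in for \(\mathcal{A}\), \(\mathbf{s}\), and \(B\); the slerp implementation is the textbook formula and all names are illustrative assumptions.

```python
# Sketch of tail oversampling: Gaussian-perturbed anchors blended via slerp.
import numpy as np

def slerp(p: np.ndarray, q: np.ndarray, t: float) -> np.ndarray:
    cos_omega = np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1.0 - t) * p + t * q  # nearly parallel: fall back to lerp
    return (np.sin((1.0 - t) * omega) * p + np.sin(t * omega) * q) / np.sin(omega)

def sample_tail(anchors: np.ndarray, std: float, b_max: float, rng=np.random) -> np.ndarray:
    i, j = rng.choice(len(anchors), size=2, replace=False)  # two distinct anchors
    z_i = rng.normal(anchors[i], std)                       # z_i ~ N(a_i, s)
    z_j = rng.normal(anchors[j], std)                       # z_j ~ N(a_j, s)
    b = rng.uniform(0.0, b_max)                             # b ~ U(0, B)
    return slerp(z_i, z_j, b)                               # decode with G afterwards
```

The returned latent code is then decoded by the generator \(\mathcal{G}\) into a new synthetic tail pose.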
### Real-time Landmark Estimation
Compared to pure labeling [20, 21] or pure solving approaches [29, 14], we design our model around simultaneous denoising, solving, and hallucination.
While some approaches use the raw marker positions as input, we opt to leverage the maturity of structured heatmap representations and employ a convolutional model, similar to [25, 13], instead of relying on unstructured regression [14, 29] using MLPs. This improves the convergence of the model, and by using multi-view fusion we can also improve accuracy via robust regression. First, we augment and corrupt the input markers \(\boldsymbol{\ell}_{gt}\) into \(\tilde{\boldsymbol{\ell}}_{in}^{\star}\). Then, we normalize and render \(\tilde{\boldsymbol{\ell}}_{in}^{\star}\) from two orthographic viewpoints as in [13], but with a notable difference when processing the model's output: instead of predicting the \(3^{rd}\) dimension, we predict normalized 3D coordinates by learning to solve a single 2D task. To achieve that, we use the two rendered views as input to the model, predict the corresponding view's heatmaps, and fuse them with a variant of marginal heatmap regression [56, 90] (see Fig. 2 fusion). We assume the gravity direction along the \(y\) axis and use the orthogonal and orthographic views denoted as \(xy\) and \(yz\) which share the \(y\) axis. To estimate the landmarks' normalized positions \(\tilde{\boldsymbol{\ell}}_{est}\), we employ center-of-mass regression [48, 75, 55, 77] taking the average expectation [56, 90] for \(y\) from the two views. The model is supervised by:
\[\mathcal{L}=\rho(\lambda\mathcal{L}_{JS}(\mathbf{H}_{gt},\mathbf{H}_{est})+ \mathcal{L}_{w}^{\nu}(\tilde{\boldsymbol{\ell}}_{gt},\tilde{\boldsymbol{\ell} }_{est})), \tag{4}\]
where \(\mathcal{L}_{JS}\) is the \(\lambda-\)weighted Jensen-Shannon divergence [52] between the normalized ground truth and soft-max normalized predicted heatmaps, while \(\mathcal{L}_{w}^{\nu}\) is the robust Welsch penalty function [30, 17], with the support parameter \(\nu\), between the normalized landmark ground-truth \(\tilde{\boldsymbol{\ell}}_{gt}\) and estimated \(\tilde{\boldsymbol{\ell}}_{est}\) coordinates. Overall, \(\mathcal{L}_{JS}\) accelerates training while \(\mathcal{L}_{w}^{\nu}\) facilitates higher levels of sub-pixel accuracy since even though we reconstruct the heatmaps \(\mathbf{H}\) using the normalized - un-quantized - coordinates [93], discretization artifacts can never be removed entirely.
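The following sketch shows one way Eq. (4) can be assembled, assuming heatmaps flattened to (batch, landmarks, pixels) and coordinates of shape (batch, landmarks, 3); the Welsch penalty is written in one of its common parameterizations, and all names and default values are illustrative rather than the exact training configuration.

```python
# Sketch of the relevance-weighted supervision of Eq. (4).
import torch
import torch.nn.functional as F

def js_divergence(h_gt, h_est):
    p = h_gt / h_gt.sum(dim=-1, keepdim=True)  # normalized ground-truth heatmaps
    q = F.softmax(h_est, dim=-1)               # soft-max normalized predictions
    m = 0.5 * (p + q)
    def kl(a, b):
        return (a * (a.clamp_min(1e-12).log() - b.clamp_min(1e-12).log())).sum(dim=-1)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)     # (batch, landmarks)

def welsch(residual, nu):
    return 1.0 - torch.exp(-0.5 * (residual / nu) ** 2)  # robust, saturating penalty

def solver_loss(h_gt, h_est, l_gt, l_est, rho, lam=1.0, nu=0.01):
    l_js = js_divergence(h_gt, h_est).mean(dim=-1)                    # (batch,)
    l_w = welsch(torch.norm(l_gt - l_est, dim=-1), nu).mean(dim=-1)   # (batch,)
    return (rho * (lam * l_js + l_w)).mean()  # per-item relevance re-weighting
```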
Note that the fusion outcome \(\tilde{\boldsymbol{\ell}}_{est}\) comprises both marker and joint estimations, essentially estimating a complete, labeled, and denoised marker set, as well as solving for the joints' positions.
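A sketch of the fusion itself, under the illustrative convention that rows encode \(y\) in both views while columns encode \(x\) and \(z\) respectively:

```python
# Sketch of marginal heatmap fusion via center-of-mass (soft-argmax) regression.
import torch

def soft_argmax_2d(heatmap):
    """heatmap: (B, L, H, W) -> expected (column, row) coordinates in [0, 1]."""
    B, L, H, W = heatmap.shape
    probs = heatmap.flatten(2).softmax(dim=-1).view(B, L, H, W)
    rows = torch.linspace(0, 1, H, device=heatmap.device)
    cols = torch.linspace(0, 1, W, device=heatmap.device)
    v = (probs.sum(dim=3) * rows).sum(dim=2)  # expectation over the row marginal
    u = (probs.sum(dim=2) * cols).sum(dim=2)  # expectation over the column marginal
    return u, v

def fuse_views(hm_xy, hm_yz):
    x, y_from_xy = soft_argmax_2d(hm_xy)
    z, y_from_yz = soft_argmax_2d(hm_yz)
    y = 0.5 * (y_from_xy + y_from_yz)         # average expectation on the shared axis
    return torch.stack([x, y, z], dim=-1)     # (B, L, 3) normalized positions
```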
Finally, we use U-Net [68] as a regression backbone for its runtime performance and its efficiency in high-resolution regression.
Figure 4: Tail oversampling using latent anchors \(\mathcal{A}\). Random latent vector blending using **non-linear** interpolation generates diverse and realistic tail samples, compared to the **linear** one which produces less diverse or unrealistic samples, or to random sampling which produces more biased samples.
Figure 3: Color-coded (turbo colormap [54] at the bottom) autoencoding relevance \(\rho\) of various poses.
### Noise-aware Fitting
Given the denoised and complete set of landmarks \(\tilde{\ell}_{est}\in\mathbb{R}^{L\times 3}\), we can fit the body to these estimates and obtain the pose \(\mathbf{\theta}\) and shape \(\mathbf{\beta}\), which imply an articulated skeleton and mesh surface. This is a non-linear optimization problem with the standard solution being MoSh [44] and its successor MoSh++ [49]. However, MoSh(++) also solves for the marker layout, which in our case is known a priori as the model was trained with a standard \(53\)-marker configuration. Compared to prior works that assume the estimates are of high quality, i.e., of high signal-to-noise ratios, we seek to relax this assumption to support additional sensing options. The solution to this is robust optimization, but typical approaches that involve robust kernels/estimators require confident knowledge about the underlying data distribution. This is not easily available in practice; moreover, it varies with different sensing options, and more importantly, when involving a data-driven model, it is skewed by another challenging-to-model distribution. The Barron loss [7] is a robust variant that also adapts to the underlying distribution and interpolates/generalizes many known variants by adjusting its shape and scale jointly.
Following likelihood-based formulations [39, 24] that have been presented for multi-task/robust stochastic optimization, we formulate a noise-aware fitting objective that is adaptive and optimizes the Gaussian uncertainty region \(\mathbf{\sigma}\in\mathbb{R}^{L}\) jointly with the data and prior terms:
\[\operatorname*{argmin}_{\mathbf{\theta}^{*},\mathbf{\beta}^{*},\mathbf{\Upsilon}^{*},\bm {\sigma}^{*}}\mathcal{E}_{data}+\mathcal{E}_{prior}. \tag{5}\]
We use standard prior terms [44, 49, 61], \(\mathcal{E}_{prior}=\lambda_{\mathbf{\beta}}\sum||\mathbf{\beta}||_{2}+\lambda_{\mathbf{\mathrm{z}}}\sum||\mathbf{z}||_{2}\), and a data term formulated as:
\[\mathcal{E}_{data}=\sum_{i=1}^{L}\left(\frac{1}{2\sigma_{i}^{2}}||\tilde{\ell}_{est,i}-\tilde{\ell}_{i}^{*}||_{2}^{2}+\log\sigma_{i}\right). \tag{6}\]
As in MoSh(++) we perform staged annealed optimization but with only 2 stages as there is no marker layout optimization. The first stage optimizes over \(\mathbf{\beta}^{*},\mathbf{\theta}^{*},\mathbf{\Upsilon}^{*}\), while the second stage fixes \(\mathbf{\beta}\) and \(\mathbf{\Upsilon}\) and optimizes \(\mathbf{\theta}^{*},\mathbf{\sigma}^{*}\).
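A minimal sketch of the data term of Eq. (6), reparameterized through \(\log\sigma\) so the uncertainty stays positive; in practice this term is minimized jointly with the prior terms by the quasi-Newton optimizer, and all names are illustrative.

```python
# Sketch of the noise-aware data term (Eq. 6) with per-landmark uncertainty.
import torch

def data_term(l_est: torch.Tensor, l_model: torch.Tensor, log_sigma: torch.Tensor):
    # l_est, l_model: (L, 3) estimated and body-model landmarks; log_sigma: (L,)
    sq_res = ((l_est - l_model) ** 2).sum(dim=-1)  # squared residual per landmark
    inv_var = torch.exp(-2.0 * log_sigma)          # 1 / sigma_i^2
    return (0.5 * inv_var * sq_res + log_sigma).sum()
```

Landmarks the model estimates unreliably receive a large \(\sigma_{i}\), down-weighting their residuals, while the \(\log\sigma_{i}\) term prevents the trivial solution of inflating all uncertainties.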
## 4 Results
We base our implementation on the SMPL(-X) body model \(\mathcal{B}\)[45, 61]. Our models are implemented using PyTorch [60], optimized with Adam [40], initialized with Kaiming init. [26], and trained for a fixed number of epochs and with a fixed seed, with the best parameters selected using the performance indicators presented in Sec. 5 of the supplement. UNet receives \(160\times 160\) depth maps and outputs heatmaps of the same resolution for all landmarks (\(53\) markers and \(18\) joints in all cases apart from the experiments in Tab. 3 where \(56\) markers and \(24\) joints are used for consistency). The autoencoding generator is implemented as a robust variant of VPoser [61]1. To fit the body to the estimated landmarks we use quasi-Newton optimization [57]. For the evaluation, the \(\tilde{\mathbf{\ell}}_{est}\) are denormalized to \(\mathbf{\ell}_{est}\). Finally, the Tables are color-coded with the best result being visualized in pink and bolded, the second in green, and the third (where it is needed) in yellow.
Footnote 1: Description and comparison can be found in Sec. 7.1 of the suppl.
We use a variety of datasets that provide corresponding parametric body \(\mathcal{B}\) parameters from which we can extract input (markers) and ground truth (joints and markers). We additionally curate a custom test set comprising \(4\) categories of tail samples. Note that all models' performance is validated using _unseen_ data comprising entire datasets, thus ensuring different capturing contexts. For lack of space, we moved all preprocessing (see supp. Sec. 3), dataset (see supp. Sec. 4), and metric (see supp. Sec. 5) details to the supplement, as well as an in-the-wild supp. video.
_Are high-end MoCap data necessary?_
Relying on an intermediate body model \(\mathcal{B}\) representation opens up new opportunities for data acquisition. We seek to validate the hypothesis that training an optical MoCap model does not necessarily require data acquired by high-end optical MoCap systems. Recent multi-view datasets [92, 15, 63] rely on markerless capturing technology to fit parametric body models to estimated keypoint observations. We train our model (without the imbalanced regression adaptation) on the combined GeneBody [15] and THuman2.0 (TH2) [92] multi-view markerless data (_Markerless_), and on \(3\) high-end MoCap dataset combinations from AMASS [49], specifically, EKUT [50], HumanEva [71], MoSh [44], and SOMA [20] (_Optical #1_); CNRS and HumanEva (_Optical #2_); and, solely HumanEva (_Optical #3_), to progressively reduce the diversity of the samples. We equalize the different markerless and optical training data via temporal downsampling to a total of \(9\) minutes of MoCap. By evaluating these models using ACCAD [2] (see Tab. 1), we observe a correlation between pose diversity and performance, and that the markerless data result in comparable performance to the high-end MoCap data. The latter indicates that it is possible to acquire data for optical MoCap without having access to any high-end system.
\begin{table}
\begin{tabular}{l c c c c}
 & RMSE\(\downarrow\) & PCK1\(\uparrow\) & PCK3\(\uparrow\) & PCK7\(\uparrow\) \\ \hline
Optical\#1 & **50.4**\(mm\) & 36.14\% & **84.89\%** & **90.90\%** \\
Optical\#2 & 89.9\(mm\) & **41.11\%** & 81.18\% & 86.24\% \\
Optical\#3 & 92.9\(mm\) & 39.16\% & 79.74\% & 86.08\% \\ \hline
Markerless & **59.4**\(mm\) & 21.70\% & 79.96\% & 90.08\% \\ \hline
\end{tabular}
\end{table}
Table 1: Markerless vs optical data tested on ACCAD.
### Addressing the bias and long-tail
To evaluate our novel imbalanced regression discussed in Sec. 3.1, we design an experiment simulating a progressive data collection process by aggregating the DFaust [8], EYES [47], EKUT, HumanEva, MoSh, PosePrior [3], SFU [91], SOMA, SSM, and Transitions parts from AMASS, captured with varying acquisition protocols and settings. Tab. 2 presents the results compared to a baseline model trained without re-weighting/oversampling, and the BMSE [64] imbalanced regression loss, which is properly adapted to consider joint distances and not scalars.
Tab. 2 (top) presents the results on TH2, a dataset of diverse static poses that also includes challenging ones (_e.g._ extreme bending, inversion, etc.), where our approach improves overall performance, whereas BMSE presents inferior results to the baseline model. Tab. 2 (bottom) presents the results on our "tail" (rare) poses, which include "_high kicks_", "_crouching_", "_crossed arms_", and "_crossed legs_". Both imbalanced regression approaches improve the long-tail performance, with our oversampling and re-weighting method achieving the best results across almost all metrics. These results highlight that our approach overcomes the known weakness of BMSE, namely balancing the data distribution at the expense of performance on more common poses. Ablation experiments showcasing the orthogonality of oversampling and re-weighting can be found in the supplementary material (Sec. 7.2, Tab. 4).
### Direct joint solving
We proceed with evaluating our model's ability to accurately estimate the skeleton joints \(\mathbf{\ell}^{j}\) from the input markers (_i.e._ joint solving). We compare our model against two SotA joint-solving approaches: a) MoCap-Solver [14], which uses graph convolutions and temporal information, and b) DeMoCap [13], which employs an HRNet [84] backbone and frontal-back fusion. All models are trained and evaluated on the CMU [11] dataset as in [14]. For MoCap-Solver we rerun the evaluation without normalizing the markers and the skeletons, as this information should be unknown during testing. At the same time, we employ the joint position error (JPE) from [14] for a fairer comparison. From the results in Tab. 3 we observe that our model outperforms the SotA in both positional metrics (RMSE, JPE) while having the best or second-best accuracy for the different PCK thresholds.
### Explicit vs implicit labeling
Our next experiment aims to showcase the advantages of fitting a parametric body model on landmarks estimated with regression instead of explicitly labeling them. We compare our model, which de-noises, completes, and implicitly labels landmarks via regression, with SOMA, a SotA explicit labeling method, by fitting the body to the markers similar to [44]. Note that in order to have a fair comparison we solve **only for markers** and not for markers \(\&\) joints (as discussed in Sec. 3.2). We train our model using the same datasets that SOMA was trained on, and then test on TH2 and our "Tail" test set using the clean body-extracted markers, and the same MoSh-like fitting without uncertainty region optimization and without considering latent markers, as the marker layout is fixed to the nominal one. Tab. 4 showcases
\begin{table}
\begin{tabular}{l c c c c c}
 & RMSE\(\downarrow\) & JPE\(\downarrow\) & PCK1\(\uparrow\) & PCK3\(\uparrow\) & PCK7\(\uparrow\) \\ \hline
[14] & **21.1\(mm\)** & **17.4\(mm\)** & 38.11\% & 84.70\% & **99.17\%** \\
[13] & 27.0\(mm\) & 17.5\(mm\) & **51.08\%** & 89.39\% & 97.24\% \\
Ours & **20.1\(mm\)** & **15.9\(mm\)** & 50.14\% & **92.23\%** & 98.14\% \\ \hline
\end{tabular}
\end{table}
Table 3: Direct joint solving on CMU test set [11].
\begin{table}
\begin{tabular}{l l c c c c}
 & & RMSE\(\downarrow\) & PCK1\(\uparrow\) & PCK3\(\uparrow\) & PCK7\(\uparrow\) \\ \hline
\multirow{3}{*}{TH2} & Base & 21.4\(mm\) & 28.69\% & 92.08\% & 98.60\% \\
 & [64] & 22.0\(mm\) & 25.51\% & 91.90\% & 98.62\% \\
 & Ours & **19.1\(mm\)** & **32.38\%** & **93.55\%** & **99.11\%** \\ \hline
\multirow{3}{*}{Tail} & Base & 35.8\(mm\) & 22.04\% & 80.27\% & 94.31\% \\
 & [64] & 32.9\(mm\) & **27.66\%** & 81.98\% & 94.92\% \\
 & Ours & **29.3\(mm\)** & 23.42\% & **84.70\%** & **97.24\%** \\ \hline
\end{tabular}
\end{table}
Table 2: Imbalanced regression results (top: TH2; bottom: our tail test set).
Figure 5: Fits to our regressed vs SOMA labeled markers. Incorrect labeling results in highly erroneous fits.
that the fits on our model's markers \(\mathbf{\ell}^{m}\) deliver better performance, a fact that is mainly attributed to the robustness of regression compared to the larger error margin of fitting to incorrectly labeled markers. This is evident in all test sets but more pronounced in the tail (rare) poses. Indicative qualitative examples are depicted in Fig. 5. For completeness (not a direct comparison with SOMA), we include the results for solving both markers and joints (\(\mathbf{\ell}\)) estimated by our model, which clearly achieves the best overall performance.
### Addressing input noise
Finally, we design an experiment for showcasing our model's fitting robustness to noisy marker input, as discussed in Sec. 3.3. Tab. 5 presents results when fitting to noisy landmarks, comparing the uncertainty optimization method with MoSh(++)-like fitting (ignoring the latent marker optimization, as the markers are extracted from the body's surface and placed using the nominal layout). The TH2 dataset is used for evaluation, with the body-extracted input markers corrupted with high levels of noise (see Sec. 3.2 of the supp. for the applied types of noise) prior to fitting the body model to them. Naturally, optimizing the uncertainty region improves fitting performance on noisy observations. Compared to a more complex optimization objective that also considers the shape of the data distribution [7], we find that the proposed Gaussian uncertainty region optimization delivers improved fits. This can be attributed to the complexity of tuning the former, as well as its increased parameter count. Fig. 6 depicts qualitative examples with body fits on the noisy inputs acquired with just \(3\) viewpoints (same capture session as Fig. 1) and shows that jointly optimizing the uncertainty region allows for robustness to input-related measurement noise, as well as model-related information noise. Some interesting noise-aware fitting ablations along with visualizations can be found in Sec. 9 of the supplementary material.
### Real-time performance
We validate our end-to-end method by implementing a real-time system using sparse consumer-grade sensors (see details in Sec. 11 of the supp.). Leveraging the orthogonal view two-pass approach we deploy an optimized ONNX [1] model where we flatten the two passes across the batch dimension, performing only the light-weight marginal heatmap fusion in a synchronized manner. Our system achieves under \(16\)ms inference even on a laptop equipped with a mobile-grade RTX 2080. Nonetheless, we understand that high-quality MoCap requires greater efficiency to achieve processing rates of at least 120Hz and we set this rate as the next goal.
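As a sketch of this deployment path (model path, input bindings, and shapes are illustrative, not the released artifacts):

```python
# Sketch: one ONNX Runtime call serves both orthographic passes by stacking
# the xy and yz depth maps along the batch dimension.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("solver.onnx", providers=["CUDAExecutionProvider"])
inp_name = sess.get_inputs()[0].name

def infer(depth_xy: np.ndarray, depth_yz: np.ndarray):
    # depth_*: (1, 1, 160, 160) float32 orthographic renders
    batch = np.concatenate([depth_xy, depth_yz], axis=0)  # (2, 1, 160, 160)
    heatmaps = sess.run(None, {inp_name: batch})[0]       # (2, L, 160, 160)
    # the light-weight marginal fusion (see the sketch in Sec. 3.2) is then
    # applied synchronously on the two returned views
    return heatmaps[0:1], heatmaps[1:2]
```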
## 5 Conclusion
MoCap data are highly imbalanced, and in this work we have presented a novel technique for imbalanced regression. Still, we believe we have but scratched the surface of exploiting representation learning for addressing the long tail and bias, as different architectures, samplers, and relevance functions can be explored. At the same time, this work contributes to integrating machine learning in real-time optical MoCap, while also making it more accessible. However, there is room for improvement, as temporal information is not integrated in our approach, and only a single, fixed marker layout is supported.
\begin{table}
\begin{tabular}{l l|c c c c c}
\hline
 & & RMSE\(\downarrow\) & MAE\(\downarrow\) & PCK1\(\uparrow\) & PCK3\(\uparrow\) & PCK7\(\uparrow\) \\ \hline
\multirow{3}{*}{TH2} & [20] & 29.7\(mm\) & 3.49\({}^{\circ}\) & 28.33\% & 87.78\% & 96.11\% \\
 & Ours (\(\mathbf{\ell}^{m}\)) & 11.9\(mm\) & **2.68\({}^{\circ}\)** & **26.49\%** & **93.72\%** & 99.26\% \\
 & Ours (\(\mathbf{\ell}\)) & **17.6\(mm\)** & - & **33.92\%** & **98.13\%** & **99.35\%** \\ \hline
\multirow{3}{*}{Tail} & [20] & 68.6\(mm\) & 6.76\({}^{\circ}\) & 11.78\% & 60.87\% & 84.84\% \\
 & Ours (\(\mathbf{\ell}^{m}\)) & 30.1\(mm\) & **2.89\({}^{\circ}\)** & 12.11\% & 73.13\% & **96.87\%** \\
 & Ours (\(\mathbf{\ell}\)) & **28.3\(mm\)** & - & **27.31\%** & **83.12\%** & 95.35\% \\ \hline
\end{tabular}
\end{table}
Table 4: Explicit (SOMA [20]) vs implicit (Ours) labeled marker fits and direct landmarks’ \(\mathbf{\ell}\) solving comparison (top: TH2; bottom: our tail test set).
\begin{table}
\begin{tabular}{l|c c c c c}
\hline
 & RMSE\(\downarrow\) & MAE\(\downarrow\) & PCK1\(\uparrow\) & PCK3\(\uparrow\) & PCK7\(\uparrow\) \\ \hline
[44, 49] & 30.1\(mm\) & 3.49\({}^{\circ}\) & 11.79\% & 66.85\% & **98.34\%** \\
[7] & 30.8\(mm\) & 3.10\({}^{\circ}\) & 12.71\% & 67.06\% & 97.71\% \\
Ours (\(\mathbf{\ell}^{m}\)) & **28.9\(mm\)** & **2.98\({}^{\circ}\)** & **14.71\%** & **69.86\%** & 98.18\% \\ \hline
\end{tabular}
\end{table}
Table 5: Noisy landmark fits comparison on TH2.
Figure 6: Plain vs uncertainty-based fit. Input markers from the consumer-grade system and the model-inferred ones are colored green and violet, respectively. |
2309.04123 | BMT Independence | We introduce the notion of BMT independence, allowing us to take arbitrary
mixtures of boolean, monotone, and tensor independence and generalizing the
notion of BM independence of Wysoczanski. Pair-wise independence relations are
encoded through a directed graph, which in turn determines the way mixed
moments must be computed. Corresponding Central and Poisson-Type Limit Theorems
are provided along with an explicit construction to realize BMT independent
random variables as bounded operators on certain Hilbert space. | Octavio Arizmendi, Saul Rogelio Mendoza, Josué Vazquez-Becerra | 2023-09-08T04:56:54Z | http://arxiv.org/abs/2309.04123v1 | # BMT Independence
###### Abstract
We introduce the notion of BMT independence, allowing us to take arbitrary mixtures of boolean, monotone, and tensor independence and generalizing the notion of BM independence of Wysoczanski. Pair-wise independence relations are encoded through a directed graph, which in turn determines the way mixed moments must be computed. Corresponding Central and Poisson-Type Limit Theorems are provided along with an explicit construction to realize BMT independent random variables as bounded operators on a certain Hilbert space.
###### Contents
* 1 Introduction
* 2 Set Partitions and Graphs
* 2.1 Set partitions
* 2.2 Digraphs
* 2.3 The kernel notation
* 3 BMT Independence
* 3.1 Notions of Independence
* 3.2 BMT independence
* 3.3 Weak BM independence
* 3.4 Consistency
* 4 Construction of BMT algebras of operators
* 5 BMT Central Limit Theorem
* 5.1 Central Limit Theorem
* 5.2 Further properties of CLT
* 6 Poisson Limit Theorem
* 7 Concluding Remarks
## 1 Introduction
This paper extends some notions in non-commutative probability, where the fundamental framework is a _non-commutative probability space_, that is, a pair \((\mathcal{A},\varphi)\) where \(\mathcal{A}\) is a complex algebra with multiplicative identity \(1_{\mathcal{A}}\) and \(\varphi:\mathcal{A}\to\mathbb{C}\) is a linear functional such that \(\varphi(1_{\mathcal{A}})=1\).
In this framework, from the probabilistic viewpoint, a fundamental notion is that of independence, and the generality of the framework allows for multiple notions of independence. In order to make sense of this multiplicity, and seeking a way to decide which of these independences one should look at, Ben Ghorbal and Schürmann [4] axiomatized the notion of _universal products_. To each of these universal products of probability spaces one can associate a notion of independence. They showed that there are exactly three notions of independence satisfying the axioms of universal products: Boolean, free, and tensor independence. The story was completed by the work of Muraki [18], who generalized the notion of universal product by introducing _natural products_, removing the _commutativity axiom_ (now known as the symmetry axiom), which states, in simple words, that for algebras \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\), the fact that \(\mathcal{A}_{1}\) is independent of \(\mathcal{A}_{2}\) implies that \(\mathcal{A}_{2}\) is independent of \(\mathcal{A}_{1}\).
In this new setting, Muraki showed in [18] that there are only five natural notions of independence: the free, tensor, Boolean, monotone, and antimonotone independences. In this paper we will be interested in the tensor, Boolean, and monotone (or antimonotone\({}^{1}\)) cases.
Footnote 1: Since \(a\) is monotone independent from \(b\) if and only if \(b\) is antimonotone independent from \(a\), we usually work only with monotone independence (with the corresponding statements for antimonotone independence implied).
While, from the above, one may think of free, Boolean, monotone, and tensor independence as parallel theories with no interaction between them, in this paper we will be interested in mixtures of them. This is, of course, not the first paper that considers such a setting. In 2007, Wysoczanski [24] introduced BM independence, a generalization of monotone and Boolean independence, giving a framework where some mixtures (following a partial order) of those independences can be represented, together with a central limit theorem whose limit is a symmetric distribution that has as particular cases the symmetric Bernoulli and the arcsine distributions. The work [9] gives a framework that combines Boolean and free independence. In a similar vein, the so-called \(\Lambda\)-freeness defined in [14] mixes free and tensor independence in an algebraic framework. Following the latter work, Speicher and Wysoczanski [22] represented any mixture of tensor and free variables in terms of a symmetric matrix \(\epsilon\) with non-diagonal entries either \(0\) or \(1\), where \(\epsilon_{ij}=0\) represents that the algebras \(\mathcal{A}_{i}\) and \(\mathcal{A}_{j}\) are free, and \(\epsilon_{ij}=1\) represents that the algebras \(\mathcal{A}_{i}\) and \(\mathcal{A}_{j}\) commute and are tensor independent. Finally, the work of Lenczewski [10] considers \(\Lambda\)-Boolean independence and \(\Lambda\)-monotone independence, which mix tensor independence with Boolean independence and with monotone independence, respectively.
In this paper we create a framework that generalizes BM independence, \(\Lambda\)-Boolean independence, and \(\Lambda\)-monotone independence, allowing one to consider mixtures of tensor, Boolean, and monotone independent algebras. This appears to be the first time three of the natural independences are combined. We call this new notion of independence BMT (Boolean, monotone, and tensor) independence.
To describe such a notion we need a graph encoding the information about pairwise independence. The idea is that vertices correspond to random variables (or algebras), and the relation between two of them is as follows: vertices joined by a directed edge correspond to monotone independence between elements, vertices joined by an undirected edge correspond to tensor independence, and unjoined vertices correspond to Boolean independence. As an example, if we have the variables \(\{X_{1},X_{2},X_{3},X_{4},X_{5}\}\) and their independence graph \(G\) is the one in Figure 1, we want the following relations.
* \(X_{5}\) is Boolean independent from \(X_{1},X_{2},X_{3},X_{4}\)
* \(X_{4}\) is Boolean independent from \(X_{3}\) and \(X_{1}\).
* \(X_{2}\) is classically (i.e., tensor) independent from \(X_{4}\).
* \(X_{2}\) is monotonically independent from \(X_{3}\).
* \(X_{1}\) is monotonically independent from \(X_{2}\) and from \(X_{3}\).
The main question is, of course, how to define the whole joint distribution, involving not only pairwise relations, and whether this joint distribution is representable in a \(C^{*}\)-probability space. To be clear, BMT variables will have a graph \(G\) telling us the independence relations between pairs of variables/algebras, and in the framework we develop, we will have a way of computing all the mixed moments between these variables.
The main contributions of this paper are, firstly, to give a consistent way to calculate such mixed moments, see Definition 3.4, and secondly, to give a concrete construction which realizes this notion of independence.
Now, for each of the notions of independence above (Boolean, free, monotone, and tensor) there exists a central limit theorem associated with it. Under the tensor independence hypothesis, the limit distribution is the normal distribution; with free independence, the limit is the semicircle distribution; the symmetric Bernoulli distribution corresponds to Boolean independence; while the arcsine distribution is the limit of the monotone central limit theorem. Here we note that in the usual monotone central limit theorem a total order is imposed on the algebras. In our framework, it is possible to have monotone and antimonotone independent variables without assuming any specific order. Our third main contribution is proving a central limit theorem for BMT independent variables with a sequence of independence graphs, having as particular cases the monotone, tensor, and Boolean central limit theorems. Similarly, we prove a Poisson limit theorem, or law of small numbers.
This work is organized as follows. In Section 2, we introduce important definitions on graphs and partitions, and prove some new properties of these objects which will be used crucially in the rest of the paper. In Section 3, we introduce the notion of BMT independence with respect to a graph, and provide its first properties and its relation to other notions of independence.
In Section 4, we present a concrete model realizing variables whose pairwise independence relations are given by a graph \(G\).
In Section 5, we present the BMT central limit theorem and give examples, including the Boolean, free, and monotone CLTs. We also consider some relations between different graphs. Similarly, in Section 6 we prove a Poisson limit theorem. We conclude with some open questions and remarks.
## 2 Set Partitions and Graphs
In this section, we introduce the main combinatorial objects that are used in our analysis of BMT independent random variables. First, we describe different types of set partitions. Second, we review some terminology on directed graphs. Finally, we define the kernel of a function subordinated to a directed graph, which is the object that determines how joint moments of BMT independent random variables must be computed.
### Set partitions
**Definition 2.1**.: A _partition_\(\pi=\{B_{1},B_{2},\ldots,B_{r}\}\) of a non-empty set \(S\) is a set of non-empty and pair-wise disjoint subsets of \(S\) whose union is \(S\), i.e., \(B\subset S\) and \(B\neq\emptyset\) for every \(B\in\pi\), \(B\cap B^{\prime}\neq\emptyset\) implies \(B=B^{\prime}\) for all \(B,B^{\prime}\in\pi\), and \(\cup_{B\in\pi}B=S\).
We refer to the elements of a partition as _blocks_, and the total number of blocks in partition \(\pi\) will be denoted by \(\#(\pi)\). Moreover, a block is said to be _even_ if it has even cardinality and _odd_ otherwise. A partition containing only even blocks is called _even_, but if all of its blocks have exactly two elements we will refer to it as a _pairing_.
_Example 2.2_.: The sets \(\pi_{1}=\{\{1,3\},\{2,4,5,6\}\}\), \(\pi_{2}=\{\{1,3,6\},\{2\},\{4,5\}\}\), and \(\pi_{3}=\{\{1,3\},\{4,6\},\{2,5\}\}\) are all partitions of \(\{1,2,3,4,5,6\}\). The partitions \(\pi_{1}\) and \(\pi_{3}\) are both even, but while \(\pi_{3}\) is a pairing, \(\pi_{1}\) is not. The partition \(\pi_{2}\) is neither even nor odd since it contains two odd blocks, \(\{2\}\) and \(\{1,3,6\}\), and one even block, \(\{4,5\}\).
The set of all partitions of \(S\), the set of all even partitions of \(S\), and the set of all pairing partitions of \(S\) are denoted by \(P(S)\), \(P_{\rm even}(S)\), and \(P_{2}(S)\), respectively. We let \([m]\) and \([\ell,m]\) denote the set of integers \(\{1,2,\ldots,m\}\) and \(\{\ell,\ell+1,\ldots,m\}\), respectively, for any integers \(0\leq\ell\leq m\). When referring to set partitions of \([m]\), we will omit the square brackets. For instance, we will write \(P_{\rm even}(m)\) instead of \(P_{\rm even}([m])\).
Each partition \(\pi\in P(m)\) can be represented graphically by writing the numbers \(1,2,\ldots,m\) on a line, drawing vertical lines above each number with matching heights for numbers in the
same block, and joining with a horizontal line the vertical lines of the numbers that belong to the same block, see Figure 2. This leads to the concepts of _nesting_ and _crossing_ blocks.
**Definition 2.3**.: Suppose we are given two distinct blocks \(B=\{b_{1}<b_{2}<\cdots<b_{r}\}\) and \(C=\{c_{1}<c_{2}<\cdots<c_{s}\}\) from a partition \(\pi\in P(m)\). We say that \(B\) _is nested inside_ \(C\) if there exists \(k\in[1,s-1]\) such that \(c_{k}<b_{j}<c_{k+1}\) for every \(b_{j}\in B\). Additionally, we say \(B,C\in\pi\) _cross each other_ if there exist \(b_{i},b_{j}\in B\) and \(c_{k},c_{\ell}\in C\) such that \(b_{i}<c_{k}<b_{j}<c_{\ell}\).
Every partition \(\pi\in P(S)\) is equivalent to an equivalence relation \(\sim_{\pi}\) on \(S\) where \(k\sim_{\pi}k^{\prime}\) if and only if \(k\) and \(k^{\prime}\) belong to the same block of \(\pi\). Hence, the equivalence relation \(\sim_{\pi}\) has the blocks of \(\pi\) as equivalence classes. The set of partitions \(P(S)\) is partially ordered by refinement: we put \(\pi\leq\theta\), and say that \(\pi\) is a _refinement_ of \(\theta\), if every block of \(\pi\) is contained in some block of \(\theta\). Notice that \(\pi\leq\theta\) if and only if \(k\sim_{\pi}k^{\prime}\) implies \(k\sim_{\theta}k^{\prime}\) for all \(k,k^{\prime}\in S\). In Example 2.2, the partition \(\pi_{3}\) is a refinement of \(\pi_{1}\), and there is no other refinement between \(\pi_{1}\), \(\pi_{2}\), and \(\pi_{3}\).
### Digraphs
**Definition 2.4**.: A _directed graph_, or simply _digraph_, is a pair \(G=(V,E)\) where \(V\) is a non-empty set, called _vertex set_, and \(E\) is a (possibly empty) subset of the Cartesian product \(V\times V\), called _edge set_.
Given two digraphs \(G_{1}=(V_{1},E_{1})\) and \(G_{2}=(V_{2},E_{2})\), we say that \(G_{1}\) is a _subgraph_ of \(G_{2}\), and denote this by \(G_{1}\subset G_{2}\), if \(V_{1}\subset V_{2}\) and \(E_{1}\subset E_{2}\). All digraphs in this paper are assumed to be _simple_, i.e., they contain no _loops_, that is, edges of the form \((v,v)\). The following are some types of digraphs that will be considered in this paper as they concern Boolean, monotone, and tensor independence:
* _Empty digraph._ These are digraphs \(G=(V,E)\) without edges, so \(E\) is the empty set.
* _Complete digraph._ These are digraphs \(G=(V,E)\) containing all possible edges, so every ordered pair \((v,w)\in V\times V\) with \(v\neq w\) belongs to \(E\).
* _Digraph of a partial order._ These are digraphs \(G=(V,E)\) where the vertex set \(V\) has a partial order \(\preceq\) and the edge set \(E\) contains an ordered pair \((v,w)\in V\times V\) if and only if \(v\prec w\), i.e., \(v\preceq w\) and \(v\neq w\).
Figure 2: Graphical representation for partitions \(\pi_{1}\), \(\pi_{2}\), and \(\pi_{3}\) from Example 2.2
### The kernel notation
Let \(S\) and \(V\) be non-empty sets. Given any function \(\mathbf{i}:S\to V\), we make the convention of taking \(i_{k}=\mathbf{i}(k)\) for every \(k\in S\). Additionally, if \(S=[m]\) for some integer \(m\geq 1\), then each function \(\mathbf{i}:S\to V\) will be identified with the tuple \((i_{1},i_{2},\ldots,i_{m})\).
**Definition 2.5**.: The _kernel_ of a function \(\mathbf{i}:S\to V\) is the partition of \(S\) determined by the equivalence relation \(k\sim k^{\prime}\) if and only if \(i_{k}=i_{k^{\prime}}\). This partition is denoted by \(\ker[\mathbf{i}]\).
_Remark 2.6_.: Notice that \(\ker[\mathbf{i}]\) coincides with the partition of \(S\) whose blocks are all non-empty pre-images of \(\mathbf{i}\), so we have \(\ker[\mathbf{i}]=\{\{k\in S\mid i_{k}=v\}\neq\emptyset\mid v\in V\}\). Furthermore, for any partition \(\pi\in P(S)\), the condition \(\pi\leq\ker[\mathbf{i}]\) holds if and only if \(\mathbf{i}\) is constant when restricted to each block of \(\pi\), i.e., \(i_{k}=i_{\ell}\) whenever \(k,\ell\in B\) and \(B\in\pi\).
_Example 2.7_.: The function \(\mathbf{i}:[6]\to[5]\) given by \((i_{1},i_{2},i_{3},i_{4},i_{5},i_{6})=(4,1,3,4,1,4)\) has kernel \(\ker[\mathbf{i}]=\{\{1,4,6\},\{2,5\},\{3\}\}\). The function \(\mathbf{j}:[6]\to[5]\) given by \((j_{1},j_{2},j_{3},j_{4},j_{5},j_{6})=(5,1,5,4,1,4)\) has kernel \(\ker[\mathbf{j}]=\{\{1,3\},\{2,5\},\{4,6\}\}\).
**Definition 2.8**.: The _kernel_ of a function \(\mathbf{i}:S\to V\)_subordinated_ to a digraph \(G=(V,E)\) is the partition of \(S\) determined by the equivalence relation \(k\sim k^{\prime}\) if and only if \(i_{k}=i_{k^{\prime}}\) and \((i_{\ell},i_{k})\) is an edge of \(G\) whenever \(i_{k}\neq i_{\ell}\) and either \(k<\ell<k^{\prime}\) or \(k^{\prime}<\ell<k\). We denote this partition by \(\ker_{G}[\mathbf{i}]\).
_Remark 2.9_.: The partition \(\ker_{G}[\mathbf{i}]\) is a refinement of the partition \(\ker[\mathbf{i}]\). Moreover, the second condition in the equivalence relation defining \(\ker_{G}[\mathbf{i}]\) only concerns \(G_{\mathbf{i}}\), the restriction of \(G\) to the vertices \(i_{1},i_{2},\ldots,i_{m}\), and not the whole graph \(G\). Hence, if \(G_{\mathbf{i}}\) is the complete graph, the second condition is immediately satisfied and \(\ker_{G}[\mathbf{i}]\) is defined only by the relation \(k\sim k^{\prime}\) whenever \(i_{k}=i_{k^{\prime}}\), which is exactly the definition of \(\ker[\mathbf{i}]\), yielding \(\ker_{G}[\mathbf{i}]=\ker[\mathbf{i}]\) in this case. Therefore, \(\ker_{G}[\mathbf{i}]\) is not only a refinement but also a generalization of \(\ker[\mathbf{i}]\).
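Since both kernels are central to everything that follows, we include a small self-contained sketch computing them; the digraph is passed as a set of ordered pairs, and the blocks are recovered with a union-find structure (the relation of Definition 2.8 is an equivalence relation, so this is exact).

```python
def ker(i):
    """ker[i] (Definition 2.5): group the positions 1..m by their value."""
    blocks = {}
    for k, v in enumerate(i, start=1):
        blocks.setdefault(v, []).append(k)
    return sorted(blocks.values())

def ker_G(i, edges):
    """ker_G[i] (Definition 2.8) for a digraph given by its edge set."""
    m = len(i)
    def related(k, kp):  # 1-based positions, k < kp
        if i[k - 1] != i[kp - 1]:
            return False
        # every strictly intermediate position l with a different label must
        # satisfy (i_l, i_k) in E
        return all((i[l - 1], i[k - 1]) in edges
                   for l in range(k + 1, kp) if i[l - 1] != i[k - 1])
    parent = list(range(m + 1))  # union-find over positions
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for k in range(1, m + 1):
        for kp in range(k + 1, m + 1):
            if related(k, kp):
                parent[find(kp)] = find(k)
    blocks = {}
    for k in range(1, m + 1):
        blocks.setdefault(find(k), []).append(k)
    return sorted(blocks.values())

i = (4, 1, 3, 4, 1, 4)                    # the function i of Example 2.7
print(ker(i))                             # [[1, 4, 6], [2, 5], [3]]
E = {(1, 4), (4, 1), (3, 4), (3, 1)}      # a digraph containing G_{pi(i)}
print(ker_G(i, E))                        # same blocks as ker(i), cf. Lemma 2.13
print(ker_G(i, E - {(3, 1)}))             # [[1, 4, 6], [2], [3], [5]]
```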
We have given a sufficient but not necessary condition for \(\ker_{G}[\mathbf{i}]=\ker[\mathbf{i}]\) to hold in Remark 2.9. This equality plays an important role in our analysis of BMT random variables, so it will be convenient to establish an equivalent condition to it in terms of digraphs. To this end, we introduce the following.
**Definition 2.10**.: The _nesting-crossing graph_ of a partition \(\pi\in P(m)\) is the digraph \(G_{\pi}=(V_{\pi},E_{\pi})\) with vertex set \(V_{\pi}=\pi\) and edge set \(E_{\pi}=\{(B,C)\in\pi\times\pi:B\neq C\) and either \(B\) is nested inside \(C\) or \(B\) and \(C\) cross each other\(\}\). Additionally, given a function \(\mathbf{i}:[m]\to[N]\) with \(\ker[\mathbf{i}]=\pi\), we let \(G_{\pi(\mathbf{i})}\) denote the graph obtained from \(G_{\pi}\) after relabeling each vertex \(W\in V_{\pi}\) as \(i_{k}\) with \(k\in W\).
_Remark 2.11_.: The edge set \(E_{\pi}\) can be equivalently defined as the set of all ordered pairs \((B,C)\in\pi\times\pi\) with \(B\neq C\) such that there exists \(\ell\in B\) with \(\min C<\ell<\max C\). The latter is also equivalent to the existence of elements \(k,k^{\prime}\in C\) and \(\ell\in B\) with \(k<\ell<k^{\prime}\).
_Example 2.12_.: Consider \(\pi=\{\{1,5\},\{2,3,7\},\{4\},\{6\}\}\) and take \(B_{1}=\{1,5\}\), \(B_{2}=\{2,3,7\}\), \(B_{3}=\{4\}\), and \(B_{4}=\{6\}\). The nesting-crossing graph \(G_{\pi}\) has then vertex set \(V_{\pi}=\{B_{1},B_{2},B_{3},B_{4}\}\) and edge set \(E_{\pi}=\{(B_{1},B_{2}),(B_{2},B_{1}),(B_{3},B_{1}),(B_{3},B_{2}),(B_{4},B_{2})\}\). Moreover, the function \(\mathbf{i}=(1,8,8,4,1,5,8)\) satisfies \(\ker[\mathbf{i}]=\pi\), so the graph \(G_{\pi(\mathbf{i})}\) has vertex set \(V_{\pi(\mathbf{i})}=\{1,4,5,8\}\) and edge set \(E_{\pi(\mathbf{i})}=\{(1,8),(8,1),(4,1),(4,8),(5,8)\}\). See Figure 3.
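A short sketch building \(E_{\pi}\) via the equivalent condition of Remark 2.11 (blocks are indexed from \(0\) here, so \(B_{1},\ldots,B_{4}\) appear as \(0,\ldots,3\)):

```python
def nesting_crossing_edges(pi):
    """E_pi via Remark 2.11: (B, C) is an edge iff some l in B lies strictly
    between min C and max C."""
    edges = set()
    for bi, B in enumerate(pi):
        for ci, C in enumerate(pi):
            if bi != ci and any(min(C) < l < max(C) for l in B):
                edges.add((bi, ci))
    return edges

pi = [{1, 5}, {2, 3, 7}, {4}, {6}]  # the partition of Example 2.12
print(nesting_crossing_edges(pi))
# {(0, 1), (1, 0), (2, 0), (2, 1), (3, 1)}, i.e., the edges
# (B1,B2), (B2,B1), (B3,B1), (B3,B2), (B4,B2)
```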
**Lemma 2.13**.: _Let \(\mathbf{i}:[m]\to V\) be a function with \(\ker[\mathbf{i}]=\pi\) for some partition \(\pi\in P(m)\). Then for any digraph \(G=(V,E)\) the equality \(\ker_{G}[\mathbf{i}]=\ker[\mathbf{i}]\) holds if and only if \(G_{\pi(\mathbf{i})}\) is a subgraph of \(G\)._
Since the vertex set of \(G_{\pi(\mathbf{i})}\) is always a subset of the vertex set of \(G\), and the definition of \(\ker_{G}[\mathbf{i}]\) and \(\ker[\mathbf{i}]\) implies that the equality \(\ker_{G}[\mathbf{i}]=\ker[\mathbf{i}]\) holds if and only if \((i_{\ell},i_{k})\) is an edge of \(G\) whenever \(k<\ell<k^{\prime}\), \(i_{k}\neq i_{\ell}\), and \(i_{k}=i_{k^{\prime}}\), the previous lemma is an immediate consequence of the next proposition.
**Proposition 2.14**.: _Suppose \(\pi\) is a partition in \(P(m)\). Then, for any function \(\mathbf{i}:[m]\to[N]\) with \(\ker[\mathbf{i}]=\pi\), the edge set \(E_{\pi(\mathbf{i})}\) of the graph \(G_{\pi(\mathbf{i})}\) satisfies_
\[E_{\pi(\mathbf{i})}=\left\{(i_{\ell},i_{k})\;|\;\;i_{k}\neq i_{\ell}\text{ and there exists }k^{\prime}\in[m]\text{ with }k<\ell<k^{\prime},\ i_{k}=i_{k^{\prime}}\right\}.\]
Proof.: Put \(\vec{E}=\{(i_{\ell},i_{k})\;|\;\;i_{k}\neq i_{\ell}\text{ and there exists }k^{\prime}\in[m]\text{ with }k<\ell<k^{\prime},\ i_{k}=i_{k^{\prime}}\}\). Suppose first \((i_{\ell},i_{k})\) is an edge in \(E_{\pi(\mathbf{i})}\) and let \(C\) and \(B\) be the blocks of \(\pi\) with \(\ell\in C\) and \(k\in B\). Since \(\ker[\mathbf{i}]=\pi\) and \((i_{\ell},i_{k})\in E_{\pi(\mathbf{i})}\), we must have \(i_{k}\neq i_{\ell}\). Moreover, the pair \((C,B)\) must be an edge in \(E_{\pi}\), and hence either \(C\) is nested inside \(B\) or \(C\) and \(B\) cross each other. Now, if \(C\) is nested inside \(B\), then we can take \(k^{\prime}=\min B\) and \(k^{\prime\prime}=\max B\) to get \((i_{\ell},i_{k})=(i_{\ell},i_{k^{\prime}})\), \(i_{k^{\prime}}\neq i_{\ell}\), \(k^{\prime}<\ell<k^{\prime\prime}\), and \(i_{k}=i_{k^{\prime}}\). On the other hand, if \(C\) and \(B\) cross each other, there must exist \(k^{\prime},k^{\prime\prime}\in B\) and \(\ell\in C\) with \(k^{\prime}<\ell<k^{\prime\prime}\); additionally, we get \((i_{\ell},i_{k})=(i_{\ell},i_{k^{\prime}})\), \(i_{k^{\prime}}\neq i_{\ell}\), and \(i_{k}=i_{k^{\prime}}\). In any case, we obtain that \((i_{\ell},i_{k})\) belongs to \(\vec{E}\).
Suppose now \((i_{\ell},i_{k})\) belongs to \(\vec{E}\) and let \(k^{\prime}\in[m]\) with \(k<\ell<k^{\prime}\) and \(i_{k}=i_{k^{\prime}}\). Take \(C\) and \(B\) as above. Since \(\ker[\mathbf{i}]=\pi\), we have \(k^{\prime}\) belongs to \(B\); moreover, \(i_{k}\neq i_{\ell}\) implies the blocks \(B\) and \(C\) are distinct. And hence, either \(C\) is nested inside \(B\) or \(C\) and \(B\) cross each other due to \(k<\ell<k^{\prime}\). In any case, we obtain \((C,B)\) is an edge in \(E_{\pi}\), and therefore \((i_{\ell},i_{k})\) belongs to \(E_{\pi(\mathbf{i})}\) since \(i_{\ell}\) and \(i_{k}\) are the replacements of \(C\) and \(B\), respectively, in the construction of \(G_{\pi(\mathbf{i})}\) from \(G_{\pi}\).
BMT Independence
This is one of the main sections of the paper. We first recall some notions from non-commutative probability, in particular the notions of Boolean, monotone, and tensor independence. We then introduce the notion of BMT independence, which provides a framework that allows for arbitrary mixtures of Boolean, monotone, and tensor independence and, consequently, generalizes the notion of BM independence.
### Notions of Independence
**Definition 3.1**.: A _non-commutative probability space_ is a pair \((\mathcal{A},\varphi)\) where \(\mathcal{A}\) is a complex algebra with multiplicative identity \(\mathbf{1}_{\mathcal{A}}\) and \(\varphi:\mathcal{A}\rightarrow\mathbb{C}\) is a linear functional so that \(\varphi(\mathbf{1}_{\mathcal{A}})=1\). The elements of \(\mathcal{A}\) are called _random variables_.
Suppose \(a\in\mathcal{A}\) is a random variable in a non-commutative probability space \((\mathcal{A},\varphi)\). The value \(\varphi(a^{k})\) is called the _k-th moment_ of \(a\). Moreover, the sequence \((\varphi(a^{k}))_{k=1}^{\infty}\) of all moments of \(a\) is called the _(algebraic) distribution_ of \(a\). We say that a probability measure \(\mu\) on the real line \(\mathbb{R}\) is the _analytical distribution_ of \(a\) if for every integer \(k\geq 1\) we have
\[\varphi(a^{k})=\int_{-\infty}^{+\infty}t^{k}d\mu(t).\]
**Definition 3.2**.: Let \((\mathcal{A},\varphi)\) be a non-commutative probability space. An infinite sequence \(a_{1},a_{2},a_{3},\ldots\) contained in \(\mathcal{A}\) is said to _converge in moments_ to a random variable \(a\in\mathcal{A}\) (resp., to a probability measure \(\mu\) on \(\mathbb{R}\)) if for every integer \(k\geq 1\) we have
\[\lim_{N\rightarrow\infty}\varphi(a_{N}^{k})=\varphi(a^{k})\quad\left(\text{ resp.},\ =\int_{-\infty}^{+\infty}t^{k}d\mu(t)\right).\]
For any sequence of random variables \(a_{1},a_{2},\ldots,a_{m}\in\mathcal{A}\) and any set \(B=\{k_{1}<k_{2}<\cdots<k_{r}\}\subset[m]\), we let
\[(a_{k})|_{B}=a_{k_{1}}a_{k_{2}}\cdots a_{k_{r}}=\prod_{k\in B}^{\rightarrow}a _{k}.\]
Suppose \((\mathcal{A}_{i})_{i\in I}\) is a family of sub-algebras of \(\mathcal{A}\) and \(a_{1}\in\mathcal{A}_{i_{1}},\ldots,a_{m}\in\mathcal{A}_{i_{m}}\) for some indexes \(i_{1},\ldots,i_{m}\in I\). We say \(a_{1}a_{2}\cdots a_{m}\) is _an alternating product of elements of_\((\mathcal{A}_{i})_{i\in I}\) if the indexes \(i_{k}\) satisfy that \(i_{1}\neq i_{2}\), \(i_{2}\neq i_{3}\), \(\ldots\), \(i_{m-1}\neq i_{m}\).
**Definition 3.3**.: Let \((\mathcal{A},\varphi)\) be a non-commutative probability space. A family \((\mathcal{A}_{i})_{i\in I}\) of subalgebras of \(\mathcal{A}\) is said to be
* _boolean independent_ if for any alternating product \(a_{1}\cdots a_{m}\) of elements of \((\mathcal{A}_{i})_{i\in I}\) we have \[\varphi(a_{1}\cdots a_{m})=\varphi(a_{1})\cdots\varphi(a_{m})\]
* _monotone independent_ if \(I\) has a linear order \(<\) and for any alternating product \(a_{1}\cdots a_{m}\) of elements of \((\mathcal{A}_{i})_{i\in I}\) with \(a_{j}\in\mathcal{A}_{i_{j}}\) we have
* (M.1) \(\varphi(a_{1}\cdots a_{m})=\varphi(a_{k})\varphi(a_{1}\cdots a_{k-1}a_{k+1}\cdots a_{m})\) if \(i_{k-1}<i_{k}>i_{k+1}\) for some \(k\in[2,m-1]\),
* (M.2) \(\varphi(a_{1}\cdots a_{m})=\prod_{\ell=1}^{m}\varphi(a_{\ell})\) if \(i_{1}>\cdots>i_{k-1}>i_{k}<i_{k+1}<\cdots<i_{m}\) for some \(k\in[m]\).
* _tensor independent_ if for any (not necessarily alternating) product \(a_{1}\cdots a_{m}\) of elements of \((\mathcal{A}_{i})_{i\in I}\) with \(a_{j}\in\mathcal{A}_{i_{j}}\) we have \[\varphi(a_{1}\cdots a_{m}) =\prod_{B\in\ker[\mathbf{i}]}\varphi\left(\prod_{k\in B}^{ \rightarrow}a_{k}\right)\] where \(\prod_{k\in B}^{\rightarrow}a_{k}:=a_{k_{1}}a_{k_{2}}\cdots a_{k_{r}}\) provided \(B=\{k_{1}<k_{2}<\cdots<k_{r}\}\).
### BMT independence
We now present the main definition of this paper, the notion of BMT independence, which relies upon \(\ker_{G}[\mathbf{i}]\), the kernel of a function subordinated to a digraph \(G\), see Definition 2.8.
**Definition 3.4**.: Let \((\mathcal{A},\varphi)\) be a non-commutative probability space. Suppose \((\mathcal{A}_{i})_{i\in I}\) is a family of sub-algebras of \(\mathcal{A}\) and \(G=(I,E)\) is a digraph on the set of indices \(I\). The family \((\mathcal{A}_{i})_{i\in I}\) is said to be _BMT independent_ with respect to the pair \((\varphi,G)\) if for every integer \(m\geq 1\) and variables \(a_{1}\in\mathcal{A}_{i_{1}},a_{2}\in\mathcal{A}_{i_{2}},\ldots,a_{m}\in \mathcal{A}_{i_{m}}\) we have
\[\varphi(a_{1}a_{2}\cdots a_{m}) =\prod_{B\in\ker_{G}[\mathfrak{i}]}\varphi[(a_{k})|_{B}].\]
The above digraph \(G=(I,E)\) gives the pair-wise independence relations between sub-algebras from Boolean, monotone, and tensor. For any two distinct sub-algebras \(\mathcal{A}_{i}\) and \(\mathcal{A}_{j}\), we have that \(\mathcal{A}_{i}\) and \(\mathcal{A}_{j}\) are Boolean independent if neither \((i,j)\) nor \((j,i)\) is an edge of \(G\), \(\mathcal{A}_{i}\) is monotone independent from \(\mathcal{A}_{j}\) if \((i,j)\) is an edge of \(G\) but \((j,i)\) is not, and \(\mathcal{A}_{i}\) and \(\mathcal{A}_{j}\) are tensor independent if both \((i,j)\) and \((j,i)\) are edges of \(G\). Thus, our notion of BMT independence has Boolean, monotone, and tensor independence as particular cases. This is proved next.
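To illustrate how the definition operates, the following sketch reuses the `ker_G` function sketched in Section 2.3 to compute the blocks over which a mixed moment factorizes; the chosen word and digraph are illustrative.

```python
# The word a1 a2 a1' a3 a2' with a_k in A_{i_k}, where A_1 and A_2 are tensor
# independent (both (1,2) and (2,1) are edges) and A_3 is boolean with respect
# to both (no edges touching 3). Uses ker_G from the sketch in Section 2.3.
i = (1, 2, 1, 3, 2)
edges = {(1, 2), (2, 1)}
print(ker_G(i, edges))  # [[1, 3], [2], [4], [5]]
# Hence phi(a1 a2 a1' a3 a2') = phi(a1 a1') phi(a2) phi(a3) phi(a2'): the
# tensor pair a1, a1' regroups across a2, while the boolean a3 severs the
# would-be block {2, 5}, which would require (3, 2) to be an edge.
```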
**Proposition 3.5**.: _Let \((\mathcal{A},\varphi)\) be a non-commutative probability space. Suppose \((\mathcal{A}_{i})_{i\in I}\) is a family of sub-algebras that is BMT independent with respect to a digraph \(G=(I,E)\). Then, we have_
* _the algebras_ \((\mathcal{A}_{i})_{i\in I}\) _are tensor independent if_ \(G\) _is the complete graph,_
* _the algebras_ \((\mathcal{A}_{i})_{i\in I}\) _are boolean independent if_ \(G\) _is the empty graph, and_
* _the algebras_ \((\mathcal{A}_{i})_{i\in I}\) _are monotone independent if_ \(I\) _has a total order_ \(<\) _and_ \(G\) _is the digraph of_ \(<\)_, i.e.,_ \((i,j)\) _is an edge of_ \(G\) _if and only if_ \(j<i\)_._
Proof.: We will prove _(i)_, _(ii)_, and _(iii)_ separately. To this end, let us assume \(a_{1}\cdots a_{m}\) is a (not necessarily alternating) product of elements of \(({\cal A}_{i})_{i\in I}\) with \(a_{k}\in{\cal A}_{i_{k}}\).
**Proof of _(i)_.** If \(G\) is the complete graph, then \(\ker_{G}[\mathbf{i}]=\ker[\mathbf{i}]\) for any \(\mathbf{i}=(i_{1},\ldots,i_{m})\), see Remark 2.9. Thus, by Definitions 3.3 and 3.4, the algebras \((\mathcal{A}_{i})_{i\in I}\) are tensor independent since
\[\varphi(a_{1}\cdots a_{m}) =\prod_{B\in\ker_{G}[\mathbf{i}]}\varphi((a_{k})|_{B}) = \prod_{B\in\ker[\mathbf{i}]}\varphi((a_{k})|_{B}).\]
**Proof of _(ii)_.** Suppose now \(G\) is the empty graph. Note that if \(a_{1}\cdots a_{m}\) is an alternating product, then \(\ker_{G}[\mathbf{i}]=\{\{1\},\{2\},\ldots,\{m\}\}\). Indeed, \(a_{1}\cdots a_{m}\) is alternating only if \(i_{1}\neq i_{2}\), \(i_{2}\neq i_{3}\), \(\ldots\), \(i_{m-1}\neq i_{m}\), and hence, for \(k<k^{\prime}\), \(i_{k}=i_{k^{\prime}}\) only if \(k+1<k^{\prime}\). But then, since \(G\) contains no edges, the second condition defining \(\ker_{G}[\mathbf{i}]\) is never satisfied when \(k+1<k^{\prime}\), see Definition 2.8. And therefore, \(\ker_{G}[\mathbf{i}]\) contains only singletons \(\{k\}\) as equivalence classes. Thus, Definitions 3.3 and 3.4 imply the algebras \((\mathcal{A}_{i})_{i\in I}\) are boolean independent since
\[\varphi(a_{1}\cdots a_{m}) =\prod_{B\in\ker_{G}[\mathbf{i}]}\varphi((a_{k})|_{B}) = \varphi(a_{1})\varphi(a_{2})\cdots\varphi(a_{m}).\]
**Proof of _(iii)_.** Suppose that \(\leq\) is a total order on the set \(I\) with associated digraph \(G=(I,E)\) where \(E=\{(i,j)\in I\times I\mid j<i\}\). Assume \(a_{1}\cdots a_{m}\) is an alternating product. We will show that M.1 and M.2 from Definition 3.3 hold.
For M.1, we have \(i_{k-1}<i_{k}>i_{k+1}\) for some \(k\in[2,m-1]\). Then, neither \((i_{k-1},i_{k})\) nor \((i_{k+1},i_{k})\) is an edge of \(G\), and hence \(\{k\}\) is a singleton in \(\ker_{G}[\mathbf{i}]\), see Definition 2.8. Thus, by BMT independence, we obtain
\[\varphi(a_{1}a_{2}\cdots a_{m}) = \varphi(a_{k})\prod_{\begin{subarray}{c}B\in\ker_{G}[\mathbf{i}]\\ B\neq\{k\}\end{subarray}}\varphi((a_{\ell})|_{B}).\]
Let \(\mathbf{i}^{\prime}=(i_{1},\ldots,i_{k-1},i_{k+1},\ldots,i_{m})\). It is enough to prove that \(B\in\ker_{G}[\mathbf{i}]\) and \(B\neq\{k\}\) if and only if \(B\in\ker_{G}[\mathbf{i}^{\prime}]\) since BMT independence also gives
\[\varphi(a_{1}\cdots a_{k-1}a_{k+1}\cdots a_{m}) =\prod_{B\in\ker_{G}[\mathbf{i}^{\prime}]}\varphi((a_{ \ell})|_{B}).\]
Take \(\pi=\ker_{G}[\mathbf{i}]\) and \(\theta=\ker_{G}[\mathbf{i}^{\prime}]\). Due to Definition 2.8, we need to show that \(r\sim_{\pi}r^{\prime}\) and \(r\sim_{\theta}r^{\prime}\) are equivalent for any \(r,r^{\prime}\in[m]\) with \(r,r^{\prime}\neq k\). However, \(r\sim_{\pi}r^{\prime}\) implies \(r\sim_{\theta}r^{\prime}\) already, so just the other implication is needed.
Suppose \(r\sim_{\theta}r^{\prime}\) with \(r<r^{\prime}\) and \(r,r^{\prime}\neq k\). Thus, we obtain \(i_{r}=i_{r^{\prime}}\) and \(i_{r}<i_{\ell}\) whenever \(r<\ell<r^{\prime}\), \(i_{\ell}\neq i_{r}\), and \(\ell\neq k\). To get \(r\sim_{\pi}r^{\prime}\), the restriction \(\ell\neq k\) in the latter condition needs to be lifted. This is immediate if \(r^{\prime}<k\) or \(k<r\) or \(i_{k}=i_{r}\). Thus, without loss of generality, we can assume \(r<k<r^{\prime}\) and \(i_{k}\neq i_{r}\). Now, if \(i_{r}=i_{k-1}\), then \(i_{r}<i_{k}\) by hypothesis. On the other hand, if \(i_{r}\neq i_{k-1}\), we must have \(r<k-1<r^{\prime}\), and hence \(i_{r}<i_{k-1}<i_{k}\). This shows the restriction \(\ell\neq k\) is not needed, and hence \(r\sim_{\pi}r^{\prime}\).
For M.2, we have \(i_{1}>\cdots>i_{k-1}>i_{k}<i_{k+1}<\cdots<i_{m}\) for some \(k\in[m]\). Notice \(\pi=\ker_{G}[\boldsymbol{i}]\) contains only singletons. Indeed, if \(r\neq r^{\prime}\) and either \(1\leq r,r^{\prime}\leq k\) or \(k\leq r,r^{\prime}\leq m\), then either \(i_{r}<i_{r^{\prime}}\) or \(i_{r^{\prime}}<i_{r}\), and hence \(r\nsim_{\pi}r^{\prime}\) since \(i_{r}\neq i_{r^{\prime}}\). Thus, for any \(r,r^{\prime}\in[m]\), we have that \(r\sim_{\pi}r^{\prime}\) only if \(r=r^{\prime}\), or \(r<k<r^{\prime}\) and \(i_{r}=i_{r^{\prime}}\). However, in the latter case the index \(\ell=k\) lies strictly between \(r\) and \(r^{\prime}\) with \(i_{k}<i_{r}\), so \((i_{k},i_{r})\) is not an edge of \(G\) and hence \(r\nsim_{\pi}r^{\prime}\). Therefore, \(\ker_{G}[\boldsymbol{i}]=\{\{1\},\{2\},\ldots,\{m\}\}\), and BMT independence gives
\[\varphi(a_{1}a_{2}\cdots a_{m})=\varphi(a_{1})\varphi(a_{2})\cdots\varphi(a_ {m}).\]
### Weak BM independence
A main motivation for this paper is the notion of _BM independence_, introduced and investigated by J. Wysoczanski in [24, 25] as a generalization of monotone and boolean independence.
**Definition 3.6**.: Let \((\mathcal{A},\varphi)\) be a non-commutative probability space. A family \((\mathcal{A}_{i})_{i\in I}\) of subalgebras of \(\mathcal{A}\) is said to be _bm-independent_ if the set \(I\) has a partial order \(\preceq\) and for any alternating product \(a_{1}\cdots a_{n}\) of elements of \((\mathcal{A}_{i})_{i\in I}\) with \(a_{k}\in\mathcal{A}_{i_{k}}\) the following two hold:
* (BM1) if \(i_{k-1}\prec i_{k}\succ i_{k+1}\) or \(i_{k-1}\nsim i_{k}\succ i_{k+1}\) or \(i_{k-1}\prec i_{k}\nsim i_{k+1}\) for some \(k\in[2,m-1]\), then \[a_{1}\cdots a_{m}=\varphi(a_{k})a_{1}\cdots a_{k-1}a_{k+1}\cdots a_{m}.\]
* (BM2) if \(i_{1}\succ\cdots\succ i_{k}\nsim\cdots\nsim i_{k+\ell}\prec\cdots\prec i_{m}\) for some \(k\in[m]\) and \(\ell\in[0,m-k]\), then \[\varphi(a_{1}\cdots a_{m})=\prod_{j=1}^{m}\varphi(a_{j}).\]
However, as we will show next, our notion of BMT independence contains a weaker version of BM independence as a particular case.
**Definition 3.7**.: The family of sub-algebras \((\mathcal{A}_{i})_{i\in I}\) from the previous definition is called _weak bm-independent_ if any alternating product \(a_{1}\cdots a_{m}\) satisfies (**BM2**) and **(weak BM1)**, namely, if \(i_{k-1}\prec i_{k}\succ i_{k+1}\) or \(i_{k-1}\nsim i_{k}\succ i_{k+1}\) or \(i_{k-1}\prec i_{k}\nsim i_{k+1}\) for some \(k\in[2,m-1]\), then
\[\varphi(a_{1}\cdots a_{m})=\varphi(a_{k})\varphi(a_{1}\cdots a_{k-1}a_{k+1} \cdots a_{m})\]
It follows from the linearity of \(\varphi\) that BM independence implies weak BM independence.
_Remark 3.8_.: Note that if \(G=(I,E)\) is the digraph of a partial order \((I,\preceq)\), then \(i\prec j\) if and only if \((j,i)\in E\) and \((i,j)\notin E\), and \(i\nsim j\) if and only if \((j,i)\notin E\) and \((i,j)\notin E\).
**Theorem 3.9**.: _If \(G=(I,E)\) is the digraph of a partial order \((I,\preceq)\), then BMT independence and weak BM independence coincide._
Proof.: Let \((I,\preceq)\) be a partial order with digraph \(G=(I,E)\) where \(E=\{(i,j)\in I\times I\mid j\prec i\}\). Suppose \((\mathcal{A},\varphi)\) is a non-commutative probability space and \((\mathcal{A}_{i})_{i\in I}\) is a family of sub-algebras of \(\mathcal{A}\). Consider an arbitrary alternating product \(a_{1}\cdots a_{m}\) of elements of \((\mathcal{A}_{i})_{i\in I}\) with \(a_{k}\in\mathcal{A}_{i_{k}}\) and take \(\boldsymbol{i}=(i_{1},\ldots,i_{m})\).
Assume first that \((\mathcal{A}_{i})_{i\in I}\) are BMT independent with respect to \((\varphi,G)\). Suppose \(i_{k-1}\prec i_{k}\succ i_{k+1}\) or \(i_{k-1}\nsim i_{k}\succ i_{k+1}\) or \(i_{k-1}\prec i_{k}\nsim i_{k+1}\). To show that weak BM1 holds, one follows a similar argument as in _(iii)_ from Proposition 3.5 to conclude it is enough to prove that \(r\sim_{\theta}r^{\prime}\) implies \(r\sim_{\pi}r^{\prime}\) for any \(r,r^{\prime}\in[m]\) with \(r,r^{\prime}\neq k\), where \(\pi=\ker_{G}[\boldsymbol{i}]\), \(\theta=\ker_{G}[\boldsymbol{i}^{\prime}]\), and \(\boldsymbol{i}^{\prime}=(i_{1},\ldots,i_{k-1},i_{k+1},\ldots,i_{m})\).
Suppose \(r\sim_{\theta}r^{\prime}\) with \(r<r^{\prime}\) and \(r,r^{\prime}\neq k\). Thus, we obtain \(i_{r}=i_{r^{\prime}}\) and \(i_{r}\prec i_{\ell}\) whenever \(r<\ell<r^{\prime}\), \(i_{\ell}\neq i_{r}\), and \(\ell\neq k\), see Definition 2.8 and Remark 3.8. Similarly to _(iii)_ from Proposition 3.5, we just need to lift the restriction \(\ell\neq k\) when \(r<k<r^{\prime}\) and \(i_{k}\neq i_{r}\). Now, if \(i_{r}=i_{k-1}\) or \(i_{r^{\prime}}=i_{k+1}\), then \(i_{r}\prec i_{k}\) by hypothesis. On the other hand, if \(i_{r}\neq i_{k-1}\) and \(i_{r^{\prime}}\neq i_{k+1}\), we must have \(r<k-1<k+1<r^{\prime}\), and hence \(i_{r}\prec i_{k-1}\) and \(i_{r}\prec i_{k+1}\), yielding \(i_{r}\prec i_{k}\) by transitivity. Thus, weak BM1 is satisfied.
Suppose \(i_{1}\succ\cdots\succ i_{k}\nsim\cdots\nsim i_{k+\ell}\prec\cdots\prec i_{m}\). To show that BM2 holds, we will prove \(\ker_{G}[\boldsymbol{i}]\) contains only singletons. Put \(\pi=\ker_{G}[\boldsymbol{i}]\) and take \(r,r^{\prime}\in[m]\) with \(r\sim_{\pi}r^{\prime}\) and \(r\leq r^{\prime}\). Thus, we have \(i_{r}=i_{r^{\prime}}\) and \(i_{r}\prec i_{\ell^{\prime}}\) whenever \(r<\ell^{\prime}<r^{\prime}\) and \(i_{\ell^{\prime}}\neq i_{r}\). We will show that \(r\) and \(r^{\prime}\) must be equal by contradiction. Suppose \(r<r^{\prime}\). Note that \(r+2\leq r^{\prime}\) since \(a_{1}\cdots a_{m}\) is alternating. Now, if \(r<k+\ell\), then \(i_{r+1}\neq i_{r}\) and \(r<r+1<r^{\prime}\) with \(i_{r}\not\prec i_{r+1}\). Hence, we obtain \(k+\ell\leq r\). On the other hand, if \(k<r^{\prime}\), then \(i_{r^{\prime}-1}\neq i_{r}\) and \(r<r^{\prime}-1<r^{\prime}\) with \(i_{r}\not\prec i_{r^{\prime}-1}\), yielding \(r^{\prime}\leq k\) and contradicting \(k+\ell\leq r<r^{\prime}\). Therefore, we must have \(r=r^{\prime}\), and so \(\ker_{G}[\boldsymbol{i}]=\{\{1\},\ldots,\{m\}\}\). It follows from BMT independence that \(\varphi(a_{1}a_{2}\cdots a_{m})=\varphi(a_{1})\varphi(a_{2})\cdots\varphi(a_{m})\). This proves \((\mathcal{A}_{i})_{i\in I}\) are weak BM independent.
Assume now that \((\mathcal{A}_{i})_{i\in I}\) are weak BM independent. Notice that in the first part we actually proved the following two properties: (1) \(\ker_{G}[\boldsymbol{i}]=\{\{k\}\}\cup\ker_{G}[\boldsymbol{i}^{\prime}]\) if \(i_{k-1}\prec i_{k}\succ i_{k+1}\) or \(i_{k-1}\nsim i_{k}\succ i_{k+1}\) or \(i_{k-1}\prec i_{k}\nsim i_{k+1}\) for some \(k\in[2,m-1]\), and (2) \(\ker_{G}[\boldsymbol{i}]=\{\{1\},\ldots,\{m\}\}\) if \(i_{1}\succ\cdots\succ i_{k}\nsim\cdots\nsim i_{k+\ell}\prec\cdots\prec i_{m}\) for some \(k\in[m]\) and \(\ell\in[0,m-k]\). One can verify straightforwardly that the negation of the hypothesis in (1) is that for every \(k\in[2,m-1]\) one has \(i_{k-1}\succ i_{k}\succ i_{k+1}\) or \(i_{k-1}\succ i_{k}\nsim i_{k+1}\) or \(i_{k-1}\succ i_{k}\prec i_{k+1}\) or \(i_{k-1}\nsim i_{k}\nsim i_{k+1}\) or \(i_{k-1}\nsim i_{k}\prec i_{k+1}\) or \(i_{k-1}\prec i_{k}\prec i_{k+1}\). It then follows that any alternating sequence \(i_{1},\ldots,i_{m}\) satisfies either (1) or (2). Therefore, the BMT independence of \((\mathcal{A}_{i})_{i\in I}\) with respect to \((\varphi,G)\) is obtained by induction on the length \(n\geq 1\) of the alternating product: one shows \(\varphi(a_{1}\cdots a_{n})=\prod_{B\in\ker_{G}[\boldsymbol{i}]}\varphi((a_{k})|_{B})\) for any alternating product \(a_{1}\cdots a_{n}\) by applying weak BM1 or BM2 accordingly.
### Consistency
To conclude this section, we show that BMT independence is consistent in the following way. If two families of algebras are BMT independent and each element in the first family is tensor (resp., monotone, Boolean) independent from any element in the second family, then the whole families are tensor (resp., monotone, Boolean) independent, regardless of the independence
relations within each family.
To illustrate this, let us consider algebras \(\mathcal{A}_{1},\mathcal{A}_{2},\mathcal{A}_{3}\) that are BMT independent with respect to a digraph \(G\) containing exactly two edges, going from \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\) to \(\mathcal{A}_{3}\), and possibly more edges connecting \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\).
Thus \(\mathcal{A}_{1}\) is monotone independent from \(\mathcal{A}_{3}\) and \(\mathcal{A}_{2}\) is also monotone independent from \(\mathcal{A}_{3}\). One can then ask what the independence relation between \(\mathcal{A}_{3}\) and \(\langle\mathcal{A}_{1},\mathcal{A}_{2}\rangle\), the algebra generated by \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\), should be. Consistency for BMT independence means that \(\mathcal{A}_{3}\) and \(\langle\mathcal{A}_{1},\mathcal{A}_{2}\rangle\) must be monotone independent no matter what the relation between \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\) is.
Before we prove that BMT independence is consistent, we need the following two technical lemmas.
**Lemma 3.10**.: _Suppose \((\mathcal{A}_{i})_{i\in I}\) is a family of sub-algebras BMT independent with respect to a digraph \(G=(I,E)\). Let \(a_{1}a_{2}\cdots a_{m}\) be a product (not necessarily alternating) of elements of \((\mathcal{A}_{i})_{i\in I}\) with \(a_{k}\in\mathcal{A}_{i_{k}}\) and \(\boldsymbol{i}=(i_{1},i_{2},\ldots,i_{m})\). If there exist \(1\leq\ell<\ell^{\prime}\leq m\) so that \(\ker_{G}[\boldsymbol{i}|_{S}]\) and \(\ker_{G}[\boldsymbol{i}|_{S^{c}}]\) are contained in \(\ker_{G}[\boldsymbol{i}]\) with \(S=\{\ell,\ell+1,\cdots,\ell^{\prime}-1\}\), then_
\[\varphi(a_{1}\cdots a_{m})=\varphi(a_{\ell}\cdots a_{\ell^{\prime}-1})\varphi (a_{1}\cdots a_{\ell-1}a_{\ell^{\prime}}\cdots a_{m})\]
Proof.: Since \(\ker_{G}[\boldsymbol{i}]\), \(\ker_{G}[\boldsymbol{i}|_{S}]\), and \(\ker_{G}[\boldsymbol{i}|_{S^{c}}]\) are partitions of \([m]\), \(S\), and \(S^{c}\), respectively, and \(S\cup S^{c}=[m]\), the assumption that both \(\ker_{G}[\boldsymbol{i}|_{S}]\) and \(\ker_{G}[\boldsymbol{i}|_{S^{c}}]\) are subsets of \(\ker_{G}[\boldsymbol{i}]\) implies the partition \(\ker_{G}[\boldsymbol{i}]\) is the disjoint union of \(\ker_{G}[\boldsymbol{i}|_{S}]\) and \(\ker_{G}[\boldsymbol{i}|_{S^{c}}]\). Therefore, the relation \(\varphi(a_{1}\cdots a_{m})=\varphi(a_{\ell}\cdots a_{\ell^{\prime}-1})\varphi (a_{1}\cdots a_{\ell-1}a_{\ell^{\prime}}\cdots a_{m})\) follows directly from the definition of BMT independence.
**Lemma 3.11**.: _Let \(G=(I,E)\) be a digraph. For any tuple \(\boldsymbol{i}=(i_{1},i_{2},\ldots,i_{m})\) with \(i_{k}\in I\) and any subset \(S\subset[m]\), we have \(\ker_{G}[\boldsymbol{i}|_{S}]\subset\ker_{G}[\boldsymbol{i}]\) if and only if \(k_{1}\sim_{\ker_{G}[\boldsymbol{i}]}k_{2}\) for any \(k_{1},k_{2}\in S\) with \(k_{1}\sim_{\ker_{G}[\boldsymbol{i}|_{S}]}k_{2}\) and \(k_{1}\nsim_{\ker_{G}[\boldsymbol{i}]}k_{2}\) for all \(k_{1}\in S\) and \(k_{2}\in[m]\setminus S\)._
Proof.: Put \(\pi=\ker_{G}[\boldsymbol{i}]\) and \(\pi_{S}=\ker_{G}[\boldsymbol{i}|_{S}]\). Suppose first \(k_{1}\sim_{\pi}k_{2}\) for any \(k_{1},k_{2}\in S\) with \(k_{1}\sim_{\pi_{S}}k_{2}\), and \(k_{1}\nsim_{\pi}k_{2}\) for \(k_{1}\in S\) and \(k_{2}\in[m]\setminus S\). Let \(V\) be an arbitrary block of \(\pi_{S}\). Take any \(k\in V\) and let \(W\) be the unique block of \(\pi\) such that \(k\in W\). We will show that \(V=W\). Note that \(V\subset W\) since for any \(k^{\prime}\in V\subset S\) we have \(k\sim_{\pi_{S}}k^{\prime}\), and hence \(k\sim_{\pi}k^{\prime}\) by hypothesis. Additionally, we have \(W\subset S\) since \(k\sim_{\pi}k^{\prime\prime}\) for any \(k^{\prime\prime}\in W\) and by hypothesis \(k_{1}\nsim_{\pi}k_{2}\) if \(k_{1}\in S\) and \(k_{2}\in[m]\setminus S\). Now, for any \(k^{\prime\prime}\in W\subset S\), we have \(k\sim_{\pi}k^{\prime\prime}\), and hence \(i_{k}=i_{k^{\prime\prime}}\) and \((i_{\ell},i_{k})\in E\) for every \(\ell\in[m]\) lying strictly between \(k\) and \(k^{\prime\prime}\); since \(S\subset[m]\), the same holds for every such \(\ell\in S\), so \(k\sim_{\pi_{S}}k^{\prime\prime}\) and \(k^{\prime\prime}\in V\). Thus, we get \(V=W\). Therefore, since \(V\) was arbitrary, we obtain \(\ker_{G}[\boldsymbol{i}|_{S}]\subset\ker_{G}[\boldsymbol{i}]\).
Suppose now \(\ker_{G}[\boldsymbol{i}|_{S}]\subset\ker_{G}[\boldsymbol{i}]\). Since \(\pi=\ker_{G}[\boldsymbol{i}]\) and \(\pi_{S}=\ker_{G}[\boldsymbol{i}|_{S}]\) are the partitions of \([m]\) and \(S\) determined by the equivalence relation \(k_{1}\sim k_{2}\) only if \(i_{k_{1}}=i_{k_{2}}\) and \((i_{\ell_{1}},i_{k_{1}})\in E\) whenever \(k_{1}<\ell_{1}<k_{2}\) with \(\ell_{1}\in[m]\) and \(\ell_{1}\in S\), respectively, we have that if \(k\nsim_{\pi_{S}}k^{\prime}\) for some \(k,k^{\prime}\in S\), then \(k\nsim_{\pi}k^{\prime}\) due to \(S\) being a subset of \([m]\). Thus, it only remains to show that \(k_{1}\nsim_{\pi}k_{2}\) for any \(k_{1}\in S\) and \(k_{2}\in[m]\setminus S\). Take any \(k\in S\) and let \(V\) be the unique block of \(\pi_{S}\) so that \(k\in V\). By hypothesis, \(V\) is also a block of \(\pi\), and hence we obtain \(k\nsim_{\pi}k^{\prime}\) for any \(k^{\prime}\in[m]\setminus V\), in particular, for any \(k^{\prime}\in[m]\setminus S\).
**Proposition 3.12**.: _Let \((\mathcal{A},\varphi)\) be a non-commutative probability space. Suppose \((\mathcal{A}_{i})_{i\in I}\) is a family of subalgebras BMT independent with respect to a digraph \(G=(I,E)\). If \(\{I_{j}:j\in J\}\) is a partition of \(I\) into non-empty pairwise disjoint subsets and \(\mathcal{B}_{j}=\operatorname{alg}(\mathcal{A}_{i}:i\in I_{j})\), then_
1. \((\mathcal{B}_{j})_{j\in J}\) _are tensor independent if_ \((i,i^{\prime})\in E\) _whenever_ \(i\in I_{j}\) _and_ \(i^{\prime}\in I_{j^{\prime}}\) _where_ \(j,j^{\prime}\in J\) _with_ \(j\neq j^{\prime}\);
2. \((\mathcal{B}_{j})_{j\in J}\) _are boolean independent if_ \((i,i^{\prime})\notin E\) _whenever_ \(i\in I_{j}\) _and_ \(i^{\prime}\in I_{j^{\prime}}\) _where_ \(j,j^{\prime}\in J\) _with_ \(j\neq j^{\prime}\);
3. \((\mathcal{B}_{j})_{j\in J}\) _are monotone independent if_ \(J\) _has a total order_ \(<\) _so that_ \((i,i^{\prime})\in E\) _and_ \((i^{\prime},i)\notin E\) _whenever_ \(i\in I_{j}\) _and_ \(i^{\prime}\in I_{j^{\prime}}\) _where_ \(j,j^{\prime}\in J\) _with_ \(j^{\prime}<j\).
Proof.: Let \(b_{1}b_{2}\cdots b_{n}\) be an alternating product of elements of \((\mathcal{B}_{j})_{j\in J}\) where each \(b_{r}\in\mathcal{B}_{j_{r}}\) is an alternating product of elements of \((\mathcal{A}_{i})_{i\in I_{j_{r}}}\). Hence, each \(b_{r}\) is of the form
\[b_{r}=a_{(m_{0}+\cdots+m_{r-1})+1}a_{(m_{0}+\cdots+m_{r-1})+2}\cdots a_{(m_{0} +\cdots+m_{r-1})+m_{r}}\]
with \(a_{k}\in\mathcal{A}_{i_{k}}\), \(i_{k}\in I_{j_{r}}\), and \(i_{k}\neq i_{k+1}\). Note that we have assumed \(m_{0}=0\). Take \(\ell_{k}=m_{0}+m_{1}+m_{2}+\cdots+m_{k}\) and \(m=\ell_{n}\). Thus, we have
\[\varphi(b_{1}b_{2}\cdots b_{n})\ =\ \varphi(a_{1}a_{2}\cdots a_{m}).\]
**Proof of _(i)_.** Suppose \((i,i^{\prime})\in E\) whenever \(i\in I_{j}\) and \(i^{\prime}\in I_{j^{\prime}}\) where \(j,j^{\prime}\in J\) with \(j\neq j^{\prime}\). Put \(\boldsymbol{j}=(j_{1},j_{2},\ldots,j_{n})\) and \(S_{U}=\cup_{r\in U}[\ell_{r-1}+1,\ell_{r}]\) for each \(U\in\ker[\boldsymbol{j}]\in\mathcal{P}(n)\). We will show that \(\pi_{U}=\ker_{G}[\boldsymbol{i}|_{S_{U}}]\) is a subset of \(\pi=\ker_{G}[\boldsymbol{i}]\) for each \(U\in\ker[\boldsymbol{j}]\).
Take any \(U\in\ker[\boldsymbol{j}]\). Suppose \(k_{1}\sim_{\pi_{U}}k_{2}\) with \(k_{1},k_{2}\in S_{U}\), \(k_{1}<k_{2}\), and let \(\ell\in[m]\) be so that \(k_{1}<\ell<k_{2}\) and \(i_{\ell}\neq i_{k_{1}}\). Let \(r_{1},t\in[n]\) with \(r_{1}\leq t\) be such that \(k_{1}\in[\ell_{r_{1}-1}+1,\ell_{r_{1}}]\) and \(\ell\in[\ell_{t-1}+1,\ell_{t}]\), so we have \(i_{k_{1}}\in I_{j_{r_{1}}}\) and \(i_{\ell}\in I_{j_{t}}\). Note that \(\ell\in S_{U}\) if and only if \(j_{t}=j_{r_{1}}\). Thus, if \(\ell\notin S_{U}\), then \((i_{\ell},i_{k_{1}})\in E\) by hypothesis since \(j_{t}\neq j_{r_{1}}\); on the other hand, if \(\ell\in S_{U}\), then \((i_{\ell},i_{k_{1}})\in E\) since \(k_{1}\sim_{\pi_{U}}k_{2}\). In any case, we obtain \(k_{1}\sim_{\pi}k_{2}\). Suppose now \(k_{1}\in S_{U}\) and \(k_{2}\in[m]\setminus S_{U}\). Let \(r_{1},r_{2}\in[n]\) be such that \(k_{1}\in[\ell_{r_{1}-1}+1,\ell_{r_{1}}]\) and \(k_{2}\in[\ell_{r_{2}-1}+1,\ell_{r_{2}}]\), so we have \(i_{k_{1}}\in I_{j_{r_{1}}}\) and \(i_{k_{2}}\in I_{j_{r_{2}}}\). Since \(k_{2}\notin S_{U}\), we have \(j_{r_{2}}\neq j_{r_{1}}\), and hence \(i_{k_{1}}\neq i_{k_{2}}\) and \(k_{1}\nsim_{\pi}k_{2}\) since \(I_{j_{r_{1}}}\) and \(I_{j_{r_{2}}}\) are disjoint. Thus, Lemma 3.11 implies \(\ker_{G}[\boldsymbol{i}|_{S_{U}}]\) is a subset of \(\ker_{G}[\boldsymbol{i}]\).
Now, since \(\ker_{G}[\boldsymbol{i}]\) and \(\ker_{G}[\boldsymbol{i}|_{S_{U}}]\) are partitions of \([m]\) and \(S_{U}\), respectively, and \([m]=\cup_{U\in\ker[\boldsymbol{j}]}S_{U}\), the fact that \(\ker_{G}[\boldsymbol{i}|_{S_{U}}]\) is a subset of \(\ker_{G}[\boldsymbol{i}]\) for every \(U\in\ker[\boldsymbol{j}]\) implies the
partition \(\ker_{G}[\mathbf{i}]\) is the disjoint union of \(\ker_{G}[\mathbf{i}|_{S_{U}}]\) with \(U\in\ker[\mathbf{j}]\). It follows from BMT independence that
\[\varphi(b_{1}b_{2}\cdots b_{n})=\prod_{U\in\ker[\boldsymbol{j}]}\ \prod_{V\in\ker_{G}[\boldsymbol{i}|_{S_{U}}]}\varphi\big{(}(a_{k})|_{V}\big{)}=\prod_{U\in\ker[\boldsymbol{j}]}\varphi\big{(}(b_{r})|_{U}\big{)}.\]
Therefore, \(({\cal B}_{j})_{j\in J}\) are tensor independent.
**Proof of _(ii)_.** Suppose now that \((i,i^{\prime})\notin E\) whenever \(i\in I_{j}\) and \(i^{\prime}\in I_{j^{\prime}}\) with \(j,j^{\prime}\in J\) and \(j\neq j^{\prime}\). Let \((\ell,\ell^{\prime})=(\ell_{0}+1,\ell_{1}+1)\) and \(S=\{\ell,\ell+1,\ldots,\ell^{\prime}-1\}\). Let \(\pi\), \(\pi_{S}\), and \(\pi_{S^{c}}\) denote \(\ker_{G}[\mathbf{i}]\), \(\ker_{G}[\mathbf{i}|_{S}]\), and \(\ker_{G}[\mathbf{i}|_{S^{c}}]\), respectively. We will show that the partitions \(\ker_{G}[\mathbf{i}|_{S}]\) and \(\ker_{G}[\mathbf{i}|_{S^{c}}]\) are contained in \(\ker_{G}[\mathbf{i}]\).
Since \(S\) and \(S^{c}\) are sub-intervals of \([m]\), if either \(k_{1}\sim_{\pi_{S}}k_{2}\) with \(k_{1},k_{2}\in S\) or \(k_{1}\sim_{\pi_{S^{c}}}k_{2}\) with \(k_{1},k_{2}\in S^{c}\), then \(k_{1}\sim_{\pi}k_{2}\). Suppose now \(k_{1}\in S\) and \(k_{2}\in S^{c}\). Thus, we have \(i_{k_{1}}\in I_{j_{1}}\) and \(i_{k_{2}}\in I_{j_{r}}\) for some \(r\geq 2\). If \(r=2\), then \(i_{k_{1}}\neq i_{k_{2}}\) since \(I_{j_{1}}\) and \(I_{j_{2}}\) are disjoint due to \(j_{1}\neq j_{2}\); on the other hand, if \(r\geq 3\), then either \(i_{k_{1}}\neq i_{k_{2}}\), or \(i_{k_{1}}=i_{k_{2}}\) and \(k_{1}<\ell^{\prime}\leq\ell_{2}<k_{2}\) with \((i_{\ell^{\prime}},i_{k_{1}})\notin E\) since \(i_{\ell^{\prime}}\in I_{j_{2}}\) and \(j_{1}\neq j_{2}\). In any case, we obtain \(k_{1}\nsim_{\pi}k_{2}\). It follows from Lemma 3.11 that \(\ker_{G}[\mathbf{i}|_{S}]\) and \(\ker_{G}[\mathbf{i}|_{S^{c}}]\) are subsets of \(\ker_{G}[\mathbf{i}]\).
Hence, we obtain \(\varphi(b_{1}b_{2}\cdots b_{n})=\varphi(b_{1})\varphi(b_{2}\cdots b_{n})\) by Lemma 3.10. Repeating the same argument for \((\ell,\ell^{\prime})=(\ell_{1}+1,\ell_{2}+1),\ldots,(\ell_{n-2}+1,\ell_{n-1}+1)\), we obtain \(\varphi(b_{1}b_{2}\cdots b_{n})=\varphi(b_{1})\varphi(b_{2})\cdots\varphi(b_{n})\). Therefore, \(({\cal B}_{j})_{j\in J}\) are boolean independent.
**Proof of _(iii)_.** Finally, suppose now that \(J\) has a total order \(<\) so that \((i,i^{\prime})\in E\) and \((i^{\prime},i)\notin E\) whenever \(i\in I_{j}\) and \(i^{\prime}\in I_{j^{\prime}}\) where \(j,j^{\prime}\in J\) with \(j^{\prime}<j\). Assume first \(j_{1}>\cdots>j_{r-1}>j_{r}<j_{r+1}<\cdots<j_{n}\) for some \(r\). The same recursive argument used for boolean independence above can be applied in this case, first for \((\ell,\ell^{\prime})=(\ell_{0}+1,\ell_{1}+1),\ldots,(\ell_{r-2}+1,\ell_{r-1}+1)\), to get \(\varphi(b_{1}b_{2}\cdots b_{n})=\varphi(b_{1})\cdots\varphi(b_{r-1})\varphi(b_{r}\cdots b_{n})\), and second for \((\ell^{\prime},\ell)=(\ell_{n-1}+1,\ell_{n}),\ldots,(\ell_{r-1}+1,\ell_{r})\). Note that we go in decreasing order from \(n\) to \(r\) in the latter case. We then obtain
\[\varphi(b_{1}b_{2}\cdots b_{n})=\varphi(b_{1})\cdots\varphi(b_{r-1})\varphi(b_ {r})\cdots\varphi(b_{n}).\]
Assume now \(j_{r-1}<j_{r}>j_{r+1}\) for some \(r\). Let \(\ell=\ell_{r-1}+1\), \(\ell^{\prime}=\ell_{r}+1\), and \(S=\{\ell,\ell+1,\ldots,\ell^{\prime}-1\}\). We will show that \(\ker_{G}[\mathbf{i}|_{S}]\) and \(\ker_{G}[\mathbf{i}|_{S^{c}}]\) are subsets of \(\ker_{G}[\mathbf{i}]\).
Since \(S\) is a sub-interval of \([m]\), we obtain \(k_{1}\sim_{\pi}k_{2}\) for any \(k_{1},k_{2}\in S\) with \(k_{1}\sim_{\pi_{S}}k_{2}\). Suppose now \(k_{1}\in S\) and \(k_{2}\in S^{c}\). Thus, we have \(i_{k_{1}}\in I_{j_{r}}\) and \(i_{k_{2}}\in I_{j_{t}}\) for some \(t\in\{1,\ldots,r-1,r+1,\ldots,n\}\). If \(t=r+1\), then \(i_{k_{1}}\neq i_{k_{2}}\) since \(I_{j_{r}}\) and \(I_{j_{r+1}}\) are disjoint due to \(j_{r}\neq j_{r+1}\); on the other hand, if \(t>r+1\), then either \(i_{k_{1}}\neq i_{k_{2}}\) or \(i_{k_{1}}=i_{k_{2}}\) and \(k_{1}<\ell^{\prime}\leq\ell_{r+1}<k_{2}\) with \((i_{\ell^{\prime}},i_{k_{1}})\notin E\) since \(i_{k_{1}}\in I_{j_{r}}\), \(i_{\ell^{\prime}}\in I_{j_{r+1}}\), and \(j_{r+1}<j_{r}\). Similar arguments work if \(t\leq r-1\). In any case, we obtain \(k_{1}\nsim_{\pi}k_{2}\). It then follows from Lemma 3.11 that \(\ker_{G}[\mathbf{i}|_{S}]\) is contained in \(\ker_{G}[\mathbf{i}]\).
To obtain that \(\ker_{G}[\mathbf{i}|_{S^{c}}]\) is also contained in \(\ker_{G}[\mathbf{i}]\), it only remains to prove that \(k_{1}\sim_{\pi}k_{2}\) whenever \(k_{1}\sim_{\pi_{S^{c}}}k_{2}\) with \(k_{1},k_{2}\in S^{c}\). So, let us assume \(k_{1},k_{2}\in S^{c}\) with \(k_{1}\sim_{\pi_{S^{c}}}k_{2}\). If \(k_{1},k_{2}\leq\ell-1\) or \(\ell^{\prime}\leq k_{1},k_{2}\), then \(k_{1}\sim_{\pi}k_{2}\) since \(\{1,2,\ldots,\ell-1\}\) and \(\{\ell^{\prime},\ell^{\prime}+1,\ldots,m\}\) are
sub-intervals of \([m]\). Thus, without loss of generality, we can assume \(k_{1}\leq\ell-1\) and \(\ell^{\prime}\leq k_{2}\). The condition \(k_{1}\sim_{\pi_{S^{c}}}k_{2}\) means \(i_{k_{1}}=i_{k_{2}}\) and \((i_{\ell^{\prime\prime}},i_{k_{1}})\in E\) for \(k_{1}<\ell^{\prime\prime}<k_{2}\) with \(i_{\ell^{\prime\prime}}\neq i_{k_{1}}\) and \(\ell^{\prime\prime}\in S^{c}\). Hence, to obtain \(k_{1}\sim_{\pi}k_{2}\), we only need to show that \((i_{\ell^{\prime\prime}},i_{k_{1}})\in E\) for \(k_{1}<\ell^{\prime\prime}<k_{2}\) with \(i_{\ell^{\prime\prime}}\neq i_{k_{1}}\) and \(\ell^{\prime\prime}\in S\). Take \(1\leq t\leq r-1\) so that \(i_{k_{1}}\in I_{j_{t}}\). Now, if \(i_{k_{1}}=i_{\ell-1}\) and \(\ell^{\prime\prime}\in S\), then \(j_{t}=j_{r-1}\) and \(i_{\ell^{\prime\prime}}\in I_{j_{r}}\), and hence \(i_{k_{1}}\neq i_{\ell^{\prime\prime}}\) since \(I_{j_{r-1}}\) and \(I_{j_{r}}\) are disjoint and \((i_{\ell^{\prime\prime}},i_{k_{1}})\in E\) since \(j_{r-1}<j_{r}\). On the other hand, if \(i_{k_{1}}\neq i_{\ell-1}\), then \(k_{1}<\ell-1<\ell^{\prime}\leq k_{2}\) with \(\ell-1\in S^{c}\) and \(i_{\ell-1}\in I_{j_{r-1}}\), so we must have \((i_{\ell-1},i_{k_{1}})\in E\) since \(k_{1}\sim_{\pi_{S^{c}}}k_{2}\); however, \((i_{\ell-1},i_{k_{1}})\in E\) only if \(j_{t}\leq j_{r-1}\), so we obtain \(j_{t}<j_{r}\), and hence \((i_{\ell^{\prime\prime}},i_{k_{1}})\in E\) for any \(\ell^{\prime\prime}\in S\) since \(i_{\ell^{\prime\prime}}\neq i_{k_{1}}\) and \(i_{k_{1}}\in I_{j_{t}}\). In any case, we get \(k_{1}\sim_{\pi}k_{2}\), and therefore \(\ker_{G}[\boldsymbol{i}|_{S^{c}}]\) is a subset of \(\ker_{G}[\boldsymbol{i}]\) by Lemma 3.11.
It follows from Lemma 3.10 that \(\varphi(b_{1}\cdots b_{n})=\varphi(b_{r})\varphi(b_{1}\cdots b_{r-1}b_{r+1} \cdots b_{n})\). Therefore, \((\mathcal{B}_{j})_{j\in J}\) are monotone independent.
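The three edge conditions of Proposition 3.12 are purely mechanical, and it may help to see them spelled out. The sketch below is ours and only illustrative (the function and the names are hypothetical); it classifies the relation a digraph induces between the algebras generated by two disjoint groups of vertices.

```python
def relation(E, I1, I2):
    """Classify alg(A_i : i in I1) vs alg(A_i : i in I2) per Proposition 3.12."""
    fwd = all((a, b) in E for a in I1 for b in I2)         # all edges I1 -> I2 present
    bwd = all((b, a) in E for a in I1 for b in I2)         # all edges I2 -> I1 present
    no_fwd = all((a, b) not in E for a in I1 for b in I2)  # no edges I1 -> I2
    no_bwd = all((b, a) not in E for a in I1 for b in I2)  # no edges I2 -> I1
    if fwd and bwd:
        return "tensor"
    if no_fwd and no_bwd:
        return "Boolean"
    if fwd and no_bwd:
        return "monotone (I2 below I1 in the total order)"
    if bwd and no_fwd:
        return "monotone (I1 below I2 in the total order)"
    return "none of the three"

# the digraph from the discussion above: edges (1, 3) and (2, 3)
E = {(1, 3), (2, 3)}
print(relation(E, {1, 2}, {3}))  # monotone: <A_1, A_2> vs A_3
print(relation(E, {1}, {2}))     # Boolean: no edges between 1 and 2
```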
_Remark 3.13_.: Associativity of Boolean, monotone, and tensor independence is recaptured by consistency for BMT independence. To demonstrate this, let us come back to the situation described above regarding \(\mathcal{A}_{1},\mathcal{A}_{2},\mathcal{A}_{3}\). We add an edge from \(\mathcal{A}_{1}\) to \(\mathcal{A}_{2}\), so the independence digraph becomes the digraph on \(\{1,2,3\}\) with edge set \(\{(1,2),(1,3),(2,3)\}\).
By consistency, Proposition 3.12, we know that \(\mathcal{A}_{1}\) is monotone independent from \(\langle\mathcal{A}_{2},\mathcal{A}_{3}\rangle\) and \(\mathcal{A}_{2}\) is monotone independent from \(\mathcal{A}_{3}\). Moreover, \(\mathcal{A}_{1}\) is monotone independent from \(\mathcal{A}_{2}\), and \(\langle\mathcal{A}_{1},\mathcal{A}_{2}\rangle\) is monotone independent from \(\mathcal{A}_{3}\). Both situations are equivalent due to BMT independence, but this is nothing other than the associativity of monotone independence.
## 4 Construction of BMT algebras of operators
The purpose of this section is to provide an analytic framework for BMT random variables. Namely, we give an explicit construction realizing any finite family of BMT independent random variables as bounded operators on a Hilbert space. Additionally, we show that this construction produces random variables that are BM independent, and not just weak BM independent, when the corresponding independence digraph comes from a partial order.
Let \(H_{1},\ldots,H_{N}\) be complex Hilbert spaces with distinguished unit vectors \(\xi_{i}\in H_{i}\). Let \(I_{i}\) be the identity operator on \(H_{i}\) and let \(P_{i}:H_{i}\to H_{i}\) be the orthogonal projection defined by
\[P_{i}(x)=\langle x,\xi_{i}\rangle_{i}\xi_{i}.\]
We consider the non-commutative probability spaces \((B(H_{i}),\varphi_{i})\) where \(B(H_{i})\) is the space of bounded linear operators on \(H_{i}\) and \(\varphi_{i}:B(H_{i})\to\mathbb{C}\) is the vector-state given by
\[\varphi_{i}(A)=\langle A\xi_{i},\xi_{i}\rangle_{i}.\]
Let \(H\) denote the tensor product of Hilbert spaces \(H_{1}\otimes\cdots\otimes H_{N}\) with inner product
\[\langle h_{1}\otimes\cdots\otimes h_{N},h_{1}^{\prime}\otimes\cdots\otimes h_{N }^{\prime}\rangle=\prod_{i=1}^{N}\langle h_{i},h_{i}^{\prime}\rangle_{i}\]
and let us consider the non-commutative probability space \((B(H),\varphi)\) where \(\varphi(T)=\langle T\xi,\xi\rangle\) with unit vector \(\xi=\xi_{1}\otimes\cdots\otimes\xi_{N}\).
**Definition 4.1**.: Given a digraph \(G_{N}=(V_{N},E_{N})\) with \(V_{N}=[N]\) we define the \(*\)-homomorphism \(\pi_{i}:B(H_{i})\to B(H)\) as
\[\pi_{i}(A)=P_{i,1}\otimes\cdots\otimes P_{i,i-1}\otimes A\otimes P_{i,i+1} \otimes\cdots\otimes P_{i,N}\]
where \(P_{i,j}=I_{j}\), if \((i,j)\in E_{N}\), and \(P_{i,j}=P_{j}\), if \((i,j)\notin E_{N}\).
**Proposition 4.2**.: _The triple \((H,\pi_{i},\xi)\) is a representation of \((B(H_{i}),\varphi_{i})\), i.e., \(\pi_{i}:B(H_{i})\to B(H)\) is a \(*\)-homomorphism and \(\xi\) is a unit vector such that \(\varphi_{i}(A)=\langle\pi_{i}(A)\xi,\xi\rangle\) for each \(A\in B(H_{i})\)._
Proof.: For each \(A,B\in B(H_{i})\) we have that \(\pi_{i}(AB)=\pi_{i}(A)\pi_{i}(B)\) and \(\pi_{i}(A^{*})=\pi_{i}(A)^{*}\) since \((A_{1}\otimes\cdots\otimes A_{N})(B_{1}\otimes\cdots\otimes B_{N})=(A_{1}B_{1 }\otimes\cdots\otimes A_{N}B_{N})\), \((A_{1}\otimes\cdots\otimes A_{N})^{*}=(A_{1}^{*}\otimes\cdots\otimes A_{N}^{*})\) and \(P_{i,j}=P_{i,j}^{2}=P_{i,j}^{*}\). Thus, \(\pi_{i}\) is a \(*\)-homomorphism. Moreover, since \(P_{i,j}\xi_{j}=\xi_{j}\) we obtain \(\varphi_{i}(A)=\langle A\xi_{i},\xi_{i}\rangle_{i}\prod_{j\neq i}\langle P_{i, j}\xi_{j},\xi_{j}\rangle_{j}=\langle\pi_{i}(A)\xi,\xi\rangle\).
_Remark 4.3_.: Notice that for all \(A,B\in B(H_{i})\) the projection \(P_{i}\) satisfies
\[\langle AP_{i}B\xi_{i},\xi_{i}\rangle_{i}=\langle A\xi_{i},\xi_{i}\rangle_{i} \langle B\xi_{i},\xi_{i}\rangle_{i}\quad\text{and}\quad P_{i}AP_{i}=\langle A \xi_{i},\xi_{i}\rangle_{i}P_{i}.\]
Indeed, since \(P_{i}(x)=\langle x,\xi_{i}\rangle_{i}\xi_{i}\) we obtain \(\langle AP_{i}(B\xi_{i}),\xi_{i}\rangle_{i}=\langle A\langle B\xi_{i},\xi_{i} \rangle\xi_{i},\xi_{i}\rangle_{i}\) and \([P_{i}AP_{i}](x)=\langle x,\xi_{i}\rangle_{i}P_{i}(A\xi_{i})=\langle A\xi_{i},\xi_{i}\rangle_{i}P_{i}(x)\) for all \(x\in H_{i}\). In terms of the functionals \(\varphi_{i}\), these relations become
\[\varphi_{i}(AP_{i}B)=\varphi_{i}(B)\varphi_{i}(A)\quad\text{and}\quad P_{i}AP_ {i}=\varphi_{i}(A)P_{i}.\]
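Definition 4.1 is concrete enough to be tested numerically. The following NumPy sketch is ours and only a sanity check, not part of the construction: it realizes \(\pi_{i}\) on \(H=\mathbb{C}^{2}\otimes\mathbb{C}^{2}\) via Kronecker products and verifies the Boolean, tensor, and monotone factorizations of \(\varphi(a_{1}a_{2}a_{1}a_{2})\) for the three digraphs on two vertices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 2, 2
xis = [np.eye(d)[:, 0] for _ in range(N)]    # distinguished unit vectors xi_i
xi = np.kron(xis[0], xis[1])                 # xi = xi_1 (x) xi_2

def proj(x):                                 # rank-one projection onto span{x}
    return np.outer(x, x.conj())

def pi_rep(i, T, E):
    """Definition 4.1: T in slot i; identity at slot j if (i, j) in E, else P_j."""
    factors = [T if j == i else (np.eye(d) if (i, j) in E else proj(xis[j]))
               for j in range(N)]
    return np.kron(factors[0], factors[1])

def phi(T):                                  # vector state phi(T) = <T xi, xi>
    return xi.conj() @ T @ xi

A, B = rng.standard_normal((d, d)), rng.standard_normal((d, d))
phiA, phiB = A[0, 0], B[0, 0]                # phi_i(T) = <T xi_i, xi_i> = T[0, 0] here

def word(E):                                 # phi(a b a b) with a = pi_0(A), b = pi_1(B)
    a, b = pi_rep(0, A, E), pi_rep(1, B, E)
    return phi(a @ b @ a @ b)

assert np.isclose(word(set()), phiA**2 * phiB**2)                        # Boolean
assert np.isclose(word({(0, 1), (1, 0)}), (A @ A)[0, 0] * (B @ B)[0, 0])  # tensor
assert np.isclose(word({(1, 0)}), (A @ A)[0, 0] * phiB**2)  # monotone (slot 0 below slot 1)
print("Boolean, tensor, and monotone factorizations verified")
```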
**Theorem 4.4**.: _The family of \(*\)-subalgebras \(\{\pi_{i}(B(H_{i}))\}_{i=1}^{N}\) is BMT independent in \((B(H),\varphi)\) with respect to \(G_{N}\)._
Proof.: Let us denote \(\mathcal{A}_{i}=\pi_{i}(B(H_{i}))\) for \(i=1,\ldots,N\). We consider the monomial \(a_{1}a_{2}\cdots a_{n}\) with \(a_{j}\in\mathcal{A}_{i_{j}}\). Let \(A_{j}\in B(H_{i_{j}})\) be such that \(a_{j}=\pi_{i_{j}}(A_{j})\) for \(j=1,\ldots,n\), that is,
\[a_{j}=P_{i_{j},1}\otimes\cdots\otimes P_{i_{j},i_{j}-1}\otimes A_{j}\otimes P _{i_{j},i_{j}+1}\otimes\cdots\otimes P_{i_{j},N}.\]
Put \(\boldsymbol{i}=(i_{1},\ldots,i_{n})\) and \(m=\#\ker[\boldsymbol{i}]\). We also define
\[C=\{c_{1}<\cdots<c_{m}\}=\{i_{j}:1\leq j\leq n\}\quad\text{and}\quad D=\{d_{1} <\cdots<d_{N-m}\}=[N]\setminus C.\]
The product \(a_{1}a_{2}\cdots a_{n}\) can be written as \(B_{1}\otimes\cdots\otimes B_{N}\) where \(B_{d_{k}}\in\{I_{d_{k}},P_{d_{k}}\}\) for \(k=1,\ldots,N-m\) and \(B_{c_{k}}\) is of the form
\[Q_{r_{1}(k)}A_{s_{1}(k)}Q_{r_{2}(k)}A_{s_{2}(k)}\cdots Q_{r_{l(k)}(k)}A_{s_{l( k)}(k)}Q_{r_{l(k)+1}(k)}\]
for \(k=1,\ldots,m\) with \(\{s_{1}(k)<\cdots<s_{l(k)}(k)\}=\{j\in[n]:i_{j}=c_{k}\}\) and \(Q_{r_{w}(k)}=I_{c_{k}}\), if \((i_{l},i_{s_{w}(k)})\in E_{N}\) for all \(s_{w-1}(k)<l<s_{w}(k)\), and \(Q_{r_{w}(k)}=P_{c_{k}}\), otherwise. Note that \(Q_{r_{w}(k)}\in\{I_{c_{k}},P_{c_{k}}\}\) comes from the fact that the variables \(a_{j}\) not in \(\mathcal{A}_{c_{k}}\) have either \(P_{c_{k}}\) or \(I_{c_{k}}\) in their \(c_{k}\)-term in the tensor product that defines them, so between \(A_{s_{w-1}(k)}\) and \(A_{s_{w}(k)}\) there is a product of \(P_{c_{k}}\)'s and \(I_{c_{k}}\)'s which is \(I_{c_{k}}\) only if all the elements in between are \(I_{c_{k}}\). Then, we have that
\[\varphi(a_{1}a_{2}\cdots a_{n})=\left(\prod_{k=1}^{N-m}\langle B_{d_{k}}\xi_{d _{k}},\xi_{d_{k}}\rangle_{d_{k}}\right)\left(\prod_{k=1}^{m}\langle B_{c_{k}} \xi_{c_{k}},\xi_{c_{k}}\rangle_{c_{k}}\right)=\prod_{k=1}^{m}\langle B_{c_{k} }\xi_{c_{k}},\xi_{c_{k}}\rangle_{c_{k}}\]
Note that \(\langle B_{c_{k}}\xi_{c_{k}},\xi_{c_{k}}\rangle_{c_{k}}=\langle A_{s_{1}(k)} Q_{r_{2}(k)}A_{s_{2}(k)}\cdots Q_{r_{l(k)}(k)}A_{s_{l(k)}(k)}\xi_{c_{k}},\xi_{c_{k} }\rangle_{c_{k}}\). Indeed, if \(Q_{r_{1}(k)}=I_{c_{k}}\), since \(Q_{r_{l(k)+1}(k)}\xi_{c_{k}}=\xi_{c_{k}}\) we obtain that
\[\langle B_{c_{k}}\xi_{c_{k}},\xi_{c_{k}}\rangle_{c_{k}}=\langle A_{s_{1}(k)}Q_ {r_{2}(k)}A_{s_{2}(k)}\cdots Q_{r_{l(k)}(k)}A_{s_{l(k)}(k)}\xi_{c_{k}},\xi_{c_{ k}}\rangle_{c_{k}};\]
on the other hand, if \(Q_{r_{1}(k)}=P_{c_{k}}\), Remark 4.3 and the fact that \(Q_{r_{l(k)+1}(k)}\xi_{c_{k}}=P_{c_{k}}\xi_{c_{k}}\) imply
\[\langle B_{c_{k}}\xi_{c_{k}},\xi_{c_{k}}\rangle_{c_{k}}=\langle A_{s_{1}(k)}Q_ {r_{2}(k)}A_{s_{2}(k)}\cdots Q_{r_{l(k)}(k)}A_{s_{l(k)}(k)}\xi_{c_{k}},\xi_{c_{ k}}\rangle_{c_{k}}.\]
Let us consider \(W_{k}=\{w_{1}(k)<\cdots<w_{t(k)}(k)\}=\{w:Q_{r_{w}(k)}=P_{c_{k}}\}\) for each \(k=1,\ldots,m\). It follows from the definition of \(\varphi_{c_{k}}\) that
\[\varphi_{c_{k}}(B_{c_{k}}) =\varphi_{c_{k}}\left[\left(\prod_{l=1}^{w_{1}(k)-1}A_{s_{l}(k)} \right)P_{c_{k}}\cdots P_{c_{k}}\left(\prod_{l=w_{t(k)-1}(k)}^{w_{t(k)}(k)-1}A _{s_{l}(k)}\right)P_{c_{k}}\left(\prod_{l=w_{t(k)}(k)}^{l(k)}A_{s_{l}(k)} \right)\,\right]\] \[=\varphi_{c_{k}}\left(\prod_{l=1}^{w_{1}(k)-1}A_{s_{l}(k)}\right) \cdots\varphi_{c_{k}}\left(\prod_{l=w_{t(k)-1}(k)}^{w_{t(k)}(k)-1}A_{s_{l}(k) }\right)\varphi_{c_{k}}\left(\prod_{l=w_{t(k)}(k)}^{l(k)}A_{s_{l}(k)}\right)\]
where the second equality comes from Remark 4.3. Now, recall that \(\ker_{G_{N}}[\boldsymbol{i}]\) is a refinement of \(\ker[\boldsymbol{i}]\) where \(s_{w-1}(k)\sim s_{w}(k)\) if \((i_{l},i_{s_{w}(k)})\in E_{N}\) for all \(s_{w-1}(k)<l<s_{w}(k)\). Then, the block \(\{s_{1}(k)<\cdots<s_{l(k)}(k)\}\in\ker[\boldsymbol{i}]\) corresponding to the subalgebra \(c_{k}\) is decomposed in \(\ker_{G_{N}}[\boldsymbol{i}]\) into \(\{s_{1}(k),\ldots,s_{w_{1}(k)-1}(k)\}\), \(\ldots\), \(\{s_{w_{t(k)-1}(k)-1}(k),\ldots,s_{w_{t(k)}(k)-1}(k)\}\), \(\{s_{w_{t(k)}(k)}(k),\ldots,s_{l(k)}\}\). Thus, we obtain that
\[\varphi(a_{1}a_{2}\cdots a_{n})=\prod_{k=1}^{m}\langle B_{c_{k}}\xi_{c_{k}}, \xi_{c_{k}}\rangle_{c_{k}}=\prod_{V\in\ker_{G_{N}}[\boldsymbol{i}]}\varphi_{c_ {k}}\left(\prod_{s\in V}^{\rightarrow}A_{s}\right)\]
where \(c_{k}\) depends on each block \(V\), being the common value of all \(i_{j}\) with \(j\in V\). For each \(V\in\ker_{G_{N}}[\boldsymbol{i}]\) we have
\[\varphi_{c_{k}}\left(\prod_{s\in V}^{\rightarrow}A_{s}\right)=\varphi\left[ \pi_{c_{k}}\left(\prod_{s\in V}^{\rightarrow}A_{s}\right)\right]=\varphi\left[ \prod_{s\in V}^{\rightarrow}\pi_{c_{k}}\left(A_{s}\right)\right]=\varphi\left[ \prod_{s\in V}^{\rightarrow}a_{s}\right]\]
due to \(\varphi_{c_{k}}=\varphi\circ\pi_{c_{k}}\), the fact that \(\pi_{c_{k}}\) is an algebra homomorphism, and the definition of the \(a_{s}\). Therefore, we get
\[\varphi(a_{1}a_{2}\cdots a_{n})=\prod_{V\in\ker_{G_{N}}[\boldsymbol{i}]}\varphi\left[\prod_{s\in V}^{\rightarrow}a_{s}\right]=\prod_{V\in\ker_{G_{N}}[\boldsymbol{i}]}\varphi\left[(a_{s})|_{V}\right].\]
This proves that the subalgebras \(\{\pi_{j}(B(H_{j}))\}_{1\leq j\leq N}\) are BMT independent in \((B(H),\varphi)\) with respect to \(G_{N}\).
_Remark 4.5_.: If we consider the construction of the homomorphisms \(\pi_{i}\) and \(G\) is the digraph of a partial order \((V_{N},\preceq)\), then we will have \(\xi\prec\rho\) if and only if \(P_{\rho,\xi}=I_{\xi}\) and \(P_{\xi,\rho}=P_{\rho}\) and \(\xi\nsim\rho\) if and only if \(P_{\rho,\xi}=P_{\xi}\) and \(P_{\xi,\rho}=P_{\rho}\).
We already proved in Theorem 3.9 that when \(G\) is the digraph of a partial order on a finite partially ordered set, BMT subalgebras with respect to \(G\) satisfy BM2; in general, however, they do not satisfy BM1, but only a weaker version of it (the weak BM1 property). In the tensor model we have built, we can nevertheless implement full BM independence when considering such a \(G\). We establish this result in the following theorem, giving another construction of a finite collection of BM independent subalgebras, different from the one given in [24].
**Theorem 4.6**.: _Suppose \(G=([N],E)\) is the digraph of a partial order \(\preceq\) on \([N]\). Then the family of \(*\)-subalgebras \(\{\pi_{i}(B(H_{i}))\}_{i=1}^{N}\) is BM independent._
Proof.: Since the subalgebras \(\{\pi_{i}(B(H_{i}))\}_{i=1}^{N}\) are BMT independent, they satisfy weak BM1 and BM2, so we only need to show that BM1 holds. Take \(\mathcal{A}_{i}=\pi_{i}(B(H_{i}))\) for \(i\in[N]\). Assume \(a_{1}\in\mathcal{A}_{\xi}\), \(a_{2}\in\mathcal{A}_{\rho}\), \(a_{3}\in\mathcal{A}_{\eta}\) with \(\xi,\rho,\eta\in[N]\) satisfying \(\xi\prec\rho\succ\eta\), \(\xi\nsim\rho\succ\eta\), or \(\xi\prec\rho\nsim\eta\). Recall that each variable \(a\in\mathcal{A}_{i}\) is of the form
\[a=P_{i,1}\otimes\cdots\otimes P_{i,i-1}\otimes T\otimes P_{i,i+1}\otimes \cdots\otimes P_{i,N}=\pi_{i}(T)\]
for some \(T\in B(H_{i})\). So, let \(T_{1}\in B(H_{\xi})\), \(T_{2}\in B(H_{\rho})\), and \(T_{3}\in B(H_{\eta})\) be such that \(a_{1}=\pi_{\xi}(T_{1})\), \(a_{2}=\pi_{\rho}(T_{2})\) and \(a_{3}=\pi_{\eta}(T_{3})\). As in the proof of Theorem 4.4, we take \(B_{i},C_{i}\in B(H_{i})\) such that \(a_{1}a_{2}a_{3}=B_{1}\otimes\cdots\otimes B_{N}\) and \(a_{1}a_{3}=C_{1}\otimes\cdots\otimes C_{N}\). To show BM1 holds, it is enough to prove \(B_{i}=C_{i}\) for \(i\neq\rho\) and \(B_{\rho}=\varphi(a_{2})C_{\rho}\).
Observe that \(B_{j}=P_{\xi,j}P_{\rho,j}P_{\eta,j}\) and \(C_{j}=P_{\xi,j}P_{\eta,j}\) for \(j\ \neq\xi,\rho,\eta\). If \(P_{\xi,j}=P_{j}\) or \(P_{\eta,j}=P_{j}\), then \(B_{j}=C_{j}=P_{j}\) because each operator \(P_{\xi,j}\), \(P_{\rho,j}\), or \(P_{\eta,j}\) is either the projection \(P_{j}\) or the identity \(I_{j}\). On the other hand, if \(P_{\xi,j}=P_{\eta,j}=I_{j}\), due to the definition of the operators \(P_{i,j}\) and since \(G\) is the digraph associated to \(\preceq\), we have \(j\prec\eta\) and \(j\prec\xi\). But, by hypothesis, either \(\xi\prec\rho\) or \(\eta\prec\rho\) holds, and hence \(j\prec\rho\) and \(P_{\rho,j}=I_{j}\). Therefore, \(B_{j}=C_{j}\) for \(j\neq\xi,\rho,\eta\).
To prove \(B_{j}=C_{j}\) for \(j=\xi,\eta\) and \(B_{\rho}=\varphi(a_{2})C_{\rho}\), we consider two cases: \(\xi=\eta\) and \(\xi\neq\eta\). Suppose first \(\xi=\eta\). Thus, none of the two conditions \(\xi\nsim\rho\succ\eta\) or \(\xi\prec\rho\nsim\eta\) holds, so we must have \(\xi\prec\rho\), and hence \(P_{\rho,\xi}=I_{\xi}\) and \(P_{\xi,\rho}=P_{\rho}\). Moreover, \(B_{\xi}=T_{1}P_{\rho,\xi}T_{3}\), \(C_{\xi}=T_{1}T_{3}\), \(B_{\rho}=P_{\xi,\rho}T_{2}P_{\xi,\rho}\), and \(C_{\rho}=P_{\xi,\rho}P_{\xi,\rho}\) since \(a_{1}=\pi_{\xi}(T_{1})\), \(a_{2}=\pi_{\rho}(T_{2})\), and \(a_{3}=\pi_{\eta}(T_{3})\). Hence, \(B_{\xi}=C_{\xi}\) due to \(P_{\rho,\xi}=I_{\xi}\), and \(B_{\rho}=P_{\rho}T_{2}P_{\rho}=\varphi(a_{2})C_{\rho}\) due to \(P_{\xi,\rho}=P_{\rho}\) and Remark 4.3.
Suppose now \(\xi\neq\eta\). In this case, we have that \(B_{\xi}=T_{1}P_{\rho,\xi}P_{\eta,\xi}\), \(C_{\xi}=T_{1}P_{\eta,\xi}\), \(B_{\eta}=P_{\xi,\eta}P_{\rho,\eta}T_{3}\), \(C_{\eta}=P_{\xi,\eta}T_{3}\), \(B_{\rho}=P_{\xi,\rho}T_{2}P_{\eta,\rho}\), and \(C_{\rho}=P_{\xi,\rho}P_{\eta,\rho}\). Since \(P_{i,j}=P_{j}\) if \(i\nsim j\) or \(i\prec j\), then \(P_{\xi,\rho}=P_{\rho}=P_{\eta,\rho}\) provided \(\xi\prec\rho\succ\eta\), \(\xi\prec\rho\nsim\eta\), or \(\xi\nsim\rho\succ\eta\). Thus, Remark 4.3 gives \(B_{\rho}=P_{\rho}T_{2}P_{\rho}=\varphi_{\rho}(T_{2})P_{\rho}=\varphi(a_{2})C_{\rho}\). For \(B_{\xi}=C_{\xi}\) and \(B_{\eta}=C_{\eta}\), let us consider three sub-cases: \(\xi\nsim\eta\), \(\xi\prec\eta\), and \(\xi\succ\eta\). First, if \(\xi\nsim\eta\), we get \(P_{\eta,\xi}=P_{\xi}\) and \(P_{\xi,\eta}=P_{\eta}\), and hence \(B_{\xi}=T_{1}P_{\xi}=C_{\xi}\) and \(B_{\eta}=P_{\eta}T_{3}=C_{\eta}\) since each \(P_{\rho,\xi}\) and \(P_{\rho,\eta}\) is either an identity \(I_{j}\) or a projection \(P_{j}\). Second, if \(\xi\prec\eta\), we get \(P_{\eta,\xi}=I_{\xi}\) and \(P_{\xi,\eta}=P_{\eta}\), and hence \(B_{\xi}=T_{1}P_{\rho,\xi}\) and \(C_{\xi}=T_{1}\); moreover, since \(P_{\rho,\eta}\) is either the identity \(I_{\eta}\) or the projection \(P_{\eta}\), we obtain \(B_{\eta}=P_{\eta}T_{3}=C_{\eta}\). Note that if \(\eta\prec\rho\), then \(\xi\prec\eta\prec\rho\), and hence \(P_{\rho,\xi}=I_{\xi}\) and \(B_{\xi}=C_{\xi}\); on the other hand, if \(\eta\nsim\rho\), then we must have \(\rho\succ\xi\), and hence \(P_{\rho,\xi}=I_{\xi}\) and \(B_{\xi}=C_{\xi}\). Finally, if \(\xi\succ\eta\), following arguments similar to the case \(\xi\prec\eta\), we get \(B_{\xi}=T_{1}P_{\xi}=C_{\xi}\) since \(P_{\eta,\xi}=P_{\xi}\), and \(B_{\eta}=P_{\xi,\eta}T_{3}=C_{\eta}\) since \(\rho\succ\eta\) and \(P_{\rho,\eta}=I_{\eta}\).
Let us mention that the idea of using tensor products and rank-one projections to construct independent algebras of operators is not new. Lenczewski [13] gave a tensor model for Boolean independent random variables, which was extended by Franz in [7] to include the monotone and anti-monotone notions of independence.
More interestingly, and related to our construction, Lenczewski gave a tensor model for \(\Lambda\)-boolean (mixtures of Boolean and tensor) independence and \(\Lambda\)-monotone (mixtures of monotone and tensor) independence [10]. More recently, he also gave a construction in [13] for \(c\)-monotone independence [6].
## 5 BMT Central Limit Theorem
In this section, we prove the Central Limit Theorem (CLT) for BMT independent random variables together with some of its properties regarding the possible limiting distributions. In particular, we recover the known CLTs for Boolean, monotone, and tensor independence and give sufficient conditions for the non-compactness of the support of the limiting measure.
Throughout the entire section, it is assumed that we are given a non-commutative probability space \((\mathcal{A},\varphi)\) together with a sequence of random variables \((a_{i})_{i=1}^{\infty}\) that are identically distributed with zero mean and unit variance.\({}^{2}\) Each finite sequence \(a_{1},a_{2},\ldots,a_{N}\) is assumed to be BMT independent with respect to a digraph \(G_{N}=(V_{N},E_{N})\) with \(V_{N}=[N]\) and \(G_{N-1}\subset G_{N}\).
Footnote 2: As is customary in non-commutative probability, the identically distributed assumption can be relaxed to having uniformly bounded moments, i.e., \(\sup_{i}|\varphi(a_{i}^{n})|<\infty\) for each integer \(n\geq 1\).
### Central Limit Theorem
The Central Limit Theorem for BMT independent random variables refers then to determining the limiting distribution of the normalized sum \((a_{1}+\cdots+a_{N})/\sqrt{N}\). Concretely, for each
integer \(k\geq 1\), and up to an error term of order \(N^{-1/2}\), we compute the value of
\[\varphi\left[\left(\frac{a_{1}+\cdots+a_{N}}{\sqrt{N}}\right)^{k}\right].\]
A first step is to single out the monomials containing singletons, as in most combinatorial proofs of the CLT.
**Proposition 5.1** (Singleton condition).: _Let \(a_{1},\ldots,a_{N}\) be centered BMT independent random variables. Let \(\mathbf{i}:[m]\to[N]\) be a tuple of indices. If \(\pi=\ker[\mathbf{i}]\) contains a singleton, then \(\varphi(a_{i_{1}}\cdots a_{i_{m}})=0\)._
Proof.: Notice that this follows directly from the definition of BMT independence and the fact that \(\varphi(a_{i})=0\) for any \(i\). Indeed, if \(\{k\}\) is a singleton in \(\ker[\mathbf{i}]\), then it is also a singleton in \(\ker_{G}[\mathbf{i}]\), which is a refinement of the former, and thus
\[\varphi(a_{i_{1}}\cdots a_{i_{m}})=\varphi(a_{i_{k}})\prod_{\begin{subarray}{ c}V\in\ker_{G}[\mathbf{i}]\\ V\neq\{k\}\end{subarray}}\varphi\bigg{(}\prod_{k\in V}^{\rightarrow}a_{i_{k}} \bigg{)}=0.\]
**Proposition 5.2**.: _Suppose \(\mathbf{i}:[2m]\to[N]\) is such that \(\pi=\ker[\mathbf{i}]\) is a pair partition in \(\mathcal{P}_{2}(2m)\) and let \(G_{\mathbf{i}}\) denote the independence graph of \(a_{i_{1}},\ldots,a_{i_{2m}}\). Thus, we have_
\[\varphi(a_{i_{1}}\cdots a_{i_{2m}})=\begin{cases}1,\text{ if }G_{\pi(\mathbf{i})}\subseteq G_{\mathbf{i}},\\ 0,\text{ otherwise}.\end{cases}\]
Proof.: Note first that the condition \(G_{\pi(\mathbf{i})}\subseteq G_{\mathbf{i}}\) is equivalent to \(E_{\pi(\mathbf{i})}\subseteq E_{\mathbf{i}}\) since the graphs \(G_{\pi(\mathbf{i})}\) and \(G_{\mathbf{i}}\) have the same set of vertices. Now, the BMT independence of \(a_{1},\ldots,a_{N}\) implies
\[\varphi(a_{i_{1}}\cdots a_{i_{2m}})=\prod_{V\in\ker_{G}[\mathbf{i}]}\varphi\bigg{(}\prod_{k\in V}^{\rightarrow}a_{i_{k}}\bigg{)}\]
where \(\ker_{G}[\mathbf{i}]\) is a refinement of \(\ker[\mathbf{i}]\). Now, if \(\ker_{G}[\mathbf{i}]\neq\ker[\mathbf{i}]\), then \(\ker_{G}[\mathbf{i}]\) has a singleton since \(\ker[\mathbf{i}]\) is a pair partition. This in turn implies, by Proposition 5.1, that \(\varphi(a_{i_{1}}\cdots a_{i_{2m}})=0\), since the \(a_{i_{k}}\) are assumed to be centered. On the other hand, if \(\ker_{G}[\mathbf{i}]=\ker[\mathbf{i}]\), we obtain \(\varphi(a_{i_{1}}\cdots a_{i_{2m}})=\prod_{V\in\ker_{G}[\mathbf{i}]}\varphi(\prod_{k\in V}^{\rightarrow}a_{i_{k}})=1\) since \(\varphi(a_{i_{k}}^{2})=1\) for each \(k\). Finally, the condition \(\ker_{G}[\mathbf{i}]=\ker[\mathbf{i}]\) is equivalent to the condition \(G_{\pi(\mathbf{i})}\subseteq G_{\mathbf{i}}\) by Lemma 2.13.
**Theorem 5.3** (BMT central limit theorem).: _Suppose \(a_{1},\ldots,a_{N}\) are centered variables with unit variance, uniformly bounded moments of all orders, and independence graph \(G_{N}\). Then for any integer \(m\geq 1\) we have_
\[\varphi\left(\frac{a_{1}+\cdots+a_{N}}{\sqrt{N}}\right)^{m}=\sum_{\pi\in P_{2} (m)}\ N^{-m/2}\sum_{\begin{subarray}{c}\mathbf{i}:[m]\to[N]\\ \ker(\mathbf{i})=\pi\end{subarray}}\mathbf{1}_{G_{\pi(\mathbf{i})}\subseteq G_{\mathbf{i} }}\quad+\quad O(N^{-1/2}). \tag{1}\]
_where \(G_{\pi(\mathbf{i})}\) is the nesting-crossing graph of \(\pi\) and \(G_{\mathbf{i}}\) is the independence graph of \(a_{i_{1}},a_{i_{2}},\ldots,a_{i_{m}}\) for \(\mathbf{i}=(i_{1},i_{2},\ldots,i_{m})\)._
Proof.: Observe that
\[\left(\frac{a_{1}+\cdots+a_{N}}{\sqrt{N}}\right)^{m}=\frac{1}{N^{m/2}}\sum_{i_ {1}=1}^{N}\cdots\sum_{i_{m}=1}^{N}\left(\prod_{k=1}^{m}a_{i_{k}}\right).\]
From the fact that \(\mathbf{i}\sim\mathbf{j}\) if \(\ker[\mathbf{i}]=\ker[\mathbf{j}]\) defines an equivalence relation, we can split the set of functions \(\mathbf{i}:[m]\to[N]\) into disjoint sets. Using this and the linearity of \(\varphi\), we have that
\[\varphi\left[\left(\frac{a_{1}+\cdots+a_{N}}{\sqrt{N}}\right)^{m}\right]= \frac{1}{N^{m/2}}\sum_{\pi\in\mathcal{P}(m)}\sum_{\begin{subarray}{c}\mathbf{i}: [m]\to[N]\\ \ker[i]=\pi\end{subarray}}\varphi(a_{i_{1}}\cdots a_{i_{m}}).\]
The variables \((a_{i})\) are BMT independent, so we get
\[\varphi\left[\left(\frac{a_{1}+\cdots+a_{N}}{\sqrt{N}}\right)^{m}\right]=\frac{1}{N^{m/2}}\sum_{\pi\in\mathcal{P}(m)}\sum_{\begin{subarray}{c}\mathbf{i}:[m]\to[N]\\ \ker[\mathbf{i}]=\pi\end{subarray}}\prod_{V\in\ker_{G}[\mathbf{i}]}\varphi\big{(}(a_{i_{k}})|_{V}\big{)}.\]
From Proposition 5.1, if \(\pi\) contains a singleton, then \(\prod_{V\in\ker_{G}[\mathbf{i}]}\varphi((a_{i_{k}})|_{V})=0\). So, only partitions \(\pi\) with at least two elements per block contribute to the sum above. Take \(C>0\) such that \(C\geq\sup_{i}\max_{n\leq m}|\varphi(a_{i}^{n})|\). Since \(i_{k}=i_{k^{\prime}}\) for any \(k,k^{\prime}\in V\) and \(V\in\ker_{G}[\mathbf{i}]\), we have that
\[\prod_{V\in\ker_{G}[\mathbf{i}]}\big{|}\varphi\big{(}(a_{i_{k}})|_{V}\big{)}\big{|}\leq C^{|\ker_{G}[\mathbf{i}]|}\leq C^{m}.\]
Thus, for any partition \(\pi\in\mathcal{P}(m)\), we obtain
\[\left|\sum_{\begin{subarray}{c}\mathbf{i}:[m]\to[N]\\ \ker[\mathbf{i}]=\pi\end{subarray}}\prod_{V\in\ker_{G}[\mathbf{i}]}\varphi\big{(}(a_{i_{k}})|_{V}\big{)}\right|\leq C^{m}\sum_{\begin{subarray}{c}\mathbf{i}:[m]\to[N]\\ \ker[\mathbf{i}]=\pi\end{subarray}}1\leq C^{m}N^{|\pi|}.\]
This implies
\[N^{-m/2}\sum_{\begin{subarray}{c}\pi\in\mathcal{P}(m)\\ |\pi|<m/2\end{subarray}}\left|\sum_{\begin{subarray}{c}\mathbf{i}:[m]\to[N]\\ \ker[\mathbf{i}]=\pi\end{subarray}}\prod_{V\in\ker_{G}[\mathbf{i}]}\varphi\big{(}(a_{i_{k}})|_{V}\big{)}\right|\leq\sum_{\begin{subarray}{c}\pi\in\mathcal{P}(m)\\ |\pi|<m/2\end{subarray}}C^{m}N^{|\pi|-m/2}=O(N^{-1/2}).\]
Let us denote by \(\tilde{\mathcal{P}}(m)\) the set of all partitions \(\pi\in\mathcal{P}(m)\) with no singletons. We have proved that only partitions in \(\tilde{\mathcal{P}}(m)\) give a non-zero contribution and if \(|\pi|<m/2\) this contribution is
of order \(O(N^{-1/2})\). Thus, noticing that \(|\pi|\leq m/2\) for any \(\pi\in\tilde{\mathcal{P}}(m)\), with equality only if \(\pi\) is a pair partition, we arrive at
\[\varphi\left(\frac{a_{1}+\cdots+a_{N}}{\sqrt{N}}\right)^{m}=\sum_{\pi\in P_{2} (m)}\;\;N^{-m/2}\sum_{\begin{subarray}{c}\mathbf{i}:[m]\to[N]\\ \ker(\mathbf{i})=\pi\end{subarray}}\varphi(a_{i_{1}}\cdots a_{i_{m}})+\quad O(N^{ -1/2})\.\]
Finally, Proposition 5.2 gives
\[\varphi\left(\frac{a_{1}+\cdots+a_{N}}{\sqrt{N}}\right)^{m}=\sum_{\pi\in P_{2} (m)}\;\;N^{-m/2}\sum_{\begin{subarray}{c}\mathbf{i}:[m]\to[N]\\ \ker(\mathbf{i})=\pi\end{subarray}}\mathbf{1}_{G_{\pi(\mathbf{i})}\subseteq G_{\mathbf{i}} }\quad+\quad O(N^{-1/2})\.\]
_Remark 5.4_.: The set of pairing partitions \(P_{2}(m)\) is empty if \(m\) is odd. Thus, Theorem 5.3 states that odd moments in the CLT for BMT random variables always vanish as \(N\to\infty\), and therefore the limiting distribution must be symmetric if it exists.
Moreover, the even moments satisfy Carleman's condition, namely \(\sum_{k=1}^{\infty}m_{2k}^{-1/(2k)}=+\infty\), where \(m_{2k}\) denotes the \(2k\)-th moment, since Theorem 5.3 gives that each \(m_{2k}\) is bounded by \(\#P_{2}(2k)=1\cdot 3\cdots(2k-1)\). Consequently, the limiting distribution, if it exists, is determined by its moments, and convergence in distribution for the BMT central limit theorem is equivalent to convergence in moments.
**Corollary 5.5** (Boolean Central Limit theorem).: _If the independence graph \(G_{N}\) of the variables \(a_{1},a_{2},\ldots,a_{N}\) is the null graph for every integer \(N\geq 1\), then \((a_{1}+\cdots+a_{N})/\sqrt{N}\) converges in moments as \(N\to\infty\) to the Bernoulli distribution \((\delta_{-1}+\delta_{+1})/2\)._
Proof.: By Remark 5.4 it is enough to consider even moments, so let \(m=2k\). For all \(\mathbf{i}:[m]\to[N]\), the graph \(G_{\mathbf{i}}\) has no edges. So, \(\mathbf{1}_{G_{\pi(\mathbf{i})}\subseteq G_{\mathbf{i}}}\) is \(1\) only when \(G_{\pi(\mathbf{i})}\) has no edges, i.e., only when \(\pi=\{\{1,2\},\ldots,\{2k-1,2k\}\}\). Thus,
\[\lim_{N\to\infty}\varphi\left(\left(\frac{a_{1}+\cdots+a_{N}}{ \sqrt{N}}\right)^{2k}\right) =\lim_{N\to\infty}\frac{1}{N^{k}}\sum_{\begin{subarray}{c}\mathbf{i }:[m]\to[N]\\ \ker[\mathbf{i}]=\{\{1,2\},\ldots,\{2k-1,2k\}\}\end{subarray}}1\] \[=\lim_{N\to\infty}\frac{1}{N^{k}}N(N-1)\cdots(N-k+1)=1.\]
**Corollary 5.6** (Tensor Central Limit theorem).: _If the independence graph \(G_{N}\) of the variables \(a_{1},a_{2},\ldots,a_{N}\) is the complete graph, then \((a_{1}+\cdots+a_{N})/\sqrt{N}\) converges in moments as \(N\to\infty\) to the Gaussian distribution \(\frac{1}{\sqrt{2\pi}}\exp(-t^{2}/2)\,dt\)._
Proof.: Again, we only consider \(m=2k\). Now, for all \(\mathbf{i}:[m]\to[N]\), \(G_{\mathbf{i}}\) is a complete graph. So, \(\mathbf{1}_{G_{\pi(\mathbf{i})}\subseteq G_{\mathbf{i}}}\) is one for all \(\pi\in\mathcal{P}_{2}(2k)\). Thus,
\[\lim_{N\to\infty}\varphi\left(\left(\frac{a_{1}+\cdots+a_{N}}{ \sqrt{N}}\right)^{2k}\right) =\lim_{N\to\infty}\frac{1}{N^{k}}\sum_{\pi\in\mathcal{P}_{2}(2k) }\sum_{\begin{subarray}{c}\mathbf{i}:[m]\to[N]\\ \ker[\mathbf{i}]=\pi\end{subarray}}1\] \[=\lim_{N\to\infty}\frac{1}{N^{k}}N(N-1)\cdots(N-k+1)\#\mathcal{P} _{2}(2k)\] \[=\#\mathcal{P}_{2}(2k).\]
**Corollary 5.7** (Monotone Central Limit theorem).: _If the independence graph \(G_{N}\) of the variables \(a_{1},a_{2},\ldots,a_{N}\) has edge set \(E_{N}=\{(j,i)\in[N]^{2}:i<j\}\), then \((a_{1}+\cdots+a_{N})/\sqrt{N}\) converges in moments as \(N\to\infty\) to the arcsine distribution \(\frac{1}{\pi\sqrt{2-t^{2}}}\,dt\) on \((-\sqrt{2},\sqrt{2})\)._
Proof.: Let \(\pi\in\mathcal{P}_{2}(2k)\), and let \(G_{\pi}\) be its nesting-crossing graph. Note that if \(\pi\) has a crossing, \(G_{\pi}\) cannot be a subgraph of \(G_{\mathbf{i}}\) for any \(\mathbf{i}\), since \(G_{\pi}\) has double edges while \(G_{\mathbf{i}}\) has none.
So let \(\pi\in\mathcal{NC}_{2}(2k)\) with blocks \(b_{1},\ldots,b_{k}\). In this case the nesting-crossing graph is just a nesting graph, and in order that \(G_{\pi}\subseteq G_{\mathbf{i}}\) we need that \(i_{b_{l}}<i_{b_{m}}\) whenever \(b_{l}\) is nested inside \(b_{m}\), where \(i_{b}\) denotes the common value of the tuple \(\mathbf{i}\) on the block \(b\). To count such tuples, we choose an unordered set of \(k\) indices from \([N]\) and count the number of assignments of these indices to the blocks satisfying this condition. Thus the cardinality of the set \(\{\mathbf{i}:[2k]\to[N]\mid\ker[\mathbf{i}]=\pi,\ G_{\pi}\subseteq G_{\mathbf{i}}\}\) equals \(\binom{N}{k}\cdot M(\pi)\), where \(M(\pi)\) is the number of bijective labellings \(L:\pi\to[k]\) such that \(L(b_{i})\leq L(b_{j})\) if \(b_{i}\) is nested in \(b_{j}\).
In the limit for a partition \(\pi\in\mathcal{NC}_{2}(2k)\) we get that
\[\lim_{N\to\infty}\frac{1}{N^{k}}\sum_{\begin{subarray}{c}\mathbf{i}:[m]\to[N]\\ \ker[\mathbf{i}]=\pi\end{subarray}}\mathbf{1}_{G_{\pi}\subseteq G_{\mathbf{i}}} =\lim_{N\to\infty}\frac{1}{N^{k}}{N\choose k}M(\pi)=\frac{M(\pi)}{k!}.\]
Thus summing over all pair partitions we get
\[\lim_{N\to\infty}\varphi\left(\left(\frac{a_{1}+\cdots+a_{N}}{\sqrt{N}}\right)^{2k}\right)=\sum_{\pi\in\mathcal{NC}_{2}(2k)}\lim_{N\to\infty}\frac{1}{N^{k}}\sum_{\begin{subarray}{c}\mathbf{i}:[m]\to[N]\\ \ker[\mathbf{i}]=\pi\end{subarray}}\mathbf{1}_{G_{\pi}\subseteq G_{\mathbf{i}}}=\sum_{\pi\in\mathcal{NC}_{2}(2k)}\frac{M(\pi)}{k!}=\frac{1}{k!}\,\#\mathcal{M}_{2}(2k)\]
where \(\mathcal{M}_{2}(2k)\) denotes the set of monotone pairings; see [2, 11, 5].
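For small \(m\) the right-hand side of (1) can be evaluated by brute force, using the equivalence \(G_{\pi(\boldsymbol{i})}\subseteq G_{\boldsymbol{i}}\iff\ker_{G}[\boldsymbol{i}]=\ker[\boldsymbol{i}]\) of Lemma 2.13. The sketch below is ours and only illustrative (it repeats the kernel helpers from the sketch in Section 3); for \(m=4\) it approaches the fourth moments of the three corollaries, namely \(1\) (Bernoulli), \(3\) (Gaussian), and \(3/2\) (arcsine).

```python
from itertools import product

def ker(i):
    blocks = {}
    for pos, v in enumerate(i):
        blocks.setdefault(v, []).append(pos)
    return sorted(blocks.values())

def ker_G(i, E):
    out = []
    for block in ker(i):
        v, cur = i[block[0]], [block[0]]
        for p, q in zip(block, block[1:]):
            if all((i[l], v) in E for l in range(p + 1, q) if i[l] != v):
                cur.append(q)
            else:
                out.append(cur)
                cur = [q]
        out.append(cur)
    return sorted(out)

def clt_moment(E, N, m):
    """N^{-m/2} #{ i in [N]^m : ker[i] is a pairing and ker_G[i] = ker[i] },
    i.e. the leading term of the m-th moment in Theorem 5.3."""
    count = sum(1 for i in product(range(N), repeat=m)
                if all(len(b) == 2 for b in ker(i)) and ker_G(i, E) == ker(i))
    return count / N ** (m / 2)

N = 16
graphs = {
    "Boolean (no edges)": set(),
    "tensor (complete)":  {(a, b) for a in range(N) for b in range(N) if a != b},
    "monotone (i < j)":   {(j, i) for i in range(N) for j in range(N) if i < j},
}
for name, E in graphs.items():
    # finite-N values approach the limits 1.0, 3.0, and 1.5 as N -> infinity
    print(name, round(clt_moment(E, N, 4), 3))
```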
Not all sequences of graphs lead to a limiting distribution, even if \(G_{n-1}\subset G_{n}\) for all \(n\). We present an example to show this.
_Example 5.8_.: Consider the following sequence of graphs:
* \(G_{0}=(\{1\},\emptyset)\).
* For all \(n\geq 1\), \(G_{2n}=G_{2n-1}\cup\tilde{G}_{2n-1}\), where \(\tilde{G}_{2n-1}\) is a disjoint copy of \(G_{2n-1}\).
* For all \(n\geq 0\), \(G_{2n+1}=\hat{H}_{2n+1}\), where \(H_{2n+1}=G_{2n}\cup\tilde{G}_{2n}\) for a disjoint copy \(\tilde{G}_{2n}\) of \(G_{2n}\), and \(\hat{H}_{2n+1}\) is the graph \(H_{2n+1}\) with all edges between the vertices of \(G_{2n}\) and those of \(\tilde{G}_{2n}\) added.
The first few of these graphs are shown in Figure 4.
Now we consider Bernoulli variables \((X_{i})_{i=1}^{\infty}\) and suppose that \(X_{1},\ldots,X_{2^{n}}\) are BMT independent with respect to \(G_{n}\); this is consistent by the construction of \(G_{n}\). We are interested in the normalized sum
\[Y_{n}=\frac{X_{1}+\cdots+X_{2^{n}}}{2^{n/2}}.\]
It is easy to calculate the first three moments
\[\phi(Y_{n})=0,\quad\phi(Y_{n}^{2})=1,\quad\phi(Y_{n}^{3})=0. \tag{2}\]
For the fourth moment we derive a recursion, treating even and odd indices separately:

\[\phi(Y_{2n}^{4})=\tfrac{1}{4}\big{(}2\,\phi(Y_{2n-1}^{4})+2\,\phi(Y_{2n-1}^{2})^{2}\big{)}=\tfrac{1}{2}\big{(}\phi(Y_{2n-1}^{4})+1\big{)},\qquad\phi(Y_{2n+1}^{4})=\tfrac{1}{4}\big{(}2\,\phi(Y_{2n}^{4})+6\,\phi(Y_{2n}^{2})^{2}\big{)}=\tfrac{1}{2}\big{(}\phi(Y_{2n}^{4})+3\big{)},\]

where the first relation uses the Boolean independence of the two copies of \(G_{2n-1}\) inside \(G_{2n}\), and the second the tensor independence of the two copies of \(G_{2n}\) inside \(G_{2n+1}\). Combining the two relations, \(\phi(Y_{2n+2}^{4})=\frac{1}{4}\phi(Y_{2n}^{4})+\frac{5}{4}\) and \(\phi(Y_{2n+3}^{4})=\frac{1}{4}\phi(Y_{2n+1}^{4})+\frac{7}{4}\). Hence, one sees that
\[\phi(Y_{2n}^{4})\to 5/3\text{ and }\phi(Y_{2n+1}^{4})\to 7/3.\]
Thus we have neither convergence in moments nor convergence in distribution, by Remark 5.4.
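A direct iteration of the recursion (ours, purely for illustration) exhibits the two subsequential limits numerically:

```python
m4, history = 1.0, [1.0]   # phi(Y_0^4) = 1 for a single Bernoulli variable
for n in range(1, 31):
    # passing to Y_n doubles the variables: tensor step if n is odd, Boolean if even
    m4 = 0.5 * (m4 + (3.0 if n % 2 == 1 else 1.0))
    history.append(m4)
print(history[-2], history[-1])  # ~ 2.3333 (odd, -> 7/3) and ~ 1.6667 (even, -> 5/3)
```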
Figure 4: The sequence of graphs \(G_{i}\), which do not satisfy a CLT.
### Further properties of the CLT
We have observed some properties of the BMT central limit theorem, such as determinacy by moments and symmetry. Here, we consider further properties in relation to the associated independence graphs. First, we show that if two graphs differ by a small number of edges, then they yield the same limiting distribution in the CLT. Second, we consider boundedness of the support of the limiting distribution, and finally we describe, with examples, how consistency is reflected in Boolean, tensor, and monotone convolutions.
**Theorem 5.9** (Perturbation).: _Let \(a_{1},\ldots,a_{N},b_{1},\ldots,b_{N}\) be centered variables with unit variance and uniformly bounded moments of all orders. Suppose \(G_{N}\) and \(H_{N}\) are the independence graphs of \(a_{1},\ldots,a_{N}\) and \(b_{1},\ldots,b_{N}\), respectively. If the symmetric difference of \(G_{N}\) and \(H_{N}\) has \(o(N^{2})\) edges, then_
\[\lim_{N\to\infty}\left[\varphi\left(\frac{a_{1}+\cdots+a_{N}}{\sqrt{N}}\right) ^{m}-\varphi\left(\frac{b_{1}+\cdots+b_{N}}{\sqrt{N}}\right)^{m}\right]=0 \qquad\forall\ m\geq 1.\]
_Consequently, \((a_{1}+\cdots+a_{N})/\sqrt{N}\) has a limiting distribution if and only if \((b_{1}+\cdots+b_{N})/\sqrt{N}\) does; moreover, the two limiting distributions coincide if any of them exists._
Proof.: First note that \(G_{\pi(\boldsymbol{i})}=H_{\pi(\boldsymbol{i})}\) since this is a graph that depends only on the partition \(\pi\in P(m)\) and a tuple \(\boldsymbol{i}=(i_{1},\ldots,i_{m})\) with \(\ker[\boldsymbol{i}]=\pi\) and not on \(G_{N}\) nor \(H_{N}\). So, to avoid confusion, let us put \(F_{\pi(\boldsymbol{i})}=G_{\pi(\boldsymbol{i})}=H_{\pi(\boldsymbol{i})}\). Due to Theorem 5.3, it is enough to show that
\[0=\lim_{N\to\infty}\sum_{\begin{subarray}{c}\boldsymbol{i}:[m]\to[N]\\ \ker[\boldsymbol{i}]=\pi\end{subarray}}N^{-m/2}\left(\boldsymbol{1}_{F_{\pi( \boldsymbol{i})}\subseteq G_{\boldsymbol{i}}}-\boldsymbol{1}_{F_{\pi( \boldsymbol{i})}\subseteq H_{\boldsymbol{i}}}\right)\]
for each pairing partition \(\pi\in P_{2}(m)\). Since \(\boldsymbol{1}_{F_{\pi(\boldsymbol{i})}\subseteq G_{\boldsymbol{i}}}-\boldsymbol{1}_{F_{\pi(\boldsymbol{i})}\subseteq H_{\boldsymbol{i}}}=0\) unless either \(F_{\pi(\boldsymbol{i})}\subseteq G_{\boldsymbol{i}}\) and \(F_{\pi(\boldsymbol{i})}\not\subseteq H_{\boldsymbol{i}}\), or \(F_{\pi(\boldsymbol{i})}\subseteq H_{\boldsymbol{i}}\) and \(F_{\pi(\boldsymbol{i})}\not\subseteq G_{\boldsymbol{i}}\), a condition that holds only if \(G_{\boldsymbol{i}}\not\subseteq H_{\boldsymbol{i}}\) or \(H_{\boldsymbol{i}}\not\subseteq G_{\boldsymbol{i}}\), we obtain
\[\sum_{\begin{subarray}{c}\boldsymbol{i}:[m]\to[N]\\ \ker[\boldsymbol{i}]=\pi\end{subarray}}\left|\boldsymbol{1}_{F_{\pi( \boldsymbol{i})}\subseteq G_{\boldsymbol{i}}}-\boldsymbol{1}_{F_{\pi( \boldsymbol{i})}\subseteq H_{\boldsymbol{i}}}\right|\leq|S_{N,\pi}|\]
where \(S_{N,\pi}=\{\boldsymbol{i}:[m]\to[N]:\ker[\boldsymbol{i}]=\pi\) and \(G_{\boldsymbol{i}}\not\subseteq H_{\boldsymbol{i}}\) or \(H_{\boldsymbol{i}}\not\subseteq G_{\boldsymbol{i}}\}\).
Take \(\pi=\{V_{1},V_{2},\ldots,V_{r}\}\in P_{2}(m)\) with \(r\) the number of blocks of \(\pi\). For a given tuple \(\boldsymbol{i}=(i_{1},\ldots,i_{m})\), the graphs \(G_{\boldsymbol{i}}\) and \(H_{\boldsymbol{i}}\) have the same vertex set \(\{i_{k}:k=1,2,\ldots,m\}\). Hence, if \(\ker[\boldsymbol{i}]=\pi\), then \(\boldsymbol{i}\in S_{N,\pi}\) only if \(G_{\boldsymbol{i}}\) has at least one edge that \(H_{\boldsymbol{i}}\) does not, or \(H_{\boldsymbol{i}}\) has at least one edge that \(G_{\boldsymbol{i}}\) does not. Thus, for each \(\boldsymbol{i}\in S_{N,\pi}\), at least one of the graphs \(G_{\boldsymbol{i}}\) and \(H_{\boldsymbol{i}}\) shares at least one edge with \(G_{N}\triangle H_{N}\), the symmetric difference of \(G_{N}\) and \(H_{N}\). This implies all elements in \(S_{N,\pi}\) can be constructed in the following way: (1) pick two distinct blocks \(V,W\in\pi\), an edge \((j_{V},j_{W})\in E(G_{N}\triangle H_{N})\), and \(r-2\) distinct values \(j_{U}\in[N]\setminus\{j_{V},j_{W}\}\) for \(U\in\pi\setminus\{V,W\}\); (2) take \(\boldsymbol{i}=(i_{1},\ldots,i_{m})\) where \(i_{k}=j_{U}\) provided \(k\in U\). The previous construction is not necessarily injective, but it does define a set containing \(S_{N,\pi}\). So, we get
\[|S_{N,\pi}|\leq r(r-1)\cdot|E(G_{N}\triangle H_{N})|\cdot(N-2)(N-3)\cdots(N-(r -1))\leq r(r-1)\cdot|E(G_{N}\triangle H_{N})|\cdot N^{r-2}\]
The desired result then follows since \(r\leq m/2\) and \(\lim_{N\to\infty}\left|E(G_{N}\triangle H_{N})\right|/N^{2}=0\) imply
\[\lim_{N\to\infty}\frac{r(r-1)\cdot\left|E(G_{N}\triangle H_{N})\right|\cdot N^ {r-2}}{N^{m/2}}=0.\]
**Corollary 5.10**.: _If the independence graph \(G_{N}\) of the variables \(a_{1},a_{2},\ldots,a_{N}\) has \(o(N^{2})\) edges, then \((a_{1}+\cdots+a_{N})/\sqrt{N}\) converges in moments as \(N\to\infty\) to the Bernoulli distribution \((\delta_{-1}+\delta_{+1})/2\)._
**Corollary 5.11**.: _If the independence graph \(G_{N}\) of the variables \(a_{1},a_{2},\ldots,a_{N}\) contains no copy of a fixed complete bipartite graph \(K_{r,s}\) with \(s\geq r\geq 2\), then \((a_{1}+\cdots+a_{N})/\sqrt{N}\) converges in moments as \(N\to\infty\) to the Bernoulli distribution \((\delta_{-1}+\delta_{+1})/2\)._
Proof.: This follows from Theorem 5.9 (comparing with the null graph, as in Corollary 5.10) and the Kővári–Sós–Turán theorem, the latter stating that for fixed integers \(s\geq r\geq 2\) there is a constant \(C>0\) such that any \(N\)-vertex graph with at least \(CN^{2-\frac{1}{r}}\) edges contains a complete bipartite subgraph \(K_{r,s}\); hence \(G_{N}\) has \(O(N^{2-\frac{1}{r}})=o(N^{2})\) edges.
**Proposition 5.12**.: _For each \(N\), let \(a_{1},\ldots,a_{N}\) and \(b_{1},\ldots,b_{N}\) be two tuples of identically distributed self-adjoint random variables with mean zero and variance one. Suppose that \(G_{N}\) and \(H_{N}\) are the independence graphs of \(a_{1},\ldots,a_{N}\) and \(b_{1},\ldots,b_{N}\), respectively. If \(G_{N}\) is a subgraph of \(H_{N}\) for all \(N\), and if \((a_{1}+\cdots+a_{N})/\sqrt{N}\) and \((b_{1}+\cdots+b_{N})/\sqrt{N}\) converge in moments to \(X\) and \(Y\), respectively, then \(\|X\|_{\infty}\leq\|Y\|_{\infty}\)._
Proof.: If \(Y\) has unbounded support, then the statement holds trivially. Otherwise \(\|Y\|_{\infty}\) is finite and determined by the moments of \(Y\).
Notice that both limiting distributions do not depend on the choice of \(a_{i}\)'s, and we may assume that \(a_{i}\) has the same distribution as \(b_{i}\), for all \(i\).
Now, for any index \(\mathbf{i}=(i_{1},\ldots,i_{k})\),
\[\varphi(a_{i_{1}}\cdots a_{i_{k}})\ \leq\ \varphi(b_{i_{1}}\cdots b_{i_{k}}).\]
Consequently, the moments of \((b_{1}+\cdots+b_{N})/\sqrt{N}\) are larger than the moments of \((a_{1}+\cdots+a_{N})/\sqrt{N}\). By taking limits we see that, if we denote the even moments of \(X\) by \((m_{2n}(X))_{n>0}\) and the even moments of \(Y\) by \((m_{2n}(Y))_{n>0}\), we have the inequality
\[m_{2n}(X)\leq m_{2n}(Y),\quad\text{for all }n.\]
This means that \((m_{2n}(X))^{1/(2n)}\leq(m_{2n}(Y))^{1/(2n)}\leq\|Y\|_{\infty}\) and thus, letting \(n\to\infty\), we see that \(\|X\|_{\infty}\leq\|Y\|_{\infty}\).
**Theorem 5.13**.: _Let \(M_{N}\) denote the greatest integer \(M\geq 0\) so that the full graph \(K_{M}\) is contained in \(G_{N}\). If \(\liminf_{N\to\infty}M_{N}/N>0\), then the limit distribution of \((a_{1}+\cdots+a_{N})/\sqrt{N}\), if it exists, has non-compact support._
Proof.: Suppose \((a_{1}+\cdots+a_{N})/\sqrt{N}\) has a limiting distribution, i.e., there exists a probability measure \(\mu\) on the real line so that
\[\lim_{N\to\infty}\varphi\left(\frac{a_{1}+\cdots+a_{N}}{\sqrt{N}}\right)^{k}=\ \int_{-\infty}^{+\infty}t^{k}\,d\mu(t)=:m_{k}(\mu)\]
for all integers \(k\geq 1\). Note that if \(\mu\) has compact support, say \(\mu((-\infty,-L)\cup(+L,+\infty))=0\) for some \(L\geq 0\), then \(\sup_{k\geq 1}\sqrt[k]{|m_{k}(\mu)|}\leq L<\infty\). Thus, to prove that \(\mu\) does not have compact support, it is enough to show that \(\sup_{k\geq 1}\sqrt[2k]{|m_{2k}(\mu)|}=+\infty\).
By hypothesis, for each integer \(N\geq 1\), there exists a set \(J_{N}=\{j_{1}<j_{2}<\cdots<j_{M_{N}}\}\subset[N]\) so that \(G_{N}\) contains the full graph on \(J_{N}\), i.e., \((j_{k},j_{\ell})\) is an edge of \(G_{N}\) for any distinct \(j_{k},j_{\ell}\in J_{N}\). Thus, if \(\boldsymbol{i}=(i_{1},i_{2},\ldots,i_{2k})\) satisfies \(i_{r}\in J_{N}\) for \(r=1,2,\ldots,2k\), the graph \(G_{\boldsymbol{i}}\) is the full graph since it is the restriction of \(G_{N}\) to the vertex set \(\{i_{r}\mid r=1,2,\ldots,2k\}\subset J_{N}\), and hence \(G_{\pi(\boldsymbol{i})}\) is always a subgraph of \(G_{\boldsymbol{i}}\) with \(\pi=\ker[\boldsymbol{i}]\). So, for any pairing partition \(\pi\in P_{2}(2k)\), we get
\[\sum_{\begin{subarray}{c}\boldsymbol{i}:[2k]\to[N]\\ \ker[\boldsymbol{i}]=\pi\end{subarray}}\boldsymbol{1}_{G_{\pi(\boldsymbol{i})}\subseteq G_{\boldsymbol{i}}} \geq\sum_{\begin{subarray}{c}\boldsymbol{i}:[2k]\to J_{N}\\ \ker[\boldsymbol{i}]=\pi\end{subarray}}\boldsymbol{1}_{G_{\pi(\boldsymbol{i})}\subseteq G_{\boldsymbol{i}}}=\ M_{N}(M_{N}-1)\cdots(M_{N}-k+1)\]
where \(k\) is the number of blocks in \(\pi\) since it is a pairing. Put \(2C=\liminf_{N\to\infty}M_{N}/N>0\). Thus, for each \(k\geq 1\), there exists \(\tilde{N}_{k}\geq 1\) so that \(N\geq\tilde{N}_{k}\) implies \(M_{N}\geq k\) and
\[\left(\frac{M_{N}}{N}\right)\left(\frac{M_{N}-1}{N}\right)\cdots\left(\frac{M_ {N}-k+1}{N}\right)\geq C^{k}.\]
Hence, Theorem 5.3 implies \(m_{2k}(\mu)\geq|P_{2}(2k)|\,C^{k}\) since the last two inequalities give
\[\sum_{\pi\in P_{2}(2k)}\ N^{-k}\sum_{\begin{subarray}{c}\boldsymbol{i}:[2k]\to[N]\\ \ker[\boldsymbol{i}]=\pi\end{subarray}}\boldsymbol{1}_{G_{\pi(\boldsymbol{i})}\subseteq G_{\boldsymbol{i}}} \geq\sum_{\pi\in P_{2}(2k)}\ C^{k} = |P_{2}(2k)|\,C^{k}\]
for \(N\geq\tilde{N}_{k}\). Finally, \(|P_{2}(2k)|=(2k-1)(2k-3)\cdots 1=(2k-1)!!\) grows superexponentially in \(k\), so \(\sqrt[2k]{|P_{2}(2k)|\,C^{k}}\to+\infty\) and hence \(\sup_{k\geq 1}\sqrt[2k]{|m_{2k}(\mu)|}=+\infty\).
Finally, we consider the role of consistency in the BMT Central Limit Theorems.
**Proposition 5.14**.: _Let \(a_{1},\ldots,a_{M_{N}},a_{M_{N}+1},\ldots,a_{M_{N}+L_{N}}\) be centered variables with unit variance, uniformly bounded moments of all orders, and independence graph \(G_{M_{N}+L_{N}}\) where \(N=M_{N}+L_{N}\). Suppose \((a_{1}+\cdots+a_{M_{N}})/\sqrt{M_{N}}\) and \((a_{M_{N}+1}+\cdots+a_{M_{N}+L_{N}})/\sqrt{L_{N}}\) converge in moments to \(a_{M}\) and \(a_{L}\), respectively. If \(t=\lim_{N\to\infty}M_{N}/N\) exists, then \((a_{1}+\cdots+a_{N})/\sqrt{N}\) converges in moments to \(\sqrt{t}\cdot a_{M}\,+\,\sqrt{1-t}\cdot a_{L}\) with \(a_{M}\) and \(a_{L}\)_
1. _boolean independent if_ \((i,j),(j,i)\notin E_{M_{N}+L_{N}}\) _whenever_ \(i\in[M_{N}]\) _and_ \(j\in[M_{N}+L_{N}]\setminus[M_{N}]\)__
2. _monotone independent if_ \((j,i)\in E_{M_{N}+L_{N}}\) _and_ \((i,j)\notin E_{M_{N}+L_{N}}\) _whenever_ \(i\in[M_{N}]\) _and_ \(j\in[M_{N}+L_{N}]\setminus[M_{N}]\)__
3. _tensor independent if_ \((i,j),(j,i)\in E_{M_{N}+L_{N}}\) _whenever_ \(i\in[M_{N}]\) _and_ \(j\in[M_{N}+L_{N}]\setminus[M_{N}]\)__
Proof.: Take \(a_{N,1}=(a_{1}+\cdots+a_{M_{N}})/\sqrt{N}\) and \(a_{N,2}=(a_{M_{N}+1}+\cdots+a_{M_{N}+L_{N}})/\sqrt{N}\). Thus, for each integer \(m\geq 1\), we get
\[\varphi\left(\frac{a_{1}+\cdots+a_{N}}{\sqrt{N}}\right)^{m}=\sum_{i_{1},\ldots,i_{m}=1}^{2}\varphi\,(a_{N,i_{1}}a_{N,i_{2}}\cdots a_{N,i_{m}}).\]
Now, if _(1)_ (respectively _(2)_, _(3)_) holds, we have \(a_{N,1}\) and \(a_{N,2}\) boolean (monotone, tensor, respectively) independent due to Proposition 3.12, and hence the last equality implies
\[\lim_{N\to\infty}\varphi\left(\frac{a_{1}+\cdots+a_{N}}{\sqrt{N}}\right)^{m}= \varphi\,(\sqrt{t}\cdot a_{M}\,+\,\sqrt{1-t}\cdot a_{L})^{m}\]
with \(a_{M}\) and \(a_{L}\) boolean (monotone, tensor, respectively) independent since \(a_{N,1}\) and \(a_{N,2}\) converge in moments to \(\sqrt{t}\cdot a_{M}\) and \(\sqrt{1-t}\cdot a_{L}\), respectively.
_Example 5.15_.: Suppose the independence graph \(G_{N}\) of the variables \(a_{1},\cdots,a_{N}\) is the complete bipartite graph \(K_{M_{N},L_{N}}\) with \(N=M_{N}+L_{N}\). From Corollary 5.5, it follows that each of \((a_{1}+\cdots+a_{M_{N}})/\sqrt{M_{N}}\) and \((a_{M_{N}+1}+\cdots+a_{M_{N}+L_{N}})/\sqrt{L_{N}}\) converges in moments to a Bernoulli distribution \((\delta_{-1}+\delta_{+1})/2\). If additionally \(\lim_{N\to\infty}M_{N}/N=t\) exists, then \(0\leq t\leq 1\) and \((a_{1}+\cdots+a_{N})/\sqrt{N}\) converges in moments to
\[\left[\frac{1}{2}\delta_{-\sqrt{t}}+\frac{1}{2}\delta_{+\sqrt{t}}\right]* \left[\frac{1}{2}\delta_{-\sqrt{1-t}}+\frac{1}{2}\delta_{+\sqrt{1-t}}\right]\]
where \(*\) denotes the classical convolution of measures.
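For concreteness, here is a minimal Python sketch (our own illustration; the helper name is hypothetical) that tabulates the atoms of this classical convolution and checks that the limiting measure is centered with unit variance for any \(0\leq t\leq 1\):

```python
import itertools

def bipartite_clt_limit(t):
    """Atoms and weights of the law of s1*sqrt(t) + s2*sqrt(1-t),
    where s1, s2 are independent fair signs (Example 5.15)."""
    atoms = {}
    for s1, s2 in itertools.product((-1.0, 1.0), repeat=2):
        x = round(s1 * t**0.5 + s2 * (1.0 - t)**0.5, 12)
        atoms[x] = atoms.get(x, 0.0) + 0.25
    return atoms

law = bipartite_clt_limit(0.3)
mean = sum(w * x for x, w in law.items())
var = sum(w * x**2 for x, w in law.items()) - mean**2
print(law)        # four atoms of weight 1/4 (two merge into weight 1/2 if t = 1/2)
print(mean, var)  # 0.0 and 1.0
```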
_Example 5.16_.: Suppose the independence graph \(G_{N}\) of the variables \(a_{1},\cdots,a_{N}\) is the disjoint union of two complete graphs \(K_{M_{N}}\) and \(K_{L_{N}}\) with vertex sets \([M_{N}]\) and \([M_{N}+L_{N}]\setminus[M_{N}]\), respectively, and \(N=M_{N}+L_{N}\). From Corollary 5.6, it follows that each of \((a_{1}+\cdots+a_{M_{N}})/\sqrt{M_{N}}\) and \((a_{M_{N}+1}+\cdots+a_{M_{N}+L_{N}})/\sqrt{L_{N}}\) converges in moments to a normal distribution \(\mathcal{N}(0,1)\). Thus, if \(\lim_{N\to\infty}M_{N}/N=t\) exists, then \(0\leq t\leq 1\) and \((a_{1}+\cdots+a_{N})/\sqrt{N}\) converges in moments to
\[\mathcal{N}(0,\sqrt{t})\uplus\mathcal{N}(0,\sqrt{1-t})\]
where \(\uplus\) denotes the Boolean convolution of measures.
_Example 5.17_.: Suppose \(G_{N}\) is the Turán graph \(T(N,r_{N})\), a complete multi-partite graph formed by partitioning a set of \(N\) vertices into \(r_{N}\) subsets with sizes as equal as possible. This graph becomes the empty graph (when \(r_{N}=1\)), the complete graph (when \(r_{N}=N\)), and a complete bipartite graph (when \(r_{N}=2\)). If \(r_{N}\to r<\infty\) as \(N\to\infty\), then the BMT central limit theorem associated to \(T(N,r_{N})\) has as its limit distribution a convolution of \(r\) Bernoulli distributions.
_Remark 5.18_.: Proposition 5.14 and the last examples show that the number of edges does not determine the compactness of the support of the limit distribution in Theorem 5.3. Indeed, suppose \(G_{N}=K_{\lfloor N/2\rfloor}\sqcup\emptyset_{\lceil N/2\rceil}\) and \(H_{N}=K_{\lfloor N/2\rfloor,\lceil N/2\rceil}\), and let \(n_{N}\) and \(m_{N}\) denote the number of edges of \(G_{N}\) and \(H_{N}\), respectively. Then, we have
\[n_{N}=\frac{\lfloor\frac{N}{2}\rfloor(\lfloor\frac{N}{2}\rfloor-1)}{2}\leq \lfloor\frac{N^{2}}{8}\rfloor\leq\lfloor\frac{N^{2}}{4}\rfloor\leq\lfloor\frac {N}{2}\rfloor\lceil\frac{N}{2}\rceil=m_{N}.\]
Thus, the limit distribution associated to \(H_{N}\) is compactly supported (being the classical convolution of two Bernoulli distributions), while the limit distribution associated to \(G_{N}\) is not (being the boolean convolution of a Gaussian distribution and a Bernoulli distribution).
## 6 Poisson Limit Theorem
Apart from the central limit theorem, the law of rare events, also known as the Poisson limit theorem, is probably the most important theorem in probability.
In this limit theorem, one considers sums of independent Bernoulli variables with common parameter \(\lambda/n\) and studies their limit. Such limits also exist under BMT independence. The combinatorics appearing are similar to those for the central limit theorem. Not surprisingly, the main difference is that here we need to pass from the set of all pair partitions to the set of all partitions.
**Theorem 6.1**.: _Suppose \(a_{1},a_{2},\ldots,a_{N}\) are BMT independent with respect to the graph \(G_{N}\). If each \(a_{i}\) has Bernoulli distribution \(\mu_{N}=(1-\frac{\lambda}{N})\delta_{0}+\frac{\lambda}{N}\delta_{1}\), then the moments of the sum \(a_{1}+\cdots+a_{N}\) satisfy_
\[\varphi\left[(a_{1}+\cdots+a_{N})^{m}\right]\ =\sum_{\pi\in P(m)}\lambda^{\#(\pi)} \ N^{-\#(\pi)}\sum_{\begin{subarray}{c}\mathbf{i}:[m]\to[N]\\ \ker[\mathbf{i}]=\pi\end{subarray}}\mathbf{1}_{G_{\pi(\mathbf{i})}\subseteq G_{\mathbf{i}} }\quad+\quad O(N^{-1})\.\]
Proof.: Expanding the product \((a_{1}+\cdots+a_{N})^{m}\) and using the fact that \(\ker[\mathbf{i}]=\ker[\mathbf{i^{\prime}}]\) defines an equivalence relation on \(\{\mathbf{i}\mid\mathbf{i}:[m]\to[N]\}\), we have
\[\varphi(a_{1}+\cdots+a_{N})^{m} =\sum_{\pi\in P(m)}\ \sum_{\begin{subarray}{c}\mathbf{i}:[m]\to[N]\\ \ker[\mathbf{i}]=\pi\end{subarray}}\varphi(a_{i_{1}}a_{i_{2}}\cdots a_{i_{m}}).\]
Notice that \(\varphi(a_{i}^{k})=\lambda/N\) since each \(a_{i}\) has Bernoulli distribution \(\mu_{N}=(1-\frac{\lambda}{N})\delta_{0}+\frac{\lambda}{N}\delta_{1}\). So, for each partition \(\pi\in P(m)\), the BMT independence of \(a_{1},a_{2},\ldots,a_{N}\) gives
\[\sum_{\begin{subarray}{c}\mathbf{i}:[m]\to[N]\\ \ker[\mathbf{i}]=\pi\end{subarray}}\varphi(a_{i_{1}}a_{i_{2}}\cdots a_{i_{m}}) =\sum_{\begin{subarray}{c}\mathbf{i}:[m]\to[N]\\ \ker[\mathbf{i}]=\pi\end{subarray}}\prod_{V\in\ker_{G}[\mathbf{i}]}\varphi\Big(\prod_{k\in V}a_{i_{k}}\Big) =\sum_{\begin{subarray}{c}\mathbf{i}:[m]\to[N]\\ \ker[\mathbf{i}]=\pi\end{subarray}}\left(\frac{\lambda}{N}\right)^{\#(\ker_{G}[\mathbf{i}])}\]
where \(\#(\ker_{G}[\mathbf{i}])\) denotes the number of blocks in the partition \(\ker_{G}[\mathbf{i}]\). Now, recall that \(\ker_{G}[\mathbf{i}]\) is a refinement of \(\ker[\mathbf{i}]\), so we can write
\[\sum_{\begin{subarray}{c}\mathbf{i}:[m]\rightarrow[N]\\ \ker[\mathbf{i}]=\pi\end{subarray}}\left(\frac{\lambda}{N}\right)^{\#(\ker_{G}[ \mathbf{i}])}\ \ =\ \ \sum_{\begin{subarray}{c}\theta\in P(m)\\ \theta\leq\pi\end{subarray}}\sum_{\begin{subarray}{c}\mathbf{i}:[m]\rightarrow[N] \\ \ker[\mathbf{i}]=\pi\\ \ker_{G}[\mathbf{i}]=\theta\end{subarray}}\left(\frac{\lambda}{N}\right)^{\#( \ker_{G}[\mathbf{i}])}.\]
Moreover, for any partitions \(\pi,\theta\in P(m)\) with \(\theta\leq\pi\), we have
\[\left|\sum_{\begin{subarray}{c}\mathbf{i}:[m]\rightarrow[N]\\ \ker[\mathbf{i}]=\pi,\ker_{G}[\mathbf{i}]=\theta\end{subarray}}\left(\frac{\lambda}{N }\right)^{\#(\ker_{G}[\mathbf{i}])}\right|\leq\left(\frac{|\lambda|}{N}\right)^{ \#(\theta)}\left|\sum_{\begin{subarray}{c}\mathbf{i}:[m]\rightarrow[N]\\ \ker[\mathbf{i}]=\pi\end{subarray}}1\right|\leq|\lambda|^{\#(\theta)}\,N^{\#(\pi) -\#(\theta)}\]
where \(\#(\pi)\) and \(\#(\theta)\) denote the number of blocks in \(\pi\) and \(\theta\), respectively. Hence, since \(\theta\leq\pi\) implies \(\#(\theta)\geq\#(\pi)\) with equality only if \(\theta=\pi\), we obtain
\[\sum_{\begin{subarray}{c}\mathbf{i}:[m]\rightarrow[N]\\ \ker[\mathbf{i}]=\pi\end{subarray}}\left(\frac{\lambda}{N}\right)^{\#(\ker_{G}[ \mathbf{i}])}\ \ =\ \ \sum_{\begin{subarray}{c}\mathbf{i}:[m]\rightarrow[N]\\ \ker[\mathbf{i}]=\pi,\ker_{G}[\mathbf{i}]=\pi\end{subarray}}\left(\frac{\lambda}{N} \right)^{\#(\pi)}\ \ \ +\ \ \ O(N^{-1}).\]
Therefore, due to the equivalence of the conditions \(G_{\pi(\mathbf{i})}\subseteq G_{\mathbf{i}}\) and \(\ker[\mathbf{i}]=\ker_{G}[\mathbf{i}]\) for any \(\mathbf{i}:[m]\rightarrow[N]\) with \(\ker[\mathbf{i}]=\pi\), we get
\[\varphi\left[(a_{1}+\cdots+a_{N})^{m}\right]\ =\ \sum_{\pi\in P(m)} \lambda^{\#(\pi)}\ N^{-\#(\pi)}\sum_{\begin{subarray}{c}\mathbf{i}:[m] \rightarrow[N]\\ \ker[\mathbf{i}]=\pi\end{subarray}}\mathbf{1}_{G_{\pi(\mathbf{i})}\subseteq G_{\mathbf{i}}}\ \ \ +\ \ \ O(N^{-1})\.\]
Similarly to the CLT, we may deduce the Boolean, monotone and tensor Poisson limit theorems as applications of the above theorem, using arguments analogous to those in the proofs of Corollaries 5.5, 5.6 and 5.7. We leave the details to the reader.
**Corollary 6.2**.: _For each \(N\), let \(a_{1},\ldots,a_{N}\) have the Bernoulli distribution \(\mu_{N}=(1-\frac{\lambda}{N})\delta_{0}+\frac{\lambda}{N}\delta_{1}\). Then, as \(N\rightarrow\infty\):_
1. _If_ \(a_{1},\ldots,a_{N}\) _are BMT independent with respect to the complete graph_ \(G_{N}=K_{N}\)_, then the variable_ \(a_{1}+\cdots+a_{N}\) _converges in moments to a classical Poisson distribution (see the numerical sketch below)._
2. _If_ \(a_{1},\ldots,a_{N}\) _are BMT independent with respect to the empty graph_ \(G_{N}=\emptyset_{N}\)_, then the variable_ \(a_{1}+\cdots+a_{N}\) _converges in moments to a Boolean Poisson distribution,_ _[_23_]__._
3. _If_ \(a_{1},\ldots,a_{N}\) _are BMT independent with respect to the graph_ \(G_{N}\) _associated with a total order_ \(<\)_, then the variable_ \(a_{1}+\cdots+a_{N}\) _converges in moments to a monotone Poisson distribution,_ _[_16_]__._
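Recall that, for commuting variables, tensor independence is the algebraic counterpart of classical independence, so case (1) can be illustrated by a direct simulation. The following sketch (our own illustration, not part of the proof) compares sample moments of \(a_{1}+\cdots+a_{N}\) with those of a Poisson variable with parameter \(\lambda\):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, N, samples = 2.0, 10**4, 10**6

# a_1 + ... + a_N for classically independent Bernoulli(lam/N) variables,
# i.e., a Binomial(N, lam/N) random variable
s = rng.binomial(N, lam / N, size=samples).astype(float)
p = rng.poisson(lam, size=samples).astype(float)

for m in range(1, 5):
    print(m, (s**m).mean(), (p**m).mean())   # sample moments nearly coincide
```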
## 7 Concluding Remarks
We have started the development of a new framework, both analytical and algebraic, that enables us to investigate arbitrary mixtures of Boolean, monotone and tensor independence. This setting provides a unified approach to these three notions of independence through a digraph depicting pair-wise independence relations and what we have called the kernel of a function subordinated to a digraph. Nonetheless, besides establishing Central and Poisson-type Limit Theorems, there are many interesting problems that remain open. We mention a few in this last section.
1. The ideas in this paper and those from [8] could be combined to obtain a corresponding notion of BMFT independence. In [8], the authors introduced the notion of _\(\mathcal{T}\)-tree_ independence, which allowed them to study mixtures of Boolean, free, and monotone independence. A closer look at our notion of the kernel subordinated to a digraph reveals that this object splits mixed moments according to the commutation relations from tensor independence, much in the spirit of the reduced non-crossing partitions from mixtures of free and tensor independence in [22, 14]. A notion of BMFT independence seems likely to arise from combining the maximal non-crossing partitions from _\(\mathcal{T}\)-tree_ independence and the commutation relations encoded in the kernel subordinated to a digraph.
2. The CLT and the Poisson limit theorem hint at the possible set of partitions that describes the combinatorics of BMT independence. It would be interesting to study this intuition further in a systematic way. One possible direction, and an important open question, is to define cumulants with respect to BMT independence.
3. It has been shown that if we consider sequences of random variables where commuting and anticommuting relations are taken randomly, one obtains a deterministic limit in the CLT, which corresponds to \(q\)-Gaussian distributions [20], see also [3] for a generalization. It would be interesting to see whether taking random directed graphs as the independence graphs in the BMT central limit theorem would yield some interesting interpolation between Gaussian, arcsine and symmetric Bernoulli distributions.
4. It would be desirable to classify all probability measures that arise as limits of the CLT for BMT independent random variables. In particular, one should determine the digraphs that recover and generalize the limiting measures obtained in [25] for BM independence. These latter measures were shown to be symmetric and compactly supported; they include the semi-circular law, and their even moments satisfy the generalized recurrence relation for Catalan numbers, namely, \[g_{n}=\sum_{r=1}^{n}\gamma_{r}\,g_{r-1}g_{n-r}.\] Section 5 in this paper shows that measures with non-compact support appear in the CLT for BMT independent random variables, so they are not covered by BM independence.
5. Delving further into the last remark, one would like to find properties of the digraphs \(G_{N}=(V_{N},E_{N})\) that determine the compactness of the support of the limiting measures in the CLT for BMT independent random variables. This seems to be a non-trivial problem, since it amounts to analyzing the growth, as \(N\) goes to infinity, of the number of sub-graphs of \(G_{N}\) that are isomorphic to each nesting-crossing graph \(G_{\pi}\). Moreover, the edge set \(E_{N}\) must be of order \(N^{2}\) (i.e., \(\lim_{N\to\infty}|E_{N}|/N^{2}>0\)) to obtain a limiting measure different from the Bernoulli distribution, but at this order both compactly and non-compactly supported measures appear.
|
2309.09188 | Simon Conjecture and the $\text{v}$-number of monomial ideals | Let $I\subset S$ be a graded ideal of a standard graded polynomial ring $S$
with coefficients in a field $K$, and let $\text{v}(I)$ be the
$\text{v}$-number of $I$. In previous work, we showed that for any graded ideal
$I\subset S$ generated in a single degree, $\text{v}(I^k)=\alpha(I)k+b$,
for all $k\gg0$, where $\alpha(I)$ is the initial degree of $I$ and $b$ is a
suitable integer. In the present paper, using polarization, we extend Simon
conjecture to any monomial ideal. As a consequence, if Simon conjecture holds,
and all powers of $I$ have linear quotients, then $b\in\{-1,0\}$. This fact
suggests that if $I$ is an equigenerated monomial ideal with linear powers, then
$\text{v}(I^k)=\alpha(I)k-1$, for all $k\ge1$. We verify this conjecture for
monomial ideals with linear powers having $\text{depth}S/I=0$, edge ideals with
linear resolution, polymatroidal ideals, and Hibi ideals. | Antonino Ficarra | 2023-09-17T07:26:13Z | http://arxiv.org/abs/2309.09188v2 | # Simon conjecture and
###### Abstract.
Let \(I\subset S\) be a graded ideal of a standard graded polynomial ring \(S\) with coefficients in a field \(K\), and let \(\mathrm{v}(I)\) be the v-number of \(I\). In previous work, we showed that for any graded ideal \(I\subset S\) generated in a single degree, \(\mathrm{v}(I^{k})=\alpha(I)k+b\), for all \(k\gg 0\), where \(\alpha(I)\) is the initial degree of \(I\) and \(b\) is a suitable integer. In the present paper, using polarization, we extend Simon conjecture to any monomial ideal. As a consequence, if Simon conjecture holds, and all powers of \(I\) have linear quotients, then \(b\in\{-1,0\}\). This fact suggests that if \(I\) is an equigenerated monomial ideal with linear powers, then \(\mathrm{v}(I^{k})=\alpha(I)k-1\), for all \(k\geq 1\). We verify this conjecture for monomial ideals with linear powers having \(\mathrm{depth}\,S/I=0\), edge ideals with linear resolution, polymatroidal ideals, and Hibi ideals.
Key words and phrases:graded ideals, v-number, asymptotic behaviour, primary decomposition 2020 Mathematics Subject Classification: Primary 13F20; Secondary 13F55, 05C70, 05E40
## 1. Introduction
Let \(S=K[x_{1},\ldots,x_{n}]=\bigoplus_{d}S_{d}\) be the standard graded polynomial ring with \(n\) variables and coefficients in a field \(K\), and let \(\mathfrak{m}=(x_{1},\ldots,x_{n})\) be the graded maximal ideal. We denote the set of associated primes of \(I\) by \(\mathrm{Ass}(I)\), and by \(\mathrm{Max}(I)\) the set of associated primes of \(I\) that are maximal with respect to the inclusion. The concept of v-number was introduced in [7], and further studied in [1, 3, 5, 17, 20, 25, 26, 29, 30, 32]. Let \(I\subset S\) be a graded ideal and let \(\mathfrak{p}\in\mathrm{Ass}(I)\). Then, the _v-number of \(I\) at \(\mathfrak{p}\)_ is defined as
\[\mathrm{v}_{\mathfrak{p}}(I)\ =\ \min\{d\ :\ \text{there exists}\ f\in S_{d}\ \text{such that}\ (I:f)=\mathfrak{p}\}.\]
Whereas, the _v-number of \(I\)_ is defined as
\[\mathrm{v}(I)\ =\ \min\{d\ :\ \text{there exists}\ f\in S_{d}\ \text{such that}\ (I:f)\in\mathrm{Ass}(I)\}.\]
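For a quick illustration of these definitions, consider \(I=(x_{1}^{2})\subset S=K[x_{1}]\): here \(\mathrm{Ass}(I)=\{(x_{1})\}\), and \((I:x_{1})=(x_{1})\) while \((I:f)=I\neq(x_{1})\) for every nonzero constant \(f\in S_{0}\), so \(\mathrm{v}(I)=\mathrm{v}_{(x_{1})}(I)=1\).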
Let \(I\subset S\) be a graded ideal. In [17] the asymptotic behaviour of the function \(\mathrm{v}(I^{k})\), called the _v-function_ of \(I\), was investigated. It is known by Brodmann [4] that \(\mathrm{Ass}(I^{k})\) stabilizes for large \(k\). That is, \(\mathrm{Ass}(I^{k+1})=\mathrm{Ass}(I^{k})\) for all \(k\gg 0\). A prime ideal \(\mathfrak{p}\subset S\) such that \(\mathfrak{p}\in\mathrm{Ass}(I^{k})\) for all \(k\gg 0\) is called a _stable prime of \(I\)_. The set of the stable primes of \(I\) is denoted by \(\mathrm{Ass}^{\infty}(I)\). Thus, for all \(k\gg 0\),
\[\mathrm{v}(I^{k})=\min_{\mathfrak{p}\in\mathrm{Ass}^{\infty}(I)}\mathrm{v}_{ \mathfrak{p}}(I^{k}).\]
It is expected that for any graded ideal \(I\subset S\), \(\mathrm{v}(I^{k})\) becomes a linear function in \(k\) for \(k\gg 0\) [17, Conjecture 4.5]. Such a conjecture has been proved when |
2307.16504 | Random walks in correlated diffusivity landscapes | In recent years, several experiments highlighted a new type of diffusion
anomaly, which was called Brownian yet non-Gaussian diffusion. In systems
displaying this behavior, the mean squared displacement of the diffusing
particles grows linearly in time, like in a normal diffusion, but the
distribution of displacements is non-Gaussian. In situations when the
convergence to Gaussian still takes place at longer times, the probability
density of the displacements may show a persisting peak around the
distribution's mode, and the pathway of convergence to the Gaussian is unusual.
One of the theoretical models showing such a behavior corresponds to a
disordered system with local diffusion coefficients slowly varying in space.
While the standard pathway to Gaussian, as proposed by the Central Limit
Theorem, would assume that the peak, under the corresponding rescaling,
smoothens and lowers in course of the time; in the model discussed, the peak,
under rescaling, narrows and stays sharp. In the present work, we discuss the
nature of this peak. On a coarse-grained level, the motion of the particles in
the diffusivity landscape is described by continuous time random walks with
correlations between waiting times and positions. The peak is due to strong
spatiotemporal correlations along the trajectories of diffusing particles.
Destroying these correlations while keeping the temporal structure of the
process intact leads to the decay of the peak. We also note that the correlated
CTRW model reproducing serial correlations between the waiting times along the
trajectory fails to quantitatively reproduce the shape of the peak even for the
decorrelated motion, while being quite accurate in the wings of the PDF. This
shows the importance of high-order temporal correlations for the peak's
formation. | Adrian Pacheco-Pozo, Igor M. Sokolov | 2023-07-31T09:02:01Z | http://arxiv.org/abs/2307.16504v1 | # Random walks in correlated diffusivity landscapes
###### Abstract
In recent years, several experiments highlighted a new type of diffusion anomaly, which was called Brownian yet non-Gaussian diffusion. In systems displaying this behavior, the mean squared displacement of the diffusing particles grows linearly in time, like in a normal diffusion, but the distribution of displacements is non-Gaussian. In situations when the convergence to Gaussian still takes place at longer times, the probability density of the displacements may show a persisting peak around the distribution's mode, and the pathway of convergence to the Gaussian is unusual. One of the theoretical models showing such a behavior corresponds to a disordered system with local diffusion coefficients slowly varying in space. While the standard pathway to Gaussian, as proposed by the Central Limit Theorem, would assume that the peak, under the corresponding rescaling, smoothens and lowers in course of the time; in the model discussed, the peak, under rescaling, narrows and stays sharp. In the present work, we discuss the nature of this peak. On a coarse-grained level, the motion of the particles in the diffusivity landscape is described by continuous time random walks with correlations between waiting times and positions. The peak is due to strong spatiotemporal correlations along the trajectories of diffusing particles. Destroying these correlations while keeping the temporal structure of the process intact leads to the decay of the peak. We also note that the correlated CTRW model reproducing serial correlations between the waiting times along the trajectory fails to quantitatively
reproduce the shape of the peak even for the decorrelated motion, while being quite accurate in the wings of the PDF. This shows the importance of high-order temporal correlations for the peak's formation.
Disordered systems, Diffusion, Random walks, Correlations
## 1 Introduction
The erratic motion of particles diffusing in a fluid medium (Brownian motion) has drawn considerable attention of scientists since Robert Brown first systematically investigated it [1]. A. Einstein [2] was the first to propose a mathematical description of this type of motion (see [3] for a detailed historical account). Einstein, who essentially did not know about Brownian motion, found out that such a phenomenon is an unavoidable consequence of the kinetic theory of heat, and closely connected it to diffusion. In this picture of what we now call _normal diffusion_, the particles' motion possesses two important properties [4]: (i) The mean square displacement (MSD) of the particles from their initial position grows linearly in time,
\[\langle\mathbf{r}(t)^{2}\rangle=2dDt \tag{1}\]
(with \(d\) being the dimension of space, and \(D\) being the diffusion coefficient), and (ii) The probability density function (PDF) of the particles' displacements at a given time follows a Gaussian distribution
\[p(\mathbf{r},t)=\frac{1}{(4\pi Dt)^{d/2}}\exp\left(-\frac{\mathbf{r}^{2}}{4Dt }\right). \tag{2}\]
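As a minimal numerical illustration of properties (1) and (2) (a sketch of our own, with arbitrary parameter choices), one can generate trajectories with independent Gaussian increments of variance \(2D\,dt\) per coordinate and verify the linear MSD growth and the Gaussian displacement statistics:

```python
import numpy as np

rng = np.random.default_rng(0)
D, dt, n_steps, n_part = 1.0, 1e-3, 500, 10**4

# independent Gaussian increments in d = 2 dimensions
steps = np.sqrt(2.0 * D * dt) * rng.standard_normal((n_steps, n_part, 2))
r = np.cumsum(steps, axis=0)                     # trajectories r(t)

t = dt * np.arange(1, n_steps + 1)
msd = (r**2).sum(axis=2).mean(axis=1)            # should equal 4 D t, Eq. (1)
print(np.polyfit(t, msd, 1)[0] / (4.0 * D))      # slope ratio close to 1

x = r[-1, :, 0]                                  # displacements at final time
print(x.mean(), x.var() / (2.0 * D * t[-1]))     # ~0 and ~1, cf. Eq. (2)
```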
The properties (1) and (2) were tested in many experiments, and their confirmation laid a solid foundation for our understanding of the atomistic structure of matter [5]. The random walk approach used by Einstein assumed that one can approximate the particle's motion by a sequence of independent steps in random directions under the condition that the times necessary to make a step are the same, and the displacement in a single step has a finite second moment. This approach was closely mirrored in many early experiments using stroboscopic measurements. Independently of Einstein, Smoluchowski [6] presented a more formal mathematical description of the Brownian motion which led to the same results as Einstein's, and set the ground for a new branch of probability theory concerning diffusion processes [7]. After Einstein and Smoluchowski, Langevin [8] proposed a new mathematical tool for the description of the particle's motion, the stochastic differential equation.
The standard picture corresponds to the tracer's motion in a homogeneous, quiescent fluid. In the course of time, many deviations from this kind of behavior were found for other media. Numerous experiments on transport in complex
media (disordered solids, rocks, biological media, etc.) showed that, instead of a linear time dependence as given by Eq. (1), the MSD often follows a power-law time-dependence \(\langle\mathbf{r}(t)^{2}\rangle\propto t^{\gamma}\), with \(0<\gamma<1\) (subdiffusion) or \(1<\gamma<2\) (superdiffusion). A system whose MSD shows such a time dependence is said to exhibit _anomalous diffusion_. Depending on the specific case, different mathematical models have been proposed to describe this anomalous behavior by focusing on different aspects of the motion [9; 10; 11; 12; 13; 14]. Some classical models are: the uncorrelated continuous time random walks (CTRW) with power-law waiting time distributions, the fractional Brownian motion, and Lévy walks and Lévy flights. The PDF in these models may or may not be Gaussian.
Several recent experiments [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31] reported a new type of diffusion in which the MSD grows linearly in time, like in the normal diffusion, yet the PDF of displacements shows considerable deviations from the Gaussian shape. Usually, the PDF of displacements is well-described by a Laplace (two-sided exponential) distribution. This behavior was called Brownian yet non-Gaussian (BnG) diffusion [18]. Some of the corresponding systems show a crossover from the non-Gaussian distribution to a Gaussian one at long times [15; 30]. In several cases [15; 18; 19; 24; 26; 27; 28; 29; 30; 31], for times at which the crossover takes place, the PDF presents a peak close to its mode. This peak resembles a part of the initial Laplace distribution, while the parts of the distribution further from its mode have already a more or less Gaussian shape.
Many of the systems in which the BnG diffusion is observed are pertinent to soft matter, and almost all of the experimental systems with BnG diffusion may show a great deal of spatial and temporal inhomogeneity, or disorder. Thus, the medium in which the particle moves may be spatially heterogeneous, or change in time. The properties of the diffusing tracer may change in time as well.
Different assumptions about the heterogeneity involved lead to different classes of models which were proposed for the description of BnG diffusion. The most popular class corresponds to the diffusing diffusivity (DD) models, see e.g. [32; 33; 34; 35; 36]. They assume slow random changes of the diffusion coefficient in time. The particular variant of the model used in [34] will be called "the minimal model" of diffusing diffusivity in what follows. Another model describing BnG diffusion is the diffusivity landscape model (DLM) [37] which considers that the diffusion coefficient varies slowly in space.
The possible connection between the diffusing diffusivity and DLM was stated in Ref. [34]: the temporal randomness of the diffusion coefficient can be considered as stemming from its spatial change along the trajectory of a diffusing particle, so that the "minimal model" is a kind of a mean-field approximation for the case of spatial changes. Even if the DD and DL models are gauged in such a way that they reproduce the main features of the phenomenon, their predictions differ in some details. Looking particularly into these details may deliver valuable experimental insights into the kind of disorder involved. Thus, the DLM (and other models with correlated spatial disorder like the one discussed in [38], not necessarily exhibiting the BnG
behavior) show a pronounced central peak at the mode of the PDF of particles' displacements. This central peak is, however, absent in the minimal model. The existence of this central peak in [38] was immediately connected to the correlated nature of disorder.
Recently, in Ref. [39], we concentrated on the behavior of the PDF of displacements close to its mode and showed that the PDF of displacements in several classical strongly disordered systems displays such a peak at its center. The behavior of this central peak is quite peculiar, since its presence shows that the convergence to a Gaussian (i.e. normal) behavior under homogenization may follow a different pathway than the one commonly known from the Central Limit Theorem (CLT) applied to sums of many independent, identically distributed (i.i.d.) random variables following some continuous distribution (in our case this should be the short-time Laplace one). This standard situation suggests that the initially sharp peak would smoothen and lower. However, under homogenization, the central peak in the considered classical strongly disordered systems gets narrower under the rescaling \(\mathbf{r}\to\mathbf{r}/\sqrt{t}\), \(p\to t^{d/2}p\) implied by the CLT, while approximately keeping the height. Passing from the spatially disordered systems to their mean-field counterparts (like the corresponding CTRWs, or the minimal model) restores the standard convergence pathway like the one predicted by the CLT.
The differences in the convergence pathways have to do with the fact that some important local information about the system is erased when passing to the pre-averaged (mean-field) description. Now, one could ask, what is the important information erased? In the present work, we try to answer this question by simulating the particles' trajectories in the DLM (described as a continuous-time random walk of particles on a lattice with position-dependent waiting times) and erasing the correlations between the waiting times and positions, while fully preserving the temporal structure of the walk. The result of the discussion shows that the existence of the persistent peak is connected to spatiotemporal correlations, and destroying them (while fully preserving the temporal structure of the problem) leads to a different kind of behavior. We note that the answer to this question may apply in other similar situations in strongly disordered systems.
The article is structured as follows: In Section 2, we revisit the diffusivity landscape model being the base of our investigation. Section 3 explores the idea that the DLM presents strong spatio-temporal correlations which ultimately leads to the PDF exhibiting a central peak. We show that destroying these spatiotemporal correlations while fully preserving the temporal structure of steps reproduces the PDF in DLM at short times, but leads to lowering and disappearing of the peak at long ones. Section 4 provides a CTRW model with correlated waiting times which partially reproduces the behavior found in this decorrelated DLM model, but fails to fully describe the situation. In Section 5, we discuss the role of the particular shape of the correlation function of diffusivities assumed in DLM by considering a slightly different model. Finally, Section 6 presents concluding remarks.
## 2 Diffusivity landscape model
In what follows, we use the model proposed by Postnikov et al. [37] which assumes the particles' diffusion in a heterogeneous medium modeled by a correlated diffusivity landscape \(D(\mathbf{r})\). This motion is described by the force-free Langevin equation with multiplicative noise
\[\frac{d}{dt}\mathbf{r}=\sqrt{2D(\mathbf{r})}\;\boldsymbol{\xi}(t), \tag{3}\]
with \(\boldsymbol{\xi}(t)\) being a Gaussian white noise with \(\langle\boldsymbol{\xi}(t)\rangle=0\) and \(\langle\xi_{\mu}(t)\xi_{\nu}(t^{\prime})\rangle=\delta_{\mu\nu}\delta(t-t^{ \prime})\) with \(\mu,\nu\) representing Cartesian coordinates. This Langevin equation corresponds to the Fokker-Planck equation
\[\frac{\partial}{\partial t}p(\mathbf{r},t)=\nabla[(1-\alpha)\nabla D(\mathbf{ r})+D(\mathbf{r})\nabla]p(\mathbf{r},t) \tag{4}\]
with \(\alpha\) being the interpretation parameter taking values in the interval \(0\leq\alpha\leq 1\) (see e.g. [40] for a comprehensive discussion). The authors of [37] asked under which conditions Eq. (4) would describe BnG diffusion, and found out that the two following conditions should be met: First, Eq. (3) without external potential must be interpreted in the Itô sense (\(\alpha=0\); then, of course, any other interpretation can be used by introducing the corresponding deterministic force [40]), and second, the initial positions of diffusing particles must be sampled from the equilibrium distribution. Taking as a "stylized fact" that the PDF at short times has been observed to follow a Laplace distribution [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31], one can then show that the single-point PDF of the diffusion coefficients in the corresponding landscape should be given by a Gamma distribution:
\[p(D)=\frac{\beta^{\beta}}{\Gamma(\beta)}\frac{1}{\overline{D}}\left(\frac{D}{ \overline{D}}\right)^{\beta-1}\exp\left(-\beta\frac{D}{\overline{D}}\right), \tag{5}\]
where \(\Gamma(\cdot)\) is a Gamma function, and \(\beta\) and \(\overline{D}\) are shape parameters dependent on the dimension of space.
In what follows, we will concentrate on the two-dimensional situation, for which \(\beta=5/2\) and \(\overline{D}=5D_{0}/3\), with \(D_{0}\) being the sampled diffusion coefficient, i.e., the one defining the slope of the "experimental" MSD assumed to strictly follow the linear dependence \(\langle\mathbf{r}(t)^{2}\rangle=2dD_{0}t\)[37].
A finite-difference discretization of the Fokker-Planck equation, Eq. (4), with \(\alpha=0\) on a square lattice with lattice constant \(a\) leads to a master equation (see Eq. (6) below) which, in its turn, defines a random walk scheme. The corresponding random walks are exactly what will be simulated in what follows.
For \(\alpha=0\), Eq. (4) can be rewritten in the form
\[\frac{\partial}{\partial t}p(\mathbf{r},t)=\Delta[D(\mathbf{r})p(\mathbf{r},t)],\]
and its discrete version is
\[\frac{d}{dt}p_{i}(t)=\sum_{k=1}^{4}\frac{D_{j_{k}}}{a^{2}}p_{j_{k}}(t)-\frac{4D_{ i}}{a^{2}}p_{i}(t). \tag{6}\]
Here, the discretization point \(i\) corresponds to coordinates \((x_{i},y_{i})\) on a rectangular grid with the lattice constant \(a\), and points \(j_{k}\) are the four nearest neighbors of the lattice point \(i\).
Under the above discretization, the random diffusivity field at each lattice point translates into correlated values of local parameters \(D_{i}\equiv D(x_{i},y_{i})\), which are generated according to the following algorithm, Ref. [37]: One begins by constructing an array of independent Gaussian random variables \(G_{i}\) with zero mean and unit variance. Then one generates a correlated Gaussian field \(\widehat{G}_{i}\) by applying the Fourier filtering method [41] to \(G_{i}\). Like in [37], we take the correlation function of the correlated field to follow
\[\rho(\mathbf{r}_{ij})=\langle\widehat{G}_{i}\widehat{G}_{j}\rangle=\exp\left( -\frac{\mathbf{r}_{ij}^{2}}{2\lambda^{2}}\right), \tag{7}\]
with \(\lambda\) being the correlation length, and \(\mathbf{r}_{ij}\) the Euclidean distance between lattice points \(i\) and \(j\). We note that the choice of Eq. (7) is not dictated by any physical reasons but by the ease of numerical implementation and further calculations. In Section 5, we will explore the consequences of changing the correlation function of the diffusivity landscape by considering a checkerboardlike diffusivity landscape.
Finally, the correlated Gaussian field \(\widehat{G}_{i}\) is transformed into the \(\Gamma\)-distributed diffusivity landscape \(D_{i}\) by performing a probability transformation:
\[D_{i}=f(\widehat{G})=F_{\beta}^{-1}\left\{\frac{1}{2}\left[1-\mathrm{erf} \left(\frac{\widehat{G}}{\sqrt{2}}\right)\right]\right\}, \tag{8}\]
where \(\mathrm{erf}(\cdot)\) is the error function and \(F_{\beta}^{-1}(x)\) is the inverse of the cumulative distribution function (CDF) \(F_{\beta}(D)\) for the PDF given by Eq. (5), which is given by
\[F_{\beta}(D)=\int_{0}^{D}p(D^{\prime})dD^{\prime}=\frac{1}{\Gamma(\beta)} \gamma\left(\beta,\beta\frac{D}{\overline{D}}\right),\]
with \(\gamma(\cdot,\cdot)\) being the lower incomplete Gamma function. The procedure above generates a diffusivity landscape \(D_{i}\) whose correlation function follows from that of the correlated Gaussian field, Eq. (7), by a transformation which will be discussed in Sec. 4.1.
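A compact Python sketch of this generation pipeline (white Gaussian field, Fourier filtering to the correlation of Eq. (7), probability transform of Eq. (8)) could look as follows; the grid size, the periodic treatment of distances, and the use of SciPy's Gamma quantile function are our implementation choices rather than prescriptions of Ref. [37]:

```python
import numpy as np
from scipy import stats
from scipy.special import erf

rng = np.random.default_rng(0)
n, lam, beta, Dbar = 256, 10.0, 2.5, 5.0 / 3.0    # 2D parameters, D_0 = 1

# white Gaussian field G_i
g = rng.standard_normal((n, n))

# Fourier filtering to the Gaussian correlation of Eq. (7) (periodic grid)
d = np.minimum(np.arange(n), n - np.arange(n))    # periodic 1D distances
X, Y = np.meshgrid(d, d, indexing="ij")
corr = np.exp(-(X**2 + Y**2) / (2.0 * lam**2))
spec = np.abs(np.fft.fft2(corr))                  # spectral weights
ghat = np.real(np.fft.ifft2(np.sqrt(spec) * np.fft.fft2(g)))
ghat /= ghat.std()                                # correlated field, unit variance

# probability transform, Eq. (8): Gaussian field -> Gamma-distributed diffusivities
u = 0.5 * (1.0 - erf(ghat / np.sqrt(2.0)))
D = stats.gamma.ppf(u, a=beta, scale=Dbar / beta) # one-point PDF of Eq. (5)
```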
Figure 1 shows a realization of the diffusivity landscape \(D_{i}\) for a lattice of \(256\times 256\) with \(a=1\), \(D_{0}=1\) and \(\lambda=10\).
Let us now return to our Eq. (6). Defining the transition rates as
\[\omega_{i\to j}=\frac{D_{i}}{a^{2}}=\frac{1}{4}\left(\frac{a^{2}}{4D_{i}} \right)^{-1}=\frac{1}{4}\frac{1}{\tau_{i}},\]
with
\[\tau_{i}=\frac{a^{2}}{4D_{i}} \tag{9}\]
being the mean waiting time at a site, and \(1/4\) corresponding to the probability to choose one of the four neighbors to jump to. We put Eq. (6) into a standard form of a master equation
\[\frac{d}{dt}p_{i}(t)=\sum_{k=1}^{4}\omega_{j_{k}\to i}p_{j_{k}}(t)-\sum_{k=1}^{ 4}\omega_{i\to j_{k}}p_{i}(t),\]
which can be rewritten as
\[\frac{d}{dt}p_{i}(t)=\frac{1}{4}\sum_{k=1}^{4}\frac{1}{\tau_{j_{k}}}p_{j_{k}}( t)-\frac{1}{\tau_{i}}p_{i}(t). \tag{10}\]
In Ref. [37], Eq. (10) was solved using the forward Euler method. Here, we employ another approach. Like in Ref. [39], we use the fact that the master equation (10) corresponds to a CTRW with exponential waiting time distributions [42]. Thus, to solve the master equation, i.e., to obtain the evolution
Figure 1: A two-dimensional realization of the diffusivity landscape \(D(\mathbf{r})\) in the diffusivity landscape model. It corresponds to a \(256\times 256\) lattice with correlation length \(\lambda=10\) and sampled diffusion coefficient \(D_{0}=1\).
of the PDF, we generate random walk trajectories whose waiting times follow the exponential waiting time density
\[\psi(t|\tau_{i})=\frac{1}{\tau_{i}}\exp\left(-\frac{t}{\tau_{i}}\right),\]
with \(\tau_{i}\) given by Eq. (9). Taking the lattice spacing to be the length unit of the problem (\(a=1\)), we get
\[\psi(t|D_{i})=4D_{i}\exp\left(-4D_{i}t\right). \tag{11}\]
Note that the single-step displacements in our CTRW are i.i.d. random variables (each step has a unit length and a random direction towards one of the four nearest neighbors), while the waiting times are not independent, since the \(D_{i}\) at neighboring points are correlated. An illustration of the procedure to generate random trajectories can be seen in panel (\(a\)) of Figure 2. As we shall see, this alternative method allows us to study the role of space-time correlations in the DLM, which would be impossible to do by solving the ordinary differential equations (ODEs). Moreover, generating random trajectories is considerably less computationally expensive than solving ODEs, and allows us to have much better statistics of the desired quantities.
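The random-walk solver just described fits in a few lines of Python. The helper below is our own sketch (with periodic boundaries as a simplifying assumption); it records the visited sites together with the waiting time drawn at each of them, which is exactly the bookkeeping needed for the decoupling procedure of the next section:

```python
import numpy as np

def trajectory(D, t_max, rng):
    """One CTRW realization on the landscape D: exponential waiting times
    with site-dependent rates 4*D[x, y] (Eq. (11), a = 1), unit jumps to
    one of the four nearest neighbors, periodic boundaries."""
    n = D.shape[0]
    x, y = int(rng.integers(n)), int(rng.integers(n))
    pos, dts, t = [(x, y)], [], 0.0
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
    while True:
        dt = rng.exponential(1.0 / (4.0 * D[x, y]))  # waiting time at the site
        dts.append(dt)
        t += dt
        if t > t_max:                                # last wait overshoots t_max
            break
        dx, dy = moves[rng.integers(4)]
        x, y = (x + dx) % n, (y + dy) % n
        pos.append((x, y))
    return pos, dts
```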
## 3 Space-time correlations
In the last decades, several correlated CTRW models were proposed which lead to interesting behaviors [43; 44; 45]. However, all these correlated models focus on the temporal part or the spatial part, separately or simultaneously, but leave the spatiotemporal cross-correlations out of the picture. The reason for this is that dealing with such cross-correlations is, in general, a very complex task, even from a computational point of view.
All (semi-)analytical results usually come from applying mean-field techniques which partially or completely ignore the fine-scale structure of the system, so that some interesting features of the spatially disordered systems are not reproduced. This is the case, e.g., for the behavior of the central peak seen in the DLM [39], which is not reproduced in such pre-averaged models like the CTRW description of the DLM [39], or the minimal model of BnG [34]. It is in this regard that we seek to know to what extent the spatiotemporal correlations are responsible for the persistence of the central peak and the unusual kind of convergence to a Gaussian distribution by narrowing of the central peak under the rescaling \(\mathbf{r}\rightarrow\mathbf{r}/\sqrt{t}\), \(p\to t^{d/2}p\), instead of its lowering. To assess the effects of spatiotemporal correlations in the DLM, we remove them by randomization of step directions and see what changes by comparing the PDF of the decorrelated motion with that of the correlated one.
Let us consider a particle whose initial position is \(\mathbf{r}_{0}\), and follow its true motion as given by a random walk scheme corresponding to the master equation (6). At that position, the particle waits for a time \(t_{0}\) which is drawn
from the exponential distribution \(\psi(t|D_{0})\) given in Eq. (11), with \(D_{0}=D(\mathbf{r}_{0})\). Next, the particle randomly jumps to one of its neighboring lattice points whose position is \(\mathbf{r}_{1}\), and then waits for another time \(t_{1}\) which is now drawn from the exponential distributions \(\psi(t|D_{1})\), with \(D_{1}=D(\mathbf{r}_{1})\). This process of jumping and waiting is repeated until the maximal simulation time \(t_{max}\) is exceeded by the sum of waiting times. At the end, the trajectory of our particle (which we will call _real_ particle to distinguish its motion from its randomized counterparts) is given by a list of positions \(\{\mathbf{r}_{0},\mathbf{r}_{1},\mathbf{r}_{2},\dots\}\), which correspond to a simple random walk, and a list \(\{t_{0},t_{1},t_{2},\dots\}\) of the corresponding waiting times between subsequent jumps which are drawn from the exponential distributions \(\psi(t|D_{i})\) with \(D_{i}=D(\mathbf{r}_{i})\), and which are therefore dependent on the particles' positions. This dependence of the exponential distribution on the value of the local diffusion coefficient generates the spatiotemporal correlations in the DLM. The procedure to obtain the trajectories of a real particle is sketched in panel \((a)\) of Figure 2.
We now use the above trajectories of true motion of particles, which we refer to as "real trajectories" in what follows, to generate new trajectories in which space and time are uncorrelated. Let us start by taking the temporal part of a real trajectory, i.e., the list of waiting times \(\{t_{0},t_{1},\dots\}\), and discard the spatial part. Then we proceed as follows: Let us consider a new particle, which we will call the _decoupled_ particle, whose initial position is the same as for the real one, i.e., \(\mathbf{r}_{0}\). The decoupled particle then waits a time equal to the first waiting time of the real particle, namely \(t_{0}\), and makes a jump to one of the neighboring lattice points with position \(\mathbf{r}_{1}^{\prime}\). At this position, the decoupled particle waits a time equal to the second waiting time of the real particle, namely \(t_{1}\), to make the next jump in a random direction. This process is repeated until the same number of steps as for a real particle is done, and the maximal time \(t_{max}\) is exceeded. Hence, one ends up with a trajectory for the decoupled
Figure 2: Schematics of the procedure to decouple space and time. The real particle follows the blue trajectory, whereas the decoupled particle follows the red trajectory. Notice that the waiting times for both particles are the same, and they depend only on the positions of the real particle.
particle consisting of a list of the positions \(\{\mathbf{r}_{0},\mathbf{r}_{1}^{\prime},\mathbf{r}_{2}^{\prime},\dots\}\) being sums of i.i.d. random steps, and the same list of waiting times \(\{t_{0},t_{1},\dots\}\) as for the real one, which are however decoupled from the corresponding particle's positions. This process is depicted in panel \((b)\) of Figure 2. The trajectories of real and decoupled particles are then used for obtaining the PDFs of displacements in a given realization of the landscape. Similar PDFs are obtained for different realizations of the diffusivity landscapes and then weighted-averaged under the equilibrium condition: the corresponding weight is proportional to the waiting time \(t_{0}\) at \(\mathbf{r}_{0}\) in the corresponding landscape.
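In code, the decoupling amounts to replaying the recorded list of waiting times along a freshly drawn simple random walk. A minimal sketch (our own, matching the conventions of the hypothetical trajectory helper above) is:

```python
def decoupled_trajectory(dts, start, n, rng):
    """Simple random walk that spends the recorded waiting times dts[0],
    dts[1], ... at its successively (and independently) redrawn positions."""
    x, y = start
    pos = [(x, y)]
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for _ in dts[:-1]:            # one jump after each completed waiting period
        dx, dy = moves[rng.integers(4)]
        x, y = (x + dx) % n, (y + dy) % n
        pos.append((x, y))
    return pos

# usage with the trajectory helper sketched in Section 2:
#   pos, dts = trajectory(D, t_max, rng)
#   pos_dec = decoupled_trajectory(dts, pos[0], D.shape[0], rng)
```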
The resulting PDFs for real and decoupled particles are displayed in Figure 3. The four panels of the plot present four different maximal times: \(t_{max}=10^{1}\), \(10^{2}\), \(10^{3}\), and \(10^{4}\). Each panel presents a comparison of the PDF for the real (black dots) and decoupled (red dots) particles. Each PDF is the average over \(10^{4}\) realizations of the diffusivity landscape over a lattice of \(2048\times 2048\) with \(\lambda=10\) and \(D_{0}=1\). Each realization contains \(10^{5}\) particles. Plotted in Figure 3 is a cut of PDF \(p(x,y;\ t)\) through the origin at \(y=0\). Moreover, following [39], we plot the PDF as a function of the rescaled displacement \(\xi=x/\sqrt{t}\). To keep the normalization of the PDF, it has to be rescaled as \(q(\xi)=t\cdot p(\xi)\). Figure 3 shows that the decoupling of the spatial and temporal aspects of the motion changes the art of convergence to the Gaussian from the unusual one, by narrowing of the central peak, to the CLT-like convergence, by lowering and smoothening the peak.
Figure 3: A comparison of the one-dimensional cut of the PDF \(q(\xi)=p(x,0)t\) of rescaled displacements \(\xi=x/\sqrt{t}\) for the real particle (black) and decoupled particle (red), see text for details. Each panel represents a particular time. The flattening of the central peak in the PDF of positions of the decoupled particle is evident at longer times.
In Refs. [38] and [39], the existence of the central peak was connected with the set of particles which started their motion in a patch with a very low local diffusivity, so that they could hardly leave the patch until very long times. The randomization results show that this is only a partial explanation, since at the beginning of its motion a decoupled particle experiences the same, very long waiting times as the real one, provided it started in such a patch. The trajectory of a decoupled particle is simply a different realization of a random walk with the same starting point associated with the same list of waiting times, so that the only kind of correlations destroyed by our procedure corresponds to what happens when the particle returns to a close vicinity of its initial position after making an excursion to the outside of the patch (a real particle will again experience long waiting times, while for a decoupled one these new waiting times are not necessarily long, and new long waiting periods occur at different positions). Thus, it is the behavior after an excursion that makes the peak persistent.
Figure 3, however, shows that the central peak for the decoupled motion is still present at times as long as \(10^{3}\). The reason for its existence may only be connected with correlations between waiting times along the trajectory, which are not destroyed by decoupling. Therefore, our next step will be to include the temporal correlations into a space-time-decoupled CTRW model.
## 4 Time-correlated continuous-time random walk
Ref. [39] presented a mean-field description of the DLM. This mean-field description is constructed as an _uncorrelated_ CTRW model whose waiting time distribution is found by averaging the waiting time distribution at a site (Eq. (11)) over the distribution of the diffusion coefficients, which is given by the one-point distribution of the diffusivity landscape (Eq. (5)). This mean-field waiting time distribution is given by
\[\psi(t)=\int_{0}^{\infty}\psi(t|D)p(D)dD=\frac{5}{2}\left(\frac{3}{8}\right)^{ 5/2}\left(\frac{3}{8}+t\right)^{-7/2}, \tag{12}\]
for \(D_{0}=1\). This waiting time density is a Pareto type II (Lomax) distribution with mean waiting time \(\langle t\rangle=1/4\) and second moment \(\langle t^{2}\rangle=3/8\). The fact that the initial state of the system in the DLM must be at equilibrium should also be included in its mean-field description. This is done by taking the first waiting time to follow the PDF [42]
\[\psi_{1}(t)=\frac{1}{\langle t\rangle}\left[1-\int_{0}^{t}\psi(t^{\prime})dt^ {\prime}\right]=\frac{3}{2}\left(\frac{3}{8}\right)^{3/2}\left(\frac{3}{8}+t \right)^{-5/2}, \tag{13}\]
which is also a Pareto type II distribution with a different exponent.
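Both densities have closed-form quantile functions, so they can be sampled by inversion. A short sketch of our own (for \(D_{0}=1\)), together with a check of the mean waiting time \(\langle t\rangle=1/4\):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_psi(size):
    """Inverse-CDF sampling of Eq. (12); survival (3/8)^{5/2} (3/8+t)^{-5/2}."""
    u = 1.0 - rng.random(size)                  # uniform on (0, 1]
    return (3.0 / 8.0) * (u ** (-2.0 / 5.0) - 1.0)

def sample_psi1(size):
    """Inverse-CDF sampling of Eq. (13); survival (3/8)^{3/2} (3/8+t)^{-3/2}."""
    u = 1.0 - rng.random(size)
    return (3.0 / 8.0) * (u ** (-2.0 / 3.0) - 1.0)

print(sample_psi(10**6).mean())                 # close to <t> = 1/4
```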
As we have shown in [39], this mean-field description, being a pre-averaged model (cf. Eq. (12)) neglecting _all_ correlations, does not show any peak at the center of the distribution, except for a decaying remnant of the initial condition at \(\mathbf{r}(0)=0\). The PDF of the particles' displacements in this model is shown in Figure 5 to be compared with the results for a decoupled particle and for a CTRW model reproducing the serial correlations along the trajectory discussed below.
### Correlation function of the diffusion coefficient along the trajectories
We would like to know to what extent the PDF for the decoupled particle can be replicated if temporal correlations are included. To do so, let us first determine the correlations of the diffusion coefficients along the trajectories of the random walk.
Let us start by finding an approximation for the correlation function of the diffusivity landscape \(D(\mathbf{r})\) in terms of that of the correlated Gaussian field \(\widehat{G}\) defined in Eq. (7). The correlation function of the diffusivity landscape is, by definition,
\[C_{DD}(\mathbf{r})\equiv\xi(\mathbf{r})=\frac{\langle\delta D(\mathbf{0}) \delta D(\mathbf{r})\rangle}{\sigma_{D}^{2}}=\frac{\langle D(\mathbf{0})D( \mathbf{r})\rangle-\overline{D}^{2}}{\sigma_{D}^{2}}, \tag{14}\]
with \(\sigma_{D}^{2}\) the variance of the local diffusivity, and \(\delta D(\mathbf{r})=D(\mathbf{r})-\overline{D}\). In our case, \(\overline{D}=5/3\) and \(\sigma_{D}^{2}=10/9\), for \(D_{0}=1\) in two dimensions.
Let us turn our attention to the mean \(\langle D(\mathbf{0})D(\mathbf{r})\rangle\) in the last expression of Eq. (14). Just for convenience, let us denote \(D(\mathbf{0})\) and \(D(\mathbf{r})\) as \(D_{1}\) and \(D_{2}\), respectively. By doing so, one can write
\[\langle D(\mathbf{0})D(\mathbf{r})\rangle=\langle D_{1}D_{2}\rangle=\int_{0}^ {\infty}\int_{0}^{\infty}dD_{1}dD_{2}p(D_{1},D_{2};\;\mathbf{r})D_{1}D_{2}. \tag{15}\]
Now, we make use of the invariance of the probability measures,
\[dD_{1}dD_{2}p_{D}(D_{1},D_{2},\mathbf{r})=d\widehat{G}_{1}d\widehat{G}_{2}p_{ G}(\widehat{G}_{1},\widehat{G}_{2};\;\mathbf{r}),\]
with
\[p_{G}(\widehat{G}_{1},\widehat{G}_{2};\;\mathbf{r})=\frac{1}{2\pi\sqrt{1- \rho(\mathbf{r})^{2}}}\exp\left[-\frac{\widehat{G}_{1}^{2}+\widehat{G}_{2}^{2 }-2\widehat{G}_{1}\widehat{G}_{2}\rho(\mathbf{r})}{2(1-\rho(\mathbf{r})^{2})}\right]\]
being the bivariate distribution of the correlated Gaussian field used in the first stage of construction of the diffusivity landscape, with \(\rho(\mathbf{r})\) being the correlation function of this field given by Eq. (7). We note that the \(\mathbf{r}\)-dependence in this expression is fully due to the one of the correlation function
\(\rho(\mathbf{r})\), and concentrate only on this \(\rho\)-dependence, introducing the function \(g(\widehat{G}_{1},\widehat{G}_{2};\;\rho)=p_{G}(\widehat{G}_{1},\widehat{G}_{2};\;\mathbf{r})\). Now, we can write Eq. (15) as
\[\langle D_{1}D_{2}\rangle=\int_{-\infty}^{\infty}\int_{\infty}^{\infty}d \widehat{G}_{1}d\widehat{G}_{2}g(\widehat{G}_{1},\widehat{G}_{2};\;\rho)f( \widehat{G}_{1})f(\widehat{G}_{2}), \tag{16}\]
with \(f(\widehat{G})\) being the function that transforms the correlated Gaussian field into the diffusivity landscape, Eq. (8). Note that according to Eqs. (16) and (8) the value of \(\langle D_{1}D_{2}\rangle\) is a function of \(\rho\) only, and therefore, passing to the correlation function of the diffusivity landscape, which differs from \(\langle D(\mathbf{0})D(\mathbf{r})\rangle\) by shift and rescaling, we see that \(\xi(\mathbf{r})=\xi[\rho(\mathbf{r})]\), and the dependence \(\xi(\rho)\) is not influenced by the particular shape of the correlation function of the Gaussian field. Thus, the transformation from the Gaussian field to a Gamma-distributed landscape corresponds to a pointwise transformation of their correlation functions. This property will be used several times.
The integration in Eq. (16) can only be performed numerically. However, one can still find an analytical approximation to this integral. We begin by Taylor expanding the function \(f(\widehat{G})\) around zero up to the fourth order:
\[f(\widehat{G})\approx a_{0}+a_{1}\widehat{G}+a_{2}\widehat{G}^{2}+a_{3} \widehat{G}^{3}+a_{4}\widehat{G}^{4}+O(\widehat{G}^{5}),\]
with \(a_{0}=1.4505\), \(a_{1}=0.9704\), \(a_{2}=0.2194\), \(a_{3}=0.0130\) and \(a_{4}=0.0011\) for the values of parameters used. The coefficients correspond to the numerical evaluation of the analytical expressions of the corresponding derivatives of \(f\) which is easily done with Mathematica. This last expression can now be used to compute the integral in Eq. (16) as a function of \(\rho\), since the corresponding integral reduces to the sum of moments of a bivariate Gaussian weighted with different prefactors. Keeping contributions up to the fourth order in \(\rho\) we find
\[\langle D(\mathbf{0})D(\mathbf{r})\rangle\approx b_{0}+b_{1}\rho+b_{2}\rho^{2 }+b_{3}\rho^{3}+b_{4}\rho^{4}+O(\rho^{5}),\]
with \(b_{0}=2.77717\), \(b_{1}=1.01867\), \(b_{2}=0.09043\), \(b_{3}=0.00101\) and \(b_{4}=0.00003\). The first coefficient (\(b_{0}\)) is equal to \(\overline{D}^{2}\), therefore it vanishes when plugging back into Eq. (14). Moreover, since the coefficients \(b_{3}\) and \(b_{4}\) are small compared to \(b_{1}\) and \(b_{2}\), they can be neglected. Under this approximation we get \(C_{DD}(\mathbf{r})\approx\xi[\rho(\mathbf{r})]\) with the function
\[\xi(\rho)=\frac{b_{1}\rho+b_{2}\rho^{2}}{b_{1}+b_{2}}=c_{1}\rho+c_{2}\rho^{2}, \tag{17}\]
with \(c_{1}=0.918465\) and \(c_{2}=0.081535\). This simple quadratic approximation has a relative accuracy better than \(0.0005\) over the whole domain \(0\leq\rho\leq 1\), as compared to the result of high-precision numerical integration.
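The construction above is easy to reproduce numerically. The Python sketch below evaluates Eq. (16) by Gauss-Hermite quadrature and fits the quadratic form of Eq. (17). Since Eq. (8) and the Gamma parameters are not reproduced in this excerpt, the transform \(f\) below is a stand-in inverse-CDF transform with illustrative shape and scale, so the fitted coefficients will differ from the \(c_{1}\), \(c_{2}\) quoted above.

```python
import numpy as np
from scipy.stats import norm, gamma

# Stand-in for the probability transform of Eq. (8): map a standard Gaussian
# value to a Gamma-distributed diffusivity; shape/scale are illustrative.
K_SHAPE, THETA = 2.0, 1.0

def f(g):
    u = np.clip(norm.cdf(g), 1e-15, 1.0 - 1e-15)   # guard the inverse CDF
    return gamma.ppf(u, a=K_SHAPE, scale=THETA)

# Gauss-Hermite rule: E[h(G)] = sum_i w_i h(sqrt(2) x_i) / sqrt(pi), G ~ N(0,1).
x, w = np.polynomial.hermite.hermgauss(80)
nodes, wts = np.sqrt(2.0) * x, w / np.sqrt(np.pi)

def corr_D(rho):
    """<D1 D2>(rho), Eq. (16), using G2 = rho*G1 + sqrt(1-rho^2)*Z."""
    G1, Z = nodes[:, None], nodes[None, :]
    G2 = rho * G1 + np.sqrt(1.0 - rho**2) * Z
    return np.einsum("i,j,ij->", wts, wts, f(G1) * f(G2))

rho = np.linspace(0.0, 1.0, 41)
D2 = np.array([corr_D(r) for r in rho])
xi = (D2 - D2[0]) / (D2[-1] - D2[0])     # shifted and rescaled, cf. Eq. (17)

# One-parameter least squares for xi ~ (1 - c2)*rho + c2*rho^2, Eq. (17).
basis = rho**2 - rho
c2 = float(basis @ (xi - rho) / (basis @ basis))
print(f"c1 ~ {1.0 - c2:.6f}, c2 ~ {c2:.6f}")
```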
The transformation \(\xi(\rho)\) is invertible and therefore makes it possible to construct a Gaussian field whose probability transformation produces a Gamma-field with a desired two-point correlation function. We will use this possibility in what follows, when considering the correlated CTRW scheme in Sec. 4.2. The inverse transformation is given by the solution of the quadratic equation, yielding the inverse function
\[\rho(\xi)=\sqrt{\left(\frac{c_{1}}{2c_{2}}\right)^{2}+\frac{\xi}{c_{2}}}-\frac{c_{1}}{2c_{2}}. \tag{18}\]
Substituting the expression for \(\rho(\mathbf{r})\), Eq. (7), into Eq. (17) we get the approximation for the correlation function of the diffusivity landscape:
\[C_{DD}(\mathbf{r})\approx c_{1}\exp\left(-\frac{\mathbf{r}^{2}}{2\lambda^{2}} \right)+c_{2}\exp\left(-\frac{\mathbf{r}^{2}}{\lambda^{2}}\right). \tag{19}\]
Now we can use this approximation to find the correlation function of the diffusivity landscape along the trajectories, or in other words, as a function of the number of steps \(C_{DD}(n)\). To do so, Eq. (19) has to be averaged using the PDF \(f(\mathbf{r}|n)\) of the displacements given the number of steps \(n\):
\[C_{DD}(n)=\int d\mathbf{r}\;C_{DD}(\mathbf{r})\;f(\mathbf{r}|n). \tag{20}\]
Figure 4: Correlation function \(C_{DD}(n)\) as a function of the number of steps for the two-dimensional case with \(\lambda=10\) and \(D_{0}=1\). We compare the numerical results obtained in simulations of particle diffusion in the diffusivity landscapes (green line) with the approximation Eq. (22) (red line). The standard errors of the mean (SEM) are represented by the light green area. Excellent agreement is observed over the whole range of step numbers.
Given that the spatial part of the motion is a two-dimensional simple random walk, the PDF \(f(\mathbf{r}|n)\) can be safely approximated by a two-dimensional Gaussian distribution
\[f(\mathbf{r}|n)=\left(\frac{d}{2\pi a^{2}n}\right)^{\frac{d}{2}}\exp\left(-\frac {d\,\mathbf{r}^{2}}{2a^{2}n}\right)=\frac{1}{2\pi\sigma^{2}n}\exp\left(-\frac{ \mathbf{r}^{2}}{2\sigma^{2}n}\right), \tag{21}\]
with \(d=2\), \(a=1\) and, respectively, \(\sigma^{2}=1/2\). Within this approximation, Eq. (20) takes the form
\[C_{DD}(n)\approx c_{1}\left(1+\frac{n}{2\lambda^{2}}\right)^{-1}+c_{2}\left(1+ \frac{n}{\lambda^{2}}\right)^{-1}. \tag{22}\]
Figure 4 shows a comparison between this approximate expression and the results from simulations of particle diffusion on the diffusivity landscape. Excellent agreement is observed over the whole range of step numbers. Note that the correlation function of the diffusion coefficients is extremely long-ranged.
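The step from Eq. (20) to Eq. (22) can also be checked directly by one-dimensional quadrature in polar coordinates. The short sketch below uses only the equations above, with no additional assumptions.

```python
import numpy as np
from scipy.integrate import quad

lam, sigma2 = 10.0, 0.5           # lambda and sigma^2 = a^2/d = 1/2 (d = 2, a = 1)
c1, c2 = 0.918465, 0.081535

def C_dd_r(r):                    # Eq. (19)
    return c1 * np.exp(-r**2 / (2 * lam**2)) + c2 * np.exp(-r**2 / lam**2)

def C_dd_n_numeric(n):            # Eq. (20) written in polar coordinates
    integrand = lambda r: C_dd_r(r) * np.exp(-r**2 / (2 * sigma2 * n)) * r / (sigma2 * n)
    return quad(integrand, 0.0, np.inf)[0]

def C_dd_n_closed(n):             # Eq. (22)
    return c1 / (1 + n / (2 * lam**2)) + c2 / (1 + n / lam**2)

for n in (1, 10, 100, 1000):      # the two agree to quadrature accuracy
    print(n, C_dd_n_numeric(n), C_dd_n_closed(n))
```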
### Correlated CTRW
Let us now use the correlation function of the diffusivity values along the trajectories, Eq. (22), to construct a time-correlated CTRW scheme. The process of generating correlated waiting times is similar to the one used for generating the landscape. Starting from the values \(\xi(n)=C_{DD}(n)\) given by Eq. (22), we use Eq. (18) to obtain the correlation function \(\rho(n)\) of a Gaussian vector, which is then transformed into the one of the diffusivity values and finally into waiting times along the trajectory.
We proceed by generating a one-dimensional uncorrelated Gaussian vector \(g_{i}\) by assigning to each entry of the vector a random number drawn from a Gaussian distribution with zero mean and unit variance. Then, using the Fourier filtering method [41], we generate a correlated Gaussian vector \(\widehat{g}_{i}\) with correlation function \(\rho(n)\), where \(n=|i-j|\). Using the probability transformation, Eq. (8), we transform this correlated Gaussian vector into a one-dimensional array of diffusion coefficients \(\mathcal{D}_{i}\) with the desired correlation function \(C_{DD}(n)=\xi(n)\). The array of correlated diffusion coefficients \(\mathcal{D}_{i}\) is then used to generate the waiting times of our CTRW scheme by drawing random numbers \(t_{i}\) from an exponential waiting time distribution \(\psi(t|\mathcal{D}_{i})=4\mathcal{D}_{i}\exp(-4\mathcal{D}_{i}t)\). In each realization of the process, one draws elements until reaching the number \(n^{\prime}\) such that the sum of the first \(n^{\prime}\) elements does not exceed \(t_{max}\) while the sum of the first \(n^{\prime}+1\) does. The number \(n^{\prime}\) is then the number of steps performed by a walker until \(t_{max}\). The PDF of displacements for this correlated CTRW can be estimated by the average
\[p(\mathbf{r},t_{max})=\langle f(\mathbf{r}|n^{\prime})\rangle_{n^{\prime}},\]
with \(f(\mathbf{r}|n)\) being the PDF of displacements for a given number of steps, Eq. (21), weighted with the waiting time of the first step.
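A compact sketch of this generation procedure follows. Two caveats: an eigenvalue factorization of the Toeplitz correlation matrix is used here in place of the Fourier filtering method [41] (both yield a Gaussian vector with the prescribed serial correlations), and the Gamma parameters of the probability transform, Eq. (8), are illustrative stand-ins.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.stats import norm, gamma

rng = np.random.default_rng(1)
N, lam, t_max = 2048, 10.0, 1.0e3
c1, c2 = 0.918465, 0.081535

# Target serial correlation of diffusivities, Eq. (22), mapped back to the
# Gaussian level with the inverse transform, Eq. (18).
n = np.arange(N)
xi_n = c1 / (1 + n / (2 * lam**2)) + c2 / (1 + n / lam**2)
rho_n = np.sqrt((c1 / (2 * c2))**2 + xi_n / c2) - c1 / (2 * c2)

# Correlated Gaussian vector with correlation rho(|i-j|); small negative
# eigenvalues stemming from the approximation are clipped to zero.
vals, vecs = np.linalg.eigh(toeplitz(rho_n))
g_hat = (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ rng.standard_normal(N)

# Probability transform to diffusivities (illustrative Gamma parameters)
# and exponential waiting times psi(t|D_i) = 4 D_i exp(-4 D_i t).
u = np.clip(norm.cdf(g_hat), 1e-15, 1.0 - 1e-15)
D = gamma.ppf(u, a=2.0, scale=1.0)
t = rng.exponential(scale=1.0 / (4.0 * D))

# n' = number of completed steps before t_max.
print("steps until t_max:", int(np.searchsorted(np.cumsum(t), t_max)))
```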
Figure 5 shows the resulting PDF for two different times, \(t_{max}=10^{2}\) and \(10^{3}\), one time per panel. Each panel presents a comparison between the PDF of the decoupled particle and that in the correlated CTRW. The PDFs for decoupled particles are the same as the ones in Figure 3 for the corresponding times. For the correlated CTRW, each PDF corresponds to the average over \(8\times 10^{6}\) different realizations of the correlated array of diffusion coefficients \(\mathcal{D}_{i}\), constructed with \(\lambda=10\). As one can see, both PDFs are indistinguishable in their wings, and both present a central peak which, instead of narrowing, flattens out. However, the shapes of the peaks in the two cases are significantly different. We note that the uncorrelated CTRW model shows a very different behavior in the wings (its convergence to a Gaussian is much faster) and does not show any peak except for some remnants of the initial condition at the shorter time.
It is worth mentioning that to generate the PDFs of the correlated CTRW we have used an extremely high number of realizations, namely \(8\times 10^{6}\); in our simulations this number was subdivided into five independent runs, and the results were both analyzed separately and pooled for the plot in Figure 5. The analysis of the subsets shows that the height of the peak in different sets of \(1.6\times 10^{6}\) realizations still fluctuates considerably, so that this height is dominated by rare events, while both in the initial model and in its decoupled variant the behavior in the peak may be considered much more typical.
Since the approximations used to construct the correlated CTRW are quite accurate and the number of realizations is high enough to guarantee sufficiently good statistics, the differences suggest that our correlated model fails to capture important details of temporal correlations. Since serial correlations along the trajectories are reproduced correctly, one can conclude that these are the
Figure 5: A comparison of the one-dimensional cut of the PDF \(q(\xi)=p(x,0)t\) of rescaled displacements \(\xi=x/\sqrt{t}\) for the decoupled particle (red) and the correlated CTRW (black) for \(t=10^{2}\) and \(10^{3}\), when the differences between the behaviors are considerable. The data for the decoupled particles are the same as in Figure 3; the results for the correlated CTRW correspond to \(8\cdot 10^{6}\) independent realizations, see text for details. The green dots show the results for an uncorrelated CTRW model as given by Eqs. (12) and (13).
higher-order correlations that play a key role in the development of the central peak but are of minor importance in the wings.
## 5 The checkerboard model
Let us now take a few steps back and consider how critical our assumption about the shape of the correlation function \(\rho(\mathbf{r})\), Eq. (7), is, i.e., what happens if this function is chosen differently. To do so, we consider a DLM with a checkerboard-like diffusivity landscape. On a lattice, a checkerboard-like diffusivity landscape consists of an array of \(N\times N\) squares containing \(2\zeta\times 2\zeta\) lattice points. A constant diffusion coefficient \(D^{(i)}\) is assigned to each square. These diffusion coefficients are drawn from the distribution given by Eq. (5), the condition needed for the diffusion to be BnG. This choice of diffusivity landscape strongly changes the shape of the correlation function. Moreover, the changes in diffusion coefficients on the borders of the squares are now discontinuous, while the previous diffusivity landscape was assumed to model a smooth situation. In this model, \(\zeta\) defines the correlation length of the landscape; to compare to the results of the above DLM, we set \(\zeta=\lambda\). Figure 6 shows one realization of the checkerboard-like diffusivity landscape on a lattice of \(300\times 300\) with \(\zeta=10\) and \(D_{0}=1\).
Figure 6: A two-dimensional realization of the diffusivity landscape \(D(\mathbf{r})\) for the checkerboard model. It corresponds to a \(300\times 300\) lattice where each cell has a size of \(2\zeta\) with \(\zeta=10\), and the diffusion coefficients are sampled with \(D_{0}=1\).
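Generating such a landscape is straightforward; a minimal sketch follows. Since Eq. (5) is not reproduced in this excerpt, a Gamma distribution with an illustrative shape parameter is used as a stand-in for the distribution of the square diffusivities.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(0)
N, zeta, D0 = 109, 10, 1.0   # N x N squares of 2*zeta x 2*zeta sites -> 2180^2

# Stand-in for Eq. (5): one Gamma-distributed diffusivity per square.
D_squares = D0 * gamma.rvs(a=2.0, size=(N, N), random_state=rng)

# Tile each square value over a 2*zeta x 2*zeta block of lattice sites.
landscape = np.kron(D_squares, np.ones((2 * zeta, 2 * zeta)))
print(landscape.shape)       # (2180, 2180)
```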
As in the case of the DLM, we perform random walk simulations of particles diffusing on an ensemble of checkerboard-like landscapes, from which the PDF of displacements can then be constructed. Figure 7 shows the time evolution of the PDF averaged over \(2\times 10^{4}\) different realizations of the landscape, each one using \(10^{4}\) particles. The landscape was constructed with \(N=109\) and \(\zeta=10\), i.e., we consider a lattice of size \(2180\times 2180\). One can see that the central peak is preserved. Moreover, the transition to the Gaussian limit follows the same type of convergence, via narrowing of the peak. This suggests that the form of the correlation function of the diffusivity landscape does not change the overall behavior. A closer look, though, reveals the presence of some discontinuities near the center of the distribution, which are expected given that the diffusivity landscape itself is discontinuous.
## 6 Conclusions
In this work, we study the diffusivity landscape model (DLM) characterized by a diffusion coefficient slowly varying in space. Under specific conditions, this model leads to Brownian yet non-Gaussian diffusion, that is, the MSD is linear in time, but the shape of the PDF changes from a Laplace distribution at short times to a Gaussian distribution at long ones. The manner of convergence to the Gaussian is quite peculiar, since the PDF at all times displays a central peak that does not decay with time, but narrows under rescaling. We show that the persistence of the peak is due to strong spatiotemporal correlations introduced by correlations of local diffusion coefficients in space. Destroying the spatiotemporal correlations on the level of single trajectories (by considering a different realization of the steps' directions while keeping the same list of waiting times as for the real motion) causes the peak to lower and to disappear at longer times. This kind of behavior is qualitatively reproduced by a correlated CTRW model with serial correlations of waiting times along the trajectory mimicking the ones observed in simulations. The model, however, fails to quantitatively reproduce the PDF for the decoupled case, showing a considerably lower peak. We attribute this fact to the important role of higher-order correlations, which are not reproduced by the model. By considering a different variant of correlated disorder (the checkerboard model) we moreover show that the existence of the peak is insensitive to the exact shape of the correlation function of local diffusivities, and that the peak's shape is only weakly sensitive to it.
Figure 7: A one-dimensional cut of the PDF \(q(\xi)=p(x,0)t\) of rescaled displacements \(\xi=x/\sqrt{t}\) for the diffusion of particles in the checkerboard model. The straight line corresponds to the Laplace distribution, whereas the dotted line corresponds to the Gaussian distribution. The inset shows a close-up of the central part of the distribution exposing the peak.
**Acknowledgments.** The work of A. P. P. was financially supported by Doctoral Programmes in Germany funded by the Deutscher Akademischer Austauschdienst (DAAD) (Programme ID 57440921).
|
2306.17533 | Ethics in rotten apples: A network epidemiology approach for active
cyber defense | As Internet of Things (IoT) technology grows, so does the threat of malware
infections. A proposed countermeasure, the use of benevolent "white worms" to
combat malicious "black worms", presents unique ethical and practical
challenges. This study examines these issues via network epidemiology models
and simulations, considering the propagation dynamics of both types of worms in
various network topologies. Our findings highlight the critical role of the
rate at which white worms activate themselves, relative to the user's system
update rate, as well as the impact of the network structure on worm
propagation. The results point to the potential of white worms as an effective
countermeasure, while underscoring the ethical and practical complexities
inherent in their deployment. | Francesco Bonacina, Ignacio Echegoyen, Diego Escribano, Marcus Krellner, Francesco Paolo Nerini, Rasha Shanaz, Andreia Sofia Teixeira, Alberto Aleta | 2023-06-30T10:43:35Z | http://arxiv.org/abs/2306.17533v1 | # Ethics in rotten apples:
###### Abstract
As Internet of Things (IoT) technology grows, so does the threat of malware infections. A proposed countermeasure, the use of benevolent "white worms" to combat malicious "black worms", presents unique ethical and practical challenges. This study examines these issues via network epidemiology models and simulations, considering the propagation dynamics of both types of worms in various network topologies. Our findings highlight the critical role of the rate at which white worms activate themselves, relative to the user's system update rate, as well as the impact of the network structure on worm propagation. The results point to the potential of white worms as an effective countermeasure, while underscoring the ethical and practical complexities inherent in their deployment.
## I Introduction
'Internet of Things' (IoT) technology is everywhere. Even seemingly trivial household devices like light bulbs and toasters are connected to the internet over local networks. Unfortunately, the rise of malware infections has become a critical concern in IoT cybersecurity, posing a significant threat with increasing frequency and sophistication. These infections lead to disruptive system failures and substantial financial losses [1, 2]. In response, a promising countermeasure has emerged in the form of "white worms", which would serve as benevolent counterparts to malicious "black worms".
In this context, worms refer to a type of malware that exploits vulnerabilities in devices to propagate to other devices. Unlike smartphones and personal computers, IoT devices typically lack regular updates [3]. The proposed white worms share similar propagation characteristics with black worms but are specifically designed to identify and rectify security vulnerabilities [4, 5].
However, before white worms can be widely adopted, there are significant questions that must be addressed. Ethically, the concept of white worms walks a fine line since they infiltrate systems without explicit permission, which could be viewed as a breach of privacy or even illegal. This raises intricate ethical and legal dilemmas that require careful exploration, potentially limiting the application of white worms. Additionally, understanding the propagation dynamics of these worms is vital for designing effective and ethical white worms.
The propagation of viruses, whether biological or digital, has been a focal point of scientific investigation for many decades. As early as the 1980s, it was proposed that computer viruses could be studied using tools and methodologies developed for human diseases [6]. The tools developed by network epidemiology are particularly suited to this task, given the resemblance to biological networked systems and the mechanisms by which viruses spread [7, 8, 9, 10, 11].
While the spread of multiple viruses on networks has been studied in network epidemiology [12, 13], we propose a model specifically tailored to the contagion of computer viruses, wherein one of the pathogens protects the host from further infection. Furthermore, we incorporate the ethical characteristics of white worms proposed in the literature [4, 5]. To accomplish this, we develop a compartmental model that spreads on various types of networks and explore its dynamics through stochastic simulations under different conditions. Finally, we discuss the effectiveness of white worms, considering the ethical considerations incorporated into the model.
## II Materials and Methods
### Overview of the contagion process
Our model considers the propagation of two worm types within a network of vulnerable devices (\(V\)): a malicious "black worm" and a benign "white worm". The
black worm's purpose is to infiltrate any unprotected device by exploiting an unspecified security loophole, with a transmission rate \(\beta_{B}\) from one device to another. Conversely, the white worm seeks to secure the devices by forcing system updates. We label its transmission rate \(\beta_{W}\) and hypothesize that both types of worms exploit the same security loophole, equating their transmission rates, i.e., \(\beta_{W}=\beta_{B}\). However, in line with the suggestion made by [4], the white worm does not take immediate action upon the device. Initially, it urges the device's user to update the system while remaining in a dormant state (\(D\)). The user has the option to patch the system's vulnerability at a rate of \(\gamma\).
Subsequently, the white worm uses the device's resources to (i) propagate to connected machines and (ii) patch the system. Between activation and updating (states \(W\) or \(W_{B}\)), the white worm maintains the capacity to spread, but the user can no longer update the device manually. It is important to note that the mere presence of a worm does not eliminate the device's vulnerability. Therefore, a device hosting a dormant or active white worm can still be compromised by the black worm (\(D_{B}\) or \(W_{B}\)). Similarly, a device already infected by the black worm (\(B\)) can be infiltrated by the white worm (\(D_{B}\)). The white worm transitions from a dormant to an active state at a rate \(\epsilon\). Once in the active state, it initiates the system update at a rate \(\mu\), hence sealing the security loophole. Once protected, white and black worms are removed, and the device becomes immune to further infections by any of them (\(P\)).
The diagram depicted in Fig. 1 represents all possible state transitions within the system.
In accordance with common practice, we set \(\mu=1\) without losing generality, as time can always be appropriately rescaled. We are primarily interested in the scenario where both worm types exploit the same vulnerability for propagation, hence \(\beta_{B}=\beta_{W}\), as previously established. Consequently, our analysis concentrates on the influence of two parameters that pertain to the ethical conduct of the white worm: the rate \(\epsilon\) at which a dormant white worm is activated and starts to leverage the resources of the host device, and the rate \(\gamma\) at which users respond to system update prompts. A summary of the transmission parameters explored in this study is described in Table 1.
### Epidemic dynamics in the homogeneous mixing
Let us define \(\rho^{X}(t)\) as the fraction of devices in the compartment \(X\) at time \(t\), i.e., its density. Then, the equations of the model under the homogeneous mixing assumption are the following:
\[\left\{\begin{array}{ll}\dot{\rho}^{V}=-\beta_{B}\rho^{V}\phi^{B}-\beta_{W} \rho^{V}\phi^{W},\\ \dot{\rho}^{B}=\beta_{B}\rho^{V}\phi^{B}-\beta_{W}\rho^{B}\phi^{W},\\ \dot{\rho}^{D}=\beta_{W}\rho^{V}\phi^{W}-\beta_{B}\rho^{D}\phi^{B}-\epsilon\rho ^{D}-\gamma\rho^{D},\\ \dot{\rho}^{D_{B}}=\beta_{B}\rho^{D}\phi^{B}+\beta_{W}\rho^{B}\phi^{W}- \epsilon\rho^{D_{B}}-\gamma\rho^{D_{B}},\\ \dot{\rho}^{W}=\epsilon\rho^{D}-\beta_{B}\rho^{W}\phi^{B}-\mu\rho^{W},\\ \dot{\rho}^{W_{B}}=\epsilon\rho^{D_{B}}+\beta_{B}\rho^{W}\phi^{B}-\mu\rho^{W_{ B}},\\ \dot{\rho}^{P}=\mu\rho^{W_{B}}+\mu\rho^{W}+\gamma\rho^{D_{B}}+\gamma\rho^{D}, \end{array}\right. \tag{1}\]
where
\[\begin{cases}\phi^{B}&=\rho^{B}+\rho^{D_{B}}+\rho^{W_{B}},\\ \phi^{W}&=\rho^{W}+\rho^{W_{B}},\end{cases} \tag{2}\]
represent the total fraction of devices that can propagate the black or the white worm, respectively.
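For concreteness, Eqs. (1)-(2) can be integrated directly; a minimal Python sketch is given below. The initial seeding of the two worms and the values of \(\epsilon\) and \(\gamma\) are illustrative choices, not the ones used to produce the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta_B = beta_W = 1.1          # infection rates (Table 1)
mu, eps, gam = 1.0, 10.0, 1.0  # white-worm update, activation, user-update rates

def rhs(t, y):
    V, B, D, DB, W, WB, P = y
    phiB = B + DB + WB         # devices spreading the black worm, Eq. (2)
    phiW = W + WB              # devices spreading the white worm
    return [
        -beta_B*V*phiB - beta_W*V*phiW,
        beta_B*V*phiB - beta_W*B*phiW,
        beta_W*V*phiW - beta_B*D*phiB - (eps + gam)*D,
        beta_B*D*phiB + beta_W*B*phiW - (eps + gam)*DB,
        eps*D - beta_B*W*phiB - mu*W,
        eps*DB + beta_B*W*phiB - mu*WB,
        mu*(W + WB) + gam*(D + DB),
    ]

y0 = [0.98, 0.01, 0.01, 0.0, 0.0, 0.0, 0.0]  # illustrative initial seeding
sol = solve_ivp(rhs, (0.0, 200.0), y0, rtol=1e-8, atol=1e-10)
print("final protected fraction:", sol.y[-1, -1])
```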
### Networks
Our study scrutinizes worm propagation across three distinct network topologies. The first of these is a complete graph of a hundred nodes. This selection allows us
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Name** & **Parameter** & **Value** \\ \hline Infection rate of black worms & \(\beta_{B}\) & 1.1 \\ Infection rate of white worms & \(\beta_{W}\) & 1.1 \\ Activation rate of white worms & \(\epsilon\) & [0.01-1000] \\ Protection rate (user) & \(\gamma\) & [0.1-100] \\ Protection rate (white worm) & \(\mu\) & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Summary of the transmission parameters of the model** - For both black and white worms, the infection rate parameters are set equal. Furthermore, without loss of generality, the protection rate associated with the white worm is also fixed. Lastly, we sweep the activation rate of the white worms and the user's protection rate over the intervals specified in the table.
Figure 1: **Model scheme** - Compartmental model that describes how black and white worms can spread within the system. Vulnerable devices (\(V\)) can be infected by either a black worm or a white worm. When infected by a black worm (\(B\)), they actively spread it. However, if infected by a white worm, they enter a dormant state (\(D\)) until system upgrade or self-activation occurs. Activated white worms (\(W\)) propagate until the device is forcibly updated. Devices with white worms in dormant or active states can also be infected by black worms (\(D_{B}\) or \(W_{B}\)). Similarly, devices infected with black worms (\(B\)) can be infected by active white worms (\(D_{B}\)). Once the system is updated, either by user approval or the action of a white worm, security vulnerabilities are fixed, and the machine is protected (\(P\)).
to compare numeric solutions derived from the homogeneous mixing model with results from stochastic simulations.
However, the structure of real-world computer networks is often far from homogeneous, especially in the case of IoT devices. These networks are known to demonstrate substantial heterogeneity and a high degree of clustering around central access points [14; 15]. To better represent this reality, we also consider two different projected network topologies. These projections assume that if two IoT devices are linked to routers that can communicate with each other, a direct link between both devices can be inferred.
To construct these additional network topologies, we employ Python's NetworkX package [16]. The first is an Erdős-Rényi network, where each pair of nodes is connected with a fixed probability. The second is a network whose node degrees \(k\) follow a power-law distribution, \(k^{-\alpha}\), with \(\alpha=10\). The contagion process equations for a network under the mean-field approximation are provided for further insight in Appendix B.
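A minimal NetworkX sketch of the three topologies is shown below. The Erdős-Rényi connection probability and the power-law generator (a rounded power-law degree sequence wired by the configuration model) are illustrative choices; the text above does not specify the exact generator used.

```python
import networkx as nx

n_nodes, seed = 100, 0
G_complete = nx.complete_graph(n_nodes)
G_er = nx.erdos_renyi_graph(n_nodes, p=0.1, seed=seed)  # p is illustrative

# One possible power-law construction: sample degrees ~ k^-alpha, then wire
# them with the configuration model and simplify to a plain graph.
degs = [max(1, round(d)) for d in nx.utils.powerlaw_sequence(n_nodes, exponent=10.0)]
if sum(degs) % 2:
    degs[0] += 1               # the configuration model needs an even degree sum
G_pl = nx.Graph(nx.configuration_model(degs, seed=seed))  # collapse multi-edges
G_pl.remove_edges_from(nx.selfloop_edges(G_pl))
```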
### Stochastic simulations
The stochastic propagation of both worm types across the network is simulated utilizing the Gillespie algorithm [17]. Originally proposed as a Monte Carlo simulation technique for chemical reactions, it has since been extended to model Markovian dynamics, such as those observed in epidemics [18; 19]. More specifically, we have employed the algorithm's implementation found in Python's package EoN version 1.1 [19; 20]. Detailed insights into this method can be found in Appendix A.
## III Results
### Homogeneous mixing
We commence our exploration by examining the model's behavior under the assumption of homogeneous mixing, according to which every device can directly interact with any other device. In Fig. 2, we depict the final proportion of protected devices as a function of the ratio \(\epsilon/\gamma\). The majority of observables depend solely on the ratio \(\epsilon/\gamma\) and not on the individual values, as altering the value of \(\gamma\) only affects the temporal dynamics but not the end states. The four observables depicted in Fig. 2 were calculated using both Eqs. (1) and stochastic simulations of the model.
As we can see in the figure, the major impact of increasing the ratio is changing the path through which devices get protected. If the rate at which users update their system upon being prompted is large (\(\epsilon/\gamma\ll 1\)), most devices will be protected by their owners. If, instead, the white worm is allowed to spread for a long time before protecting the system (\(\epsilon/\gamma\gg 1\)), it will actively protect most devices. The stochastic simulations on the complete graph corroborate this finding, showing that the botnet can easily be destroyed under the homogeneous mixing hypothesis.
### Spreading on networks
Under the homogeneous mixing model, the final size of the botnet is essentially zero across a wide range of \(\epsilon/\gamma\) values. However, when the spread occurs over heterogeneous networks, the dynamics shift markedly, as illustrated in Fig. 3.
Firstly, we observe the familiar epidemic threshold widely discussed in the relevant literature, which is negligible for scale-free networks [9]. Consequently, as shown in Fig. 3(a), the black worm cannot entirely infect the network, as a significant outbreak of the white worm always leads to the protection of a certain fraction of devices. Conversely, Fig. 3(b) shows that the ratio \(\epsilon/\gamma\) must exceed a certain value for the white worm to propagate and dismantle the botnet effectively.
Secondly, we note that even for very high force rates, the final botnet size may not reach zero. In fact, for the power law topology, the botnet size remains over 20% of the devices even after the white worm's elimination. This finding sharply contrasts with the results from the Erdős-Rényi network and the complete graph. Moreover, forced device protection by the white worm is required in most scenarios in which the final size of the botnet is relatively small.
Figure 2: **Homogeneous mixing** - Final distribution of protected and unprotected devices under the homogeneous mixing hypothesis as a function of the ratio \(\epsilon/\gamma\). We examine the final proportion of protected devices (black), divided into protected by users (green) and protected by the white worm activation (light blue), and the final coverage of the botnet (orange). The results are obtained from both the Equations (1) (dashed lines) and the stochastic model implemented on the complete graph (dots).
However, these observations merely describe the system's final state and do not consider the dynamics during the initial propagation stages. In Fig. 4, we present the fraction of devices that at any point were part of the botnet, i.e. that were simultaneously infected with the black worm and thus exploitable, for instance, for a DDoS attack. Here, the outcomes heavily depend on the specific value of \(\gamma\). When its value is exceedingly low, the botnet could potentially cover nearly the entire system at some point. Addressing this issue requires increasing the rate at which users update their systems, as this action is executed much faster than the protection afforded by the white worm. This results in smaller botnets and, consequently, reduced threats, but also requires faster propagation by the white worm (increased \(\epsilon\)).
We conclude this analysis by examining the botnet threat duration. Fig. 5 portrays the time interval during which the botnet infects a certain fraction of devices as a function of the ratio \(\epsilon/\gamma\). We note that when \(\gamma\gg\epsilon\), the white worm cannot propagate effectively, and the botnet remains undestroyed indefinitely. However, as we increase \(\epsilon\), the interval shortens drastically, thereby reducing the botnet's threat, as it cannot be used for an extended period of time.
## IV Discussion and Conclusions
The intersection of cybersecurity and ethics presents complex and intriguing dilemmas. Our study attempted to address these challenges through the lens of IoT security and the use of white worms for protection. The introduction of white worms into a system inherently walks a thin ethical line, due to the potential breach of privacy or even legality caused by their self-propagation without explicit user consent. Our findings illuminate both the possibilities and pitfalls that may arise with their use.
In the case of a homogeneous mixing model, we found that for a wide range of the ratio \(\epsilon/\gamma\), the botnet is effectively eliminated. Yet, the mechanism leading to its eradication is very different. If \(\epsilon\ll\gamma\) the devices are mostly protected actively by their owners. If, instead, \(\epsilon\gg\gamma\), the devices will be protected by the white worm. However, the dynamics change significantly when worms spread across heterogeneous networks. We found that in certain cases, the botnet size was never reduced to zero, and over 20% of devices remained infected in the power law topology. These findings underscore the importance of considering network structure when designing strategies for white worm deployment.
Moreover, our analysis of the early propagation stages revealed that the specific value of \(\gamma\) has a significant impact on the size of botnets. A low rate could allow a botnet to cover almost the entire system at some point, underscoring the necessity of taking swift action to protect the system. In other words, if users do not actively protect their systems upon being prompted, the malware will capture most of the system. Furthermore, even if the fully grown botnet persists only briefly for a wide range of parameter values, the fact that the malware spreads through the whole system raises other concerns, such as potential data loss or privacy breaches.
Figure 3: **Protection coverage for spreading on networks** - Total fraction of devices protected in the system by the time the white worm vanishes in a power law network (a) and an Erdős-Rényi network (b) as a function of the force rate (\(\epsilon/\gamma\)). We distinguish whether the protection was provided by a willing update by the owner of the device (light blue) or forced by the white worm (green). In orange, the final botnet size by the end of the simulation. All results were obtained using stochastic simulations of the model with the parameters described in Table 1.
Despite these insights, our study has several limitations. Firstly, we made several assumptions about the behavior of white worms and users, which may not hold in real-world situations. For instance, we assumed that both the white and black worms exploit the same security vulnerability. We also assumed that system updates completely protect devices from infection, which may not always be the case given the myriad of potential vulnerabilities. Furthermore, our models do not account for the potential interaction between both worms, such as the white worm directly patching the system if it detects the presence of the black worm.
Future research could address these limitations by incorporating more realistic assumptions and behaviors into the models. Additionally, further empirical studies are necessary to validate the model predictions and to provide more detailed insights into the interactions between white worms, black worms, and users. Similarly, it would be important to study the problem in more realistic IoT networks, as we have seen that the topology plays a major role in the dynamics of the worms.
In conclusion, our study sheds light on the potential of white worms as a countermeasure against black worms in IoT networks. While this strategy could be effective under certain conditions, its implementation raises complex ethical and practical issues that warrant careful consideration. In particular, we have observed that very swift action is necessary, either by the prompted user or directly by the worm, to prevent the creation of a large botnet. This, however, implies that the white worm cannot be ethical (in the sense proposed by [4]) for too long. Further research is needed to fully understand the dynamics of this intriguing interplay between cybersecurity, technology, and ethics.
Figure 4: **Maximum botnet size** - Maximum size reached by the botnet as a function of the force rate \(\epsilon/\gamma\) for a power law network (a) and an Erdős-Rényi network (b). In contrast with other observables, the maximum fraction of devices that simultaneously belong to the network depends on \(\gamma\).
Figure 5: **Simulation time spent above critical size of the botnet** - Percentage of simulation time spent with the botnet size above different threat thresholds (y-axis), for a range of rates \(\epsilon/\gamma\) (x-axis), on a Erdős-Rényi network. When the system ends with the size of the botnet above a threshold, we set the corresponding active time as 100%.
## Data & Code
The code for the Gillespie algorithm, along with the code to generate the networks and solve the model under the homogeneous mixing assumption, is publicly available at [https://github.com/FrappaN/C72h-whiteworms](https://github.com/FrappaN/C72h-whiteworms).
## Acknowledgements
This work is the output of the Complexity72h workshop, held at IFISC in Palma, Spain, 26-30 June 2023, [https://www.complexity72h.com](https://www.complexity72h.com). RS acknowledges the support of SERB International Travel Support grant ref. ITS/2023/001976. AST acknowledges support by FCT - Fundação para a Ciência e a Tecnologia - through the LASIGE Research Unit, ref. UIDB/00408/2020 and ref. UIDP/00408/2020. AA acknowledges support from the grant RYC2021-033226-I funded by MCIN/AEI/10.13039/501100011033 and the European Union NextGenerationEU/PRTR.
## Appendix A Gillespie algorithm
The action of the algorithm in a general model can be described as follows. Given a Markovian model, there is only a finite set of events that can happen. The algorithm first extracts a random waiting time before the next event; then, it randomly chooses the event that happens at the end of that time. In our model, the possible events are the following:
* A node is infected by the black worm from a neighbour.
* A node is infected by the white worm from a neighbour.
* An user updates the device, removing the worm(s).
* A white worm becomes active.
* A white worm autonomously updates the device.
The first two events define an induced transition: a node can be infected only if it has a neighbour which is already infected. At a given state of the system, the rate at which an infection event happens is given by the product of the infection rate \(\beta\) and the number of links between infected and uninfected nodes.
The other events are spontaneous transitions from one compartment to another, and their total rate in a given state will depend on the number of nodes in the initial compartment. For example, the rate at which a white worm becomes infectious will be given by \(\epsilon(N_{D}+N_{D_{B}})\), where \(N_{D}\) is the number of devices with only a dormant white worm and \(N_{D_{B}}\) is the number of devices with also a black worm infection.
When the simulation begins, the algorithm first computes the total rate of the events as the sum of the rates of all possible events. It then draws the waiting time for the next event from an exponential distribution with rate equal to the total rate. The event which occurs is chosen randomly with probability proportional to its rate. After the event, the rates are updated to reflect the new configuration, and the process is repeated. The process ends when the simulation time exceeds a chosen \(t_{max}\), or when no more events can happen. In the case of our model, the simulation always stops, since the devices eventually end up either:
1. All protected.
2. In a mixed population of protected devices, devices infected only by the black worm, and vulnerable devices connected only to protected nodes.
In both of these configurations, no further event can happen.
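For reference, the event loop described above can be written down in a few lines. The sketch below is only a generic skeleton (our simulations use the EoN implementation [19, 20]); the model-specific bookkeeping of induced and spontaneous transitions is delegated to the user-supplied `rates_fn` and `apply_event`.

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie(rates_fn, apply_event, state, t_max):
    """Minimal Gillespie loop. rates_fn(state) returns an array with the rate
    of every possible event; apply_event(state, i) returns the state after
    event i has happened."""
    t = 0.0
    while True:
        rates = rates_fn(state)
        total = rates.sum()
        if total == 0.0:                      # absorbing configuration: stop
            return state, t
        t += rng.exponential(1.0 / total)     # waiting time to the next event
        if t > t_max:
            return state, t_max
        i = rng.choice(rates.size, p=rates / total)
        state = apply_event(state, i)
```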
## Appendix B Mean-field equations
The equations of the model that define the mean-field approximation are
\[\begin{cases}&\dot{\rho}_{k}^{V}=-\beta_{B}\rho_{k}^{V}k\Theta_{B}-\beta_{W}\rho_{k}^{V}k\Theta_{W},\\ &\dot{\rho}_{k}^{B}=\beta_{B}\rho_{k}^{V}k\Theta_{B}-\beta_{W}\rho_{k}^{B}k\Theta_{W},\\ &\dot{\rho}_{k}^{D}=\beta_{W}\rho_{k}^{V}k\Theta_{W}-\beta_{B}\rho_{k}^{D}k\Theta_{B}-\epsilon\rho_{k}^{D}-\gamma\rho_{k}^{D},\\ &\dot{\rho}_{k}^{D_{B}}=\beta_{B}\rho_{k}^{D}k\Theta_{B}+\beta_{W}\rho_{k}^{B}k\Theta_{W}-\epsilon\rho_{k}^{D_{B}}-\gamma\rho_{k}^{D_{B}},\\ &\dot{\rho}_{k}^{W}=\epsilon\rho_{k}^{D}-\beta_{B}\rho_{k}^{W}k\Theta_{B}-\mu\rho_{k}^{W},\\ &\dot{\rho}_{k}^{W_{B}}=\epsilon\rho_{k}^{D_{B}}+\beta_{B}\rho_{k}^{W}k\Theta_{B}-\mu\rho_{k}^{W_{B}},\\ &\dot{\rho}_{k}^{P}=\mu\rho_{k}^{W_{B}}+\mu\rho_{k}^{W}+\gamma\rho_{k}^{D_{B}}+\gamma\rho_{k}^{D},\end{cases} \tag{13}\]
where
\[\begin{cases}\Theta_{B}&=\sum_{k^{\prime}}\frac{k^{\prime}P(k^{\prime})}{\langle k\rangle}(\rho_{k^{\prime}}^{B}+\rho_{k^{\prime}}^{D_{B}}+\rho_{k^{\prime}}^{W_{B}}),\\ \Theta_{W}&=\sum_{k^{\prime}}\frac{k^{\prime}P(k^{\prime})}{\langle k\rangle}(\rho_{k^{\prime}}^{W}+\rho_{k^{\prime}}^{W_{B}}).\end{cases} \tag{14}\]
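These degree-stratified equations mirror the homogeneous system, Eqs. (1)-(2), with one copy of the seven compartments per degree class coupled through \(\Theta_{B}\) and \(\Theta_{W}\). A vectorized integration can be sketched as follows; the degree distribution, the seeding, and the rate values are illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_vals = np.arange(1, 21)                   # illustrative degree classes
P_k = k_vals**-3.0; P_k /= P_k.sum()        # illustrative degree distribution
mean_k = (k_vals * P_k).sum()
beta_B = beta_W = 1.1
mu, eps, gam = 1.0, 10.0, 1.0

def rhs(t, y):
    V, B, D, DB, W, WB, Pr = y.reshape(7, -1)
    wgt = k_vals * P_k / mean_k             # k' P(k') / <k> of Eq. (14)
    ThB = (wgt * (B + DB + WB)).sum()
    ThW = (wgt * (W + WB)).sum()
    kB, kW = beta_B * k_vals * ThB, beta_W * k_vals * ThW
    return np.concatenate([
        -V*kB - V*kW,
        V*kB - B*kW,
        V*kW - D*kB - (eps + gam)*D,
        D*kB + B*kW - (eps + gam)*DB,
        eps*D - W*kB - mu*W,
        eps*DB + W*kB - mu*WB,
        mu*(W + WB) + gam*(D + DB),
    ])

y0 = np.zeros((7, k_vals.size))
y0[0], y0[1], y0[2] = 0.98, 0.01, 0.01      # illustrative seeding per class
sol = solve_ivp(rhs, (0.0, 200.0), y0.ravel(), rtol=1e-8)
protected = (P_k * sol.y.reshape(7, -1, sol.t.size)[-1, :, -1]).sum()
print("final protected fraction:", protected)
```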
|
2309.09836 | RECAP: Retrieval-Augmented Audio Captioning | We present RECAP (REtrieval-Augmented Audio CAPtioning), a novel and
effective audio captioning system that generates captions conditioned on an
input audio and other captions similar to the audio retrieved from a datastore.
Additionally, our proposed method can transfer to any domain without the need
for any additional fine-tuning. To generate a caption for an audio sample, we
leverage an audio-text model CLAP to retrieve captions similar to it from a
replaceable datastore, which are then used to construct a prompt. Next, we feed
this prompt to a GPT-2 decoder and introduce cross-attention layers between the
CLAP encoder and GPT-2 to condition the audio for caption generation.
Experiments on two benchmark datasets, Clotho and AudioCaps, show that RECAP
achieves competitive performance in in-domain settings and significant
improvements in out-of-domain settings. Additionally, due to its capability to
exploit a large text-captions-only datastore in a training-free fashion, RECAP
shows unique capabilities of captioning novel audio events never seen during
training and compositional audios with multiple events. To promote research in
this space, we also release 150,000+ new weakly labeled captions for AudioSet,
AudioCaps, and Clotho. | Sreyan Ghosh, Sonal Kumar, Chandra Kiran Reddy Evuru, Ramani Duraiswami, Dinesh Manocha | 2023-09-18T14:53:08Z | http://arxiv.org/abs/2309.09836v2 | # Recap: Retrieval-Augmented Audio Captioning
###### Abstract
We present **RECAP** (**RE**trieval-Augmented Audio **CAP**tioning), a novel and effective audio captioning system that generates captions conditioned on an input audio and other captions similar to the audio retrieved from a datastore. Additionally, our proposed method can transfer to any domain without the need for any additional fine-tuning. To generate a caption for an audio sample, we leverage an audio-text model CLAP to retrieve captions similar to it from a replaceable datastore, which are then used to construct a prompt. Next, we feed this prompt to a GPT-2 decoder and introduce cross-attention layers between the CLAP encoder and GPT-2 to condition the audio for caption generation. Experiments on two benchmark datasets, Clotho and AudioCaps, show that RECAP achieves competitive performance in in-domain settings and significant improvements in out-of-domain settings. Additionally, due to its capability to exploit a large text-captions-only datastore in a _training-free_ fashion, RECAP shows unique capabilities of captioning novel audio events never seen during training and compositional audios with multiple events. To promote research in this space, we also release 150,000+ new weakly labeled captions for AudioSet, AudioCaps, and Clotho1.
Footnote 1: We will release code and data on paper acceptance.
Sreyan Ghosh, Sonal Kumar, Chandra Kiran Reddy Evuru, Ramani Duraiswami, Dinesh Manocha
University of Maryland, College Park, USA
Automated audio captioning, multimodal learning, retrieval-augmented generation
## 1 Introduction
Audio captioning is the fundamental task of describing the contents of an audio sample using natural language. Compared to Automatic Speech Recognition (ASR), which transcribes human speech, audio captioning focuses on describing distinct environmental sounds in the input audio [2, 3]. By bridging the gap between text and audio modalities, audio captioning has found various applications in real-world use cases like environment monitoring, gaming, etc. [4].
In the past, most audio captioning models employed an encoder-decoder architecture using an off-the-shelf pre-trained audio encoder and a language decoder [5, 6]. The audio encoder generates an audio embedding sequence that is used to condition the language decoder for caption generation. However, most of these systems do not perform well on cross-domain settings (trained on one domain and tested on the other), and every use case might need separate training. We hypothesize that the primary reason behind this phenomenon is the shift of occurrence of unique audio events with a domain shift. For example, the AudioCaps benchmark dataset [2] has several audio concepts (e.g., the sound of jazz or an interview) that Clotho, another benchmark dataset, does not. This is also representative of real-world scenarios where not only do audio concepts change from one domain to another (e.g., environmental sounds in a city versus a forest), but new audio concepts also keep emerging within a domain (e.g., new versions of an online game).
**Main Contributions.** We propose RECAP, **RE**trieval-Augmented Audio **CAP**tioning, a simple and scalable solution to the aforementioned problems of domain shifts. Similar to other audio captioning systems in the literature [5, 6, 7], RECAP is built on an audio encoder and a language decoder (GPT-2 in our setting). However, we introduce three novel changes: (1) Instead of employing an audio encoder pre-trained only on audio, we use CLAP [1] as our audio encoder. CLAP is pre-trained on audio-text pairs to learn the correspondence between audio and text by projecting them into a shared latent space. Thus, CLAP hidden state representations are better suited for captioning due to their enhanced linguistic comprehension. (2) We condition the audio for caption generation by introducing new cross-attention layers between CLAP and GPT-2. (3) Finally, beyond just conditioning on audio, we also condition on a custom-constructed prompt during training and inference. We construct the prompt using the top-\(k\) captions most similar to the audio, retrieved from a datastore using CLAP. We provide more details in Section 3.1. RECAP builds on retrieval-augmented generation (RAG) [8], which offers multiple advantages discussed further in Section 3. RECAP is lightweight, fast to train (as we only optimize the cross-attention layers), and can exploit any large text-caption-only datastore in a _training-free_ fashion. We evaluate RECAP on two benchmark datasets, Clotho [3] and AudioCaps [2], and show that while being competitive with the state-of-the-art in in-domain settings, RECAP outperforms all baselines in out-of-domain settings by a large margin. Additionally, RECAP can effectively caption novel audio events never seen during training and can better generate captions for compositional audios with multiple audio events.
Figure 1: We propose **RECAP**, a retrieval-augmented audio captioning model. RECAP can caption novel concepts never before seen in training and improves the captioning of audio with multiple events.
## 2 Related Work
**Automated Audio Captioning.** Current work in audio captioning primarily employs encoder-decoder models where a caption is generated by an autoregressive language decoder conditioned on representations obtained from an audio encoder [5, 6, 7]. The language decoder employed is either pre-trained on web-scale data [5, 6, 7] or learned from scratch [9, 10] during fine-tuning. The work closest to ours is [7], where the authors condition GPT-2 on prompts constructed using retrieved captions. However, the key difference between our work and theirs is that we require only a text-caption-only datastore for RECAP, whereas their system requires both audio and text pairs. We also introduce additional cross-attention layers for audio conditioning. Kim _et al._ [6], the current state-of-the-art system, proposed prefix tuning for audio captioning, where the authors feed a prefix, i.e., a fixed-size embedding sequence, to GPT-2. Other works include synthetic data augmentation techniques [11, 12] and training tricks to improve learning on the source training data [13, 14].
**Retrieval-augmented Generation.** The core idea of retrieval-augmented generation (RAG) is to condition generation on additional data retrieved from an external datastore [8]. RAG has been shown to benefit knowledge-intensive NLP tasks like open-domain question-answering on datasets that require world knowledge and advanced reasoning capabilities [15, 16]. RAG has also proven to be extremely effective in various computer vision tasks, including image captioning [17, 18]. We argue that audio captioning, especially in out-of-domain scenarios, is a knowledge-intensive task as it requires the model to caption novel audio concepts never seen during training, and can benefit from RAG.
## 3 Methodology
**Problem Formulation.** Given a dataset \(\mathcal{D}\) with audio-text pairs (\(\mathcal{A}\),\(\mathcal{T}\)), where each text caption \(t_{i}\in\mathcal{T}\) corresponding to an audio \(a_{i}\in\mathcal{A}\) describes the content or events of the audio, we aim to train a model \(\theta\) to generate \(t_{i}\) from \(a_{i}\). Different from other audio captioning systems, we also assume that the model has access to a datastore \(\mathcal{DS}\) with text captions during inference. These captions come from the training set of \(\mathcal{D}\) or external sources but have no overlap with the validation or test sets of \(\mathcal{D}\).
### Recap
**Overall Architecture.** The overall architecture of RECAP is quite simple and lightweight. RECAP employs CLAP as the audio encoder and GPT-2 as the auto-regressive language decoder. To generate the caption, the language decoder conditions on the output of the audio encoder and an individually crafted prompt for each audio. We discuss how we construct the prompt in the next subsection.
Figure 2: Illustration of **RECAP**. RECAP fine-tunes a GPT-2 LM conditioned on audio representations from the last hidden state of CLAP [1] and a text prompt. The text prompt is constructed using captions most similar to the audio, retrieved from a datastore using CLAP.
For audio conditioning, we first pass the audio samples through the CLAP audio encoder and extract the last hidden state \(A\in\mathbb{R}^{n\times d}\), where \(n\) is the sequence length and \(d\) is the embedding dimension. This embedding is extracted from the penultimate layer of the CLAP audio encoder, right before the final projection. As the audio embeddings and the decoder operate on different vector spaces, we connect them through randomly initialized cross-attention modules at each decoder layer. To train RECAP, we freeze both GPT-2 and CLAP and only train the cross-attention layers, which reduces the overall compute requirements and training time and retains the expressivity and generalization capabilities of GPT-2. RECAP performs well even after training only 5.4\(\%\) of the total parameters because, like other retrieval-augmented models [8, 22, 23], RECAP does not need all information to be stored in its weights, as it has access to external knowledge from a datastore of text. Additionally, CLAP generates an audio embedding that correlates well with its corresponding textual description, thus further lowering training time due to its superior understanding of the audio content.
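The following PyTorch sketch illustrates the kind of cross-attention adapter described above; the hidden sizes, number of heads, and the exact insertion point inside each GPT-2 block are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CrossAttentionAdapter(nn.Module):
    """Schematic residual cross-attention block: text hidden states (queries)
    attend to the CLAP audio hidden-state sequence (keys/values)."""
    def __init__(self, d_text=768, d_audio=768, n_heads=8):
        super().__init__()
        self.norm = nn.LayerNorm(d_text)
        self.attn = nn.MultiheadAttention(d_text, n_heads,
                                          kdim=d_audio, vdim=d_audio,
                                          batch_first=True)

    def forward(self, text_h, audio_h):
        out, _ = self.attn(self.norm(text_h), audio_h, audio_h)
        return text_h + out                  # residual connection

# Only the adapters would be optimized; CLAP and GPT-2 stay frozen, e.g.:
# for p in gpt2.parameters(): p.requires_grad_(False)
adapter = CrossAttentionAdapter()
text_h = torch.randn(2, 20, 768)             # (batch, text_len, d_text)
audio_h = torch.randn(2, 32, 768)            # (batch, audio_len, d_audio)
print(adapter(text_h, audio_h).shape)        # torch.Size([2, 20, 768])
```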
**Constructing prompts with Retrieved Captions.** Instead of just conditioning on audio features for captioning, RECAP is also conditioned on a prompt, individually crafted for each audio during training and inference. To construct this prompt, RECAP exploits the CLAP text and audio encoders [1] to retrieve the top-\(k\) captions similar to an audio from a datastore. CLAP encodes audio and text into a shared vector space and has outperformed all prior models on audio-to-text and text-to-audio retrieval, thus making it most suitable for our task. Specifically, for retrieval, we calculate the cosine similarity between the embeddings of the current audio \(a_{i}\) and all the text captions in the datastore \(\mathcal{DS}\), and choose the captions with the highest similarity. Once we have retrieved the top-\(k\) similar captions, we construct a prompt in the following manner: _"Audios similar to this audio sounds like: caption 1, caption 2, \(\cdots\) caption k. This audio sounds like:"_. For retrieval, we naturally ignore the original caption \(t_{i}\) corresponding to \(a_{i}\). RECAP is then trained using the standard cross-entropy loss between the tokens of the predicted caption \(\hat{t}_{i}\) and the ground-truth caption \(t_{i}\).
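A minimal sketch of this retrieval step is given below. The embeddings are assumed to be precomputed with the CLAP audio and text encoders of [1] (the `audio_emb` and `caption_embs` arrays below are random placeholders); the prompt template is the one quoted above.

```python
import numpy as np

def build_prompt(audio_emb, caption_embs, captions, k=4):
    """Retrieve the k captions closest to the audio in the shared CLAP
    space (cosine similarity) and fill in the prompt template."""
    a = audio_emb / np.linalg.norm(audio_emb)
    C = caption_embs / np.linalg.norm(caption_embs, axis=1, keepdims=True)
    top = np.argsort(-(C @ a))[:k]
    retrieved = ", ".join(captions[i] for i in top)
    return (f"Audios similar to this audio sounds like: {retrieved}. "
            "This audio sounds like:")

# Placeholder embeddings standing in for CLAP outputs:
rng = np.random.default_rng(0)
caption_embs = rng.standard_normal((1000, 512))
captions = [f"caption {i}" for i in range(1000)]
audio_emb = rng.standard_normal(512)
print(build_prompt(audio_emb, caption_embs, captions))
```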
## 4 Experiments and Results
**Datasets.** For training and evaluating RECAP, we use either Clotho [3], AudioCaps [2], or a combination of both. Clotho has 3839/1045/1045 unique audios in train/dev/test splits, respectively, with five captions for each audio. AudioCaps has 49,838/495/975 with five captions each only for the train set.
**Baselines.** We compare RECAP with six competitive baselines that are taken from literature. Eren _et al_. [9] and Xu _et al_. [10] train a Gated Recurrent Unit (GRU) for generating captions, conditioned on audio embeddings extracted from an audio encoder. Chen _et al_. [20] replaces the GRU with a transformer decoder, and Mei _et al_. [19] trains an entire encoder-decoder transformer architecture from scratch. Kim _et al_. [6] and Gontier _et al_. [5] use a pre-trained language model, where the former employs GPT-2, and the latter employs BART [24].
**Experimental Setup.** To compare the performance of RECAP, we conduct experiments in three distinct setups: (1) We train and evaluate the model on the same dataset \(\mathcal{D}\); (2) We train the model on a dataset \(\mathcal{D}\) and evaluate the model on a different dataset \(\hat{\mathcal{D}}\); (3) We train the model on a combination of both datasets and evaluate separately on the individual datasets. For (1), the datastore \(\mathcal{DS}\) consists of captions from either the training set of the source dataset \(\mathcal{D}\) or a large curated datastore \(\mathcal{DS}_{large}\). For (2), we use a datastore that has captions from either \(\mathcal{D}\) (\(\mathcal{DS}\)), \(\mathcal{DS}_{large}\), or the other dataset. For (3), we either use a \(\mathcal{DS}\) that has captions from both datasets or use \(\mathcal{DS}_{large}\). We list all the sources of \(\mathcal{DS}_{large}\), with over 600,000 text-only captions, on our GitHub. This includes 100,000+ new weakly labeled captions for the AudioSet strong subset and three new captions for each sample in AudioCaps and Clotho. All these captions were generated using GPT-4 and manually corrected by one expert human annotator. For retrieval-based prompt creation, we use \(k=4\) and retrieve only the top 4 captions from the datastore. It is worth noting that RECAP does not use any additional training or data augmentation tricks. For both AudioCaps and Clotho, we train using the Adam optimizer with a learning rate of \(5\times 10^{-5}\) for 100 epochs and a batch size of 32. We evaluate all our models on the metrics of BLEU, METEOR, ROUGE-L, CIDEr, SPICE, and SPIDEr.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline Training set & Method & BLEU\({}_{1}\) & BLEU\({}_{2}\) & BLEU\({}_{3}\) & BLEU\({}_{4}\) & METEOR & ROUGE\({}_{L}\) & CIDEr & SPICE & SPIDEr \\ \hline \multirow{8}{*}{(1) Clotho} & Mei _et al_. [19] & 0.527 & 0.327 & 0.211 & 0.131 & 0.158 & 0.356 & 0.320 & 0.105 & 0.213 \\ & Gontier _et al_. [5] & 0.506 & 0.318 & 0.210 & 0.134 & 0.148 & 0.338 & 0.278 & 0.092 & 0.185 \\ & Chen _et al_. [20] & 0.534 & 0.343 & 0.230 & 0.151 & 0.160 & 0.356 & 0.346 & 0.108 & 0.227 \\ & Xu _et al_. [10] & 0.556 & 0.363 & 0.242 & 0.159 & 0.169 & 0.368 & 0.377 & 0.115 & 0.246 \\ & Koh _et al_. [21] & 0.551 & 0.369 & 0.252 & **0.168** & 0.165 & 0.373 & 0.380 & 0.111 & 0.246 \\ & Kim _et al_. [6] & 0.560 & 0.376 & 0.253 & 0.160 & 0.170 & 0.378 & 0.392 & 0.118 & **0.255** \\ & RECAP (w/ DS) & 0.563 & 0.381 & **0.257** & 0.165 & **0.179** & 0.383 & 0.398 & 0.122 & 0.214 \\ & RECAP (w/ \(\mathcal{DS}_{large}\)) & **0.582** & **0.384** & **0.257** & 0.166 & 0.177 & **0.395** & **0.411** & **0.125** & 0.224 \\ \hline \multirow{8}{*}{(2) AudioCaps} & Mei _et al_. [19] & 0.294 & 0.146 & 0.080 & 0.043 & 0.096 & 0.239 & 0.117 & 0.050 & 0.084 \\ & Gontier _et al_. [5] & 0.309 & 0.146 & 0.071 & 0.034 & 0.098 & 0.233 & 0.112 & 0.046 & 0.079 \\ & Chen _et al_. [20] & 0.226 & 0.114 & 0.065 & 0.039 & 0.086 & 0.228 & 0.109 & 0.042 & 0.076 \\ \cline{1-1} & Kim _et al_. [6] & 0.342 & 0.195 & 0.115 & 0.065 & 0.112 & 0.276 & 0.192 & 0.074 & 0.133 \\ \cline{1-1} & RECAP (w/ \(\mathcal{DS}_{caps}\)) & 0.339 & 0.193 & 0.109 & 0.068 & 0.110 & 0.276 & 0.195 & 0.084 & 0.137 \\ \cline{1-1} & RECAP (w/ DS) & 0.515 & 0.349 & 0.210 & 0.143 & 0.155 & **0.328** & **0.332** & 0.998 & 0.201 \\ \cline{1-1} & RECAP (w/ \(\mathcal{DS}_{large}\)) & **0.519** & **0.385** & **0.216** & **0.149** & **0.157** & 0.324 & 0.331 & **1.004** & **0.209** \\ \hline \multirow{8}{*}{(3) Clotho \& Chen _et al_. [19] & 0.516 & 0.318 & 0.204 & 0.127 & 0.157 & 0.351 & 0.313 & 0.105 & 0.209 \\ \cline{1-1} & Gontier _et al_. [5] & 0.461 & 0.282 & 0.182 & 0.117 & 0.136 & 0.318 & 0.251 & 0.083 & 0.167 \\ \cline{1-1} & Chen _et al_. [20] & 0.516 & 0.325 & 0.215 & 0.141 & 0.153 & 0.350 & 0.314 & 0.102 & 0.208 \\ \cline{1-1} AudioCaps & Kim _et al_. [6] & 0.539 & 0.346 & 0.227 & 0.142 & 0.159 & 0.366 & 0.319 & 0.111 & 0.215 \\ \cline{1-1} & RECAP (w/ \(\mathcal{DS}_{large}\)) & 0.547 & **0.361** & **0.238** & 0.149 & **0.167** & 0.379 & 0.322 & **0.116** & **0.222** \\ \cline{1-1} & RECAP (w/ \(\mathcal{DS}_{large}\)) & **0.549** & 0.360 & **0.238** & **0.150** & 0.166 & **0.381** & **0.323** & **0.116** & 0.221 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Evaluation on Clotho. Each method is trained on three different settings and tested on the Clotho dataset. For evaluation, we use a datastore that has captions from the training set (\(\mathcal{DS}\)), AudioCaps (\(\mathcal{DS}_{caps}\)), or a large external dataset (\(\mathcal{DS}_{large}\)).
**Results Analysis.** Table 3 compares RECAP with Kim_et al_. [6] (SOTA) on compositional instances from Clotho (**1.**) and AudioCaps (**4.**) test set. While SOTA was able to caption only one audio event, due to conditioning on a prompt constructed from diverse retrieved captions, RECAP captures multiple. We also compared with a model trained on AudioCaps and inferred on a Clotho test instance with an audio event never seen during training (**2.**), and vice-versa (**3.**). By being conditioned on in-domain prompts, RECAP can caption these instances effectively.
## 5 Conclusion and Future Work
We present RECAP, a novel audio captioning system based on retrieval-augmented generation. While being competitive with state-of-the-art methods on benchmark datasets, RECAP outperforms SOTA by a huge margin on out-of-domain settings and shows unique capabilities of captioning novel audio events and compositional audios with two or more events. Additionally, RECAP is cheap to train and can exploit a replaceable text-caption-only datastore in a _training-free_ fashion to further push performance. As part of future work, we would like to explore advanced techniques for efficient retrieval and build better audio-text models.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline Training set & Method & BLEU\({}_{1}\) & BLEU\({}_{2}\) & BLEU\({}_{3}\) & BLEU\({}_{4}\) & METEOR & ROUGE\({}_{L}\) & CIDEr & SPICE & SPIDEr \\ \hline \multirow{8}{*}{(1) AudioCaps} & Mei _et al_. [19] & 0.647 & 0.488 & 0.356 & 0.252 & 0.222 & 0.468 & 0.679 & 0.160 & 0.420 \\ & Gontier _et al_. [5] & 0.699 & 0.523 & 0.380 & 0.266 & 0.241 & 0.493 & **0.753** & 0.176 & 0.465 \\ & Chen _et al_. [20] & 0.550 & 0.385 & 0.264 & 0.178 & 0.173 & 0.390 & 0.443 & 0.117 & 0.280 \\ & Eren _et al_. [9] & 0.710 & 0.490 & 0.380 & 0.230 & **0.290** & **0.590** & 0.750 & - & - \\ & Kim _et al_. [6] & 0.713 & 0.552 & 0.421 & 0.309 & 0.240 & 0.503 & 0.733 & 0.177 & 0.455 \\ & RECAP (w/ \(\mathcal{DS}\)) & 0.721 & **0.559** & **0.428** & **0.316** & 0.252 & 0.521 & 0.750 & 0.183 & **0.472** \\ & RECAP (w/ \(\mathcal{DS}_{large}\)) & **0.722** & 0.557 & **0.428** & 0.313 & 0.256 & 0.525 & 0.751 & **0.186** & 0.471 \\ \hline \multirow{8}{*}{(2) Clotho} & Mei _et al_. [19] & 0.415 & 0.219 & 0.121 & 0.063 & 0.134 & 0.303 & 0.149 & 0.066 & 0.107 \\ & Gontier _et al_. [5] & 0.425 & 0.223 & 0.124 & 0.061 & 0.128 & 0.298 & 0.147 & 0.060 & 0.104 \\ & Chen _et al_. [20] & 0.365 & 0.170 & 0.091 & 0.048 & 0.110 & 0.273 & 0.083 & 0.049 & 0.066 \\ & Kim _et al_. [6] & 0.449 & 0.266 & 0.157 & 0.084 & 0.144 & 0.330 & 0.211 & 0.083 & 0.147 \\ \cline{1-1} & RECAP (w/ \(\mathcal{DS}_{clotho}\)) & 0.427 & 0.224 & 0.148 & 0.065 & 0.112 & 0.281 & 0.191 & 0.078 & 0.136 \\ & RECAP (w/ \(\mathcal{DS}\)) & 0.501 & **0.326** & **0.211** & 0.104 & 0.164 & **0.357** & 0.359 & **0.116** & 0.198 \\ & RECAP (w/ \(\mathcal{DS}_{large}\)) & **0.507** & 0.321 & 0.206 & **0.108** & **0.169** & **0.357** & **0.362** & 0.111 & **0.204** \\ \hline \multirow{8}{*}{(3) Clotho \&} & Mei _et al_. [19] & 0.682 & 0.507 & 0.369 & 0.266 & 0.238 & 0.488 & 0.701 & 0.166 & 0.434 \\ & Gontier _et al_. [5] & 0.635 & 0.461 & 0.322 & 0.219 & 0.208 & 0.450 & 0.612 & 0.153 & 0.383 \\ \cline{1-1} & Chen _et al_. [20] & 0.489 & 0.292 & 0.178 & 0.106 & 0.152 & 0.346 & 0.265 & 0.093 & 0.179 \\ \cline{1-1} & Kim _et al_. [6] & 0.708 & 0.547 & 0.402 & 0.283 & 0.238 & 0.499 & 0.710 & 0.167 & 0.438 \\ \cline{1-1} & RECAP (w/ \(\mathcal{DS}\)) & **0.728** & **0.563** & **0.425** & 0.317 & 0.252 & **0.529** & **0.764** & 0.187 & **0.469** \\ \cline{1-1} & RECAP (w/ \(\mathcal{DS}_{large}\)) & 0.725 & 0.561 & 0.424 & **0.319** & **0.256** & **0.529** & 0.761 & **0.190** & **0.469** \\ \hline \end{tabular}
\end{table}
Table 2: Evaluation on AudioCaps. Each method is trained in three different settings and tested on the AudioCaps dataset. For evaluation, we use a datastore that has captions from the training set (\(\mathcal{DS}\)), Clotho (\(\mathcal{DS}_{clotho}\)), or a large external dataset (\(\mathcal{DS}_{large}\)).
\begin{table}
\begin{tabular}{l|l} \hline \hline
\multirow{4}{*}{**Ground Truth**} & 1: an engine roars in the background while pieces of metal are being dropped in. \\
 & 2: a moving vehicle has some metal containers in it clinking against each other. \\
 & 3: nature sounds with a strong container. \\
 & 4: a vehicle driving as a man and woman are talking and laughing. \\ \hline
\multirow{4}{*}{**SOTA**} & 1: a bell is ringing and a bell rings. \\
 & 2: rain falling on a surface. \\
 & 3: people are talking and laughing with a man speaking in the background. \\
 & 4: a person is talking in the background. \\ \hline
\multirow{4}{*}{**RECAP**} & 1: a person is using a chisel to cut wood and a car passes by. \\
 & 2: water splashes while a car drives by in the rain. \\
 & 3: several vehicles move and a beep goes off. \\
 & 4: an adult male is speaking, and a motor vehicle engine is running. \\ \hline \end{tabular}
\end{table}
Table 3: Comparing RECAP in 4 challenging settings. |
2305.19728 | Flop connections between minimal models for corank 1 foliations over
threefolds | In 2007 Kawamata proved that two different minimal models can be connected by
a sequence of flops. The aim of this paper is to show that the same holds true
for 2 foliated minimal models descending from a common 3-fold pair equipped
with a F-dlt foliation of corank 1. | Dongchen Jiao, Pascale Voegtli | 2023-05-31T10:44:19Z | http://arxiv.org/abs/2305.19728v1 | # Flop connections between minimal models for corank 1 foliations over threefolds
###### Abstract
In 2007 Kawamata proved that two different minimal models can be connected by a sequence of flops. The aim of this paper is to show that the same holds true for two foliated minimal models descending from a common 3-fold pair equipped with an F-dlt foliation of corank 1.
## 1 Introduction
In recent years the understanding of the fundamental birational geometry of foliations, especially on 3-folds, has been promoted by several groundbreaking works. In [3] most parts of the classical MMP have been extended to 3-fold pairs equipped with a mildly singular corank 1 foliation. In particular, the existence of log-flips has been established.
The successful establishment of a foliated analogue of the classical MMP in low dimensions naturally raises the question whether classical results closely related to the MMP find natural generalizations for foliated pairs.
One such classical result one might strive to carry over to foliations is the well-known theorem of Kawamata [5], stating that two minimal models with terminal singularities are related by a sequence of flops.
Building upon the findings of the authors in [3], the aim of this paper is to generalize the results of [5] to foliated 3-folds equipped with a corank 1 foliation with F-dlt singularities.
In the proof of the main theorem (see below), we closely follow the line of reasoning found in [5] and deviate only where adaptations or modifications seem inevitable.
Concretely, in the present publication the following theorem is proven (for the precise definitions we refer to Section 2):
**Theorem 1.1**.: _Let \((Y_{1},\mathcal{F}_{1})\) and \((Y_{2},\mathcal{F}_{2})\) be two foliated minimal models descending from a common foliated 3-fold pair \((X,\mathcal{F})\). We assume that \(X\) is klt and \(\mathbb{Q}\)-factorial._
_Assume there is a birational map \(\alpha:Y_{1}\dashrightarrow Y_{2}\), such that \(K_{\mathcal{F}_{1}}\) and \(K_{\mathcal{F}_{2}}\) are big. Then \(\alpha\) is composed of a sequence of flops._
In detail, there exist an effective \(\mathbb{Q}\)-divisor A on \(Y_{1}\) such that \((\mathcal{F}_{1},A)\) is F-dlt and a factorization
\[(Y_{1},\mathcal{F}_{1},A)=(X_{0},\mathcal{G}_{0},A_{0})\dashrightarrow(X_{1}, \mathcal{G}_{1},A_{1})\dashrightarrow...\dashrightarrow(X_{r},\mathcal{G} _{r},A_{r})=(Y_{2},\mathcal{F}_{2},A^{{}^{\prime}}) \tag{1}\]
satisfying
1. \(\beta_{i}:X_{i-1}\dashrightarrow X_{i}\) is a flip associated to a \((K_{\mathcal{G}_{i-1}}+A_{i-1})\)-negative extremal ray, where \(A_{i}\) is the strict transform of \(A\) on \(X_{i}\).
2. \(\beta_{i}\) is crepant for \(K_{\mathcal{G}_{i-1}}\) in the sense that pull-backs of \(K_{\mathcal{G}_{i-1}}\) and \(K_{\mathcal{G}_{i}}\) coincide on a common resolution.
3. the flipping contractions \(\beta_{i}\) are \(K_{\mathcal{G}_{i}}\)-trivial.
## Acknowledgements
We would like to express our deep gratitude to the PhD-advisor of the second author, Paolo Cascini, for the initialization of the project and his indispensable guidance throughout its further evolvement. Furthermore, are both authors indebted to Calum Spicer for his extended explanations, invaluable suggestions and careful revision of the first draft of this publication.
The first author is supported by EPSRC DTP studentship (EP/V520196/1) Brunel University London and would like to show appreciation to his supervisor Anne-Sophie Kaloghiros for useful discussion and invaluable suggestions.
The second author would like to thank the LSGNT for laying the mathematical foundations of this work by providing a stimulating and supporting working environment. The second author was supported by the Engineering and Physical Sciences Research Council [EP/S021590/1]. The EPSRC Centre for Doctoral Training in Geometry and Number Theory (The London School of Geometry and Number Theory), University College London.
## 2 Preliminaries
### Basic definitions
Throughout, we work over the complex numbers and by 3-fold or variety we mean a complex analytic space of dimension 3 if not specified otherwise.
**Definition 2.1**.: _Let \(X\) be a normal variety. A foliation \(\mathcal{F}\) on \(X\) is a coherent subsheaf of \(\mathcal{T}_{X}\), such that_
* \(\mathcal{F}\) _is saturated, i.e._ \(\mathcal{T}_{X}/\mathcal{F}\) _is torsion free and_
* \(\mathcal{F}\) _is closed under Lie bracket._
**Remark**.: \(\mathcal{F}\) _is always locally free on the smooth locus \(X_{0}\) of \(X\)._
**Definition 2.2**.: _Suppose \(\mathcal{F}\) is a rank \(r\) foliation on a normal variety \(X\). Notice that there exists an open embedding \(j:X_{0}\hookrightarrow X\) such that \(X_{0}\) is smooth, \(\operatorname{codim}(X\backslash X_{0})\geq 2\) and \(\mathcal{F}\) is locally free on \(X_{0}\), so \(\wedge^{r}\mathcal{F}\) is an invertible sheaf on \(X_{0}\). We define the canonical divisor of \(\mathcal{F}\) to be any divisor \(K_{\mathcal{F}}\) on \(X\) such that \(\mathcal{O}_{X}(-K_{\mathcal{F}})\cong j_{*}(\wedge^{r}\mathcal{F})\)._
**Definition 2.3**.: _By the notation \((X,\mathcal{F})\) we mean a variety \(X\) equipped with a corank 1 foliation \(\mathcal{F}\)._
**Definition 2.4**.: _A triple \((X,D,\mathcal{F})\) consists of a foliation \(\mathcal{F}\) on a variety X and an effective \(\mathbb{R}\)-divisor \(D\) such that \(K_{\mathcal{F}}+D\) is \(\mathbb{R}\)-Cartier._
For the definition of a foliated minimal model we adopt the one found in [[3], Section 10]. For the reader's convenience, we restate it here:
**Definition 2.5**.: _A minimal model of \(\mathcal{F}\) is a \(K_{\mathcal{F}}\)-negative birational map \(f:X\dashrightarrow X^{\prime}\) such that if \(\mathcal{F}^{\prime}\) is the transformed foliation on \(X^{\prime}\), then_
1. \(X^{\prime}\) _is_ \(\mathbb{Q}\)_-factorial and klt_
2. \(\mathcal{F}^{\prime}\) _is F-dlt and_ \(K_{\mathcal{F}^{\prime}}\) _is nef._
We now recall the definitions of foliation singularities as well as the notions of invariance, tangency rsp. transversality of subvarieties:
**Definition 2.6**.: _Let X be a normal variety and \(\mathcal{F}\) a rank r foliation on X. A subvariety \(S\subset X\) is called \(\mathcal{F}\)-**invariant** if for any open subset \(U\subset X\) and any section \(\partial\in H^{0}(U,\mathcal{F})\) we have:_
\[\partial(I_{S\cap U})\subset I_{S\cap U} \tag{2}\]
_where \(I_{S\cap U}\) denotes the ideal sheaf of \(S\cap U\) in U. If \(\Delta\subset X\) is a prime divisor, one defines \(\epsilon(\Delta)=-1\) if \(\Delta\) is invariant in the above sense and \(\epsilon(\Delta)=0\) otherwise._
**Definition 2.7**.: _Let X be a normal projective variety equipped with a non-dicritical corank 1 foliation \(\mathcal{F}\)._
_We call a subvariety \(W\subset X\)**tangent** to \(\mathcal{F}\) if for any birational morphism \(\pi:\tilde{X}\to X\) and any divisor E on \(\tilde{X}\) such that \(E\) dominates \(W\), we have that E is \(\tilde{\mathcal{F}}\)-invariant, where \(\tilde{\mathcal{F}}\) denotes the pulled back foliation on \(\tilde{X}\)._
_Otherwise, we call \(W\subset X\) transverse to \(\mathcal{F}\)._
**Definition 2.8**.: _For a birational morphism \(\pi:\tilde{X}\to X\) and a foliated pair \((\mathcal{F},\Delta)\) on \(X\), let \(\tilde{\mathcal{F}}\) be the pulled back foliation on \(\tilde{X}\) and \(\tilde{\Delta}\) the strict transform of \(\Delta\) on \(\tilde{X}\). We then write:_
\[K_{\tilde{\mathcal{F}}}+\tilde{\Delta}=\pi^{*}(K_{\mathcal{F}}+\Delta)+\Sigma a (E,\mathcal{F},\Delta)E \tag{3}\]
_where \(\pi_{*}K_{\tilde{\mathcal{F}}}=K_{\mathcal{F}}\) and the sum runs over all prime exceptional divisors on \(\tilde{X}\). The rational numbers \(a(E,\mathcal{F},\Delta)\) are called discrepancies._
_Given a normal variety X and a foliated pair \((\mathcal{F},\Delta)\) on X we call \((\mathcal{F},\Delta)\) terminal (resp. canonical, log-canonical) if \(a(E,\mathcal{F},\Delta)>0\) (resp. \(\geq 0\), \(\geq-\epsilon(E)\)) for every exceptional prime divisor E._
_For a not necessarily closed point \(P\in X\) we say \((\mathcal{F},\Delta)\) is terminal (resp. canonical, log-canonical) at P if for all birational morphisms \(\pi:\tilde{X}\to X\) and any \(\pi\)-exceptional divisor E on \(\tilde{X}\) whose center lies in the Zariski closure \(\bar{P}\) of P, the discrepancy of E is \(>0\) (resp. \(\geq 0\), \(\geq-\epsilon(E)\))._
There are two further important concepts related to singularities that will be needed in the following: for the reader's convenience we recall here the definition of F-dlt singularities and the notion of log smoothness.
**Definition 2.9**.: _Let X be a normal variety and \(\mathcal{F}\) a corank 1 foliation on X. Let \(A\) be a \(\mathbb{Q}\)-divisor such that \((K_{\mathcal{F}}+A)\) is \(\mathbb{Q}\)-Cartier._
_We call \((\mathcal{F},A)\) foliated divisorial log terminal (F-dlt) if_
1. _Each irreducible component of_ \(A\) _is generically transverse to the foliation_ \(\mathcal{F}\) _and has coefficients at most one_
2. _There is a foliated log resolution \(\pi:Y\to X\) of \((\mathcal{F},A)\) which only extracts divisors E of discrepancy \(a(E,\mathcal{F},A)>-\epsilon(E)\)._
**Definition 2.10**.: _Given a germ \(p\in X\) with a foliation \(\mathcal{F}\) such that \(p\) is a singular point of \(\mathcal{F}\) we call a (formal) hypersurface germ \(p\in S\) a (formal) separatrix if it is invariant under \(\mathcal{F}\)._
**Definition 2.11**.: _Given \((X,\mathcal{F})\) as in the previous definition, we say that \((\mathcal{F},A)\) is foliated log smooth if:_
1. \((X,A)\) _is log smooth_
2. \(\mathcal{F}\) _has simple singularities_
3. _If S denotes the support of non-_\(\mathcal{F}\)_-invariant components of A,_ \(p\in S\) _is a closed point and_ \(\Sigma_{1},...,\Sigma_{k}\) _are_ \(\mathcal{F}\)_-invariant divisors passing through_ \(p\)_, then_ \(\Sigma_{1}\cup...\cup\Sigma_{k}\) _is a normal crossings divisor at p._
For the precise definition of simple singularities, the interested reader may again wish to consult [[8], Def. 2.13]. Next, we clarify the notions of log flips and flops: we state them in classical terms, i.e., in the way they arise in the realm of the classical MMP. The respective generalizations to the foliated situation are, though, straightforward (simply replace \(K_{X}\) by \(K_{\mathcal{F}}\)). The definitions are taken verbatim from [[6], Def. 3.33 resp. 6.10].
**Definition 2.12**.: _Let X be a normal scheme and D a \(\mathbb{Q}\)-divisor on X such that \(K_{X}+D\) is \(\mathbb{Q}\)-Cartier. A \((K_{X}+D)\)-flipping contraction is a proper birational morphism \(f:X\to Y\) to a normal scheme Y such that \(Exc(f)\) has codimension at least two in X and -\((K_{X}+D)\) is f-ample. A normal scheme \(X^{+}\) together with a proper birational morphism \(f^{+}:X^{+}\to Y\) is called a \((K_{X}+D)\)-flip of f if:_
1. \(K_{X^{+}}+D^{+}\) _is_ \(\mathbb{Q}\)_-Cartier, where_ \(D^{+}\) _is the birational transform of D on_ \(X^{+}\)__
2. \(K_{X^{+}}+D^{+}\) _is_ \(f^{+}\)_-ample_
3. \(Exc(f^{+})\) _has codimension at least two in_ \(X^{+}\)__
**Remark**.: _By an abuse of notation we also call the map \(\phi:X\dashrightarrow X^{+}\) a \((K_{X}+D)\)-flip or simply a D-flip. In an analogous way we can define a \((K_{\mathcal{F}}+D)\)-flip for a foliation \(\mathcal{F}\) on \(X\)._
**Definition 2.13**.: _Let X be a normal scheme with klt singularities. Let \(\mathcal{F}\) be an F-dlt foliation on \(X\). A flopping contraction is a proper birational morphism \(f:X\to Y\) to a normal scheme Y such that \(Exc(f)\) has codimension at least two in X and \(K_{\mathcal{F}}\) is numerically f-trivial. In the above setup, i.e., assuming \(K_{\mathcal{F}}\) numerically f-trivial: if D is a \(\mathbb{Q}\)-Cartier divisor on X such that \(-(K_{\mathcal{F}}+D)\) is f-ample, then the D-flip of f is also called the D-flop._
_We recall that a divisor F on X is called numerically f-trivial for the birational contraction morphism \(f:X\to Y\) if for every curve \(C\) contracted by \(f\), we have \(F\cdot C=0\)._
In the course of the proof of Theorem 1.1 we will make use of the following two results from [3]. We restate them here verbatim but refer to the original publication for proofs of the statements:
**Theorem 2.14** ([3],Theorem 9.4).: _Let X be a normal projective 3-fold with klt singularities. Let \(\mathcal{F}\) be a corank 1 foliation on X. Let \(\Delta\) be a \(\mathbb{Q}\)-divisor such that \((\mathcal{F},\Delta)\) is a F-dlt pair. Let \(A\geq 0\) and \(B\geq 0\) be \(\mathbb{Q}\)-divisors such that \(\Delta=A+B\) and A ample. Assume \(K_{\mathcal{F}}+\Delta\) is nef._
_Then \(K_{\mathcal{F}}+\Delta\) is semi-ample._
**Theorem 2.15** ([3], Lemma 3.24).: _Let X be a normal projective 3-fold, \(\mathcal{F}\) a corank 1 foliation on X. Let \((\mathcal{F},\Delta)\) be an F-dlt pair such that \(\lfloor\Delta\rfloor=0\) and let A be an ample \(\mathbb{Q}\)-divisor on X._
_Then there is an effective \(\mathbb{Q}\)-divisor \(A^{{}^{\prime}}\sim_{Q}A\) such that:_
1. \((\mathcal{F},\Delta+A^{\prime})\) _is also F-dlt_
2. \(\lfloor\Delta+A^{\prime}\rfloor=0\)__
3. _The support of_ \(A^{\prime}\) _does not contain any log canonical center of_ \((\mathcal{F},\Delta)\)__
## 3 Proof of the Theorem 1.1
Sticking to the notation in the statement of Theorem 1.1, we first show that the birational map \(\alpha\) relating the two foliated minimal models is a small map, i.e., the codimension of the exceptional locus is at least 2.
We start by setting up the notation: Let \((Y_{1},\mathcal{F}_{1})\) and \((Y_{2},\mathcal{F}_{2})\) be two foliated minimal models of a common log-canonical foliated threefold pair \((X,\mathcal{F})\), such
that \(X\) is \(\mathbb{Q}\)-factorial with klt singularities. Denote by \(\alpha:(Y_{1},\mathcal{F}_{1})\dashrightarrow(Y_{2},\mathcal{F}_{2})\) the map connecting the \(2\) minimal models and by \(\alpha_{i}:(X,\mathcal{F})\dashrightarrow(Y_{i},\mathcal{F}_{i})\) the sequence of MMP-steps run through in order to obtain the respective minimal models.
We recall that the steps of the foliated MMP are \(K_{\mathcal{F}}\)-negative and hence so are the maps \(\alpha_{i}\) defined above. Thus there is a common log resolution \(W\) of \(X\), \(Y_{1}\) and \(Y_{2}\), with induced morphisms \(p:W\to X\) and \(q_{i}:W\to Y_{i}\), such that:
\[p^{*}K_{\mathcal{F}}=q_{i}^{*}K_{\mathcal{F}_{i}}+E_{i},\qquad i=1,2, \tag{4}\]
where \(E_{i}\geq 0\) is \(q_{i}\)-exceptional and \(p_{*}^{-1}\operatorname{Exc}(\alpha_{i})\subset\operatorname{supp}E_{i}\).
**Lemma 3.1**.: _With the above notation, we have that \(\alpha\) is small._
Proof.: Suppose \(E_{1}\neq E_{2}\) then without loss of generality we assume that \(\overline{E_{1}}:=E_{1}-\min\{E_{1},E_{2}\}>0\), \(\overline{E_{2}}:=E_{2}-\min\{E_{1},E_{2}\}\geq 0\). According to the negativity lemma we can find a curve \(C\subseteq\text{Supp}\ \overline{E_{1}}\) such that \(C\) is not contained in Supp \(\overline{E_{2}}\) and \(C\cdot\overline{E_{1}}<0\). Then we get
\[q_{2}^{*}(K_{\mathcal{F}_{2}})=q_{1}^{*}(K_{\mathcal{F}_{1}})+E_{1}-E_{2}=q_{ 1}^{*}(K_{\mathcal{F}_{1}})+\overline{E_{1}}-\overline{E_{2}}\]
and
\[q_{2}^{*}(K_{\mathcal{F}_{2}})\cdot C=q_{1}^{*}(K_{\mathcal{F}_{1}})\cdot C+( \overline{E_{1}}-\overline{E_{2}})\cdot C<0\]
contradicting the nefness of \(K_{\mathcal{F}_{2}}\).
Hence we conclude that \(\overline{E_{1}}=\overline{E_{2}}\) which implies that \(q_{1}\) and \(q_{2}\) contract the same divisors, thus \(\alpha\) is small.
The overall strategy of Kawamata in the analogous classical statement, i.e., for minimal models of terminal varieties instead of foliated minimal models as in the present paper, is to choose an ample divisor \(L^{\prime}\) on the minimal model \(Y_{2}\) and consider its strict transform \(L\) on the minimal model \(Y_{1}\). As klt-ness is an open property for varieties, \((Y_{1},lL)\) is still klt for a small number \(l\), and Kawamata can thus assume that the divisor \(K_{Y_{1}}+lL\) is not nef, because otherwise \(\mathbb{Q}\)-factoriality of \(Y_{2}\) and the classical base point free theorem ([6], Theorem 3.3) imply that \(\alpha\) is an isomorphism. This allows Kawamata to run a \((K_{Y_{1}}+lL)\)-MMP on \(Y_{1}\) to finally reach his conclusions.
In the foliated case treated here, although there is likewise an analogue of the classical basepoint-free theorem for corank 1 foliations on 3-folds (see Section 2), its invocation is not as straightforward as in the original proof. The main difficulty arises from the requirement in the foliated version of the basepoint-free theorem that \((\mathcal{F}_{1},lL)\) must be F-dlt. A priori it is not immediate that this condition can be satisfied.
The core of the present paper thus consists in the demonstration that a careful choice of the ample divisor \(L^{\prime}\) on \(Y_{2}\) guarantees that \((\mathcal{F}_{1},lL)\) becomes F-dlt.
Invoking (2.15), we know that we can find an ample \(L^{\prime}\) on \(Y_{2}\) and a sufficiently small real number \(l\) such that \((\mathcal{F}_{2},lL^{\prime})\) is again F-dlt. The next few lemmata in the present paper lay the ground for the proof that \(L^{\prime}\) can be chosen such that its strict transform \(L\) on \(Y_{1}\) also makes \((\mathcal{F}_{1},lL)\) F-dlt.
**Remark**.: _Notice that the Bertini-type theorem 2.15 does not directly apply to \((\mathcal{F}_{1},lL)\) as \(L\) is no longer ample._
### Exceptional curves are foliated-trivial
In this section we are going to prove the following proposition:
**Proposition 3.2**.: _For any curve \(C\) lying in the exceptional locus of \(\alpha\), we have \(K_{\mathcal{F}_{1}}\cdot C=0\)._
We will prove this in two steps:
**Lemma 3.3**.: _Suppose \(K_{\mathcal{F}_{1}}\) is big and nef and \(K_{\mathcal{F}_{1}}\cdot C>0\), then there exists \(0<\epsilon\ll 1\), such that_
\[C\nsubseteq\mathbf{B}_{+}(K_{\mathcal{F}_{1}}+\epsilon L)\]
Proof.: Fix an ample divisor \(A\) on \(Y_{1}\). Since \(K_{\mathcal{F}_{1}}\) is nef and \(K_{\mathcal{F}_{1}}\cdot C>0\), according to [[1], Theorem 1.3] we have
\[C\nsubseteq\mathbf{B}_{+}(K_{\mathcal{F}_{1}})\]
i.e. for any fixed \(n\gg 0\), we have
\[C\nsubseteq\mathbf{B}(K_{\mathcal{F}_{1}}-\frac{1}{n}A)\]
Then we can find \(0\leq H\sim_{\mathbb{Q}}K_{\mathcal{F}_{1}}-\frac{1}{n}A\) such that \(C\nsubseteq H\). We thus have
\[K_{\mathcal{F}_{1}}+\epsilon L=K_{\mathcal{F}_{1}}-\frac{1}{n}A+\frac{1}{n}A+ \epsilon L\sim_{\mathbb{Q}}H+(\frac{1}{n}A+\epsilon L)\]
Since \(A\) is ample, for \(0<\epsilon\ll 1\) the divisor \(\frac{1}{n}A+\epsilon L\) is also ample, and so is \(\frac{1}{n}A+\epsilon L-\delta A\) for some \(0<\delta\ll 1\); so we may find an effective \(\mathbb{Q}\)-divisor \(H_{0}\sim_{\mathbb{Q}}K_{\mathcal{F}_{1}}+\epsilon L-\delta A\) such that \(C\nsubseteq H_{0}\). In particular \(C\nsubseteq\mathbf{B}(K_{\mathcal{F}_{1}}+\epsilon L-\delta A)\), and hence \(C\nsubseteq\mathbf{B}_{+}(K_{\mathcal{F}_{1}}+\epsilon L)\).
**Lemma 3.4**.: _There exists an ample divisor \(L^{\prime}\) such that \(\operatorname{Exc}(\alpha)=\mathbf{B}_{+}(K_{\mathcal{F}_{1}}+\epsilon L)\) for \(L=\alpha_{*}^{-1}L^{\prime}\) and \(\epsilon\) from the above proposition._
Proof.: First we choose an ample divisor \(L^{\prime}\) on \(Y_{2}\) with \(0<\epsilon\ll 1,\epsilon\in\mathbb{Q}\) such that \(C\nsubseteq{\mathbf{B}_{+}(K_{\mathcal{F}_{1}}+\epsilon L)}\).
According to [[2], Theorem A], we have
\[\mathrm{Exc}(\Phi_{m})=\mathbf{B}_{+}(K_{\mathcal{F}_{1}}+\epsilon L)\]
where \(\Phi_{m}:Y_{1}\dashrightarrow\mathbb{P}(H^{0}(Y_{1},m(K_{\mathcal{F}_{1}}+\epsilon L)))\) is the map induced by the complete linear system \(|m(K_{\mathcal{F}_{1}}+\epsilon L)|\) on \(Y_{1}\), with \(m\gg 0\) sufficiently divisible.
However, since the \(\mathbb{Q}\)-divisor \(K_{\mathcal{F}_{2}}+\epsilon L^{\prime}\) is ample, we assume that \(n(K_{\mathcal{F}_{2}}+\epsilon L^{\prime})\) is very ample and since \(\alpha\) is small, \(\alpha\) is the rational map induced by the complete linear system
\[|n(K_{\mathcal{F}_{1}}+\epsilon L)|=\alpha_{*}^{-1}|n(K_{\mathcal{F}_{2}}+ \epsilon L^{\prime})|\]
hence \(\Phi_{mn}=p\circ\alpha\) where \(p\) is the twisted embedding \(H^{0}(Y_{1},n(K_{\mathcal{F}_{1}}+\epsilon L))\hookrightarrow H^{0}(Y_{1},mn (K_{\mathcal{F}_{1}}+\epsilon L))\). So \(\mathrm{Exc}(\alpha)=\mathrm{Exc}(\Phi_{mn})\).
Now we combine Lemma 3.3 and Lemma 3.4: if a curve \(C\subseteq\operatorname{Exc}(\alpha)=\mathbf{B}_{+}(K_{\mathcal{F}_{1}}+\epsilon L)\) satisfied \(K_{\mathcal{F}_{1}}\cdot C>0\), Lemma 3.3 would yield a contradiction; since \(K_{\mathcal{F}_{1}}\) is nef, this forces \(K_{\mathcal{F}_{1}}\cdot C=0\), so Proposition 3.2 holds.
### An exceptional curve is not an lcc for the foliation
**Lemma 3.5**.: _Notation as above. There is no curve contained in_
\[Exc(\alpha):=\{x\in Y_{1}|\alpha\text{ is not an isomorphism around }x\}\]
_which is a log canonical center for the foliation \(\mathcal{F}_{1}\)._
Proof.: Assume for the sake of contradiction that there is a curve \(C\subseteq\mathrm{Exc}(\alpha)\) whose generic point is a log canonical foliation singularity. By [[8], Definition 2.24], passing to a foliated log resolution \(W\), there is a divisor \(E\) such that \(a(E,\mathcal{F}_{1})=-\epsilon(E)\). Notice that the \(K_{\mathcal{F}}\)-negativity of the map \(\alpha_{1}\) gives rise to the following equation:
\[p^{*}K_{\mathcal{F}}=q_{1}^{*}K_{\mathcal{F}_{1}}+E_{1} \tag{5}\]
such that \(p_{*}^{-1}\mathrm{Exc}(\alpha_{1})\subset\mathrm{Supp}E_{1}\) and \(E_{1}\geq 0\).
First notice that as \(C\subset\mathrm{Exc}(\alpha)\) we know that any exceptional divisor on the log resolution \(W\) is either \(\alpha_{1}\) or \(\alpha_{2}\)-exceptional.
We can distinguish two scenarios. Either \(E\) descends to \(X\) as a divisor, or \(center_{X}(E)\) is contained in the flipping locus of some flip arising in the decomposition of \(\alpha_{2}\) into flips and divisorial contractions.
In the first case, \(E\) descends to a divisor \(E_{X}\) on \(X\) with the property that \(\alpha_{1}(E_{X})=C\) and which is of discrepancy \(a(E,F_{1},\Delta_{1})=-\epsilon(E_{X})\) as the restriction of \(p\) to \(E\) in this case is an isomorphism and hence preserves discrepancies.
On the other hand, from equation (5) we infer that \(p_{*}^{-1}(E_{X})=E\subset E_{1}\). This is however impossible as \(a(E,\mathcal{F}_{1})=-\epsilon(E)\leq 0\) whereas \(E_{1}>0\) by construction.
Consequently, we may assume that \(D:=\operatorname{center}_{X}(E)\) is not divisorial. But then \(C\) is a flipped curve for the map \(\alpha_{2}\) (as \(\alpha_{1}\) is an isomorphism around the generic point of \(C\) in this case, so that \(C\) cannot be a log canonical center by ([3], Lemma 2.7)). The preceding comment in brackets, though, suggests that \(X\) and \(Y_{1}\) are locally isomorphic around the curve \(C\), which, together with the local nature of flips, implies that the flip of \(D\) in the course of the MMP defined by \(\alpha_{2}\) could also be carried out on \(Y_{1}\), contradicting the fact that \(Y_{1}\) is minimal. Hence the claim.
### The foliation is terminal along the curve
**Lemma 3.6**.: _A curve \(C\) in \(\operatorname{Exc}(\alpha)\) satisfying \(K_{\mathcal{F}}\cdot C=0\) is tangent to the foliation \(\mathcal{F}\)._
Proof.: As \(K_{\mathcal{F}}\) is big there exist an ample divisor \(A\) and an effective divisor E such that \(K_{\mathcal{F}}\sim_{\mathbb{Q}}A+E\). By assumption we furthermore have \(0=K_{\mathcal{F}}\cdot C=(A+E)\cdot C\), which implies that \(E\cdot C<0\) due to the ampleness of \(A\). Let \(L\) be a component of E such that \(L\cdot C<0\), i.e., \(C\) is contained in \(\operatorname{Supp}(L)\). By Lemma 3.5 we know that \(C\) is not an lcc for \(\mathcal{F}\) and hence we can find \(\epsilon>0\) such that \(C\) is an lcc for \((\mathcal{F},\epsilon L)\). Notice that by construction we have \((K_{\mathcal{F}}+\epsilon L)\cdot C<0\). We can now invoke [[8], Thm 4.5] to conclude that \(C\) is tangent, as claimed.
Lemma (3.5) proved that \(C\) itself is not a log canonical center of the foliation \(\mathcal{F}\), we next want to show that in addition, there are no \(0\)-dimensional log canonical centers located along \(C\). Together with the above demonstrated tangency of \(C\) this will allow us to conclude that the foliation \(\mathcal{F}\) is terminal along \(C\).
**Lemma 3.7**.: _Let \(C\) be a tangent curve contained in \(\operatorname{Exc}(\alpha)\), then \(C\) does not contain any lc center of \(\mathcal{F}\)._
Proof.: As \(C\) is tangent by assumption, we can invoke ([8], Thm 5.11) to deduce that there is a germ of an analytic surface \(S\) such that \(C\subset S\) and S is foliation invariant.
For the sake of contradiction we next assume that there is a \(0\)-dimensional log canonical center \(p\) located along \(C\). By [[3], Thm 3.8] we can deduce that \(\mathcal{F}\) is foliated log smooth around \(p\).
Notice that in this situation all log canonical centers are precisely given by the strata of \(\operatorname{Sing}(\mathcal{F})\) around the simple singularity of the foliation.
Furthermore, by the local description of simple singularities (for the definition see [[3], Def. 2.8]), in order for \(p\) to be a log canonical center we must have at least two one-dimensional strata of \(\operatorname{Sing}(\mathcal{F})\), corresponding to the intersection lines of two formal coordinate hyperplane germs with the surface \(S\), intersecting \(C\) in the point \(p\). The scenario is depicted in Figure 1 below, where we labelled the two presumptive one-dimensional log canonical centers meeting \(C\) by \(\xi_{1}\) and \(\xi_{2}\). We next argue that the depicted picture cannot occur.
According to Proposition 3.2 we know that \(K_{\mathcal{F}}\cdot C=0\). Thus, in the described scenario, the following calculation would hold true:
\[0=K_{\mathcal{F}}\cdot C=(K_{S}+\sum(\xi_{i}))\cdot C \tag{6}\]
where the sum runs over the strata of \(\operatorname{Sing}(\mathcal{F})\) intersecting \(C\) (in Figure 1 we only drew two of them for the sake of clarity). Since each such stratum \(\xi_{i}\) is a curve on \(S\) meeting \(C\), we have \(\xi_{i}\cdot C\geq 1\); in case there are at least two of them we conclude that \(-K_{S}\cdot C\geq 2\). As the following reasoning displays, this would however imply that \(C\) deforms, which is impossible as \(C\) by assumption is contained in the one-dimensional \(\operatorname{Exc}(\alpha)\). In order to get the contradiction, we need to assume that \(C\) is rational, an assumption which will be justified right after.
Here we regard \(S\) as an analytic surface, and for simplicity we first assume that \(S\) is smooth; the case when \(S\) is singular will be discussed later.
According to standard deformation theory, an estimate on the dimension of the deformation space of a morphism \(f\) from a curve \(C\) to a surface \(S\) is given as follows:
\[\dim_{[f]}\operatorname{Mor}(C,S)\geq\chi(C,f^{*}T_{S})=-K_{S}\cdot f_{*}C+2(1-g(C)) \tag{7}\]
where the equality follows from Hirzebruch-Riemann-Roch [4].
Assuming thus for the moment that \(g(C)=0\), our previous calculations show that
\[\dim_{[f]}\operatorname{Mor}(C,S)\geq 4\]
Since \(\dim\operatorname{Aut}(C)\leq 3\), \(C\) would deform, a contradiction.
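For concreteness, the estimate unwinds as follows once \(g(C)=0\) and \(-K_{S}\cdot C\geq 2\) are substituted into (7):

\[\dim_{[f]}\operatorname{Mor}(C,S)\;\geq\;-K_{S}\cdot f_{*}C+2(1-g(C))\;\geq\;2+2\;=\;4\;>\;3\;=\;\dim\operatorname{Aut}(\mathbb{P}^{1}),\]

so not every morphism close to \(f\) can be a mere reparametrization of \(f\), and the image of \(C\) must move in \(S\).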
Now we turn back to the general case when \(S\) is singular. In this case we take a minimal resolution \(\varphi:S^{\prime}\to S\). Since \(S\) is not smooth hence not terminal we have
\[K_{S^{\prime}}+E=\varphi^{*}K_{S}\]
for some effective exceptional divisor \(E\) on \(S^{\prime}\). According to the simple singularity condition we have that \(S\) is smooth at the generic point of \(C\). So if we take \(C^{\prime}=\varphi_{*}^{-1}C\) we have
\[K_{S^{\prime}}\cdot C^{\prime}\leq K_{S}\cdot C\leq-2\]
Then the argument (7) still works.
We are then left to justify the assumption that \(C\) is of arithmetic genus \(0\).
In order to verify this claim it suffices to show that its normalization is a smooth rational curve. Using equation (6) and taking advantage of the projection formula as well as generalized adjunction results (see [[7], chapter 4]) we obtain the following sequence of equalities
\[\deg(\nu)\,(K_{\mathcal{F}}-\sum\xi_{i})|_{C}+\tilde{C}^{2}=K_{S}\cdot\tilde{C}+\tilde{C}^{2}\sim_{\mathbb{Q}}K_{\tilde{C}}+\operatorname{Diff}_{\tilde{C}} \tag{8}\]
where \(\nu:\tilde{C}\mapsto C\) denotes the normalization and \(\operatorname{Diff}_{\tilde{C}}\) the different of the adjunction.
Now notice that the normal bundle of \(\tilde{C}\), given by \(\tilde{C}^{2}\) has non-positive degree as \(C\) does not deform. Thus, the whole left hand side of the \(\mathbb{Q}\)-linear equivalence above is of non-positive degree and in turn so is the right hand side. Noticing that \(\operatorname{Diff}_{\tilde{C}}\) is effective, we conclude that \(K_{\tilde{C}}\) has non-positive degree which implies that \(\tilde{C}\) and hence \(C\) are genus \(0\) curves.
Summing up all the gained insight about the local situation around the presumptive log canonical centre \(p\) located on \(C\), the only conceivable picture is the one depicted in Figure 2. However, as the log canonical locus of the foliation is given precisely by strata of \(\operatorname{Sing}(\mathcal{F})\), and the codimension of a simple singularity is given by the number of maximal-dimensional strata it is contained in, \(p\), being \(0\)-dimensional, cannot be a log canonical center of \(\mathcal{F}\).
Proof of Theorem 1.1.: Altogether, we have demonstrated that \(\mathcal{F}\) is terminal along any curve \(C\subset\operatorname{Exc}(\alpha)\). We claim that this now allows us to find \(l>0\) and an ample divisor \(L^{\prime}\) on \(Y_{2}\) satisfying the conditions in theorem (2.15) such that the strict transform \(L:=\alpha_{*}^{-1}L^{\prime}\) fulfills the desired condition that \((Y_{1},\mathcal{F}_{1},lL)\) is a F-dlt pair.
To this end, notice that the proof of (2.15) in [[3], Lemma 3.24] essentially boils down to the demonstration of bullet points \((i)-(iv)\) in the notation of the original publication. This in turn implies that the theorem can be applied to \(L\) directly in case one is able to establish these four conditions. We now briefly comment on them individually, justifying our claim:
Notice first that outside the exceptional locus \(\operatorname{Exc}(\alpha)\), \(L\) is isomorphic to \(L^{\prime}\) and the conditions are hence satisfied automatically. Furthermore, there are no log canonical centers located on \(\operatorname{Exc}(\alpha)\), so (i) is established. Point (ii) is automatically satisfied as \(\dim\operatorname{Exc}(\alpha)=1\). For (iii), again, there are no log
Figure 1: No more than \(2\) log canonical centers \(\xi_{1}\) and \(\xi_{2}\) meeting \(C\) in a common point \(p\)
Figure 2: \(1\) lcc meeting \(C\) in a common point \(p\)
canonical centers located on \(\operatorname{Exc}(\alpha)\). Lastly, regarding (iv), notice that, as \(\mathcal{F}\) is terminal along \(C\subset\operatorname{Exc}(\alpha)\) as we have demonstrated, we can find a real number \(l\) small enough such that any exceptional divisor E with center contained in \(\operatorname{Exc}(\alpha)\) satisfies \(a(E,\mathcal{F},lL)>-\epsilon(E)\). We thus conclude by Theorem (2.14) that \(K_{\mathcal{F}_{1}}+lL\) of the F-dlt pair \((\mathcal{F}_{1},lL)\) would be semi-ample in case it was nef. By the same logic as in the original publication by Kawamata this would however force the small map \(\alpha\) to be an isomorphism, and hence this case can be disregarded.
We can thus assume in the sequel that \(K_{\mathcal{F}_{1}}+lL\) as constructed above is not nef and run a \((K_{\mathcal{F}_{1}}+lL)\)-MMP in complete analogy to Kawamata, where it is to be stressed that, by the results of [3], the F-dlt property is preserved under a \(K_{\mathcal{F}}\)-MMP. It is also worth mentioning that this \((K_{\mathcal{F}_{1}}+lL)\)-MMP, which by construction consists of a sequence of flips, is actually \(K_{\mathcal{F}_{1}}\)-trivial, as has been demonstrated above, and need not, as opposed to the classical case in Kawamata's publication, be deduced from a clever application of the cone theorem.
Finally, according to Proposition 3.2, every flip \(\beta_{i}:X_{i-1}\dashrightarrow X_{i}\) is a \(K_{\mathcal{G}_{i-1}}\)-flop, since every curve we contract is \(K_{\mathcal{G}_{i-1}}\)-trivial. This concludes the proof of Theorem 1.1.
|
2309.16916 | ONNXExplainer: an ONNX Based Generic Framework to Explain Neural
Networks Using Shapley Values | Understanding why a neural network model makes certain decisions can be as
important as the inference performance. Various methods have been proposed to
help practitioners explain the prediction of a neural network model, of which
Shapley values are most popular. SHAP package is a leading implementation of
Shapley values to explain neural networks implemented in TensorFlow or PyTorch
but lacks cross-platform support, one-shot deployment and is highly
inefficient. To address these problems, we present the ONNXExplainer, which is
a generic framework to explain neural networks using Shapley values in the ONNX
ecosystem. In ONNXExplainer, we develop its own automatic differentiation and
optimization approach, which not only enables One-Shot Deployment of neural
networks inference and explanations, but also significantly improves the
efficiency to compute explanation with less memory consumption. For fair
comparison purposes, we also implement the same optimization in TensorFlow and
PyTorch and measure its performance against the current state of the art
open-source counterpart, SHAP. Extensive benchmarks demonstrate that the
proposed optimization approach improves the explanation latency of VGG19,
ResNet50, DenseNet201, and EfficientNetB0 by as much as 500%. | Yong Zhao, Runxin He, Nicholas Kersting, Can Liu, Shubham Agrawal, Chiranjeet Chetia, Yu Gu | 2023-09-29T01:07:38Z | http://arxiv.org/abs/2309.16916v2 | # OnNNX Explainer: an ONNX Based Generic Framework to Explain Neural Networks Using Shapley Values
###### Abstract
Understanding why a neural network model makes certain decisions can be as important as the inference performance. Various methods have been proposed to help practitioners explain the prediction of a neural network model, of which Shapley values are most popular. The SHAP package is a leading implementation of Shapley values to explain neural networks implemented in TensorFlow or PyTorch, but it lacks cross-platform support and one-shot deployment and is highly inefficient. To address these problems, we present ONNXExplainer, which is a generic framework to explain neural networks using Shapley values in the ONNX ecosystem. In ONNXExplainer, we develop our own automatic differentiation and optimization approach, which not only enables one-shot deployment of neural network inference and explanations, but also significantly improves the efficiency of computing explanations with less memory consumption. For fair comparison purposes, we also implement the same optimization in TensorFlow and PyTorch and measure its performance against the current state-of-the-art open-source counterpart, SHAP. Extensive benchmarks demonstrate that the proposed optimization approach improves the explanation latency of VGG19, ResNet50, DenseNet201, and EfficientNetB0 by as much as 500%.
## Introduction
Explainable AI (XAI), one of the pillars of the greater Responsible AI programme coming of age, is playing an increasingly vital role in deployments of machine learning models in commercial settings for two major reasons: first because such models are responsible for a growing portion of the automated decisions behind our everyday lives, where at the very least the consumers themselves want explanations of these decisions; and second because of the rising realization that most, if not all, models, in particular Neural Networks (NNs) trained on a sufficiently large corpus, will contain hidden biases [1] that one would like to expose and rectify wherever they drive a model decision, hopefully before impacting consumers. Not surprisingly, a large number of techniques have been proposed over the years to explain model decisions.
One-shot deployment, which saves inference and explanations together as one model, is enabled to simplify on-boarding explainable models on device.
## Related Work
As Machine Learning models are deployed in different industries and gradually playing an increasingly important role for stakeholders to make critical business decisions, the explainability of a model prediction becomes critically important. Many methods have been proposed to explain machine learning models. In general, the model explainers can be divided into two categories: model-agnostic Ribeiro et al. (2016); Strumbelj and Kononenko (2010); Wachter et al. (2017); Mothilal et al. (2020) and model-specific, such as deep learning explainers Shrikumar et al. (2017); Selvaraju et al. (2017); Binder et al. (2016); Sundararajan et al. (2017); Simonyan et al. (2019); Dhamdhere et al. (2018). In SHAP (SHapley Additive explanations Lundberg and Lee (2017)), the authors propose a unified framework to interpret machine learning models based on Shapley values Nowak and Radzik (1994) and SHAP contains both model-agnostic and model-specific explainers.
Model-agnostic explainers, such as LIME Ribeiro et al. (2016) and the kernel explainer in SHAP Lundberg and Lee (2017), perturb the input and train a surrogate model to approximate the model's predictions, obtaining explanations without opening up the black-box model. Among deep learning explainers, backpropagated (BP) gradient-based approaches predominate because they naturally attribute importance scores to each feature in the input. Saliency maps have been used for some time to visualize/interpret images Simonyan et al. (2013); Dabkowski and Gal (2017). Attribution propagation approaches propagate the contributions of all neurons in the network to the input Sundararajan et al. (2017); Montavon et al. (2017); Shrikumar et al. (2017). DeepLIFT Shrikumar et al. (2017) outperforms other BP methods by backpropagating negative relevance to increase class-sensitivity and solve the convergence problem, and it is the only BP method that passes the test in one recent work which theoretically analyzes BP methods Sixt et al. (2020).
In addition to LIME and SHAP, several other open-source Python packages for XAI have been introduced Biecek (2018); Baniecki et al. (2021); Yang et al. (2022); Arya et al. (2019); Nori et al. (2019); captum.ai (2023). Many of them are unifications of existing methods or mostly wrappers of LIME and SHAP with interactive utilities. For instance, OmniXAI Yang et al. (2022) builds model-specific and model-agnostic explainers on top of LIME and SHAP. Captum captum.ai (2023) is a different case, with a PyTorch interface, which integrates several explainers for NNs. However, none of them allow framework interoperability or easy deployment on different hardware in modern inference servers. Our approach proposes to use DeepLIFT Shrikumar et al. (2017) to compute Shapley values for NNs in the ONNX ecosystem, which allows us to combine inference and explanation together as one model file to deploy across various hardware in real time.
## Deep Learning Important FeaTures (DeepLIFT)
In this manuscript we use an approximation to Shapley values based on DeepLIFT, the philosophy of which is to explain the difference in output from some reference output in terms of difference of the input from the corresponding reference input, measuring the target input's importance on the model prediction through back-propagation Shrikumar et al. (2017).
In this section, we introduce the mathematical operators and notations used in this manuscript. For a neural network model, let \(t\) denote the output of a neuron in some intermediate layer and \(x_{0},x_{1},\cdots,x_{n}\) denote the necessary and sufficient inputs to compute \(t\) for that neuron. The difference-from-reference \(\Delta t\) is denoted as \(\Delta t=t-t^{0}\), where \(t^{0}\) is the corresponding output of the neuron for the reference input \(x^{0}{}_{0},x^{0}{}_{1},\cdots,x^{0}{}_{n}\), which is chosen according to domain knowledge and heuristics (for MNIST digit recognition, for example, one could choose an all-black background image of 0's). DeepLIFT assigns contribution scores \(C_{\Delta x_{i}\Delta t}\) to \(\Delta x_{i}\) s.t.,
\[\sum_{i=1}^{n}C_{\Delta x_{i}\Delta t}=\Delta t, \tag{1}\]
where \(C_{\Delta x_{i}\Delta t}\) is the amount of difference-from-reference in \(t\) that is attributed to, or 'blamed' on, the difference-from-reference of \(x_{i}\). The intuition is, as explained in Lundberg and Lee (2017), that of a sort of fast approximation of Shapley values where we examine the effect of 'including' each \(x_{i}\) in place of its reference default \(x^{0}{}_{i}\).
Then the multiplier/derivative is defined as :
\[m_{\Delta x\Delta t}=\frac{C_{\Delta x\Delta t}}{\Delta x}, \tag{2}\]
where \(\Delta x\) is the difference-from-reference in input \(x\) and \(\Delta t\) is the difference-from-reference in output \(t\). Since the contribution of \(\Delta x\) to \(\Delta t\) is divided by the input difference, \(\Delta x\), we can use the multiplier as a discrete version of a partial derivative Shrikumar et al. (2017).
The chain rule for the multiplier can then be defined as the following,
\[m_{\Delta x_{i}\Delta t}=\sum_{j=1}^{n}m_{\Delta x_{i}\Delta y_{j}}m_{\Delta y _{j}\Delta t}, \tag{3}\]
where \(x_{i}\) is a neuron input for layer \(H_{l}\); \(y_{0},y_{1},\cdots,y_{n}\) are the neuron outputs of layer \(H_{l}\) and the neuron inputs for the successor of \(H_{l}\). The analogy to partial derivatives allows us to compute the contributions of the model output w.r.t. the model input via back-propagation. The Shapley values can then be approximated by the average as:
\[\phi\approx Avg(\mathcal{M}*(X-R)), \tag{4}\]
where \(\mathcal{M}\) is the final matrix computed by the multipliers w.r.t. the model input in the back-propagation, \(X\) is the input, and \(R\) is the input's reference. Our focus is how to implement
and accelerate \(\mathcal{M}\) in the ONNX ecosystem for any neural network. In ONNXExplainer, we adjust gradient (multiplier) computation for nonlinear operators (e.g., _Sigmoid_ and _MaxPooling_) and use the original gradients computation for linear operators (e.g., _MatMul_ and _Conv_).
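To make equations (1)-(4) concrete, the following is a minimal NumPy sketch (ours, purely illustrative, not the ONNXExplainer implementation) of the multipliers for a one-layer network \(t=\sigma(Wx)\): the Linear Rule handles the matrix product, the Rescale Rule handles the sigmoid, and the chain rule (3) composes them.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def deeplift_contributions(x, x0, W):
    """Contributions C for t = sigmoid(W @ x) against one reference x0."""
    h, h0 = W @ x, W @ x0                        # pre-activations
    t, t0 = sigmoid(h), sigmoid(h0)
    dh = h - h0
    # Rescale Rule: multiplier of a nonlinearity is delta-output / delta-input,
    # falling back to the local derivative when delta-input vanishes.
    m_sig = np.where(np.abs(dh) > 1e-7, (t - t0) / dh, t * (1.0 - t))
    # Linear Rule: the multiplier of W @ x is W itself; chain rule, eq. (3):
    M = (m_sig[:, None] * W).sum(axis=0)         # m_{dx_i dt}, one entry per input
    C = M * (x - x0)                             # contributions C_{dx_i dt}
    assert np.isclose(C.sum(), (t - t0).sum())   # summation-to-delta, eq. (1)
    return C
```

Averaging such contributions over a set of references yields the approximation in equation (4).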
## Method
ONNX is an open-source standard representing neural networks in order to make it easier for researchers and engineers to move NNs among different deep learning frameworks and computing devices (onnx.ai 2021). We propose ONNXExplainer to explain and interpret NNs in the ONNX ecosystem. As in SHAP (Lundberg and Lee 2017), ONNXExplainer uses DeepLIFT (Shrikumar, Greenside, and Kundaje 2017) to compute the Shapley values as a measure of feature importance for NNs. SHAP depends on the automatic differentiation mechanism in TensorFlow and PyTorch to compute gradients for Shapley values. However, when onboarding deep learning models on a device for inference, current deep learning frameworks will discard gradient information and only keep the forward pass graph. In this scenario, SHAP cannot be saved or called directly from the inference engine, e.g., ONNXruntime, because of the missing dependencies and computation graphs to calculate differentiation and gradients. Instead, ONNXExplainer provides its own automatic differentiation mechanism to compute gradients for NNs so that practitioners can use it to serve their NN models with Shapley values in a production pipeline. To make ONNXExplainer explain a general NN, it has three key components as shown in Figure 1: Neural Network Parser, Gradients/Multipliers Computation, and Automatic Differentiation. Moreover, an optimization approach is provided to improve the inference time by pre-computing intermediate outputs of the forward propagation inside ONNXExplainer.
**Neural Network Parser** As shown in Figure 1, a general NN model is converted to an ONNX format model. The ONNX model contains computation nodes to run inference on the inputs. After feeding the ONNX model for NNs to ONNXExplainer, it first establishes the forward symbolic graph. In the forward symbolic graph, one computation node is linked to other computation nodes because the output of a computation node is always either the input to other computation node(s) or the output of the ONNX model. With that, we can build the backward graph's main structure, whose vertices carry information about the computation nodes1.
Footnote 1: The python style code for the Parser is provided in the Appendix A.
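Since Appendix A is not reproduced in this excerpt, the following is only a sketch, assuming the `onnx` package (the helper names are ours, not the actual parser code), of how such a parser can link computation nodes through producer/consumer maps:

```python
import onnx
from collections import defaultdict

def parse(model_path):
    graph = onnx.load(model_path).graph
    producer, consumers = {}, defaultdict(list)
    for node in graph.node:                       # forward symbolic graph
        for tensor in node.output:
            producer[tensor] = node.name          # tensor -> node that computes it
        for tensor in node.input:
            consumers[tensor].append(node.name)   # tensor -> nodes that consume it
    # Backward graph: reverse every producer -> consumer edge, so each vertex
    # knows which gradient flows it must receive before it can be processed.
    backward = defaultdict(list)
    for node in graph.node:
        for tensor in node.output:
            for nxt in consumers.get(tensor, []):
                backward[nxt].append(node.name)   # gradients flow nxt -> node
    return producer, consumers, backward
```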
**Gradients/Multipliers Computation** Each deep learning framework contains hundreds of operators. We need to implement/define gradients (for linear operators) and multipliers (for nonlinear operators) in the ONNX ecosystem as well. In the current manuscript, we have provided gradients/multipliers computation for more than 25 operators, including _Concat_, _Add_, _Mul_, _MatMul_, _Gemm_, _Sigmoid_, _ReLU_, _Softmax_, _Conv_, _MaxPool_, _AveragePool_, _GlobalAveragePool_, _Transpose_, _BatchNormalization_, and others. When executing the forward symbolic graph, some intermediate outputs needed for the gradient computation might be retained in memory for other operations (Abadi et al. 2016; Paszke et al. 2017, 2019b). In this way, the deep learning frameworks can avoid extra computations when training the NNs. However, those intermediate outputs are usually opaque to the users. After training, current deep learning frameworks in the market will freeze the model and keep only the forward symbolic graph for inference. Under this scenario, we have to implement the gradients/multipliers computation for the inference operators using only the existing forward symbolic graph2. To summarize at a high level, we use the _Linear Rule_ for linear operations to compute gradients and the _Rescale Rule_ and the _RevealCancel Rule_ for nonlinear operations to compute multipliers (Shrikumar, Greenside, and Kundaje 2017).
Footnote 2: Details are provided in the Appendix B.
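As one example of expressing a multiplier with forward-graph tensors only, here is a sketch (ours; the tensor names are hypothetical) that emits ONNX nodes for the Rescale-Rule multiplier of _Sigmoid_, \(m=\Delta y/\Delta x\):

```python
from onnx import helper

def sigmoid_multiplier(x, x_ref, y, y_ref, prefix):
    """Emit nodes computing m = (y - y_ref) / (x - x_ref) elementwise.

    x/y are the forward input/output of the Sigmoid; x_ref/y_ref come from the
    reference pass, so no extra Sigmoid has to run at explanation time. A full
    implementation would also guard the division against x - x_ref ~ 0.
    """
    dx, dy, m = prefix + "_dx", prefix + "_dy", prefix + "_mult"
    nodes = [
        helper.make_node("Sub", [x, x_ref], [dx]),
        helper.make_node("Sub", [y, y_ref], [dy]),
        helper.make_node("Div", [dy, dx], [m]),
    ]
    return nodes, m
```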
**Automatic Differentiation** Automatic differentiation is useful for implementing machine learning algorithms such as back-propagation for training neural networks. PyTorch and TensorFlow both use automatic differentiation. However, when models get on-boarded on device for inference purposes, they are usually frozen as a file containing only the model structure (forward pass) and the corresponding parameters. DeepLIFT needs the back-propagation to compute Shapley values, and SHAP is reliant on _tf.gradients_ for _TensorFlow_ and _torch.autograd.grad_ for _PyTorch_, both of which are not serializable. Hence, we need to build our own automatic differentiation in the ONNX ecosystem.
The differentiation algorithm conducts Depth First Search (DFS) to identify all of the operators in the backward pass
Figure 1: A Schematic ONNXExplainer workflow for NN models, containing Parser, Gradient/Multiplier Computation, Automatic Differentiation (Depth First Search and gradient flows), and inference time optimization (Cache)
Figure 2: Four types of gradient flows.
from the output to the input of the model and sums the partial gradients that each operator contributes. Before introducing how the DFS works, we demonstrate the four types of gradient flows as shown in Figure 2:
* one2one: The one2one type is simple: both incoming and outgoing gradients just have one branch. We multiply the incoming gradients with the local gradients if any to obtain the outgoing gradients. Activation functions are typical operators of this type. If the operator has no local gradients, we just pass the incoming gradients to the successors in the backward pass.
* many2one: The many2one type has multiple flows of incoming gradients but one flow of outgoing gradients. We sum all incoming gradients at first and then multiply this summation with the local gradients if any to obtain the outgoing gradients.
* one2many: The one2many type has one flow of incoming gradients and multiple flows of outgoing gradients. After multiplying the incoming gradients with local gradients if any, we split or assign the outgoing gradients to the successors.
* many2many: The many2many type is the combination of many2one and one2many.
ONNXExplainer uses DFS (Depth-First Search) to reverse the forward symbolic graph to compute Shapley values. The procedure is summarized in Algorithm 1. The DFS algorithm takes the backward graph \(G\) and the first computation node \(N\) as inputs and returns a list of computation nodes. Here the backward graph \(G\) is obtained from the Neural Network Parser and \(N\) is the first computation node in the backward pass. Each vertex in \(G\) contains information to perform DFS, and we use the name of the visiting computation node to get that information. In lines 1-3, we create an empty stack, push \(N\) onto the stack, and mark \(N\) as visited. Line 4 defines the loss \(y_{x}-y_{r}\) to compute gradients w.r.t. the model input. The rest of the algorithm details how to traverse all computation nodes in the backward pass. Function \(F_{grad}\) returns a list of computation nodes \(O\) to compute gradients for the visiting node \(C\) and the incoming gradients \(grad_{in}\) for the next node(s) in line 7. If the neighboring node \(W\) of \(C\) has not been visited and has received all incoming gradient flows, we push \(W\) onto the stack and mark it as visited.
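A compressed Python rendering of this traversal (a sketch with hypothetical helpers, not the verbatim Algorithm 1):

```python
from collections import defaultdict

def reverse_mode_dfs(G, in_deg, start, f_grad):
    """G: backward adjacency (node name -> successor names); in_deg: number of
    gradient flows each node expects; f_grad emits the ONNX gradient nodes for
    a visited node and the gradients it passes on (both helpers hypothetical)."""
    stack, visited, tape = [start], {start}, []
    flows = defaultdict(int)
    incoming = {start: "y_x_minus_y_r"}      # line 4: the loss y_x - y_r
    while stack:
        c = stack.pop()
        grad_nodes, out_grads = f_grad(c, incoming[c])
        tape.extend(grad_nodes)              # gradient ops appended to the graph
        for w in G[c]:
            flows[w] += 1                    # many2one flows are summed in f_grad
            incoming[w] = out_grads[w]
            if w not in visited and flows[w] == in_deg[w]:
                visited.add(w)               # push only once all flows arrived
                stack.append(w)
    return tape
```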
**Automatic Differentiation Acceleration and Computation Graph Simplification** In SHAP, when the reference data is fed to the model, there are redundant operators during the generation of Shapley values. For example, the target point's output is recomputed every time it is compared with a reference point, making SHAP inefficient. Meanwhile, the automatic differentiation acceleration algorithm inside ONNXExplainer optimizes the existing computing approach and simplifies the computation graph used to generate Shapley values. One such optimization and simplification strategy is caching commonly-used intermediate outputs
Figure 4: The diagrams of computing Shapley values using SHAP and our optimization approach for a simple forward symbolic graph with two computation nodes: _MatMul_ and _Sigmoid_. In diagrams, blue nodes are for the forward pass and orange nodes are for the backward pass. For instance, an orange Sigmoid node is the gradient computation of a blue Sigmoid node in the back-propagation. The grey area in each box is the output shape of the computation node. We use 5 reference samples to explain the data in the example.
Figure 3: Inference graph of the demo model.
during the forward pass for backward propagation. Figure 3 shows a demo neural network graph used to explain the details of our optimization and simplification algorithm. This simple forward symbolic graph contains only two nodes: _MatMul_ with weight \(W\) and _Sigmoid_, with their corresponding dimensions listed in Figure 3. Figure 4 shows two diagrams of how our approach and SHAP compute Shapley values for the NN model in Figure 3. SHAP explains the input by iteratively comparing it to the samples inside the reference data set one by one. In the example in Figure 4, we have 5 reference samples \(R\) of size \(5\times 32\) to explain one example \(X\) of size \(1\times 32\). We summarise how our optimization approach simplifies the computation graph of Shapley values compared to the usual SHAP in two respects:
* Forward pass: For SHAP, shown in Sub-figure 4(a) as the blue blocks, inference has to be run on both the target and the reference for each sample inside the reference set to get the final output difference. Thus, for this demo, SHAP has to infer 10 times in total: 5 for the one-time reference and 5 for the target input. On the other hand, as shown in Sub-figure 4(b) as the mirror forward pass, the forward symbolic graph in the optimized approach only infers the target once, then caches the output and broadcasts it for further usage. \(h_{R}\) is the output of _MatMul_ and also the input to _Sigmoid_ for the reference data. \(\hat{y}_{R}\) is the output of _Sigmoid_ for the reference points. Both \(h_{R}\) and \(\hat{y}_{R}\) are ingested in computing the multiplier for _Sigmoid_ when explaining new data on-the-fly.
* Backward pass: This example has one linear operator, _MatMul_, and one nonlinear operator, _Sigmoid_; their optimized gradient computations are described below and illustrated in the sketch after this list.
1. _Sigmoid_: As shown in the shaded areas of Sub-figure 4(a), since SHAP computes the target output 5 times in the preceding forward step, it needs to first split the data, then perform the subtraction, and finally tile the result back in order to pass it to further operators. Meanwhile, as the shaded areas in Sub-figure 4(b) show, the target's output from the forward pass is directly broadcast to the subtraction operator. Thus, due to the redundant computation in the forward pass, SHAP has two more _Splits_ and one more _Tile_ than our optimization approach. Moreover, because the dimension of the subtraction output is reduced from \(10\times 1\) to \(5\times 1\), the _Greater_ in the optimization approach takes half as many floating-point operations as in SHAP. Both SHAP and our approach use the reference's _Sigmoid_ gradients as the incoming gradients for _MatMul_. The difference is that our approach computes these gradients only once, when constructing the backward symbolic graph, whereas SHAP keeps recomputing the same gradients at run-time.
2. _MatMul_: Similar to the _Greater_ operation in _Sigmoid_, the multiplication with the transposed \(W\) in the optimization approach takes half as many floating-point operations as SHAP needs to acquire the gradients.
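To make the cached-reference strategy concrete, the numpy sketch below replays the demo graph: the reference activations \(h_{R}\), \(\hat{y}_{R}\) are computed once and reused, and the Sigmoid multiplier falls back to the analytic gradient when the input difference is tiny (our simplified reading of the _Greater_ node). This is an illustrative approximation, not the actual SHAP or ONNXExplainer code path.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(32, 1))
R = rng.normal(size=(5, 32))                 # 5 reference samples
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

h_R, y_R = R @ W, sigmoid(R @ W)             # cached once, reused for every explanation

def explain(x, eps=1e-6):
    h_x, y_x = x @ W, sigmoid(x @ W)         # target inferred a single time
    dh = h_x - h_R                           # broadcast target against all references
    safe = np.abs(dh) > eps                  # role of the Greater node
    m = np.where(safe, (y_x - y_R) / np.where(safe, dh, 1.0),
                 y_x * (1.0 - y_x))          # rescale multiplier, gradient fallback
    contrib = (x - R) * (m @ W.T)            # chain through MatMul via transposed W
    return contrib.mean(axis=0)              # average attribution over references

phi = explain(rng.normal(size=(1, 32)))      # 32 per-feature attribution scores
```

With 5 references, `dh` has shape \(5\times 1\) rather than \(10\times 1\), mirroring the halved _Greater_ and _MatMul_ workloads discussed above.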
### One-shot Deployment
As shown in Figure 1, one of the major contributions of the proposed ONNXExplainer is its ability to save the forward neural network and the corresponding graph that calculates Shapley values together in a single ONNX file for one-shot deployment. In contrast, current open-source libraries, such as SHAP [17], need to call their own APIs (Application Programming Interfaces) in order to generate Shapley values, which adds an extra step during deployment. Moreover, these libraries depend on other deep learning frameworks, such as TensorFlow or PyTorch, as their computation backend, which makes on-device deployment even more complicated.
## Results
### Experimental Settings
**Dataset.** We use images of size \(3\times 224\times 224\) from 10 classes of the ILSVRC-2012 dataset [13] to compare the explanation time of our optimization approach and SHAP. We use a reference input of all zeros for both the optimization approach and SHAP.
**Neural Networks.** We use four representative models: VGG19 (V19) [18], ResNet50 (R50) [12], DenseNet201 (D201) [14], and EfficientNetB0 (EB0) [15] in our benchmarks. V19 is a neural network with a large number of weights, 19 weighted layers, and no jump layers. R50 is a variant of residual NNs that stacks residual blocks, in which skip connections are jump layers that convert regular networks into residual networks via addition. R50 has 107 weighted layers and 16 "add" jump layers. D201 is the largest DenseNet variant, which concatenates all previous layers with the current layer to form skip connections. It has 33 jump layers. EB0 is a scaling NN with far fewer parameters and faster speed that is used for on-device platforms. It has 25 jump layers: 9 additions and 16 multiplications3. Footnote 3: Details of the NNs and theoretical analysis of computational complexity and memory consumption to explain them can be found in the Appendix C.
**Machine.** We use two machines to perform our benchmarks4.
Footnote 4: All brand names, logos and/or trademarks are the property of their respective owners, are used for identification purposes only, and do not necessarily imply product endorsement or affiliation by the authors and their associated employers or institutions.
Machine A: CPU: AMD EPYC 7513 32-Core Processor; Total Memory: 1056 GB.
Machine B: GPU: V100; CPU: Intel(R) Xeon(R) Gold 6130; Total Memory: 790 GB; Nvidia Driver: 510.108.03; CUDA version: 10.1.
All benchmarks running on both machines use the same library dependencies: TensorFlow 1.15.0, onnxruntime-gpu 1.12.0, onnx 1.12.0, and Torch 1.13.1.
### The Visualization of Explanations from SHAP and the Optimized ONNXExplainer
To evaluate the contribution scores obtained by different explainers, we design the following task: from the test dataset, we randomly select 10 images from each of 10 classes to conduct a user study. We do this by using OPT-ONNX and SHAP (NON-TF) to compute Shapley values to explain the 100 images, respectively. Then, we plot simulation images of the Shapley values only for the class predicted by the four models. We arrange the original images and the simulation images of Shapley values (in randomized order) in a row and send 100 rows of images for each model to users. Table 1 shows the users' agreement on whether the contributions acquired by OPT-ONNX and NON-TF are the same. We observe that all users have a high agreement score (over 99%) that OPT-ONNX and NON-TF explain the images equally well. In some cases, OPT-ONNX does better than NON-TF. Figure 5 shows example images and the corresponding contribution-score simulation images by OPT-ONNX and NON-TF for each model. It can be concluded that it is hard to tell the difference between the two simulation images visually5.
Footnote 5: Numerical comparisons between the explainers can be found in the Appendix D.
### Memory Consumption and Latency Analysis
In this subsection, we use the optimization approach and SHAP described in the previous sections for a detailed memory consumption and latency analysis. SHAP supports explaining NNs implemented in TensorFlow (NON-TF) [1] and PyTorch (NON-PT) [22]. Besides ONNXExplainer (OPT-ONNX), we implement ONNXExplainer with no optimization (NON-ONNX) as well as optimized SHAPs. In this manuscript, we implement the same optimization process (optimized SHAPs) in TensorFlow (OPT-TF) and PyTorch (OPT-PT) as in ONNX. We can then compare the latency for three pairs of explainers within the same frameworks: OPT-ONNX versus NON-ONNX, OPT-TF versus NON-TF, and OPT-PT versus NON-PT. In addition, we use half precision for the four NN models on GPUs in the benchmarks.
**Memory Consumption.** As mentioned earlier, the optimization reduces the memory consumption in the forward pass and, theoretically, in the backward pass too. Non-optimized explainers explain the data one by one, and so do their counterparts in the benchmarks (more details in Appendix C). The largest number of reference images that can be used serves as the measure of memory usage for both optimized and non-optimized explainers. Table 2 shows the largest number of reference images that can be used on a V100 GPU for each explainer. In general, the optimized approach can use many more reference images than its counterparts in both single- and half-precision floating point, except for OPT-TF for V19 in half precision. The more reference samples, the more accurate the explanation. In terms of frameworks, OPT-ONNX for EB0 and V19 and OPT-TF for R50 and D201 gain the largest edge over their non-optimized counterparts in the number of reference images, in both single and half precision. In terms of models, the optimized explainers for EB0 can use more reference images than their counterparts for the other three models, except for
| NN | agreement (%) | ONNX (%) | SHAP (%) |
|---|---|---|---|
| V19 | 100.0 | 0.0 | 0.0 |
| R50 | 99.4 | 0.6 | 0.0 |
| D201 | 99.8 | 0.2 | 0.0 |
| EB0 | 99.6 | 0.4 | 0.0 |

Table 1: User agreement on the closeness of contribution scores between OPT-ONNX and NON-TF. The third column means ONNX is better and the last column means SHAP is better.
| | V19 | R50 | D201 | EB0 |
|---|---|---|---|---|
| OPT-ONNX | **86/166** | **182/362** | **78/158** | **166/255** |
| NON-ONNX | 61/121 | 130/256 | 47/93 | 68/135 |
| OPT-TF | **79**/149 | **157/242** | **60/115** | **154/232** |
| NON-TF | 76/**150** | 81/150 | 34/66 | 78/141 |
| OPT-PT | **97/175** | **112/253** | **72/127** | **114/266** |
| NON-PT | 72/163 | 107/243 | 49/104 | 89/204 |

Table 2: The largest number of reference images (single precision/half precision) that can be used on a V100 GPU [19].
Figure 5: Visualization of original images and their corresponding Shapley value simulation images by OPT-ONNX and SHAP (NON-TF). Each row represents a respective model. The first column shows the images being explained, the second column the simulation images by OPT-ONNX, and the last one the simulation images by SHAP (NON-TF).
OPT-PT for D201 in single precision and OPT-TF for D201 in half precision.
**Latency Analysis.** We infer and explain 100 images in each benchmark. After loading the models, the first few inference requests can be significantly slower at run-time due to deferred initialization and optimizations. We therefore treat the time to explain the first image out of 100 as the cold start (or warmup) time and average the per-image latency over the remaining images for each benchmark.
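A minimal sketch of this measurement protocol with onnxruntime is shown below; the model file name, input shape, and execution provider are placeholders rather than the exact benchmark configuration.

```python
import time
import numpy as np
import onnxruntime as ort

# "explainer.onnx" stands in for the combined forward+Shapley graph
sess = ort.InferenceSession("explainer.onnx", providers=["CPUExecutionProvider"])
name = sess.get_inputs()[0].name

latencies = []
for _ in range(100):
    x = np.random.rand(1, 3, 224, 224).astype(np.float32)
    t0 = time.perf_counter()
    sess.run(None, {name: x})
    latencies.append(time.perf_counter() - t0)

cold_start = latencies[0]                    # deferred initialization + optimizations
avg_latency = float(np.mean(latencies[1:]))  # steady-state per-image latency
```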
We run benchmarks using CPUs on Machine A, and the results are shown in Figure 6. The optimized explainers are much faster at explaining the images than the non-optimized explainers in all settings. Additionally, the speedup obtained by ONNX and TensorFlow is larger than that obtained by PyTorch for all models. Framework-wise, among the optimized explainers, PyTorch is the fastest for all models and TensorFlow is the next fastest.
Figure 7 shows the latency comparisons on a V100 GPU6. It can be observed that the optimized explainers are superior to the non-optimized explainers in most benchmark configurations, with one exception: the cold start time for R50 with TensorFlow. In this exception, OPT-TF spends a little more time than NON-TF explaining R50 during warmup in a few data points. Compared to the benchmarks on CPU, TensorFlow and ONNX both need to warm up the models substantially. In particular, it takes TensorFlow hundreds of seconds to explain the first image for all models. This indicates that substantially warming up the models is needed to guarantee smooth production traffic without delays. In terms of average latency, OPT-TF and OPT-PT perform very closely for all models. OPT-TF achieves the largest acceleration gains among its peers for R50, D201, and EB0, while OPT-ONNX surpasses OPT-TF and OPT-PT in speeding up the explanation for V19.
Footnote 6: Half-precision benchmarks can be found in the Appendix E.
## Conclusion
In this work we propose ONNXExplainer to explain NNs using Shapley values in the ONNX ecosystem. In ONNXExplainer, we build our own automatic differentiation, which enables one-shot deployment of NNs in inference pipelines such as Triton and ONNXRuntime. The optimization of pre-computing outcomes from the reference data removes many redundant computations in explaining NNs. We develop and benchmark optimized explainers and non-optimized explainers in three major deep learning frameworks (ONNX, TensorFlow, and PyTorch) and test and compare the explainers using four typical neural networks: VGG19, ResNet50, DenseNet201, and EfficientNetB0. The benchmarks show that our optimized explainers significantly outperform their counterparts in terms of inference resource usage and explanation latency.
Future work will continue to generalize ONNXExplainer by: 1) supporting gradient/multiplier computation for more operations, such as _Loop_ and _GatherND_; 2) supporting more NN structures, such as bidirectional LSTM/GRU models; 3) further optimizing the gradient computation graph.
## Author Contributions
Conceptualization, methodology, Algorithm, Y.Z.; Experiments, Y.Z., R.H, N.K.; writing-original draft preparation, Y.Z., R.H, N.K., C.L.; writing-review and editing, R.H, Y.Z., N.K., S.A., C.L., C.C., Y.G.; visualization, Y.Z., R.H.
Figure 6: Latency Comparison[14] between Optimization and Non-optimization/SHAP in three Frameworks (ONNX, TensorFlow, and PyTorch) using CPU cores on Machine A.
Figure 7: Latency Comparison[14] between Optimization and Non-optimization/SHAP in three Frameworks (ONNX, TensorFlow, and PyTorch) using GPU on Machine B. We use 45, 30, and 45 reference images for NON-ONNX, NON-TF, and NON-PT for DenseNet201 because of memory limitations. |
2304.00150 | E($3$) Equivariant Graph Neural Networks for Particle-Based Fluid
Mechanics | We contribute to the vastly growing field of machine learning for engineering
systems by demonstrating that equivariant graph neural networks have the
potential to learn more accurate dynamic-interaction models than their
non-equivariant counterparts. We benchmark two well-studied fluid flow systems,
namely the 3D decaying Taylor-Green vortex and the 3D reverse Poiseuille flow,
and compare equivariant graph neural networks to their non-equivariant
counterparts on different performance measures, such as kinetic energy or
Sinkhorn distance. Such measures are typically used in engineering to validate
numerical solvers. Our main findings are that while being rather slow to train
and evaluate, equivariant models learn more physically accurate interactions.
This indicates opportunities for future work towards coarse-grained models for
turbulent flows, and generalization across system dynamics and parameters. | Artur P. Toshev, Gianluca Galletti, Johannes Brandstetter, Stefan Adami, Nikolaus A. Adams | 2023-03-31T21:56:35Z | http://arxiv.org/abs/2304.00150v1 | # E(3) Equivariant Graph Neural Networks for Particle-Based Fluid Mechanics
###### Abstract
We contribute to the vastly growing field of machine learning for engineering systems by demonstrating that equivariant graph neural networks have the potential to learn more accurate dynamic-interaction models than their non-equivariant counterparts. We benchmark two well-studied fluid flow systems, namely the 3D decaying Taylor-Green vortex and the 3D reverse Poiseuille flow, and compare equivariant graph neural networks to their non-equivariant counterparts on different performance measures, such as kinetic energy or Sinkhorn distance. Such measures are typically used in engineering to validate numerical solvers. Our main findings are that while being rather slow to train and evaluate, equivariant models learn more physically accurate interactions. This indicates opportunities for future work towards coarse-grained models for turbulent flows, and generalization across system dynamics and parameters.
## 1 Particle-based fluid mechanics
Navier-Stokes equations (NSE) are omnipresent in fluid mechanics, hydrodynamics or weather modeling. However, for the majority of problems, solutions are analytically intractable, and obtaining accurate predictions necessitates falling back to numerical solution schemes. Those can be split into two categories: grid/mesh-based (Eulerian description) and particle-based (Lagrangian description).
**Smoothed Particle Hydrodynamics.** In this work, we investigate Lagrangian methods, more precisely the Smoothed Particle Hydrodynamics (SPH) approach, which was independently developed by Gingold & Monaghan (1977) and Lucy (1977) to simulate astrophysical systems. Since then, SPH has become established as the preferred approach in various applications ranging from free surfaces such as ocean waves (Violeau & Rogers, 2016), through fluid-structure interaction systems (Zhang et al., 2021), to selective laser melting in additive manufacturing (Weirather et al., 2019).
Figure 1: Velocity magnitude of Taylor-Green vortex (a) and x-velocity of reverse Poiseuille (b).
The main idea behind SPH is to represent the fluid properties at discrete points in space and to use truncated radial interpolation kernel functions to approximate them at arbitrary locations. The kernel functions are used to estimate state statistics which define continuum-scale interactions between particles. The justification for truncating the kernel support is the assumption of local interactions between particles. The resulting discretized equations are then integrated in time using numerical integration techniques like symplectic Euler, by which the particle positions are updated.
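As a concrete illustration of such kernel interpolation, the numpy sketch below estimates the density at each particle using the standard cubic spline kernel with compact support \(2h\); it is a textbook SPH summation under illustrative parameters, not our solver's code.

```python
import numpy as np

def cubic_spline_W(r, h):
    """Standard 3D cubic spline (M4) kernel with compact support 2h."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)
    return sigma * np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                            np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))

def sph_density(pos, mass, h):
    """rho_i = sum_j m_j W(|x_i - x_j|, h): kernel-weighted density estimate."""
    r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    return (mass[None, :] * cubic_spline_W(r, h)).sum(axis=1)

pos = np.random.rand(100, 3)                       # illustrative particle positions
rho = sph_density(pos, mass=np.full(100, 1e-3), h=0.1)
```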
To generate training data, we implemented our own SPH solver based on the transport velocity formulation of Adami et al. (2013), which promises a homogeneous particle distribution over the domain. We then selected two flow cases, both of which are well known in the fluid mechanics community: the 3D laminar Taylor-Green vortex and the 3D reverse Poiseuille flow. We are planning to open-source the datasets in the near future.
**Taylor-Green Vortex.** The Taylor-Green vortex system (TGV, see Figure 1 (a)) with a Reynolds number of \(\text{Re}=100\) is neither laminar nor turbulent, i.e., there is no layering of the flow (typical for laminar flows), but the small scales caused by vortex stretching also do not lead to a fully developed energy cascade (typical for turbulent flows) (Brachet et al., 1984). The TGV has been extensively studied, starting with Taylor & Green (1937) and continuing all the way to Sharma & Sengupta (2019). The TGV system is typically initialized with a velocity field given by
\[u=-\cos(kx)\cos(ky)\cos(kz)\;,\qquad v=\sin(kx)\cos(ky)\cos(kz)\;,\qquad w=0\;, \tag{1}\]
where \(k\) is an integer multiple of \(2\pi\). The TGV datasets used in this work consist of 8/2/2 trajectories for training/validation/testing, where each trajectory comprises 8000 particles. Each trajectory spans 1 s of physical time and was simulated with \(dt=0.001\), resulting in 1000 time steps per trajectory. The ultimate goal would be to learn the dynamics over much larger time steps than those taken by the numerical solver, but with this dataset we just want to demonstrate the applicability of learned approaches to reproducing numerical solver results.
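For reference, initializing particle velocities according to Eq. (1) takes only a few lines; the particle positions and the wavenumber choice below are illustrative.

```python
import numpy as np

k = 2.0 * np.pi                        # wavenumber: an integer multiple of 2*pi
pos = np.random.rand(8000, 3)          # illustrative particle positions in a unit box
x, y, z = pos.T

u = -np.cos(k * x) * np.cos(k * y) * np.cos(k * z)
v =  np.sin(k * x) * np.cos(k * y) * np.cos(k * z)
w = np.zeros_like(x)
vel = np.stack([u, v, w], axis=1)      # initial Taylor-Green velocity field, Eq. (1)
```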
**Reverse Poiseuille Flow.** The Poiseuille flow, i.e., laminar channel flow, is another well-studied flow case in fluid mechanics. However, channel flow requires the treatment of wall-boundary conditions, which is beyond the focus of this work. We therefore consider data obtained from the reverse Poiseuille flow (RPF, see Figure 1 (b)) (Fedosov et al., 2008), which essentially consists of two opposing streams in a fully periodic domain. These streams are exposed to opposite force fields, i.e., the upper and lower halves are accelerated in the negative and positive \(x\) direction, respectively. Because the flow is statistically stationary (the vertical velocity profile has a time-independent mean value), the RPF dataset consists of one long trajectory spanning 120s. The flow field is discretized by 8000 particles and simulated with \(dt=0.001\), followed by sub-sampling every 10th step. Learning to directly predict every 10th state is what we call temporal coarse-graining. The resulting number of training/validation/testing instances is the same as for TGV, namely 8000/2000/2000.
## 2 (Equivariant) graph network-based simulators
We first formalize the task of autoregressively predicting the next state of a Lagrangian fluid mechanics simulation, based on the notation from Sanchez-Gonzalez et al. (2020). Let \(\mathbf{X}^{t}\) denote the state of a particle system at time \(t\). One full trajectory of \(K+1\) steps can be written as \(\mathbf{X}^{t_{0:K}}=(\mathbf{X}^{t_{0}},\dots,\mathbf{X}^{t_{K}})\). Each state \(\mathbf{X}^{t}\) is made up of \(N\) particles, namely \(\mathbf{X}^{t}=(\mathbf{x}^{t}_{1},\mathbf{x}^{t}_{2},\dots,\mathbf{x}^{t}_{N})\), where each \(\mathbf{x}_{i}\) is the state vector of the \(i\)-th particle. However, the inputs to the learned simulator can span multiple time instances. Each node \(\mathbf{x}^{t}_{i}\) can contain node-level information like the current position \(\mathbf{p}^{t}_{i}\) and a time sequence of \(H\) previous velocity vectors \(\hat{\mathbf{p}}_{i}^{t_{k-H:k}}\), as well as global features like the external force field \(\mathbf{f}_{i}\) in the reverse Poiseuille flow. To build the connectivity graph, we use an interaction radius of \(\sim 1.5\) times the average interparticle distance. This results in around 10-20 one-hop neighbors.
**Graph Network-based Simulator.** The Graph Network-based Simulator (GNS) framework (Sanchez-Gonzalez et al., 2020) is one of the most popular learned surrogates for engineering particle-based simulations. The main idea of the GNS model is to use the established encoder-processor-decoder architecture (Battaglia et al., 2018) with a processor that stacks several message passing layers (Gilmer et al., 2017). One major strength of the GNS model lies in its simplicity, given that all its building blocks are simple MLPs. However, the performance of GNS when predicting
long trajectories strongly depends on choosing the right amount of Gaussian noise to perturb input data. Additionally, GNS and other non-equivariant models are less data-efficient (Batzner et al., 2022). For these reasons, we implement and tune GNS as a comparison baseline, and use it as an inspiration for which setup, features, and hyperparameters to use for equivariant models.
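A single message-passing step of such an encoder-processor-decoder design can be sketched as follows; the layer widths, the sum aggregation, and the residual updates follow common GNS practice rather than one specific implementation.

```python
import torch
import torch.nn as nn

class GNSLayer(nn.Module):
    """One message-passing step: edge MLP, sum aggregation, node MLP."""
    def __init__(self, dim=128):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(),
                                      nn.Linear(dim, dim))
        self.node_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                      nn.Linear(dim, dim))

    def forward(self, h, e, senders, receivers):
        # message per edge from sender features, receiver features, edge features
        m = self.edge_mlp(torch.cat([h[senders], h[receivers], e], dim=-1))
        # sum-aggregate incoming messages at each receiver node
        agg = torch.zeros_like(h).index_add_(0, receivers, m)
        # residual updates of node and edge latents
        return h + self.node_mlp(torch.cat([h, agg], dim=-1)), e + m
```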
**Steerable E(3)-equivariant Graph Neural Network.** Steerable E(3)-equivariant Graph Neural Networks (SEGNNs) (Brandstetter et al., 2022) are an instance of E(3)-equivariant GNNs, i.e., GNNs that are equivariant with respect to isometries of the Euclidean space (rotations, translations, and reflections). Most E(3)-equivariant GNNs that are tailored towards molecular property prediction tasks (Batzner et al., 2022; Batatia et al., 2022) restrict the parametrization of the Clebsch-Gordan tensor products to an MLP-parameterized embedding of pairwise distances. In contrast, SEGNNs use general steerable node and edge attributes which can incorporate any kind of physical quantity, and directly learn the weights of the Clebsch-Gordan tensor product. Indeed, extending methods such as NequIP (Batzner et al., 2022) towards general physical features would result in something akin to SEGNN.
Steerable attributes strongly impact the Clebsch-Gordan tensor products, and thus finding physically meaningful edge and node attributes is crucial for good performance. In particular, we chose edge attributes \(\hat{a}_{ij}=V(\mathbf{p}_{ij})\), where \(V(\cdot)\) is the spherical harmonic embedding and \(\mathbf{p}_{ij}=\mathbf{p}_{i}-\mathbf{p}_{j}\) are the pairwise relative positions. We further choose node attributes \(\hat{a}_{i}=V(\tilde{\mathbf{p}}_{i})+\sum_{k\in\mathcal{N}(i)}\hat{a}_{ik}\), where \(\tilde{\mathbf{p}}_{i}\) are the averaged historical velocities and \(\mathcal{N}(i)\) is the \(i\)-neighborhood. As for node and edge features, we found that concatenated historical velocities for the nodes and pairwise displacements for the edges best capture the Navier-Stokes dynamics.
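The attribute construction described above can be sketched as follows; for brevity, the embedding \(V(\cdot)\) is truncated at degree one, where the degree-1 spherical harmonics are proportional to the normalized direction vector (a full implementation would use, e.g., e3nn's spherical harmonics up to higher degrees).

```python
import numpy as np

def V(vec, eps=1e-12):
    """Degree-0/1 spherical harmonic embedding: constant plus unit direction."""
    n = np.linalg.norm(vec, axis=-1, keepdims=True)
    return np.concatenate([np.ones_like(n), vec / np.maximum(n, eps)], axis=-1)

def segnn_attributes(pos, mean_vel, senders, receivers):
    a_edge = V(pos[senders] - pos[receivers])   # edge attributes from p_ij
    a_node = V(mean_vel)                        # V of averaged historical velocities
    np.add.at(a_node, receivers, a_edge)        # plus sum of neighboring edge attrs
    return a_node, a_edge
```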
For training SEGNNs, we verified that adding Gaussian noise to the inputs (Sanchez-Gonzalez et al., 2020) indeed significantly improves performance. We further found that explicitly concatenating the external force vector \(\mathbf{f}_{i}\) to the node features boosts performance in the RPF case. However, adding \(\mathbf{f}_{i}\) to the node attributes, \(\hat{a}_{i}=V(\mathbf{f}_{i})+V(\tilde{\mathbf{p}}_{i})+\sum_{k\in\mathcal{N}(i)}\hat{a}_{ik}\), does not improve performance.
Other models, like EGNN by Satorras et al. (2021), achieve equivariance by working with invariant messages, but they do not allow the same flexibility in terms of features. On a slightly more distant note, there has been a rapid rise in physics-informed machine learning (Raissi et al., 2019) and operator learning (Li et al., 2021), where functions or surrogates are learned in an Eulerian (grid-based) way. SEGNN is a sound choice for Lagrangian fluid mechanics problems since it is designed to work directly with vectorial information and particles.
## 3 Results
The task we train on is the autoregressive prediction of the accelerations \(\ddot{\mathbf{p}}\) given the current position \(\mathbf{p}_{i}\) and \(H=5\) past velocities of the particles. We measured the performance of the GNS and SEGNN models in three respects when evaluating on the test dataset: (i) the _mean-squared error_ (MSE) of particle positions, \(\text{MSE}_{p}\), when rolling out a trajectory over 100 time steps (1 physical second for both flow cases); this is also the validation loss during training. (ii) The _Sinkhorn distance_, an optimal transport distance measure between particle distributions; lower values indicate that the particle distribution is closer to the reference one. (iii) The _kinetic energy_ \(E_{kin}\) (\(=0.5mv^{2}\)) as a global measure of physical behavior. Performance comparisons are summarized in Table 1. The GNS and SEGNN models have roughly the same number of parameters for Taylor-Green (both have 5 layers and 128-dim features), whereas for reverse Poiseuille SEGNN has three times fewer parameters than GNS (SEGNN has 64-dim features). Looking at the timings in Table 1, equivariant models of similar size are one order of magnitude slower than non-equivariant ones. This is a known result and is related to how efficiently the Clebsch-Gordan tensor product can be implemented on accelerators like GPUs.
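The kinetic energy and Sinkhorn measures are straightforward to reproduce; below is a minimal numpy sketch assuming uniform particle masses and weights, where the entropic regularization \(\varepsilon\) and the iteration count are illustrative choices.

```python
import numpy as np

def kinetic_energy(vel, m=1.0):
    return 0.5 * m * np.sum(vel ** 2)            # E_kin = 0.5 m v^2 over all particles

def sinkhorn_distance(x, y, eps=1e-2, n_iter=200):
    """Entropic OT cost between two particle clouds with uniform weights."""
    C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)  # squared distances
    K = np.exp(-C / eps)                         # Gibbs kernel
    a = np.full(len(x), 1.0 / len(x))
    b = np.full(len(y), 1.0 / len(y))
    u = np.ones_like(a)
    for _ in range(n_iter):                      # Sinkhorn-Knopp scaling iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]              # transport plan
    return float(np.sum(P * C))
```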
**Taylor-Green Vortex.** One of the major challenges of the Taylor-Green dataset is that the input and output scales vary throughout a trajectory by up to one order of magnitude. Consequently, the first time steps carry a larger weight in the loss even after data normalization. Figure 2 (a) summarizes the most important performance properties of the Taylor-Green vortex experiment. In general, both models match the ground-truth kinetic energy well, but GNS drifts away from the reference SPH curve earlier. Both learned solvers seem to preserve larger system velocities, resulting in higher \(E_{kin}\). The rollout MSE for this case matches the behavior seen in \(E_{kin}\).
**Reverse Poiseuille Flow.** The challenge of the reverse Poiseuille case lies in the different velocity scales between the main flow direction (\(x\)-axis) and the \(y\) and \(z\) components of the velocity. Although such unbalanced velocities are used as inputs, the target accelerations in the \(x\)-, \(y\)-, and \(z\)-directions all follow similar distributions. This, combined with temporal coarsening, makes the problem sensitive to input deviations. Figure 2 (b) shows that SEGNN reproduces the particle distribution almost perfectly, whereas GNS shows signs of particle clustering, resulting in a larger Sinkhorn distance. Interestingly, the shear layers between the inverted flows (around the planes \(y=\{0,1,2\}\)) seem to have the largest deviation from the ground truth, which could be the source of the clusters, see Figure 3.
## 4 Future Work
In this work, we demonstrate that equivariant models are well suited to capture the underlying physical properties of particle-based fluid mechanics systems. Natural future steps are enforcing physical behaviors such as homogeneous particle distributions, and including recent developments for neural PDE training into the training procedure of Sanchez-Gonzalez et al. (2020). The latter include, e.g., the push-forward trick and temporal bundling (Brandstetter et al., 2022). One major weakness of recursively applied solvers, which these strategies aim to mitigate, is error accumulation, which in most cases leads to out-of-distribution states and, consequently, unphysical behavior after several rollout steps. We conjecture that, together with such extensions, equivariant models offer a promising direction to tackle some of the long-standing problems in fluid mechanics, such as learning coarse-grained representations of turbulent flow problems, e.g., Taylor-Green (Brachet et al., 1984), or learning the multi-resolution dynamics of NSE problems (Hu et al., 2017).
| | SEGNN (TGV) | GNS (TGV) | SPH (TGV) | SEGNN (RPF) | GNS (RPF) | SPH (RPF) |
|---|---|---|---|---|---|---|
| \(\text{MSE}_{\mathbf{p}}\) | 7.7e-5 | 1.3e-4 | - | 7.7e-3 | 8.0e-3 | - |
| \(\text{MSE}_{E_{kin}}\) | 5.3e-5 | 1.3e-4 | - | 2.8e-1 | 3.0e-1 | - |
| Sinkhorn | 1.3e-7 | 1.1e-7 | - | 7.8e-8 | 1.9e-6 | - |
| Time [ms] | 290 | 32 | 9.7 | 180 | 33 | 110 |
| \(\#\) params | 720k | 630k | - | 180k | 630k | - |

Table 1: Performance measures on the Taylor-Green vortex (TGV) and reverse Poiseuille flow (RPF). The Sinkhorn distance is averaged over test rollouts; the inference time is obtained for one rollout step of 8000 particles.
Figure 2: Taylor-Green vortex (a) and reverse Poiseuille (b) performance evolution. |
2309.09822 | Is the Computing Continuum Already Here? | The computing continuum, a novel paradigm that extends beyond the current
silos of cloud and edge computing, can enable the seamless and dynamic
deployment of applications across diverse infrastructures. By utilizing the
cloud-native features and scalability of Kubernetes, this concept promotes
deployment transparency, communication transparency, and resource availability
transparency. Key features of this paradigm include intent-driven policies, a
decentralized architecture, multi-ownership, and a fluid topology. Integral to
the computing continuum are the building blocks of dynamic discovery and
peering, hierarchical resource continuum, resource and service reflection,
network continuum, and storage and data continuum. The implementation of these
principles allows organizations to foster an efficient, dynamic, and seamless
computing environment, thereby facilitating the deployment of complex
distributed applications across varying infrastructures. | Jacopo Marino, Fulvio Risso | 2023-09-18T14:44:52Z | http://arxiv.org/abs/2309.09822v1 | # Is the Computing Continuum Already Here?
###### Abstract
The computing continuum, a novel paradigm that extends beyond the current silos of cloud and edge computing, can enable the seamless and dynamic deployment of applications across diverse infrastructures. By utilizing the cloud-native features and scalability of Kubernetes, this concept promotes _deployment_ transparency, _communication_ transparency, and _resource availability_ transparency. Key features of this paradigm include intent-driven policies, a decentralized architecture, multi-ownership, and a fluid topology. Integral to the computing continuum are the building blocks of dynamic discovery and peering, hierarchical resource continuum, resource and service reflection, network continuum, and storage and data continuum. The implementation of these principles allows organizations to foster an efficient, dynamic, and seamless computing environment, thereby facilitating the deployment of complex distributed applications across varying infrastructures.
cloud computing, computing continuum
## I Introduction
Despite the huge amount of research aiming at the creation of the computing continuum (to limit our view to relevant ongoing European projects, see [1, 2, 3, 4, 5]), this concept apparently is already here. Indeed, many applications already operate seamlessly across a wide spectrum of cloud and edge infrastructures, not to mention user devices running a multitude of applications, offering the expected service and appearing to interact without perceivable constraints. However, this paper argues that the present reality does not fully align with our final expectations and desired outcomes for the computing continuum, mainly due to a lack of _transparency_. In fact, the current implementation of the computing continuum requires distinct variants and/or configurations for each running service, which take into account the actual location of each component.
Indeed, despite the development of universal interfaces for application orchestration, existing industry practice perceives each infrastructure (i.e., datacenter clusters, but also user devices) as an isolated silo, resulting in a fragmented perception of available resources [6, 7]. This fragmentation hampers the seamless deployment of fully distributed applications due to various influencing factors such as resiliency, performance considerations, latency issues over Wide-Area Networks (WANs) [8, 9, 10], hybrid-cloud and multi-cloud approaches [11], and non-technical factors like legal regulations and physical isolation policies. This lack of integration significantly limits workload dynamism and inhibits the deployment of complex applications with specific requirements [12, 13, 14]. In the prevalent use of Kubernetes as the orchestration platform, users are burdened with the placement of pods and services, and must deal with different interfaces depending on whether the service is local to the cluster or hosted externally and exposed via a public endpoint. This introduces complexity and inconsistency, as users are required to be aware of the endpoints of the services they wish to use, thereby involving them in the intricacies of the infrastructure. For a more seamless and user-friendly experience, an abstraction layer should be implemented to facilitate the deployment of microservices across the continuum with the same ease as operating within a single Kubernetes cluster.
We argue that the computing continuum should include the ideas presented in [15], in which clusters are extended to create a (borderless) virtual environment that overcomes the presented issues, the so-called _liquid computing_. The key principles of the envisioned computing continuum include _deployment_ transparency, _communication_ transparency, and _resource availability_ transparency. To implement it, we acknowledge Kubernetes as the reference technology due to its cloud-native features and scalability. By embracing this vision, organizations can achieve a more efficient and dynamic environment, standardizing the communications between services and avoiding different deployments for different endpoints.
## II The liquid computing pillars
Liquid computing builds upon the principles of cloud and edge computing, transcending cluster boundaries to offer a flexible computing environment. Compared to the current computing continuum, this approach provides _deployment_, _communication_, and _resource availability_ transparency, thus promoting optimal utilization of available resources.
In terms of _deployment_ transparency, liquid computing presents an enhanced strategy for deploying multi-microservice applications. Unlike traditional configurations that restrict subsequent location modifications without initiating a new deployment phase, liquid computing allows microservices to start in the most suitable location based on service requirements and infrastructure status. This dynamic nature simplifies user operations, with the system autonomously determining the optimal service location. The first building block supporting this approach is dynamic discovery and
peering, which promotes decentralized governance and facilitates resource and service consumption relationships between clusters. This flexible, decentralized system enables dynamic resource allocation and usage, eliminating the need for manual coordination and paving the way for a more agile computing environment. The second building block is the hierarchical resource continuum. Upon establishing peering relationships, local clusters gain logical access to remote resource slices, exposed and available for application offloading. This method accommodates the limited knowledge propagation and multi-ownership constraints inherent in a computing continuum environment. The abstraction of peered clusters into virtual big nodes facilitates resource optimization, simplifies the borrowing of computational capacity, and permits application deployment on user-defined slices of the infrastructure.
_Communication_ transparency is another crucial aspect of the computing continuum. Given the varying nature of microservice communication within Kubernetes clusters, the need for different primitives and explicit configurations may arise. However, liquid computing mitigates this complexity by proposing a virtual cluster that spans across multiple real clusters, enabling seamless microservice interaction regardless of location. This process decreases the need for intricate configurations, thus reducing potential errors. In this context, the network continuum serves as the third essential building block. In a liquid computing environment with applications spread across multiple clusters, this mechanism glues together separate network fabrics into a virtual network continuum, hence facilitating transparent communication between microservices, irrespective of their physical location. The network fabric transparently handles potential configuration conflicts (e.g., overlapping IP addressing spaces), ensuring seamless and efficient communication across the liquid computing environment.
Liquid computing also ensures _resource availability_ transparency. In contrast to traditional scenarios where microservices can only utilize resources within their respective clusters, liquid computing enables a service to access all resources within a virtual domain, irrespective of their physical location. As such, a service can dynamically scale based on resource availability within the entire virtual infrastructure, eradicating traditional cluster boundaries. The final two building blocks of this approach are resource and service reflection, and the storage and data continuum. Resource and service reflection ensures that control plane information is present in both local and remote clusters, facilitating the execution of workloads. On the other hand, the storage and data continuum addresses the needs of stateful workloads by providing persistent storage and data proximity, leveraging the concept of data gravity to minimize network traffic, reduce latency, and ensure regulatory compliance. Together, these elements promote seamless workload execution and optimal data management across the liquid computing environment.
The outlined building blocks jointly establish a robust basis for actualizing liquid computing. Although the primary focus herein is Kubernetes, the fundamental principles and strategies are universally applicable across varying orchestration platforms. This adaptability facilitates the integration of liquid computing principles within a broad range of computing environments, thereby accommodating the distinct needs and traits of various deployment scenarios.
## III Research challenges and Conclusion
The journey towards the full realization of the computing continuum is well underway, as evidenced by the advent and ongoing development of liquid computing. Embracing a dynamic, seamless, and transparent paradigm, the features outlined above embody the key characteristics of the computing continuum. By fostering deployment, communication, and resource availability transparency, liquid computing ensures a flexible, efficient, and integrated computing environment, weaving together previously isolated computing infrastructures.
Despite these significant strides, it is imperative to note that the actualization of a complete computing continuum is an ongoing endeavor. Several challenges lie on the horizon that warrant further research. Decentralization of control calls for innovative strategies to maintain stability and security in an ever-evolving, fluid landscape. The efficacy of communication and resource availability transparency can be further bolstered by delving deeper into optimization strategies.
Moreover, the dynamic nature of data and services traversing jurisdictions within a computing continuum environment amplifies the need for robust strategies to maintain data privacy and sovereignty. In addition, we have to address the non-trivial problems of cross-domain authentication and authorization, mutual trust, and run-time security at large (e.g., the hosting cluster should be protected from potential malicious actions carried out by guest workloads, and the guest workload should be protected from malicious actions, such as code tampering or stealing, potentially carried out by the hosting infrastructure). Simultaneously, the handling of stateful workloads across multiple clusters within the storage and data continuum presents another rich area of exploration.
Furthermore, the reach of the computing continuum extends beyond Kubernetes. Consideration and integration with other orchestration platforms offer potential opportunities for enhancing the universality of the computing continuum, adding new layers of functionality and flexibility.
In essence, the computing continuum, while not fully actualized, is closer to reality than ever before, courtesy of the depicted principles and practices, and thanks to promising open-source projects such as Liqo.io [16] and ongoing European projects fully committed to this vision, such as FLUIDOS [17]. However, to proclaim the arrival of the complete computing continuum at this juncture would be premature. The identified research challenges underscore the need for continued innovation and exploration in the field. Through this persistent effort, we move steadily closer to the full realization of the computing continuum, a promising future where computing resources are seamlessly integrated, dynamic, and optimally utilized.
## Acknowledgements
This work was supported by the European Union's Horizon Europe research and innovation programme under grant agreement No 101070473, project FLUIDOS (Flexible, scaLable, secUre, and decentralIseD Operating System).
This research was conducted as part of Jacopo Marino's Ph.D. programme, under the financing of the Piano Nazionale di Ripresa e Resilienza (PNRR) and the NextGenerationEU initiative.
|
2309.16902 | Investigating Shift Equivalence of Convolutional Neural Networks in
Industrial Defect Segmentation | In industrial defect segmentation tasks, while pixel accuracy and
Intersection over Union (IoU) are commonly employed metrics to assess
segmentation performance, the output consistency (also referred to equivalence)
of the model is often overlooked. Even a small shift in the input image can
yield significant fluctuations in the segmentation results. Existing
methodologies primarily focus on data augmentation or anti-aliasing to enhance
the network's robustness against translational transformations, but their shift
equivalence performs poorly on the test set or is susceptible to nonlinear
activation functions. Additionally, the variations in boundaries resulting from
the translation of input images are consistently disregarded, thus imposing
further limitations on the shift equivalence. In response to this particular
challenge, a novel pair of down/upsampling layers called component attention
polyphase sampling (CAPS) is proposed as a replacement for the conventional
sampling layers in CNNs. To mitigate the effect of image boundary variations on
the equivalence, an adaptive windowing module is designed in CAPS to adaptively
filter out the border pixels of the image. Furthermore, a component attention
module is proposed to fuse all downsampled features to improve the segmentation
performance. The experimental results on the micro surface defect (MSD) dataset
and four real-world industrial defect datasets demonstrate that the proposed
method exhibits higher equivalence and segmentation performance compared to
other state-of-the-art methods.Our code will be available at
https://github.com/xiaozhen228/CAPS. | Zhen Qu, Xian Tao, Fei Shen, Zhengtao Zhang, Tao Li | 2023-09-29T00:04:47Z | http://arxiv.org/abs/2309.16902v1 | # Investigating Shift Equivalence of Convolutional Neural Networks in Industrial Defect Segmentation
###### Abstract
In industrial defect segmentation tasks, while pixel accuracy and Intersection over Union (IoU) are commonly employed metrics to assess segmentation performance, the output consistency (also referred to equivalence) of the model is often overlooked. Even a small shift in the input image can yield significant fluctuations in the segmentation results. Existing methodologies primarily focus on data augmentation or anti-aliasing to enhance the network's robustness against translational transformations, but their shift equivalence performs poorly on the test set or is susceptible to nonlinear activation functions. Additionally, the variations in boundaries resulting from the translation of input images are consistently disregarded, thus imposing further limitations on the shift equivalence. In response to this particular challenge, a novel pair of down/upsampling layers called component attention polyphase sampling (CAPS) is proposed as a replacement for the conventional sampling layers in CNNs. To mitigate the effect of image boundary variations on the equivalence, an adaptive windowing module is designed in CAPS to adaptively filter out the border pixels of the image. Furthermore, a component attention module is proposed to fuse all downsampled features to improve the segmentation performance. The experimental results on the micro surface defect (MSD) dataset and four real-world industrial defect datasets demonstrate that the proposed method exhibits higher equivalence and segmentation performance compared to other state-of-the-art methods. Our code will be available at [https://github.com/xiaozhen228/CAPS](https://github.com/xiaozhen228/CAPS).
shift equivalence, industrial defect segmentation, U-Net, convolutional neural network (CNN), deep learning.
## I Introduction
Visual inspection methods based on convolutional neural networks (CNNs) have attracted considerable interest in recent years for industrial quality control of diverse scenes, such as steel surfaces [1], printed circuit boards [2], rail surfaces [3], textured fabrics [4], and many others. Concurrently, segmentation-based networks for defect detection have become popular due to their ability to provide precise location and contour information of defects [5, 6].
However, most segmentation-based networks in defect detection primarily focus on improving segmentation metrics such as pixel accuracy and Intersection over Union (IoU), while neglecting the crucial aspect of output consistency. Output consistency refers to the concept of shift equivalence, which implies that if input images are shifted by a certain number of pixels, the corresponding segmentation masks produced by the network should also exhibit the same pixel offsets. Despite the long-held belief that CNNs inherently possess shift equivalence [7, 8], several studies [9, 10, 11] have revealed that input translation significantly affects the segmentation outcomes, especially in the industrial inspection field. To shed light on the issue of shift equivalence in CNNs, Fig. 1 visually portrays the impact of input translations on the segmentation masks. The defective raw image is partitioned into three sub-images: green, black, and red, with each pair of adjacent sub-images differing by only one pixel in position. The sub-images are subsequently fed into the segmentation network, yet the resulting segmentation masks exhibit significant disparities. This situation often occurs in the following industrial settings: 1) when the same part is repeatedly captured by machine vision equipment with slight pixel translations due to mechanical deviations, leading to significant fluctuations in segmentation outcomes; 2) in a defective image, the same defects may vary by just a few pixels in position from image to image due to sampling, resulting in highly inconsistent segmentation
Fig. 1: Influence of input image translation on output segmentation masks. Initially, the black window in the original image is translated upwards and downwards by one pixel, resulting in the green and red windows, respectively. Subsequently, the images within the three windows are cropped and individually fed into the image segmentation network. The ground truth indicates a defect area of 44 pixels, while the predicted defect areas for the green, black, and red sub-images in the network output are 55, 49, and 63 pixels, respectively.
outcomes. Therefore, the issue of shift equivalence has gained widespread attention among scholars in recent years.
The strategies to address the problem of shift equivalence in CNNs can be broadly categorized into learning-based and design-based approaches [12]. The former primarily focuses on enhancing network robustness through a data-driven approach, such as data augmentation. However, its segmentation performance shows a significant decline on the test set. The latter strategy seeks to redesign the network architecture in order to rectify the lack of equivalence in CNNs without relying on data. One key factor contributing to the loss of shift equivalence in CNNs is the downsampling layers, such as pooling layers and strided convolution layers, which violate the Nyquist-Shannon sampling theorem, as highlighted by Zhang [9]. Therefore, it is meaningful to devise a new downsampling technique to replace traditional sampling layers such as MaxPooling, ensuring that the downsampled feature maps remain as similar as possible before and after image translation. Currently, two new downsampling design techniques have been proposed to reduce the disparities in downsampled feature maps, namely, anti-aliasing and component selection. Anti-aliasing-based methods, exemplified by BlurPool [9], aim to minimize differences between adjacent pixels by incorporating a low-pass filter (LPF) to remove high-frequency components from the image. However, these methods face limitations in nonlinear systems, especially when nonlinear activation functions like ReLU are present in the network [13]. On the other hand, the component-selection-based methods, represented by adaptive polyphase sampling (APS) [10] and learnable polyphase sampling (LPS) [11], were designed to select the same components as much as possible during downsampling before and after translation, thereby achieving structural equivalence in CNNs. Although the component-selection-based methods have demonstrated effectiveness in improving shift equivalence, they have not taken into account the variations in image boundaries that occur when input images are shifted in the manner depicted in Fig. 1. These variations result in random pixels at the image boundaries, making it challenging to ensure the similarity of downsampled results or the selection of identical components before and after image translation, thereby further constraining the shift equivalence. Furthermore, selecting a specific component implies discarding the rest, which degrades segmentation performance.
To address this issue, a novel method called component attention polyphase sampling (CAPS) is proposed in this
Fig. 2: A visual comparison of two downsampling methods and their corresponding upsampling techniques based on a one-dimensional signal. (a) MaxPooling process. (b) the same input signal as in Fig. 2(a) after a one-unit leftward translation, with the MaxPooling process. (c) the proposed method. (d) the same input signal as in Fig. 2(c) after a one-unit leftward translation, with the CAPD process. First, assume that the unshifted signal input to the downsampling layer is [1, 2, 3, 4, 3, 2]. Then, the shifted signal, when shifted one unit to the left, is represented as [2, 3, 4, 3, 2, 5]. The stride and pooling size during downsampling are both set to 2. As depicted in Figs. 2(a) and (b), the downsampling results after MaxPooling for the unshifted signal and the shifted signal are [2, 4, 3] and [3, 4, 5], respectively. It is observed that the MaxPooling results are quite different ([2, 4, 3] vs. [3, 4, 5]). However, the proposed CAPD keeps the downsampled results similar ([1.9, 3, 2.0] vs. [2.0, 3.9, 2.1]), as shown in Figs. 2(c) and (d). Specifically, the input signal is first sampled into two components, Component 1 and Component 2, according to the parity of the index. Then, the boundary elements of the components are filtered by adding a window, and the corresponding weights are acquired from the component attention module. Lastly, the different components are weighted and fused to obtain the downsampled results.
paper. CAPS contains two essential layers, namely component attention polyphase downsampling (CAPD) and component attention polyphase upsampling (CAPU), to replace the conventional downsampling and upsampling layers in CNNs. The CAPD aims to ensure maximum similarity of the downsampled results before and after image translation. It mainly consists of three parts: a polyphase downsampling process, an adaptive windowing (AW) module, and a component attention (CA) module. Initially, the input image undergoes polyphase downsampling, generating four components with half of the original spatial resolution. These components are then extracted as features and sequentially processed through the AW and CA modules to generate attention weights corresponding to each component. The downsampled results are finally achieved by fusing the different initial component features with the attention weights. The AW module effectively mitigates the boundary effect caused by shifts in images, thereby enhancing the consistency of downsampled features. On the other hand, the CA module captures global features of the components through global average pooling (GAP) and employs one-dimensional convolution to facilitate component-wise attention, leading to significant improvement in defect segmentation performance through the fusion of all downsampled components. Corresponding to the implementation of downsampling using CAPD, CAPU restores the downsampled features to their original spatial positions, thereby ensuring shift equivalence in segmentation networks.
Fig. 2 provides a visual comparison of two downsampling methods and their corresponding upsampling techniques based on a one-dimensional signal. It can be seen from Fig. 2 that MaxPooling selects the maximum value at fixed positions as the downsampled result. When the input undergoes translation, the maximum value within the corresponding pooling region has already changed, leading to significant alterations in the downsampled result. However, the proposed CAPS samples the input signal into two components based on its odd and even indices. When the input undergoes translation, only the odd and even indices are swapped, and the values within each component remain the same. The fusion results of CAPD are also largely identical owing to the similarity of the components.
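The effect is easy to reproduce numerically with the signal from Fig. 2; the snippet below shows that a shift merely swaps the two polyphase components, up to the new boundary value entering on the right, which is exactly the boundary effect the AW module is designed to suppress.

```python
import numpy as np

x  = np.array([1, 2, 3, 4, 3, 2])   # unshifted input signal
xs = np.array([2, 3, 4, 3, 2, 5])   # shifted one unit left; 5 is the new boundary value

maxpool = lambda s: s.reshape(-1, 2).max(axis=1)
print(maxpool(x), maxpool(xs))      # [2 4 3] vs [3 4 5]: very different

polyphase = lambda s: (s[0::2], s[1::2])   # even- and odd-indexed components
print(polyphase(x))                 # ([1 3 3], [2 4 2])
print(polyphase(xs))                # ([2 4 2], [3 3 5]): components swap under shift
```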
Our contributions are summarized as follows:
1. A pair of down/upsampling layers called CAPS is proposed to address the shift equivalence problem and can serve as alternatives for conventional downsampling and upsampling layers in CNNs. To the best of our knowledge, this work is the first to investigate the issue of shift equivalence in the field of industrial defect segmentation, considering the boundary variations caused by image translation and leveraging all downsampled component features.
2. CAPD, a novel downsampling layer, is designed to maximize the similarity of the downsampled results before and after image translation. The AW module mitigates the boundary effect, while the CA module integrates the different components to enhance segmentation performance.
3. The proposed method outperforms other state-of-the-art anti-aliasing-based and component-selection-based methods in both shift equivalence and segmentation performance on the micro surface defect (MSD) dataset and four real-world industrial defect datasets.
## II Related Work
In this section, defect detection relying on segmentation networks is first reviewed. Subsequently, a comprehensive review of works related to shift equivalence is presented.
### _Defect Segmentation_
Defect segmentation, a crucial technique in defect detection, has gained significant traction in real-world industrial vision scenarios. Currently, deep learning methods (e.g., FCN [14], DeepLabv3+ [15], U-Net [16], etc.) have emerged as popular choices for industrial vision defect segmentation due to their robust characterization and generalization capabilities. Wang et al. [17] employed a three-stage FCN to enhance the accuracy and generalization ability of defect segmentation in tire X-ray images. To address the limited sample issue in print defect datasets, Valente et al. [18] utilized computer graphics techniques to synthesize new training samples at the pixel level, resulting in commendable segmentation performance using DeepLabv3+. Miao et al. [19] designed a loss function for the U-Net network based on the Matthews correlation coefficient to tackle the challenges posed by limited data volume and imbalanced sample categories. To alleviate the contextual feature loss caused by multiple convolutions and pooling during the encoding process, Yang et al. [20] introduced a multi-scale feature fusion module into the U-Net network. They integrated a module named bidirectional convolutional long short-term memory block attention into the skip connection structure, effectively capturing global and long-term features. Zheng et al. [21] devised a novel Residual U-Structure embedded within U-Net, complemented by a coordinate attention module to integrate multi-scale and multi-level features. In summary, current methodologies predominantly focus on extracting rich features to enhance defect segmentation performance, often overlooking the significance of shift equivalence. Therefore, studying shift equivalence in CNNs is of great significance.
### _Shift Equivalence_
In contrast to shift equivalence in segmentation tasks, shift equivalence in image classification - also known as shift invariance - has been extensively studied with some promising results [25, 26]. The strict distinction between shift equivalence and shift invariance is presented in Section III-A. Learning-based approaches such as data augmentation [27] and deformable convolution [22, 28] are data-driven ways to improve shift equivalence. Deformable convolution introduces additional learnable offset parameters within standard convolutional operations, which can acquire adaptive receptive fields and learn geometric transformations automatically. Deformable U-Net (DUNet) [28] is a typical application of deformable convolution in the field of image segmentation. It aims to replace the traditional convolutional layers in U-Net [16] with a deformable convolutional block to make the network adaptive in adjusting the receptive field and sampling locations according to the segmentation targets. However, this
approach is time-consuming and relies heavily on training data, making it difficult to apply to real industrial scenarios. Modern CNNs lack equivalence unless the input image is shifted by an integer multiple of the downsampling factor, which is impractical [25]. As a result, the redesign of traditional downsampling methods has emerged as an effective strategy for improving equivalence.
Anti-aliasing has proven to be an effective approach for enhancing equivalence by addressing the violation of the Nyquist-Shannon sampling theorem during downsampling [9]. In this approach, the MaxPooling layer (stride 2) was partitioned into a dense MaxPooling layer (stride 1) and a naive downsampling layer (stride 2). Additionally, a low-pass filter (LPF) was utilized to blur the features after the dense MaxPooling layer, effectively mitigating the aliasing effect. Zhang's work [9] paved the way for further advancements, such as the utilization of adaptive low-pass filter kernels based on BlurPool [26], and the design of the Pyramidal BlurPooling (PBP) structure to gradually reduce the size of the LPF kernel at each downsampling layer [23]. Despite anti-aliasing-based approaches offering some degree of improvement in shift equivalence, their ability to address the equivalence problem remains limited. First, aliasing can only be completely eliminated in linear systems by blurring prior to downsampling, which contradicts the prevalent use of nonlinear activation functions (e.g., ReLU) in current CNNs [13]. Second, the LPF applied before downsampling results in a trade-off between image quality and shift equivalence [29].
Component-selection-based methods provide another elegant approach to improving downsampling. These methods involve downsampling the input using fixed strides to obtain a set of downsampled components, followed by a strategy to select one of the components as the downsampled feature map. By ensuring consistent selection of the same component for each downsampling operation, shift equivalence in CNNs can be achieved by restoring the feature map to its corresponding position during upsampling. Chaman et al. [10] employed the max-norm polyphase component during downsampling in their APS method, proving complete shift equivalence under circular shifts. Gomez et al. [11] utilized a small neural network to adaptively select components, thereby further improving segmentation performance without compromising equivalence in LPS. Both APS and LPS utilized LPF before downsampling and after upsampling, as ref. [10] indicated that anti-aliasing can further improve the segmentation performance significantly. However, APS and LPS did not achieve the expected performance in terms of equivalence when faced with common shifts in input images. This can be attributed to two main reasons. First, boundary variations resulting from image translation are not considered during the downsampling process, making it challenging to ensure consistent component selection before and after image translation. Second, selecting a single component as the downsampled result discards the majority of features, leading to reduced segmentation performance. In order to solve the problem of information loss during downsampling, Liu et al. [24] proposed a novel multi-level wavelet CNN (MWCNN) method, which employs the discrete wavelet transform (DWT) and inverse wavelet transform (IWT) to downsample and upsample the features, respectively. However, MWCNN concatenates the four components directly after DWT while ignoring their order, resulting in the loss of shift equivalence.
### _Positioning of Our CAPS_
The proposed approach in this paper focuses on a structural redesign of the network to enhance shift equivalence in the segmentation task. Specifically, CAPS improves the
equivalence of the segmentation networks by redesigning the downsampling layer CAPD and the upsampling layer CAPU. Table I analyzes representative methods and ours in terms of whether image boundaries are considered, their key points, and their major shortcomings. Compared to other methods, the proposed method considers the variations in image boundaries due to translation, thereby enhancing the consistency of downsampled results before and after translation. Moreover, unlike other methods that rely on selecting a single component after downsampling, CAPS incorporates the fusion of multiple downsampled component features, thereby improving the overall segmentation performance.
## III Problem Description
This section first provides definitions of shift equivalence and shift invariance, along with graphical examples illustrating the issues. Then, the boundary problem that arises when the input is translated is pointed out and a preliminary solution is proposed. To enhance readability, one-dimensional signals are employed to illustrate this section. It is worth noting that a two-dimensional image can be seen as an extension of a one-dimensional signal.
### _Definitions of shift invariance and shift equivalence_
To clarify the concept of shift equivalence, it is important to first distinguish it from the related concept of shift invariance. Shift invariance refers to a mapping that remains constant before and after the input is shifted, and is commonly used to indicate that the translation of an input image does not affect the final predicted class in image classification tasks. For an input signal \(\mathbf{x}\) and its shifted version \(T_{N}(\mathbf{x})\), an operation \(\tilde{f}\) considered to be shift-invariant can be defined as:
\[\tilde{f}(\mathbf{x})=\tilde{f}(T_{N}(\mathbf{x})) \tag{1}\]
where \(N\) denotes the number of signal shifts in the circular or common shift way. Shift equivalence, however, dictates that the output should shift concurrently with the input, which is commonly utilized to describe image segmentation tasks. Accordingly, a shift-equivalent operation \(\tilde{f}\) can be expressed as:
\[T_{N}(\tilde{f}(\mathbf{x}))=\tilde{f}(T_{N}(\mathbf{x})) \tag{2}\]
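Eqs. 1 and 2 translate directly into numerical checks. The sketch below (our own illustration; function and variable names are ours, and circular shifts via `torch.roll` stand in for \(T_{N}\)) tests an operation for shift invariance and shift equivalence:

```python
import torch

def shift(x, n):
    return torch.roll(x, shifts=n, dims=-1)  # circular shift T_N

def is_invariant(f, x, n):                   # Eq. 1
    return torch.allclose(f(x), f(shift(x, n)))

def is_equivalent(f, x, n):                  # Eq. 2
    return torch.allclose(shift(f(x), n), f(shift(x, n)))

x = torch.randn(1, 1, 16)
print(is_equivalent(lambda t: t, x, 3))                        # True: identity
print(is_invariant(lambda t: t.amax(-1, keepdim=True), x, 3))  # True: global max
```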
### _Description of shift equivalence and boundary effect_
Fig. 3 illustrates the downsampling methods based on component selection for shift equivalence, such as APS and LPS. In the second row, the initial signal \(\mathbf{x}\) comprises four elements: an orange triangle, a blue square, a grey pentagon, and a red pentagram. The signal \(\mathbf{x}\) is then sampled into _component 1_ and _component 2_ according to the odd/even indices, one of which is eventually selected as the result of downsampling. The components acquired from the initial signal \(\mathbf{x}\) and its circular shift version \(T_{N}(\mathbf{x})\) (first row) contain the same elements, only in a different order. Therefore, shift equivalence can theoretically be guaranteed if the same components are selected during downsampling when the input images are shifted, as proved by ref. [11]. Nevertheless, as depicted in the third row of Fig. 3, in the case of a common shift, its _component 1_ corresponds precisely to _component 2_ of the initial signal, while its _component 2_ manifests an additional element (represented by the green circle) absent in the initial signal. Moreover, as shown in the third row, the orange triangle in the initial signal does not appear in the common shift version \(T_{N}(\mathbf{x})\). Hence, random variations in the input signal boundaries cause variability in the selection of the downsampled components, resulting in the loss of full shift equivalence. Due to the boundary variations that occur during common shifts, Eq. 2 considers only the unchanged part of the input image before and after the translation. It is important to note that, unless explicitly specified, all shifts of input images in this paper specifically pertain to common shift rather than circular shift.
As shown in Fig. 3, previous component-selection-based methods keep only a single component as the result of downsampling, which does not make full use of all components. Fusing all components with a set of specific weights is therefore a good way to exploit the full feature information. Moreover, for the boundary variations that make downsampled results uncertain, an effective way to reduce the influence of image boundaries is to adaptively crop feature boundaries based on the input dimension.
## IV Proposed Method
In the following text, the pipeline of the proposed method is introduced and then the design details of CAPD and CAPU are expressed. Following that, the equivalence proof regarding CAPS is provided. Lastly, the loss function is specified.
### _Pipeline_
The U-Net is widely used in industrial defect segmentation for its strong segmentation capability and simple architecture [19, 20, 21]. It not only offers fast inference, meeting the demand for real-time industrial segmentation, but its skip-connection structure, which fuses features from multiple levels, also provides a strong guarantee of segmentation performance. Moreover, the symmetric downsampling and upsampling structure of U-Net can easily be replaced with the proposed CAPS to verify its performance. Thus, U-Net is adopted as the base model in this paper, and the other compared methods are improved on the same basis to enhance shift equivalence.
Fig. 3: The downsampling method based on component selection for shift equivalence. The rectangular region in the middle row represents the initial signal. _Component 1_ and _Component 2_ sample the odd and even positions of the input signal, respectively. The component-selection-based approaches select one of the two components as the result of downsampling according to a specific strategy.
The standard U-Net consists of an encoder for feature extraction and a decoder for recovering the original spatial resolution. Using _Crop_ and _Skip connection_ operations, lower-level features are concatenated with higher-level features along the channel dimension to fuse more informative features. CAPS is incorporated into the network architecture as illustrated in Fig. 4. Unlike the standard U-Net, the CAPD layer is designed to perform downsampling instead of MaxPooling in the encoder. Similarly, the CAPU layer replaces the transposed convolution for upsampling the features in the decoder.
Let us denote \(\mathbf{X}\in\mathbb{R}^{H\times W\times C}\) as an input defect image and the output of the model \(\hat{\mathbf{Y}}\in\mathbb{R}^{H\times W\times 2}\) can be modeled as:
\[\hat{\mathbf{Y}}=f_{model}(\mathbf{X},\theta) \tag{3}\]
where \(f_{model}:\mathbb{R}^{H\times W\times C}\mapsto\mathbb{R}^{H\times W\times 2}\), \(\theta\) are parameters in the proposed model, and the elements in \(\hat{\mathbf{Y}}\) are constrained to binary values of 0 or 1, symbolizing the background and the defect, respectively. Through the process of back-propagation, performed during the training phase, the optimal parameters \(\theta^{*}\) of the proposed model can be expressed as below:
\[\theta^{*}=\operatorname*{arg\,min}_{\theta}l(\hat{\mathbf{Y}},\mathbf{Y}) \tag{4}\]
in which \(\mathbf{Y}\) denotes the ground truth of the input image \(\mathbf{X}\) and \(l\) represents the loss function as outlined in Section IV-E.
### _Component Attention Polyphase Downsampling (CAPD) Layer_
The architecture of CAPD is visualized in Fig. 5 and the downsampling process is divided into three stages. The first stage is a polyphase downsampling process and the output four downsampled components are fed into a small neural network for feature extraction. Moving on to the second stage, the feature maps derived from the first stage are processed through the AW module and CA module. This stage is to adaptively remove uncertain feature boundaries and determine the initial weights for the different components. The final downsampled result is acquired in the third stage by weighting and fusing the initial four components.
**Polyphase downsampling and feature extraction.** First, the input features undergo polyphase downsampling, resulting in a reduction of the original spatial resolution by half. They are partitioned into four components based on their spatial locations. Given an input feature \(\mathbf{F}\in\mathbb{R}^{h\times w\times c}\), four components are achieved by polyphase downsampling in the spatial dimension at equal intervals:
\[\mathbf{F}_{(i,j)}[x,y,z]=\mathbf{F}[2x+i,2y+j,z] \tag{5}\]
where \(\mathbf{F}_{(i,j)}\in\mathbb{R}^{\frac{h}{2}\times\frac{w}{2}\times c},i,j\in \{0,1\}\) denotes the four downsampled components as illustrated in Fig. 5. These components are then passed through two convolutional layers with [3\(\times\)3, 128] and [3\(\times\)3, 64] convolutional kernels, respectively. To fully extract their features, a [1\(\times\)1, 1] convolution kernel is then utilized to compress the features in the channel dimension. The resulting output \(\mathbf{P}_{(i,j)}\in\mathbb{R}^{\frac{h}{2}\times\frac{w}{2}\times 1}\) is subsequently employed as the input for the subsequent AW module.
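As a concrete reading of this first stage, the following PyTorch sketch implements the polyphase downsampling of Eq. 5 and the shared-weight feature extractor described above. The layer sizes ([3\(\times\)3, 128], [3\(\times\)3, 64], [1\(\times\)1, 1]) follow the text; padding, activations, and all names are our own assumptions.

```python
import torch
import torch.nn as nn

def polyphase_split(f):
    # f: (B, C, H, W) -> four components F_(i,j) of shape (B, C, H/2, W/2), Eq. 5
    return [f[..., i::2, j::2] for i in (0, 1) for j in (0, 1)]

class ComponentFeatures(nn.Module):
    """Shared-weight extractor producing P_(i,j) for each component."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 1),  # compress channels: each P_(i,j) has one channel
        )

    def forward(self, components):
        return [self.net(c) for c in components]  # same weights for all four

f = torch.randn(2, 16, 64, 64)
comps = polyphase_split(f)         # four (2, 16, 32, 32) tensors
p = ComponentFeatures(16)(comps)   # four (2, 1, 32, 32) tensors
```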
**AW module and CA module.** The process of windowing in the AW module is expressed as follows:
\[\mathbf{z}=Cat\{GAP\{\mathbf{P}_{(i,j)}[hs:-hs,ws:-ws,:]\}\} \tag{6a}\] \[hs=\text{int}\left(\frac{h}{2}\times\beta\times\frac{1}{2}\right)\] (6b) \[ws=\text{int}\left(\frac{w}{2}\times\beta\times\frac{1}{2}\right) \tag{6c}\]
where \(\mathbf{z}\in\mathbb{R}^{1\times 4}\) denotes the output of the AW module and \(\beta\) corresponds to the proportion of the cropped feature boundaries. The symbols \(GAP\) and \(Cat\) refer to the operations of global average pooling and concatenation, respectively. After conducting an ablation analysis of hyperparameters, the hyperparameter \(\beta\) was set to 0.25 to achieve optimal equivalence.
The CA module is intended to enable cross-component interaction for feature fusion motivated by [30]. The initial weights \(\boldsymbol{\rho}\in\mathbb{R}^{1\times 4}\) of the four components from the CA module can be mathematically expressed as:
\[\boldsymbol{\rho}=\sigma(\mathbf{H}^{(k)}*\mathbf{z}) \tag{7}\]
where \(\sigma(\cdot)\) represents the sigmoid function defined as \(\sigma(x)=1/(1+e^{-x})\), while \(\mathbf{H}\) denotes a one-dimensional convolution kernel with a size of \(k\). In this paper, the hyperparameter \(k\) was set to 2 since only four components of global features are required to interact and get attention weights. To summarize, the CA module serves two primary purposes: 1) acquiring initial weights for the fusion of components in the third stage through attention mechanisms, and 2) facilitating end-to-end learning of the network by ensuring that the corresponding polyphase downsampled components receive similar initial weights before and after the translation of input images.
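A hedged sketch of this second stage is given below: the AW windowing of Eq. 6 crops a \(\beta\)-proportion of each feature boundary before global average pooling, and the CA module of Eq. 7 applies a one-dimensional convolution across the four pooled values. \(\beta=0.25\) and \(k=2\) follow the paper's ablations; the padding/cropping details of the 1-D convolution are our own assumption.

```python
import torch
import torch.nn as nn

def adaptive_window(p_list, beta=0.25):
    # p_list: four (B, 1, h, w) maps P_(i,j); returns z of shape (B, 4), Eq. 6
    pooled = []
    for p in p_list:
        _, _, h, w = p.shape
        hs = int(h * beta * 0.5)
        ws = int(w * beta * 0.5)
        crop = p[:, :, hs:h - hs, ws:w - ws]   # drop uncertain boundary rows/cols
        pooled.append(crop.mean(dim=(1, 2, 3)).unsqueeze(1))  # GAP per component
    return torch.cat(pooled, dim=1)            # concatenation -> (B, 4)

class ComponentAttention(nn.Module):
    def __init__(self, k=2):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, z):                      # z: (B, 4) -> rho: (B, 4), Eq. 7
        rho = self.conv(z.unsqueeze(1)).squeeze(1)
        return torch.sigmoid(rho[:, :4])       # even kernel yields 5 values; keep 4
```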
**Fusion of components.** The utilization of component fusion approaches evidently demonstrates their advantage in improving segmentation performance compared to selecting a single component. However, fusion also has a drawback: when the initial weights of different components are similar, the model fails to concentrate on a specific component. Hence, to enhance the consistency of the downsampled feature maps, a more discriminative component weight is necessary. In this regard, the _T-softmax_ function [31] is incorporated to adjust the
Fig. 4: The network architecture of our proposed method. Compared to the standard U-Net network, only the downsampling and upsampling layers were replaced with CAPD (yellow arrows) and CAPU (orange arrows), respectively.
weights \(\rho_{i}\) resulting in a larger variance. The final component weight \(w_{i}\) is calculated as follows:
\[w_{i}=\frac{\exp(\rho_{i}/T)}{\sum_{j=0}^{3}\exp(\rho_{j}/T)},i=0,1,2,3 \tag{8}\]
where \(T\) denotes the temperature coefficient, set to \(10^{-3}\) according to the ablation experiments to balance shift equivalence and segmentation performance. Following this, the result of downsampling is denoted as:
\[\mathbf{D}_{c}=w_{0}\mathbf{F}_{(0,0)}+w_{1}\mathbf{F}_{(0,1)}+w_{2}\mathbf{F }_{(1,0)}+w_{3}\mathbf{F}_{(1,1)} \tag{9}\]
where \(\mathbf{D}\in\mathbb{R}^{\frac{h}{2}\times\frac{w}{2}\times c}\) is the final result of downsampling. In essence, the CAPD is ultimately designed to make translated images yield downsampled feature maps \(\mathbf{D}\) that are as similar as possible without losing feature information. Eventually, these downsampled feature maps are upsampled using the CAPU according to the upsampling factor \(\gamma\), which keeps track of the positions that require restoration during the upsampling process:
\[\gamma=\arg\max(w_{i}),i=0,1,2,3 \tag{10}\]
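The third stage can then be sketched as follows: the initial weights are sharpened by the _T-softmax_ of Eq. 8, the components are fused as in Eq. 9, and the index of the dominant component is recorded for the CAPU as in Eq. 10. The value \(T=10^{-3}\) follows the ablation; the function name is ours.

```python
import torch

def fuse_components(components, rho, T=1e-3):
    # components: four (B, C, h, w) tensors; rho: (B, 4) initial weights
    w = torch.softmax(rho / T, dim=1)                  # T-softmax, Eq. 8
    stacked = torch.stack(components, dim=1)           # (B, 4, C, h, w)
    d = (w.view(-1, 4, 1, 1, 1) * stacked).sum(dim=1)  # weighted fusion, Eq. 9
    gamma = w.argmax(dim=1)                            # upsampling factor, Eq. 10
    return d, gamma
```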
### _Component Attention Polyphase Upsampling (CAPU) Layer_
The upsampling process of CAPU is straightforward, involving the placement of the components obtained from downsampling into predetermined spatial positions in the upsampled feature maps. Moreover, the remaining positions in the upsampled feature map are filled with zeros to reflect the uncertainty during the upsampling process. Fig. 6 illustrates a complete downsampling and upsampling process, assuming that the input feature is \(\mathbf{F}\in\mathbb{R}^{h\times w\times c}\). Specifically, the input \(\mathbf{F}\) first undergoes the CAPD layer, yielding the downsampled feature \(\mathbf{D}\in\mathbb{R}^{\frac{h}{2}\times\frac{w}{2}\times c}\) and the sampling factor \(\gamma\) corresponding to the maximum weight. Then the upsampled result \(\mathbf{U}\in\mathbb{R}^{h\times w\times c}\) is denoted as:
\[\mathbf{U}_{c}=T_{m,n}(U_{2}(\mathbf{D}_{c})) \tag{11}\]
where \(m\) and \(n\) map \(\gamma\) to a two-dimensional position encoding, which can be achieved through a simple binary encoding process represented by:
\[mn=\phi(\gamma) \tag{12}\]
where \(m\) corresponds to the first bit of the encoding result and \(n\) represents the second bit of the encoding result. The function \(\phi\) converts a decimal number into a binary code. \(T_{m,n}(\cdot)\) represents translating the input feature by \(m\) and \(n\) pixels in the \(x\) and \(y\) axes, respectively, and \(U_{2}\) is a conventional upsampling operation. \(U_{2}(\mathbf{D}_{c})=\mathbf{Z}[x,y,z]\) can be calculated as:
\[\mathbf{Z}[x,y,z]=\begin{cases}\mathbf{D}_{c}[x/2,y/2,z],\text{when x and y are even}\\ 0,\quad\text{otherwise}.\end{cases} \tag{13}\]
Following APS and LPS, LPF is also added before CAPD and after CAPU to improve the segmentation performance. Moreover, in the next subsection, we can show that the CAPS is completely equivalent when the boundaries of the features are not considered and \(T\to 0\).
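The CAPU step admits an equally short sketch: Eq. 12 decodes \(\gamma\) into the two bits \((m,n)\), and Eqs. 11 and 13 amount to writing the downsampled features back onto the grid positions of the dominant phase, with zeros elsewhere. The per-sample loop and names are our own illustrative choices.

```python
import torch

def capu(d, gamma):
    # d: (B, C, h, w) downsampled features; gamma: (B,) indices in {0, 1, 2, 3}
    b, c, h, w = d.shape
    up = d.new_zeros(b, c, 2 * h, 2 * w)
    for s in range(b):
        m, n = divmod(int(gamma[s]), 2)  # phi in Eq. 12: decimal -> two bits
        up[s, :, m::2, n::2] = d[s]      # T_{m,n}(U_2(D_c)), Eqs. 11 and 13
    return up
```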
Fig. 5: The diagram of our proposed CAPD layer. In the first stage, the input features are first polyphasically sampled into four components according to odd and even indices. Then, these components are fed into a neural network with shared weights for feature extraction. The extracted features are used in the second stage to generate initial component weights that characterize the different levels of importance through the adaptive windowing and component attention modules, respectively. In the third stage, the initial weights are processed by _T-softmax_ function to obtain the final weights and the different components are weighted and fused using the final weights to acquire the final downsampled features.
Fig. 6: One complete downsampling and upsampling process.
### _Proof of shift equivalence for CAPS_
For the simplicity of the proof, the channel dimensions of the input features are not considered and the stride is set to 2. The final result can also be easily generalized to multiple channels and other strides. In addition, the boundaries of the features (i.e. pixels newly entering and moving out of the subimage during the translation process) are not considered because the feature changes of the boundaries are random and unpredictable when a common shift is performed on the image. Corresponding to \(U_{2}\) in Eq. 11, let \(D_{2}\) represent the traditional downsampling operation and \(\mathbf{Q}=D_{2}(\mathbf{F})\) is given by \(\mathbf{Q}[x,y]=\mathbf{F}[2x,2y]\). It is clear that \(\mathbf{Q}\) is a two-dimensional simplified version of the first downsampled component \(\mathbf{F}_{(0,0)}\) in Eq. 5, and the other downsampled components can be expressed as \(\{\mathbf{F}_{(i,j)}\}_{i,j=0}^{1}\), where \(\mathbf{F}_{(i,j)}=D_{2}(T_{-i,-j}(\mathbf{F}))=\mathbf{F}[2x+i,2y+j]\). Let us denote \(D_{2}^{c}(\cdot)\) and \(U_{2}^{c}(\cdot)\) as the CAPD and CAPU operators, which are defined as:
\[\mathbf{D}_{c}=\mathbf{F}_{m,n}=D_{2}^{c}(\mathbf{F})=D_{2}(T_{-m,-n}(\mathbf{ F})) \tag{14}\]
\[U_{2}^{c}(\mathbf{D}_{c},m,n)=T_{m,n}(U_{2}(\mathbf{D}_{c})) \tag{15}\]
where \(m\) and \(n\) denote the index of the component with the highest weight as indicated in Eq. 12. Note that the equality \(\mathbf{D}_{c}=\mathbf{F}_{m,n}\) in Eq. 14 holds under the condition that the temperature coefficient \(T\to 0\) in Eq. 8. We can now show that \(U_{2}^{c}\circ D_{2}^{c}\) is fully equivalent when variations in the image boundaries due to translation are not considered and \(T\to 0\):
\[U_{2}^{c}\circ D_{2}^{c}(\widetilde{\mathbf{F}})=T_{s_{x},s_{y}}(U_{2}^{c} \circ D_{2}^{c}(\mathbf{F})),\forall s_{x},s_{y}\in\mathbb{Z} \tag{16}\]
where \(\widetilde{\mathbf{F}}=T_{s_{x},s_{y}}(\mathbf{F})\) represents the result of translating the input \(\mathbf{F}\) by \(s_{x}\) and \(s_{y}\) pixels along the x-axis and y-axis directions, respectively.
_Proof._ Let \(m,n\) and \(\widetilde{m},\widetilde{n}\) denote the component index corresponding to the maximum weight obtained with \(\mathbf{F}\) and \(\widetilde{\mathbf{F}}\) as CAPS inputs, respectively. Then assume that \(s_{x}\) and \(s_{y}\) are both odd integers:
\[D_{2}^{c}(\mathbf{F}) =D_{2}(T_{-m,-n}(\mathbf{F})) \tag{17a}\] \[D_{2}^{c}(\widetilde{\mathbf{F}}) =D_{2}(T_{-\widetilde{m},-\widetilde{n}}(\widetilde{\mathbf{F}})) \tag{17b}\]
Based on the above properties we can get:
\[U_{2}^{c}\circ D_{2}^{c}(\mathbf{F})=T_{m,n}U_{2}D_{2}(T_{-m,-n}(\mathbf{F})) \tag{18}\]
Similarly for the input after translation:
\[U_{2}^{c}\circ D_{2}^{c}(\widetilde{\mathbf{F}}) =T_{\widetilde{m},\widetilde{n}}U_{2}D_{2}(T_{-\widetilde{m},- \widetilde{n}}(\widetilde{\mathbf{F}})) \tag{19a}\] \[=T_{\widetilde{m},\widetilde{n}}U_{2}D_{2}(T_{s_{x}-\widetilde{m},s_{y}-\widetilde{n}}(\mathbf{F}))\] (19b) \[=T_{\widetilde{m},\widetilde{n}}T_{s_{x}-1,s_{y}-1}U_{2}D_{2}(T_{ 1-\widetilde{m},1-\widetilde{n}}(\mathbf{F}))\] (19c) \[=T_{s_{x},s_{y}}(T_{m,n}U_{2}D_{2}(T_{-m,-n}(\mathbf{F}))) \tag{19d}\]
where the properties \(\widetilde{m}=1-m\) and \(\widetilde{n}=1-n\) (for odd \(s_{x}\) and \(s_{y}\)) are used in Eq. 19d and hold based on the fact that the weights of the corresponding components before and after the translation are the same. This is ensured by the global average pooling layer in CAPD as pointed out by [10, 11]. Then, Eq. 16 is shown to be valid by substituting Eq. 18 into Eq. 19d. The same conclusion can similarly be reached when \(s_{x}\) or \(s_{y}\) is even, since \(\widetilde{m}=m\) or \(\widetilde{n}=n\) in those cases.
In practice, the full shift equivalence of the network cannot be satisfied because the boundaries of the image change unpredictably before and after the input translation as shown in Fig. 3. Therefore, the AW module is designed to minimize the effect of boundary variations on shift equivalence. In addition, although a \(T\) closer to 0 in Eq. 8 favours shift equivalence, a higher \(T\) facilitates the fusion of component features and thus improves segmentation performance. Thus, \(T\) is set as a hyperparameter in this paper to balance shift equivalence and segmentation performance.
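The argument above can be checked numerically. The sketch below uses a GAP-based component score as a stand-in for the learned weights (mirroring the shift-invariant scores of APS/LPS) and verifies Eq. 16 for a circular shift, where no boundary pixels enter or leave; it is a toy check of the proof, not the trained CAPD.

```python
import torch

def caps_roundtrip(f):
    # Downsample-upsample with argmax component selection (the T -> 0 limit).
    comps = [f[..., i::2, j::2] for i in (0, 1) for j in (0, 1)]
    gamma = int(torch.stack([c.abs().mean() for c in comps]).argmax())
    m, n = divmod(gamma, 2)
    up = torch.zeros_like(f)
    up[..., m::2, n::2] = comps[gamma]
    return up

f = torch.randn(1, 1, 8, 8)
sx, sy = 3, 1                                   # arbitrary circular shift
f_s = torch.roll(f, (sx, sy), dims=(-2, -1))
lhs = caps_roundtrip(f_s)                       # U o D applied to shifted input
rhs = torch.roll(caps_roundtrip(f), (sx, sy), dims=(-2, -1))
print(torch.allclose(lhs, rhs))                 # True: Eq. 16 under circular shift
```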
### _Loss Function_
A loss function that combines the cross-entropy loss \(l_{ce}\) and the Dice loss \(l_{de}\) was utilized. Mathematically, the loss function can be expressed as the sum of both losses, denoted as \(l=l_{ce}+l_{de}\). The values of cross-entropy loss \(l_{ce}\) and Dice loss \(l_{de}\) for a given sample image are represented as follows:
\[l_{ce}(\hat{\mathbf{Y}},\mathbf{Y})=-\frac{1}{HW}\sum_{i=1}^{H}\sum_{j=1}^{W} \log q(x_{ij},y_{ij}) \tag{20}\]
\[l_{de}(\hat{\mathbf{Y}},\mathbf{Y})=1-2\frac{\left|\hat{\mathbf{Y}}\cap \mathbf{Y}\right|}{\left|\hat{\mathbf{Y}}\right|+\left|\mathbf{Y}\right|} \tag{21}\]
where \(q(x_{ij},y_{ij})\) denotes the probability that the pixel \(x_{ij}\) is predicted to be the ground truth \(y_{ij}\). The meanings of \(\hat{\mathbf{Y}}\) and \(\mathbf{Y}\) are consistent with those in Eq. 4.
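For concreteness, the combined loss \(l=l_{ce}+l_{de}\) can be sketched in PyTorch as below. The cross-entropy term matches Eq. 20 directly; for Eq. 21 we use the standard differentiable soft-Dice form on the defect-class probabilities, which is our reading of the formula (a small \(\epsilon\) is added for numerical stability).

```python
import torch
import torch.nn.functional as F

def segmentation_loss(logits, target, eps=1e-6):
    # logits: (B, 2, H, W) raw scores; target: (B, H, W) long labels in {0, 1}
    l_ce = F.cross_entropy(logits, target)            # Eq. 20
    prob = torch.softmax(logits, dim=1)[:, 1]         # per-pixel defect probability
    tgt = target.float()
    inter = (prob * tgt).sum(dim=(1, 2))
    dice = (2 * inter + eps) / (prob.sum(dim=(1, 2)) + tgt.sum(dim=(1, 2)) + eps)
    return l_ce + (1 - dice).mean()                   # l = l_ce + l_de, Eq. 21
```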
## V Experiments
In this section, the dataset utilized in the experiments is first described and details on the generation of the training and test datasets are provided. The metrics employed to evaluate shift equivalence and segmentation performance are then defined. Subsequently, the shift equivalence problem for the most advanced image segmentation networks is investigated. Six networks explicitly designed to address image shift equivalence are then compared, demonstrating the efficacy of the proposed method. Following that, the effect of boundary variations is analyzed and ablation experiments are conducted. Model complexity and runtime are also further analyzed after the ablation experiments. Lastly, four other real industrial datasets are used to validate the effectiveness of the proposed method.
### _Generation of training and test datasets_
A publicly available micro surface defect (MSD) dataset of silicon steel strips was used in the experiments [32]. The dataset consists of 35 images of surface defects in silicon steel strips, each with a resolution of 640x480. The defects are categorized into two groups: spot-defect images (SDI) and steel-pit-defect images (SPDI), containing 20 and 15 images, respectively. Notably, one distinctive characteristic of this dataset is the presence of random background textures in the original images, with the defects occupying a small portion of the overall image, as depicted in Fig. 1. The original dataset was divided in a ratio of 0.8:0.1:0.1 to be used in
training, validation and testing phases, respectively. Given that micro-defects are relatively sparse in comparison to the overall image, small resolution images (128x128) were cropped from the original images for the generation of training, validation and test sets. A random sampling strategy where each raw image was sampled into 30 images was employed for training and validation datasets, while maintaining a 3:1 ratio between defective and normal images, as illustrated in Fig. 7(a). Two test sets were constructed to evaluate the segmentation performance and shift equivalence, respectively:
**Middle Defect Testset (MDT).** The MDT aims to evaluate the network when defects are located in the middle region of the image. To generate the MDT, sampling windows were moved across the images with a one-pixel increment, and only images with defects located within the yellow window were selected from the black window as shown in Fig. 7(b). The distance between the black and yellow window boundaries was set to 40 pixels.
**Boundary Defect Testset (BDT).** The BDT was created to assess the network when defects are positioned in the boundary regions of the image. The generation of the BDT followed a similar process to the MDT, but only included images where defects appeared between the black and yellow windows, as depicted in Fig. 7(c).
The visualization results of the MDT and BDT can be observed in Fig. 8(a) and Fig. 8(b), respectively.
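The routing rule for the two test sets can be summarized programmatically. The sketch below is our paraphrase of the sampling description (window size 128, inner margin 40 px; the function name and the exact tie-breaking are our assumptions): a crop goes to the MDT when all defect pixels lie inside the inner (yellow) window, and to the BDT when defect pixels fall in the boundary band.

```python
import numpy as np

def route_crop(mask, margin=40):
    # mask: (128, 128) boolean defect mask of one sliding-window crop
    if not mask.any():
        return None                  # no defect: not used in either test set
    inner = np.zeros_like(mask, dtype=bool)
    inner[margin:-margin, margin:-margin] = True
    if mask[~inner].any():
        return "BDT"                 # defect touches the 40 px boundary band
    return "MDT"                     # defect entirely inside the inner window
```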
### _Implementation Details_
Our experiments were conducted using the PyTorch deep learning library [33]. The network was optimized using the SGD optimizer [34] with an initial learning rate of 0.001 and a momentum value of 0.9. To further improve the training performance, a polynomial learning rate scheduling approach with a power of 0.9 was employed in the experiments. The training process was carried out for a maximum of 500 epochs, with a batch size of 32, utilizing an NVIDIA GeForce RTX 3090 GPU. The training phase continued until there was no decrease in the loss of the validation set for 10 consecutive epochs.
### _Evaluation Metrics_
To assess the shift equivalence in our experiments, we designed two new metrics, namely mean variance of Intersection-over-Union (mvIoU) and mean variance of defect area (mvda). Both metrics are designed to describe fluctuations in defect segmentation masks. A lower value for mvIoU and mvda indicates a higher level of shift equivalence. Additionally, we utilized the mean Intersection-over-Union (mIoU), precision, recall and f1-score as measures of segmentation performance. Let us consider the test set, whether it is the MDT or BDT, divided into \(N\) subsets \(M_{j},\ j=1,2,\ldots,N\), where each subset consists of images cropped from the same raw image. \(IoU(\hat{\mathbf{Y}}_{i},\mathbf{Y}_{i})\) denotes the IoU between the predicted segmentation \(\hat{\mathbf{Y}}_{i}\) and the ground truth \(\mathbf{Y}_{i}\) corresponding to input image \(\mathbf{X}_{i}\), as described in Eq. 4. The equivalence metrics mvIoU and mvda are defined as follows:
**mvIoU**: The mIoU of the set \(M_{j}\) is calculated as:
\[mIoU_{j}=\frac{1}{|M_{j}|}\sum_{i=1,\mathbf{X}_{i}\in M_{j}}^{|M_{j}|}IoU(\hat {\mathbf{Y}}_{i},\mathbf{Y}_{i}) \tag{22}\]
The metric mvIoU which portrays the equivalence of segmentation masks is formulated as:
\[\mathrm{mvIoU}=\frac{1}{N}\sum_{j=1}^{N}\frac{1}{|M_{j}|-1}\sum_{i=1,\mathbf{ X}_{i}\in M_{j}}^{|M_{j}|}(IoU(\hat{\mathbf{Y}}_{i},\mathbf{Y}_{i})-mIoU_{j})^{2} \tag{23}\]
**mvda**: Assume that the area of defects in \(\mathbf{X}_{i}\) is \(Area(\mathbf{X}_{i})\), then the average area of the predicted defects in the set \(M_{j}\) can be expressed as:
\[mArea_{j}=\frac{1}{|M_{j}|}\sum_{i=1,\mathbf{X}_{i}\in M_{j}}^{|M_{j}|}Area( \mathbf{X}_{i}) \tag{24}\]
The metric mvda is calculated as:
\[\mathrm{mvda}=\frac{1}{N}\sum_{j=1}^{N}\frac{1}{|M_{j}|-1}\sum_{i=1,\mathbf{ X}_{i}\in M_{j}}^{|M_{j}|}(Area(\mathbf{X}_{i})-mArea_{j})^{2} \tag{25}\]
To calculate the area of defects, only the connected defect domain with the largest area in the segmentation masks is considered, and the rest is deemed as overkill in defect segmentation.
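The metrics admit a direct implementation. The following NumPy sketch (names are ours) computes mvIoU and mvda from per-subset predictions, using the unbiased \(1/(|M_{j}|-1)\) variance exactly as in Eqs. 23 and 25; extraction of the largest connected defect component for the area term is assumed to happen upstream.

```python
import numpy as np

def iou(pred, gt, eps=1e-9):
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / (union + eps)

def mviou_mvda(subsets):
    # subsets: list of M_j; each M_j is a list of (pred_mask, gt_mask, defect_area)
    v_iou, v_area = [], []
    for m in subsets:
        ious = np.array([iou(p, g) for p, g, _ in m])
        areas = np.array([a for _, _, a in m], dtype=float)
        v_iou.append(ious.var(ddof=1))    # inner sum of Eq. 23
        v_area.append(areas.var(ddof=1))  # inner sum of Eq. 25
    return float(np.mean(v_iou)), float(np.mean(v_area))
```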
### _Comparison with current advanced segmentation networks_
The current advanced segmentation network designs have not explicitly focused on shift equivalence due to their emphasis on segmentation performance. To investigate the shift
Fig. 8: Visualization of two test sets. (a) visualization of the MDT. (b) visualization of the BDT
Fig. 7: The three sampling methods for generating dataset. (a) random sampling for the training and validation dataset. (b) sliding sampling for the MDT. (c) sliding sampling for the BDT.
equivalence of current state-of-the-art segmentation networks, five high-performing networks were implemented for evaluation: 1) UperNet [36]: A multi-task learning framework that performs well on image segmentation by parsing multiple visual concepts such as category, material and texture; 2) PSPNet [35]: A network with a pyramid pooling module designed to achieve excellent image segmentation performance by fusing features from different receptive fields; 3) DeepLabv3+ [15]: An encoder-decoder network that utilizes the Atrous Spatial Pyramid Pooling module to extract multi-scale contextual features using dilated convolutions; 4) Mask2former [37]: A network that employs a transformer decoder with masked attention, aiming to extract local features within the region of the predicted mask. It currently achieves state-of-the-art semantic segmentation performance on various publicly available datasets; 5) SAM-Adapter [38]: A network adapter that builds upon the Segment Anything Model [39] as a foundation model. It incorporates multiple visual prompts to adapt to downstream tasks. In our evaluation, ResNet-101 was used as the backbone for UperNet, PSPNet, and DeepLabv3+, while employing Swin-Transformer-large as the backbone for Mask2former. All four models utilize the official code from the mmsegmentation library 1 and employ the default pretrained model to achieve optimal segmentation performance. SAM-Adapter was implemented using the official code 2 initialized with SAM-Large pretrained parameters for feature extraction.
Footnote 1: mmsegmentation: [https://github.com/open-mmlab/mmsegmentation](https://github.com/open-mmlab/mmsegmentation)
Footnote 2: SAM-Adapter: [https://github.com/tianru-chen/SAM-Adapter-PyTorch](https://github.com/tianru-chen/SAM-Adapter-PyTorch)
Table II presents the results of five advanced image segmentation networks and ours, on the MDT and BDT. It is worth noting that our network, depicted in Fig. 4 is relatively more lightweight compared to the others, which introduces some unfairness in the comparison. However, our network still exhibits superior equivalence, particularly in terms of the mvda. The segmentation performance surpasses that of DeepLabv3+, indicating that the proposed method can simultaneously balance shift equivalence and segmentation performance. It is observed that most existing segmentation networks suffer from low shift equivalence, so it is crucial to explore methods for improving equivalence in both academic research and real-world industrial applications.
### _Comparison with other advanced shift equivalence methods_
The proposed CAPS method was compared with six advanced methods aiming to enhance shift equivalence, namely BlurPool [9], APS [10], LPS [11], PBP [23], MWCNN [24], and DUNet [22] on both the MDT and BDT. To ensure experimental fairness, all methods except DUNet utilized the U-Net structure depicted in Fig. 4 as a base model, with only the downsampling and upsampling layers replaced. Unlike other methods, DUNet [22] replaces only a portion of the standard convolutions in U-Net with deformable convolutional blocks. To adhere to the recommendation of APS and LPS, circular padding was utilized in all experiments, while keeping all other settings consistent with their original papers and codes 3 4 5 6 7 8.
Footnote 4: APS: [https://github.com/achaman2/truly_shift_invariant_cms](https://github.com/achaman2/truly_shift_invariant_cms)
Footnote 5: LPS: [https://tayramond.yeh.com/learnable_polyphase_sampling](https://tayramond.yeh.com/learnable_polyphase_sampling)
Table III provides a comparison of different methods that contribute to the improvement of shift equivalence on the MDT and BDT. The best results are shown in bold and the second best results are underlined for a clearer comparison. CAPS greatly reduces the mvIoU and mvda on both test sets, revealing better shift equivalence. Concretely, CAPS reduces mvIoU by 23.08% and mvda by 70.28% relative to the second-best method on the MDT, and mvIoU by 12.50% and mvda by 82.32% on the BDT. The improvement in equivalence reveals the importance of considering variations in feature boundaries during the downsampling process. The results on the BDT substantially surpass those of other methods, further indicating that the AW module does not negatively impact segmentation performance and equivalence when defects are located at the image boundaries. Apart from the improvement in shift equivalence, CAPS achieves a new state of the art of 75.15% and 75.93% mIoU, surpassing the previous best solution PBP by +0.55% and +0.36% on the MDT and BDT, respectively. Although CAPS does not reach optimality in terms of precision, it exhibits +5.04% and +1.34% recall improvement compared with the second best method LPS. The optimal values obtained on the f1-score (0.8839 on the MDT and 0.8529 on the BDT) also demonstrate that CAPS has superior segmentation performance while balancing precision and recall. It can also be observed that DUNet and MWCNN exhibit poor shift
equivalence and segmentation performance, even worse than Baseline.
**Comparison of equivalence between the MDT and BDT.** Fig. 9 shows the comparison of mvda and mvIoU between the MDT and BDT. The red bar represents the value of mvda and corresponds to the y-axis on the left, while the blue bar represents mvIoU and corresponds to the y-axis on the right. It is evident that the baseline method exhibits lower equivalence compared to the other methods, emphasizing the importance of redesigning the downsampling and upsampling structure. The results illustrate that almost all methods achieve higher values of mvda and mvIoU on the BDT dataset compared to the MDT dataset, indicating the increased difficulty in maintaining equivalence when defects are located at the boundaries rather than the middle region.
**Qualitative result analysis of different methods.** The qualitative segmentation results of the different methods are shown in Fig. 10, using the more challenging BDT. The first row depicts the result for the original image without any shift, while each successive row is shifted to the left by a specified number of pixels. Compared to other methods, CAPS demonstrates nearly complete equivalence in the segmentation results, with the exception of a slight discrepancy observed when shifting by 9 pixels, as depicted in the ninth row. Conversely, other methods, such as BlurPool, exhibit significant fluctuations in the segmentation mask, particularly in terms of the area of defects.
### _Effect of boundary variations on shift equivalence_
To clarify the reasons for the poor performance of shift equivalence on the BDT, the images sampled from a raw image named _SDI_3_ were taken out individually to test their IoU. Fig. 11 shows the results in the form of box plots, where the height of the boxes indicates the extent of IoU fluctuation, and the black dots signify outliers. Notably, the IoU exhibits more outliers when the defects lie at the image boundaries. This suggests that the main reason for the weaker equivalence on the BDT than on the MDT is that the segmentation results on image boundaries are more susceptible to boundary variations due to translations. Therefore, some outliers in Fig. 11 are more likely to be generated on the BDT, affecting the equivalence of the network.
Fig. 12 illustrates the boundary differences in downsampled features that arise from translation. Specifically, the input to the proposed network consists of two sampled images, one with a black box and the other with a red box shown in Fig. 1. The latter is obtained by translating the former down one pixel. Assume that the first channels in the feature maps of these two images after the first CAPD layer are denoted as \(\mathbf{D}_{1}\) and \(\mathbf{D}_{2}\) with a resolution of \(64\times 64\). The difference along the z-axis of Fig. 12 can be calculated as \(Shift(\mathbf{D}_{1})-\mathbf{D}_{2}\). The middle pink area indicates that the two features are identical before
Fig. 11: The IoU results when the raw image _SDI_3_ was sampled.
Fig. 9: Comparison of mvda and mvIoU between the MDT and BDT.
and after translation, but the boundary region features exhibit significant differences. These boundary differences introduce uncertainty in the component fusion process of CAPS. So the AW module is designed to disregard feature boundaries shown in Fig. 12 when generating the weights for component fusion. By doing so, the downsampling results are similar when the input image is shifted, which improves the shift equivalence.
Fig. 10: The qualitative segmentation results of the different methods on the BDT. The area of the defect is labeled at the top of the image using red font. Compared to other methods, the segmentation masks of CAPS have the least fluctuation and possess the best shift equivalence.
### _Ablation analysis_
In this subsection, we conduct an ablation analysis on the collected MDT and BDT to verify the effectiveness of modules designed in CAPS and analyze the hyperparameters in the proposed methods. Specifically, the efficacy of the AW, CA and LPF in CAPS is assessed. In addition to this, the effectiveness of data augmentation (DA) is further evaluated on CAPS by applying random transformations to the training data, including random rotation, flip, brightness, contrast and cutout [40].
**Effect of the AW:** Comparing the first and fourth rows in Table IV, it can be seen that the removal of the AW module leads to a relative increase of \(50.0\%\) and \(28.6\%\) in mvIoU on the MDT and BDT, respectively, while mvda increases by 4 to 6 times. This demonstrates the crucial role of the AW module in achieving shift equivalence. Furthermore, a substantial decrease in mIoU is observed when the AW module is removed. This overall reduction in segmentation performance highlights the significance of the AW module in maintaining segmentation performance.
**Effect of the CA:** As depicted in the second row of Table IV, the removal of the CA module leads to a relative decrease in mIoU of \(3.8\%\) and \(4.5\%\) on the MDT and BDT, respectively. Furthermore, the shift equivalence of the network decreases, as indicated by the increase in mvIoU and mvda. This emphasizes the effectiveness of attention-based component fusion, not only in enhancing the segmentation performance of the network but also in improving shift equivalence.
**Effect of the LPF:** As shown in the third and fourth rows of Table IV, the use of LPF effectively improves segmentation performance with +5.87% mIoU, +7.16% precision on the MDT and +3.72% mIoU, +4.67% precision on the BDT. However, shift equivalence is compromised, as indicated by the increase in both mvIoU and mvda on the MDT and BDT. This decrease in equivalence can be attributed to the LPF further blurring boundary features, leading to increased variations at the feature boundaries before and after translation. Therefore, it is suggested that when a higher demand for equivalence is prioritized over segmentation performance, the removal of LPF in CAPS can contribute to an increase in shift equivalence.
**Effect of the DA:** As can be seen from Table IV, there is a slight increase in segmentation performance (e.g. from 78.15% to 78.23% in mIoU) but a decrease in shift equivalence (e.g. from 2.4139 to 3.5132 in mvda) on the MDT. Data augmentation enhances the diversity of samples, thus benefiting segmentation performance, but it does not reliably improve the network's shift equivalence. Sometimes the distribution between the augmented training data and the original test data is biased, which reduces the equivalence of the network.
**Effect of the hyperparameters:** Two main hyperparameters are used in our method: the \(\beta\), which controls the proportion of windowing, and the \(T\) when the components are fused. We investigated the impact of varying \(\beta\) and \(T\) within a certain range on mIoU and mvda as depicted in Fig. 13. The blue y-axis on the left represents mIoU, while the red y-axis on the right represents mvda. The solid and dashed lines in both images depict the experimental results on the MDT and BDT, respectively.
Specifically, in Fig. 13(a), the \(\beta\) indicates the truncation ratio of the AW module to the feature boundaries, which means that higher values make downsampling less affected by image boundaries. It can be found that the network exhibits the best shift equivalence for a \(\beta\) value of 0.25, with the mIoU only slightly lower than the highest value. Therefore, the hyperparameter \(\beta\) was consistently set to 0.25 in all experiments. As shown in Fig. 13(b), the temperature
Fig. 12: Boundary differences due to translation of the same component feature map.
control factor \(T\) determines how well the component features are fused. When \(T\) approaches 0, the _T-softmax_ function approximates the _argmax_ operation, thereby increasing shift equivalence, with smaller mvda on both the MDT and BDT. Conversely, as \(T\) approaches 1, the _T-softmax_ function becomes equivalent to the standard softmax function, enhancing component fusion, which benefits segmentation performance. However, when \(T\) exceeds \(10^{-3}\), the equivalence of the segmentation network drops drastically, as shown by the red solid and dashed lines in Fig. 13(b). To strike a balance between segmentation performance and shift equivalence, we set \(T\) to \(10^{-3}\) to achieve this trade-off.
### _Model complexity and runtime analysis_
In order to assess the complexity of CAPS, the number of parameters and FLOPs of the proposed method are analysed and compared with other advanced methods, as shown in Table V. The average inference time for a single image is illustrated in Table VI. Although the number of model parameters as well as FLOPs is larger than for several methods, the inference time for a single image is within the requirements of real industrial scenarios (\(\leq\)40 ms). Specifically, the average inference time for a single image is 9.24 ms, 13.33 ms and 38.93 ms when the input size is 128x128, 256x256 and 512x512, respectively. Moreover, the proposed CAPS has the best shift equivalence among all the methods, so the moderate increase in inference time compared to other methods is acceptable.
### _Performance in other real-world industrial defect datasets_
To validate the effectiveness of the proposed method, four additional datasets were used to further evaluate its performance. Specifically, they are screw, leather and hazelnut from the MVTec Anomaly Detection Dataset (MVTec AD) [41] and photovoltaic modules from the Maintenance Inspection Dataset (MIAD) [42]. Fig. 14 shows the original image and ground truth for sample images from the different datasets. The number of images in each dataset and the size of the original images are shown in the second and third columns of Table VII. The training, validation and test sets are generated in line with the MSD dataset, as described in Section V-A. Additionally, defects located both in the middle and at the boundaries are tested together to assess the overall performance of the different methods.
Fig. 13: Sensitivity analysis of the hyperparameter. (a) the analysis of \(\beta\). (b) the analysis of \(T\).
Segmentation performance and shift equivalence for four datasets are quantitatively demonstrated in Table VIII. The proposed CAPS achieves the best shift equivalence and remarkable segmentation performance compared with other methods. For shift equivalence, CAPS has the lowest mvIoU and mvda on all datasets, implying that the proposed method not only has the smallest IoU fluctuations, but also the highest stability of the predicted defect area. Moreover, it has the highest mIoU and f1-scores in 3 out of 4 datasets, showing its powerful defect segmentation ability. It can be observed that DUNet exhibits the best recall and f1-score on photovoltaic modules, and the second best recall and f1-score on hazelnut. But it slows down the inference speed as shown in Table VI, which is not suitable for industrial scenarios.
## VI Conclusion
This paper presents a novel approach focusing on the shift equivalence of CNNs in industrial defect segmentation. The proposed method designs a pair of down/upsampling layers named CAPS to replace conventional downsampling and upsampling layers. The downsampling layer CAPD performs an attention-based fusion of the different components while taking the feature boundaries into account. The CAPU then upsamples the downsampled results to a specific spatial location,
Fig. 14: Visualization of four real industrial datasets. (a) screw (b) photovoltaic modules (c) leather (d) hazelnut
ensuring the equivalence of the segmentation results. On the industrial defect segmentation test sets MDT and BDT, the proposed method surpasses other advanced methods such as BlurPool, APS, LPS, PBP, MWCNN and DUNet in terms of shift equivalence and segmentation performance.
|
2304.00144 | Non-Archimedean Green's functions and Zariski decompositions | We study the non-Archimedean Monge-Amp\`ere equation on a smooth projective
variety over a discretely or trivially valued field. First, we give an example
of a Green's function, associated to a divisorial valuation, which is not Q-PL
(i.e. not a model function in the discretely valued case). Second, we produce
an example of a function whose Monge-Amp\`ere measure is a finite atomic
measure supported in a dual complex, but which is not invariant under the
retraction associated to any snc model. This answers a question by Burgos Gil
et al in the negative. Our examples are based on geometric constructions by
Cutkosky and Lesieutre, and arise via base change from Green's functions over a
trivially valued field; this theory allows us to efficiently encode the Zariski
decomposition of a pseudoeffective numerical class. | Sebastien Boucksom, Mattias Jonsson | 2023-03-31T21:44:27Z | http://arxiv.org/abs/2304.00144v1 | # Non-Archimedean Green's functions and Zariski decompositions
###### Abstract.
We study the non-Archimedean Monge-Ampere equation on a smooth variety over a discretely or trivially valued field. First, we give an example of a Green's function, associated to a divisorial valuation, which is not \(\mathbb{Q}\)-PL (i.e. not a model function in the discretely valued case). Second, we produce an example of a function whose Monge-Ampere measure is a finite atomic measure supported in a dual complex, but which is not invariant under the retraction associated to any snc model. This answers a question by Burgos Gil et al in the negative. Our examples are based on geometric constructions by Cutkosky and Lesieutre, and arise via base change from Green's functions over a trivially valued field; this theory allows us to efficiently encode the Zariski decomposition of a pseudoeffective numerical class.
## Introduction
In the seminal paper [23], Yau studied the Monge-Ampere equation
\[(\omega+\mathrm{dd}^{\mathrm{c}}\varphi)^{n}=\mu\] (MA)
on a compact \(n\)-dimensional Kahler manifold \((X,\omega)\), where \(\mu\) is a smooth, strictly positive measure on \(X\) of mass \(\int\omega^{n}\), and \(\varphi\) a smooth function on \(X\) such that the \((1,1)\)-form \(\omega+\mathrm{dd}^{\mathrm{c}}\varphi\) is positive. Yau proved that there exists a smooth solution \(\varphi\), unique up to a constant. If \(\omega\) is a rational class, say \(\omega=c_{1}(L)\) for an ample line bundle \(L\), then \(\varphi\) can be viewed as a positive metric on \(L\), and \((\omega+\mathrm{dd}^{\mathrm{c}}\varphi)^{n}\) as its curvature measure.
As observed by Kontsevich, Soibelman, and Tschinkel [19, KT], when studying degenerating 1-parameter families of Kahler manifolds, it can be fruitful to use non-Archimedean geometry in the sense of Berkovich over the field \(\mathbb{C}(\!(\varpi)\!)\) of complex Laurent series. In this context, a Monge-Ampere operator was introduced by Chambert-Loir [10], and a version of (MA) was solved by the authors and Favre [1]; see below. Uniqueness of solutions was proved earlier by Yuan and Zhang [26].
Now, the method in [1] is variational in nature, inspired by [2] in the complex case. It has the advantage of being able to deal with more general measures \(\mu\), but the drawback of providing less regularity information on the solution. In fact, [1] only gives a continuous solution, and is thus closer in spirit to [15] than to [23].
It is therefore interesting to ask whether we can say more about the regularity of \(\varphi\) in (MA), at least for special measures \(\mu\). In the non-Archimedean setting, there are many possible regularity notions; to describe the one we are focusing on, we first need to make the non-Archimedean version of (MA) more precise, following [1, 1].
Let \(X\) be a smooth projective variety over \(K=\mathbb{C}(\!(\varpi)\!)\) of dimension \(n\). Consider a simple normal crossing (snc) model \(\mathcal{X}\) of \(X\), over the valuation ring \(K^{\circ}=\mathbb{C}[\![\varpi]\!]\). The
dual complex \(\Delta_{\mathcal{X}}\) embeds in the Berkovich analytification \(X^{\mathrm{an}}\), and there is a continuous retraction \(p_{\mathcal{X}}\colon X^{\mathrm{an}}\to\Delta_{\mathcal{X}}\).
A semipositive closed (1,1)-form on \(X^{\mathrm{an}}\) in the sense of _loc. cit._ is represented by a nef relative numerical class \(\omega\in\mathrm{N}^{1}(\mathcal{X}/\operatorname{Spec}K^{\circ})\) for some snc model \(\mathcal{X}\). We assume that the image \([\omega]\) of \(\omega\) in \(\mathrm{N}^{1}(X)\) is ample. In this case, there is a natural space \(\operatorname{CPSH}(\omega)=\operatorname{CPSH}(X,\omega)\) of continuous \(\omega\)-plurisubharmonic (psh) functions, and a Monge-Ampere operator \(\varphi\mapsto(\omega+\mathrm{dd}^{\mathrm{c}}\varphi)^{n}\) taking a function \(\varphi\in\operatorname{CPSH}(\omega)\) to a positive Radon measure on \(X^{\mathrm{an}}\) of mass \([\omega]^{n}\); see also [1] for a local theory. When \([\omega]\) is rational, so that \([\omega]=c_{1}(L)\) for an ample (\(\mathbb{Q}\)-)line bundle \(L\) on \(X\), we can view any \(\varphi\in\operatorname{CPSH}(\omega)\) as a semipositive metric on \(L^{\mathrm{an}}\), with curvature measure \((\omega+\mathrm{dd}^{\mathrm{c}}\varphi)^{n}\).
As in [1], let us normalize the Monge-Ampere operator and write
\[\operatorname{MA}_{\omega}(\varphi):=\tfrac{1}{[\omega]^{n}}(\omega+\mathrm{ dd}^{\mathrm{c}}\varphi)^{n}.\]
The main result in [1] is that if \(\mu\) is a Radon probability measure on \(X^{\mathrm{an}}\) supported in some dual complex, then there exists \(\varphi\in\operatorname{CPSH}(\omega)\), unique up to an additive real constant, such that \(\operatorname{MA}_{\omega}(\varphi)=\mu\). More precisely, this was proved assuming that \(X\) is defined over an algebraic curve, an assumption that was later removed in [1]. Here we want to study whether for special measures \(\mu\), the solution is regular in some sense.
We first consider the class of _piecewise linear_ (PL) functions. A function \(\varphi\in\mathrm{C}^{0}(X^{\mathrm{an}})\) is (\(\mathbb{Q}\)-)PL if it is associated to a vertical \(\mathbb{Q}\)-divisor on some snc model, and PL functions are also known as _model functions_. The set \(\operatorname{PL}(X)\) of PL functions is a dense \(\mathbb{Q}\)-linear subspace of \(\mathrm{C}^{0}(X^{\mathrm{an}})\), and it is closed under taking finite maxima and minima.
If \(\varphi\in\operatorname{PL}(X)\cap\operatorname{CPSH}(\omega)\), then the measure \(\mu=\operatorname{MA}_{\omega}(\varphi)\) is a rational divisorial measure, i.e. a rational convex combination of Dirac masses at divisorial valuations. For example, when \([\omega]=c_{1}(L)\) is rational, then the space \(\operatorname{PL}(X)\cap\operatorname{CPSH}(\omega)\) can be identified with the space of semipositive _model metrics_ on \(L^{\mathrm{an}}\), represented by a nef model \(\mathcal{L}\) of \(L\), and \(\operatorname{MA}_{\omega}(\varphi)\) can be computed in terms of intersection numbers of \(\mathcal{L}\).
Assuming \(\omega\) rational, one may ask whether, conversely, the solution to \(\operatorname{MA}_{\omega}(\varphi)=\mu\), with \(\mu\) a rational divisorial measure, is necessarily PL. Here we focus on the case when \(\mu=\delta_{w}\) is a Dirac measure, where \(w\in X^{\mathrm{div}}\) is a divisorial valuation. In this case, it was proved in [1] that the solution \(\varphi_{w}\in\operatorname{CPSH}(\omega)\) to the Monge-Ampere equation
\[\operatorname{MA}_{\omega}(\varphi_{w})=\delta_{w},\quad\varphi_{w}(w)=0\] (\(\star\))
is the _Green's function_ of \(w\), given by \(\varphi_{w}=\sup\{\psi\in\operatorname{CPSH}(\omega)\mid\psi(w)\leq 0\}\).
**Theorem A**.: _Assume that \(\omega\) is a rational semipositive closed (1,1)-form with \([\omega]\) ample, and that \(w\in X^{\mathrm{div}}\) is a divisorial valuation. Let \(\varphi_{w}\in\operatorname{CPSH}(\omega)\) be the Green's function satisfying (\(\star\)) above. Then:_
1. _in dimension 1,_ \(\varphi_{w}\in\operatorname{PL}(X)\)_;_
2. _in dimension_ \(\geq 2\)_, it may happen that_ \(\varphi_{w}\not\in\operatorname{PL}(X)\)_._
Writing \([\omega]=c_{1}(L)\), Theorem A says that the metric on \(L^{\mathrm{an}}\) corresponding to \(\varphi_{w}\) is a model metric in dimension 1, but not necessarily in dimension 2 and higher. This answers a question in [1], see Remark 8.8 in _loc. cit._
Here (i) is well known, for example from the work of Thuillier [15]; we give a proof in §8.5. As for (ii), we present one example where \(X\) is an abelian surface, and another one where \(X=\mathbb{P}^{3}\). See Examples 9.11 and 9.12.
We will discuss the structure of these examples shortly, but mention here that they are both \(\mathbb{R}\)_-PL_, i.e. they belong to the smallest \(\mathbb{R}\)-linear subspace \(\mathbb{R}\mathrm{PL}(X)\) of \(\mathrm{C}^{0}(X^{\mathrm{an}})\) containing \(\mathrm{PL}(X)\) and stable under max and min. The question then arises whether also in higher dimension, the solution \(\varphi_{w}\) to (\(\star\)) is \(\mathbb{R}\)-PL for any divisorial valuation \(w\). While we do not have a counterexample to this exact question (with \(\omega\) rational, but see Example 7.6), we prove that the situation can be quite complicated in dimension three and higher.
Namely, let us say that a function \(\varphi\in\mathrm{C}^{0}(X^{\mathrm{an}})\) is _invariant under retraction_ if \(\varphi=\varphi\circ p_{\mathcal{X}}\) for some (and hence any sufficiently high) snc model \(\mathcal{X}\). For example, a function on \(X^{\mathrm{an}}\) is \(\mathbb{R}\)-PL iff it is invariant under retraction and its restriction to any dual complex \(\Delta_{\mathcal{X}}\) is \(\mathbb{R}\)-PL in the sense that it is affine on the cells of some subdivision of \(\Delta_{\mathcal{X}}\) into real simplices.
If \(\varphi\in\mathrm{CPSH}(\omega)\) is invariant under retraction, say \(\varphi=\varphi\circ p_{\mathcal{X}}\), then the Monge-Ampère measure \(\mathrm{MA}_{\omega}(\varphi)\) is supported in \(\Delta_{\mathcal{X}}\). However, if \(\mu\) is supported in \(\Delta_{\mathcal{X}}\), then the solution \(\varphi\) to \(\mathrm{MA}_{\omega}(\varphi)=\mu\) may not satisfy \(\varphi=\varphi\circ p_{\mathcal{X}}\), see [1, Appendix A]. Still, one may ask whether \(\varphi\) is invariant under retraction, that is, \(\varphi=\varphi\circ p_{\mathcal{X}^{\prime}}\) for any sufficiently high snc model \(\mathcal{X}^{\prime}\), see Question 2 in _loc. cit._ A version of this question (see Remark 8.8) in the context of Calabi-Yau varieties plays a key role in the recent work of Yang Li [13], see also [1, 14, 15]. Our next result provides a negative answer in general.
**Theorem B**.: _Let \(X=\mathbb{P}^{3}_{K}\), with \(K=\mathbb{C}(\!(\varpi)\!)\), and let \(\omega\) be the closed (1,1)-form associated to the numerical class of \(\mathcal{O}(1)\) on \(\mathbb{P}^{3}_{K^{\circ}}\). Then there exists \(\varphi\in\mathrm{CPSH}(\omega)\) such that \(\mathrm{MA}_{\omega}(\varphi)\) has finite support in some dual complex, but \(\varphi\) is not invariant under retraction. In particular, \(\varphi\not\in\mathbb{R}\mathrm{PL}(X)\)._
Let us now say more about the examples underlying Theorem B and Theorem A (ii). They all arise in the _isotrivial case_, when the variety \(X\) over \(K\) is the base change of a smooth projective variety \(Y\) over \(\mathbb{C}\), and the \((1,1)\)-form is defined by the pullback of an ample numerical class \(\theta\in\mathrm{N}^{1}(Y)\) to the trivial (snc) model \(Y_{K^{\circ}}\) of \(X=Y_{K}\). In this case, we can draw on the global pluripotential theory over a trivially valued field developed in [1], a theory which interacts well with algebro-geometric notions such as diminished base loci and Zariski decompositions of pseudoeffective classes.
Specifically, given a smooth projective complex variety \(Y\), and an ample numerical class \(\theta\in\mathrm{N}^{1}(Y)\), we have a convex set \(\mathrm{CPSH}(\theta)=\mathrm{CPSH}(Y,\theta)\subset\mathrm{C}^{0}(Y^{\mathrm{ an}})\) of continuous \(\theta\)-psh functions, where \(Y^{\mathrm{an}}\) now denotes the Berkovich analytification of \(Y\) with respect to the _trivial_ absolute value on \(\mathbb{C}\). A _divisorial valuation_ on \(Y\) is of the form \(v=t\operatorname{ord}_{E}\), where \(t\in\mathbb{Q}_{\geq 0}\) and \(E\subset Y^{\prime}\) is a prime divisor on a smooth projective variety \(Y^{\prime}\) with a proper birational morphism \(Y^{\prime}\to Y\). When instead \(t\in\mathbb{R}_{\geq 0}\), we say that \(v\) is a _real divisorial valuation_. If \(\Sigma\subset Y^{\mathrm{an}}\) is a finite set of real divisorial valuations, then we consider the Green's function of \(\Sigma\), defined as
\[\varphi_{\Sigma}:=\sup\{\varphi\in\mathrm{CPSH}(Y,\theta)\mid\varphi|_{\Sigma }\leq 0\}.\]
By [1, 1], \(\varphi_{\Sigma}\in\mathrm{CPSH}(Y,\theta)\), and the Monge-Ampere measure of \(\varphi_{\Sigma}\) is supported in \(\Sigma\).
The base change \(X=Y_{\mathbb{C}(\!(\varpi)\!)}\to Y\) induces a surjective map \(\pi\colon X^{\mathrm{an}}\to Y^{\mathrm{an}}\), and this map admits a canonical section \(\sigma\colon Y^{\mathrm{an}}\to X^{\mathrm{an}}\), called the _Gauss extension_, whose image consists of all \(\mathbb{C}^{\times}\)-invariant points in \(X^{\mathrm{an}}\). For any \(\varphi\in\mathrm{CPSH}(Y,\theta)\) we have \(\pi^{\star}\varphi\in\mathrm{CPSH}(X,\omega)\), and
\[\mathrm{MA}_{\omega}(\pi^{\star}\varphi)=\sigma_{\star}\,\mathrm{MA}_{\theta}( \varphi).\]
In particular, if \(v\in Y^{\mathrm{div}}\), then \(\pi^{\star}\varphi_{\{v\}}\) is the Green's function for \(w:=\sigma(v)\in X^{\mathrm{div}}\). As both \(\pi^{\star}\) and \(\sigma^{\star}\) preserve the classes of \(\mathbb{Q}\)-PL and \(\mathbb{R}\)-PL functions, we see that in order to prove
Theorem A (ii), it suffices to find a surface \(Y\) and \(v\in Y^{\rm div}\), such that \(\varphi_{v}:=\varphi_{\{v\}}\) is not \(\mathbb{Q}\)-PL.
Further, to prove Theorem B, it suffices to find a finite set \(\Sigma\) of real divisorial valuations on \(Y\) such that \(\pi^{\star}\varphi_{\Sigma}\) fails to be invariant under retraction. Indeed, the Gauss extension map \(\sigma\) takes real divisorial valuations to Abhyankar valuations, and these are exactly the ones that lie in a dual complex. We then use the following criterion. Define the _center_ of any function \(\varphi\in\operatorname{PSH}(Y,\theta)\) by
\[Z_{Y}(\varphi):=c_{Y}\left(\{\varphi<\sup\varphi\}\right),\]
where \(c_{Y}\colon Y^{\rm an}\to Y\) is the center map, see §3. We show that if \(\pi^{\star}\varphi\) is invariant under retraction, then \(Z_{Y}(\varphi)\subset Y\) is a strict Zariski closed subset, see Corollary 9.9. It therefore suffices to find a Green's function \(\varphi_{\Sigma}\) whose center is Zariski dense.
Our analysis of the Green's functions \(\varphi_{\Sigma}\) is based on a relation between \(\theta\)-psh functions and families of b-divisors. Namely, we can pick a birational morphism \(\rho\colon Y^{\prime}\to Y\), with \(Y^{\prime}\) smooth, prime divisors \(E_{i}\subset Y^{\prime}\), and \(c_{i}\in\mathbb{R}_{>0}\) such that \(\Sigma=\{c_{i}^{-1}\operatorname{ord}_{E_{i}}\}\). If we set \(D:=\sum_{i}c_{i}E_{i}\), then we can express \(\varphi_{\Sigma}\) in terms of the _\(b\)-divisorial Zariski decomposition_ of the numerical class \(\rho^{\star}\theta-\lambda[D]\), for \(\lambda\in(-\infty,\lambda_{\rm psef}]\), where \(\lambda_{\rm psef}\in\mathbb{R}\) is the largest \(\lambda\) such that this class is pseudoeffective (psef), see Theorem 6.7. The analysis of the Zariski decomposition of a psef class \(\theta\) in terms of \(\theta\)-psh functions is of independent interest.
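As a toy illustration of the threshold \(\lambda_{\rm psef}\) (our own example, not taken from the text): let \(Y=\mathbb{P}^{2}\), \(\theta=c_{1}(\mathcal{O}(1))\), let \(\rho\colon Y^{\prime}\to Y\) be the blowup of a point with exceptional divisor \(E\), and take \(D=E\). Then
\[\lambda_{\rm psef}=\max\{\lambda\in\mathbb{R}\mid\rho^{\star}\theta-\lambda[E]\ \text{psef}\}=1:\]
indeed, \(\rho^{\star}\theta-[E]\) is the class of the strict transform \(\tilde{\ell}\) of a line through the blown-up point, which is nef with \(\tilde{\ell}^{2}=0\), so \(\rho^{\star}\theta-\lambda[E]\) is psef for \(\lambda\leq 1\) by convexity, while \((\rho^{\star}\theta-\lambda[E])\cdot\tilde{\ell}=1-\lambda<0\) for \(\lambda>1\).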
Let us first consider the case of dimension two. The Zariski decomposition of \(\rho^{\star}\theta-\lambda D\) then depends in an \(\mathbb{R}\)-PL way on \(\lambda\), and this implies that the Green's function \(\varphi_{\Sigma}\) is \(\mathbb{R}\)-PL. On the other hand, \(\varphi_{\Sigma}\) need not be \(\mathbb{Q}\)-PL. In fact, we prove in Theorem 6.10 that \(\varphi_{\Sigma}\) is \(\mathbb{Q}\)-PL iff the quantity
\[\operatorname{T}(\Sigma):=\sup\varphi_{\Sigma}\]
is a rational number. To prove Theorem A (ii), it therefore suffices to find a divisorial valuation \(v\) on a surface \(Y\) such that \(\operatorname{T}(v)\) is irrational, and such examples can be found with \(Y\) an abelian surface, and \(v=\operatorname{ord}_{E}\) for a prime divisor \(E\) on \(Y\).
Using a geometric construction by Cutkosky [10], we also give an example of a divisorial valuation \(v\) on \(Y=\mathbb{P}^{3}\) such that \(\varphi_{v}\) is \(\mathbb{R}\)-PL but not \(\mathbb{Q}\)-PL for \(\theta=c_{1}(\mathcal{O}(1))\), see Example 7.4. Being \(\mathbb{R}\)-PL, this example is invariant under retraction. As explained above, in order to prove Theorem B, it suffices to find \(\Sigma\) such that \(Z_{Y}(\varphi_{\Sigma})\) is a Zariski dense subset of \(Y\). Using the notation above, we show that the center contains the image in \(Y\) of the diminished base locus of the pseudoeffective class \(\rho^{\star}\theta-\lambda_{\rm psef}[D]\) on \(Y^{\prime}\). We can then use a construction of Lesieutre [16], who showed that if \(Y=\mathbb{P}^{3}\), \(\theta=c_{1}(\mathcal{O}(1))\), and \(\rho\colon Y^{\prime}\to Y\) is the blowup at nine very general points, then there exists an effective \(\mathbb{R}\)-divisor \(D\) on \(Y^{\prime}\) supported on the exceptional locus of \(\rho\), such that the diminished base locus of \(\rho^{\star}\theta-D\) is Zariski dense. If we write \(D=\sum_{i=1}^{9}c_{i}E_{i}\), then we can take \(\Sigma=\{c_{i}^{-1}\operatorname{ord}_{E_{i}}\}\).
### Structure of the paper
The article is organized as follows. In §1 we recall some facts from birational geometry and pluripotential theory over a trivially valued field. This is used in §2 to relate \(\theta\)-psh functions and suitable families of \(b\)-divisors, after which we study the center of a \(\theta\)-psh function in §3. In §4 we define the extremal function \(V_{\theta}\in\operatorname{PSH}(\theta)\) associated to a psef class: by evaluating this function at divisorial valuations we recover the minimal vanishing order of \(\theta\) along a valuation. The extremal function is also closely related to various notions of Zariski decomposition of a psef class, as explored in §5. After all this,
we are ready to study Green's functions in §6 and §7. Finally, in §8 and §9 we turn to the discretely valued case and prove Theorems A and B.
### Notation and conventions
A _variety_ over a field \(F\) is a geometrically integral \(F\)-scheme of finite type. We use the abbreviations _usc_ for 'upper semicontinuous', _lsc_ for 'lower semicontinuous', and _iff_ for 'if and only if'.
### Acknowledgement
The authors would like to thank José Burgos, Antoine Ducros, Gérard Freixas, Walter Gubler, John Lesieutre and Milan Perera for useful exchanges related to this work. This article is dedicated to the memory of Jean-Pierre Demailly, whose extraordinary contributions to complex analytic and algebraic geometry have had a tremendous influence on our own research.
The second author was partially supported by NSF grants DMS-1900025 and DMS-2154380.
## 1. Preliminaries
Throughout the paper (except in §8), \(X\) denotes a smooth projective variety over an algebraically closed field \(k\) of characteristic \(0\).
### Positivity of numerical classes and base loci
We denote by \(\mathrm{N}^{1}(X)\) the (finite dimensional) vector space of numerical equivalence classes \(\theta=[D]\) of \(\mathbb{R}\)-divisors \(D\) on \(X\). It contains the following convex cones, corresponding to various positivity notions for numerical classes:
* the _pseudoeffective cone_\(\mathrm{Psef}(X)\), defined as the closed convex cone generated by all classes of effective divisors;
* the _big cone_\(\mathrm{Big}(X)\), the interior of \(\mathrm{Psef}(X)\);
* the _nef cone_\(\mathrm{Nef}(X)\), equal to the closed convex cone generated by all classes of basepoint free line bundles;
* the _ample cone_\(\mathrm{Amp}(X)\), the interior of \(\mathrm{Nef}(X)\);
* the _movable cone_\(\mathrm{Mov}(X)\), the closed convex cone generated by all classes of line bundles whose base locus has codimension at least \(2\).
These cones satisfy
\[\mathrm{Nef}(X)\subset\mathrm{Mov}(X)\subset\mathrm{Psef}(X),\]
where the first (resp. second) inclusion is an equality when \(\dim X\leq 2\) (resp. \(\dim X\leq 1\)), but is in general strict for \(\dim X>2\) (resp. \(\dim X>1\)). We will make use of the following simple property:
**Lemma 1.1**.: _If \(\theta\in\mathrm{N}^{1}(X)\) is movable, then \(\theta|_{E}\in\mathrm{N}^{1}(E)\) is pseudoeffective for any prime divisor \(E\subset X\)._
The _asymptotic base locus_\(\mathbb{B}(D)\subset X\) of a \(\mathbb{Q}\)-divisor \(D\) is defined as the base locus of \(\mathcal{O}_{X}(mD)\) for any \(m\in\mathbb{Z}_{>0}\) sufficiently divisible. The _diminished_ (or _restricted_) _base locus_ and the _augmented base locus_ of an \(\mathbb{R}\)-divisor \(D\) are respectively defined as
\[\mathbb{B}_{-}(D):=\bigcup_{A}\mathbb{B}(D+A)\quad\text{and}\quad\mathbb{B}_{ +}(D):=\bigcap_{A}\mathbb{B}(D-A),\]
where \(A\) ranges over all ample \(\mathbb{R}\)-divisors such that \(D-A\) (resp. \(D+A\)) is a \(\mathbb{Q}\)-divisor. Since ampleness is a numerical property, these loci only depend on the numerical class \(\theta=[D]\in\mathrm{N}^{1}(X)\), and will be denoted by \(\mathbb{B}_{-}(\theta)\subset\mathbb{B}_{+}(\theta)\).
The augmented base locus \(\mathbb{B}_{+}(\theta)\) is Zariski closed, and satisfies
\[\theta\in\operatorname{Big}(X)\Leftrightarrow\mathbb{B}_{+}(\theta)\neq X\quad \text{and}\quad\theta\in\operatorname{Amp}(X)\Leftrightarrow\mathbb{B}_{+}( \theta)=\emptyset.\]
The diminished base locus satisfies
\[\mathbb{B}_{-}(\theta)=\bigcup_{\varepsilon\in\mathbb{Q}_{>0}}\mathbb{B}_{+}( \theta+\varepsilon\omega) \tag{1.1}\]
for any \(\omega\in\operatorname{Amp}(X)\). It is thus an at most countable union of subvarieties, which is not Zariski closed in general, and can even be Zariski dense (see [14]). We further have
\[\theta\in\operatorname{Psef}(X) \Leftrightarrow\mathbb{B}_{-}(\theta)\neq X;\] \[\theta\in\operatorname{Nef}(X) \Leftrightarrow\mathbb{B}_{-}(\theta)=\emptyset;\] \[\theta\in\operatorname{Mov}(X) \Leftrightarrow\operatorname{codim}\mathbb{B}_{-}(\theta)\geq 2.\]
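To fix ideas, here is a standard example (ours, not from the text): let \(\pi\colon Y\to\mathbb{P}^{2}\) be the blowup of a point, with exceptional divisor \(E\), and \(\theta=[\pi^{\star}H]\) for a line \(H\). Then \(\theta\) is nef and big, and
\[\mathbb{B}_{-}(\theta)=\emptyset,\qquad\mathbb{B}_{+}(\theta)=E,\]
the first equality by nefness, the second by Nakamaye's description of \(\mathbb{B}_{+}\) of a big and nef class as its null locus, which here is the curve \(E\), on which \(\theta\) has degree \(0\).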
### The Berkovich space
We use [11, §1] as a reference. The _Berkovich space_\(X^{\operatorname{an}}\) is defined as the Berkovich analytification of \(X\) with respect to the trivial absolute value on \(k\) [1]. We view it as a compact (Hausdorff) topological space, whose points are _semivaluations_, i.e. valuations \(v\colon k(Y)^{\times}\to\mathbb{R}\) for some subvariety \(Y\subset X\). We denote by \(v_{Y,\operatorname{triv}}\in X^{\operatorname{an}}\) the trivial valuation on \(k(Y)\), and set \(v_{\operatorname{triv}}=v_{X,\operatorname{triv}}\). These trivial semivaluations are precisely the fixed points of the scaling action \(\mathbb{R}_{>0}\times X^{\operatorname{an}}\to X^{\operatorname{an}}\) given by \((t,v)\mapsto tv\).
We denote by \(X^{\operatorname{div}}\subset X^{\operatorname{an}}\) the (dense) subset of _divisorial valuations_, of the form \(v=t\operatorname{ord}_{E}\) with \(t\in\mathbb{Q}_{\geq 0}\) and \(E\) a prime divisor on a birational model \(\pi\colon Y\to X\) (the case \(t=0\) corresponding to \(v=v_{\operatorname{triv}}\), by convention). In the present work, where \(\mathbb{R}\)-divisors arise naturally, it will be convenient to allow \(t\) to be real, in which case we will say that \(v=t\operatorname{ord}_{E}\) is a _real divisorial valuation_. We denote by
\[X^{\operatorname{div}}_{\mathbb{R}}=\mathbb{R}_{>0}X^{\operatorname{div}}\]
the set of real divisorial valuations. It is contained in the space \(X^{\operatorname{lin}}\subset X^{\operatorname{an}}\) of _valuations of linear growth_ (see [1] and [11, §1.5]).
### Rational and real piecewise linear functions
In [11], various classes of \(\mathbb{Q}\)-PL functions on \(X^{\operatorname{an}}\) were introduced, and the purpose of what follows is to discuss their \(\mathbb{R}\)-PL counterparts. First, any ideal \(\mathfrak{b}\subset\mathcal{O}_{X}\) defines a homogeneous function
\[\log|\mathfrak{b}|\colon X^{\operatorname{an}}\to[-\infty,0]\]
given by \(\log|\mathfrak{b}|(v):=-v(\mathfrak{b})\) for \(v\in X^{\operatorname{an}}\).
Second, any _flag ideal_\(\mathfrak{a}\), i.e. a coherent fractional ideal sheaf on \(X\times\mathbb{A}^{1}\) invariant under the \(\mathbb{G}_{m}\)-action on \(\mathbb{A}^{1}\) and trivial on \(X\times\mathbb{G}_{m}\), defines a continuous function
\[\varphi_{\mathfrak{a}}\colon X^{\operatorname{an}}\to\mathbb{R}\]
given by \(\varphi_{\mathfrak{a}}(v)=-\sigma(v)(\mathfrak{a})\), where \(\sigma\colon X^{\operatorname{an}}\to(X\times\mathbb{A}^{1})^{\operatorname{an}}\) is the _Gauss extension_. Concretely, we can write \(\mathfrak{a}=\sum_{\lambda\in\mathbb{Z}}\mathfrak{a}_{\lambda}\varpi^{-\lambda}\) for a decreasing sequence of ideals \(\mathfrak{a}_{\lambda}\subset\mathcal{O}_{X}\) such that \(\mathfrak{a}_{\lambda}=\mathcal{O}_{X}\) for \(\lambda\ll 0\) and \(\mathfrak{a}_{\lambda}=0\) for \(\lambda\gg 0\), and then \(\varphi_{\mathfrak{a}}=\max_{\lambda}(\log|\mathfrak{a}_{\lambda}|+\lambda)\).
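For instance (our own toy example): given a nonzero ideal \(\mathfrak{b}\subset\mathcal{O}_{X}\), the flag ideal \(\mathfrak{a}=\mathcal{O}_{X\times\mathbb{A}^{1}}+\mathfrak{b}\varpi^{-1}\) has \(\mathfrak{a}_{\lambda}=\mathcal{O}_{X}\) for \(\lambda\leq 0\), \(\mathfrak{a}_{1}=\mathfrak{b}\), and \(\mathfrak{a}_{\lambda}=0\) for \(\lambda\geq 2\), so that
\[\varphi_{\mathfrak{a}}=\max_{\lambda}\left(\log|\mathfrak{a}_{\lambda}|+\lambda\right)=\max\{0,\,\log|\mathfrak{b}|+1\}.\]
We will return to this example below.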
We then denote by:
* \(\operatorname{PL}^{+}_{\hom}(X)\) the set of \(\mathbb{Q}_{+}\)-linear combinations of functions of the form \(\log|\mathfrak{b}|\) with \(\mathfrak{b}\subset\mathcal{O}_{X}\) a nonzero ideal;
* \(\operatorname{PL}^{+}(X)\) the set of functions \(\varphi\in\operatorname{C}^{0}(X^{\operatorname{an}})\) of the form \(\varphi=\max_{i}\{\psi_{i}+\lambda_{i}\}\) for a finite family \(\psi_{i}\in\operatorname{PL}^{+}_{\hom}(X)\) and \(\lambda_{i}\in\mathbb{Q}\); equivalently, functions of the form \(\varphi=\frac{1}{m}\varphi_{\mathfrak{a}}\) for a flag ideal \(\mathfrak{a}\) and \(m\in\mathbb{Z}_{>0}\);
* \(\operatorname{PL}(X)\) the set of differences of functions in \(\operatorname{PL}^{+}(X)\), called _rational piecewise linear functions_ (\(\mathbb{Q}\)_-PL functions_ for short).
The set \(\operatorname{PL}^{+}_{\hom}(X)\) is stable under addition and \(\max\), while \(\operatorname{PL}(X)\) is a \(\mathbb{Q}\)-vector space, stable under \(\max\), and is dense in \(\mathrm{C}^{0}(X^{\operatorname{an}})\).
As in [1, §3.1], we denote by \(\operatorname{PL}(X)_{\mathbb{R}}\) the \(\mathbb{R}\)-vector space generated by \(\operatorname{PL}(X)\). It is not stable under \(\max\) anymore; to remedy this, we further introduce:
* the set \(\operatorname{PL}^{+}(X)_{\mathbb{R}}\) of \(\mathbb{R}_{+}\)-linear combinations of functions in \(\operatorname{PL}^{+}(X)\);
* the set \(\mathbb{RPL}^{+}(X)\) of finite maxima of functions in \(\operatorname{PL}^{+}(X)_{\mathbb{R}}\);
* the set \(\mathbb{RPL}(X)\) of differences of functions in \(\mathbb{RPL}^{+}(X)\); we call its elements _real piecewise linear functions_ (\(\mathbb{R}\)_-PL functions_ for short).
As one immediately sees, the sets \(\operatorname{PL}^{+}(X)_{\mathbb{R}}\) and \(\mathbb{RPL}^{+}(X)\) are convex cones in \(\operatorname{C}^{0}(X^{\operatorname{an}})\), and \(\mathbb{RPL}(X)\) is thus an \(\mathbb{R}\)-vector space. Further, \(\mathbb{RPL}^{+}(X)\), and hence \(\mathbb{RPL}(X)\), are clearly stable under \(\max\). Thus \(\mathbb{RPL}(X)\) is the smallest \(\mathbb{R}\)-linear subspace of \(\operatorname{C}^{0}(X^{\operatorname{an}})\) that is stable under \(\max\) and contains \(\operatorname{PL}(X)\).
Finally, we introduce the convex cone \(\operatorname{PL}^{+}_{\hom}(X)_{\mathbb{R}}\) of \(\mathbb{R}_{+}\)-linear combinations of functions in \(\operatorname{PL}^{+}_{\hom}(X)\) (again, this cone is not stable under \(\max\)). We then have:
**Lemma 1.2**.: _A function \(\varphi\in\operatorname{C}^{0}(X^{\operatorname{an}})\) lies in \(\mathbb{RPL}^{+}(X)\) iff \(\varphi=\max_{i}\{\psi_{i}+\lambda_{i}\}\) for a finite family \(\psi_{i}\in\operatorname{PL}^{+}_{\hom}(X)_{\mathbb{R}}\) and \(\lambda_{i}\in\mathbb{R}\)._
Proof.: Since any function in \(\mathbb{RPL}^{+}(X)\) is a finite \(\max\) of functions \(\varphi\in\operatorname{PL}^{+}(X)_{\mathbb{R}}\), it suffices to show that \(\varphi\) is of the desired form. Write \(\varphi=\sum_{i=1}^{r}t_{i}\varphi_{i}\) with \(t_{i}\in\mathbb{R}_{>0}\) and \(\varphi_{i}\in\operatorname{PL}^{+}(X)\), i.e. \(\varphi_{i}=\max_{j}\{\psi_{ij}+\lambda_{ij}\}\) with \(\psi_{ij}\in\operatorname{PL}^{+}_{\hom}(X)\) and \(\lambda_{ij}\in\mathbb{Q}\). Then
\[\varphi=\max_{j_{1},\ldots,j_{r}}\sum_{i=1}^{r}t_{i}\left(\psi_{ij_{i}}+ \lambda_{ij_{i}}\right).\]
Since each \(\sum_{i}t_{i}\psi_{ij_{i}}\) lies in \(\operatorname{PL}^{+}_{\hom}(X)_{\mathbb{R}}\), this shows that \(\varphi\) is of the desired form.
Conversely, assume \(\varphi=\max_{i}\{\psi_{i}+\lambda_{i}\}\) for a finite family \(\psi_{i}\in\operatorname{PL}^{+}_{\hom}(X)_{\mathbb{R}}\) and \(\lambda_{i}\in\mathbb{R}\). For each \(i\), write \(\psi_{i}=\sum_{j}t_{ij}\psi_{ij}\) with \(t_{ij}>0\) and \(\psi_{ij}\in\operatorname{PL}^{+}_{\hom}(X)\); note that \(\psi_{ij}\leq 0\). Given \(v\in X^{\operatorname{an}}\), pick \(i\) such that \(\varphi(v)=\psi_{i}(v)+\lambda_{i}\); since \(\varphi\) is bounded, we can find \(c\in\mathbb{Q}\), independent of \(v\), such that \(\psi_{ij}(v)\geq c\) for all \(j\). This shows that \(\varphi=\max_{i}\varphi_{i}\) with \(\varphi_{i}:=\sum_{j}t_{ij}\max\{\psi_{ij},c\}+\lambda_{i}\). For all \(i,j\), \(\max\{\psi_{ij},c\}\) lies in \(\operatorname{PL}^{+}(X)\), thus \(\varphi_{i}\in\operatorname{PL}^{+}(X)_{\mathbb{R}}\), and hence \(\varphi\in\mathbb{RPL}^{+}(X)\).
### Homogeneous functions vs. \(b\)-divisors
We use [1, §1] and [1, §6.4] as references for what follows. Recall that
* a _(real) \(b\)-divisor over \(X\)_ is a collection \(B=(B_{Y})\) of \(\mathbb{R}\)-divisors on all (smooth) birational models \(Y\to X\), compatible under push-forward as cycles, i.e. an element of the \(\mathbb{R}\)-vector space \[\operatorname{Z}^{1}_{\operatorname{b}}(X)_{\mathbb{R}}:=\varprojlim_{Y} \operatorname{Z}^{1}(Y)_{\mathbb{R}};\]
* a \(b\)-divisor \(B=(B_{Y})\) is _effective_ if \(B_{Y}\) is effective for all \(Y\); if \(B,B^{\prime}\) are \(b\)-divisors, then we write \(B\leq B^{\prime}\) iff \(B^{\prime}-B\) is effective;
* a \(b\)-divisor \(B\in\operatorname{Z}^{1}_{\operatorname{b}}(X)_{\mathbb{R}}\) is said to be \(\mathbb{R}\)_-Cartier_ if there exists a model \(Y\), called a _determination_ of \(B\), such that \(B_{Y^{\prime}}\) is the pullback of \(B_{Y}\) for all higher birational models \(Y^{\prime}\); thus the space \(\operatorname{Car}_{\operatorname{b}}(X)_{\mathbb{R}}\) of \(\mathbb{R}\)-Cartier \(b\)-divisors can be identified with \[\varinjlim_{Y}\operatorname{Z}^{1}(Y)_{\mathbb{R}}.\]
**Example 1.3**.: _Any \(\mathbb{R}\)-divisor \(D\) on a model \(Y\to X\) determines an \(\mathbb{R}\)-Cartier \(b\)-divisor \(\overline{D}\in\operatorname{Car}_{\operatorname{b}}(X)_{\mathbb{R}}\), obtained by pulling back \(D\) to all higher models, and any \(\mathbb{R}\)-Cartier \(b\)-divisor is of this form._
For any \(B\in\operatorname{Z}^{1}_{\operatorname{b}}(X)_{\mathbb{R}}\) and \(v\in X^{\operatorname{div}}\), we define \(v(B)\in\mathbb{R}\) as follows: pick a prime divisor \(E\) on a birational model \(Y\to X\) and \(t\in\mathbb{Q}_{\geq 0}\) such that \(v=t\operatorname{ord}_{E}\), and set
\[v(B):=t\operatorname{ord}_{E}(B_{Y}).\]
This is independent of the choices made, and the function \(\psi_{B}\colon X^{\operatorname{div}}\to\mathbb{R}\) defined by
\[\psi_{B}(v):=v(B)\]
is homogeneous (with respect to the scaling action of \(\mathbb{Q}_{>0}\)).
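For instance (our own illustration): if \(B=\overline{D}\) for an \(\mathbb{R}\)-divisor \(D\) on \(X\), and \(v=t\operatorname{ord}_{E}\) with \(E\subset X\) a prime divisor and \(t\in\mathbb{Q}_{>0}\), then taking \(Y=X\) in the recipe above gives
\[\psi_{\overline{D}}(v)=t\operatorname{ord}_{E}(D),\]
i.e. \(t\) times the coefficient of \(E\) in \(D\) (zero if \(E\) does not occur in \(D\)).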
**Definition 1.4**.: _We say that a homogeneous function \(\psi\colon X^{\operatorname{div}}\to\mathbb{R}\) is of divisorial type if \(\psi(\operatorname{ord}_{E})=0\) for all but finitely many prime divisors \(E\subset X\)._
The next result is straightforward:
**Lemma 1.5**.: _The map \(B\mapsto\psi_{B}\) sets up a vector space isomorphism between \(\operatorname{Z}^{1}_{\operatorname{b}}(X)_{\mathbb{R}}\) and the space of homogeneous functions of divisorial type on \(X^{\operatorname{div}}\). Moreover, \(B\in\operatorname{Z}^{1}_{\operatorname{b}}(X)_{\mathbb{R}}\) is effective iff \(\psi_{B}\geq 0\)._
We endow \(\operatorname{Z}^{1}_{\operatorname{b}}(X)_{\mathbb{R}}\) with the topology of pointwise convergence on \(X^{\operatorname{div}}\). If \(\Omega\) is a topological space, then a map \(f\colon\Omega\to\operatorname{Z}^{1}_{\operatorname{b}}(X)_{\mathbb{R}}\) is thus continuous iff \(v\circ f\colon\Omega\to\mathbb{R}\) is continuous for all \(v\in X^{\operatorname{div}}\). We will also say that \(f\colon\Omega\to\operatorname{Z}^{1}_{\operatorname{b}}(X)_{\mathbb{R}}\) is lsc (resp. usc) iff \(v\circ f\colon\Omega\to\mathbb{R}\) is lsc (resp. usc) for all \(v\in X^{\operatorname{div}}\).
If \(\Omega\) is a convex subset of a real vector space, then we say that \(f\colon\Omega\to\operatorname{Z}^{1}_{\operatorname{b}}(X)_{\mathbb{R}}\) is convex if \(v\circ f\) is convex for all \(v\in X^{\operatorname{div}}\). This amounts to \(f((1-t)x_{0}+tx_{1})\leq(1-t)f(x_{0})+tf(x_{1})\) for \(x_{0},x_{1}\in\Omega\), \(0\leq t\leq 1\). We say that \(f\) is concave if \(-f\) is convex.
Finally, if \(\Omega\subset\mathbb{R}\) is an interval, then \(f\colon\Omega\to\operatorname{Z}^{1}_{\operatorname{b}}(X)_{\mathbb{R}}\) is increasing (resp. decreasing) if \(v\circ f\) is increasing (resp. decreasing) for each \(v\in X^{\operatorname{div}}\).
Next we will generalize [1, Theorem 6.32] to real coefficients.
**Definition 1.6**.: _We denote by \(\operatorname{Car}^{+}_{\operatorname{b}}(X)_{\mathbb{R}}\) the convex cone of divisors \(B\in\operatorname{Car}_{\operatorname{b}}(X)_{\mathbb{R}}\) that are antieffective and relatively semiample over \(X\)._
**Proposition 1.7**.: _The map \(B\mapsto\psi_{B}\) induces an isomorphism between \(\operatorname{Car}_{\operatorname{b}}(X)_{\mathbb{R}}\) and the \(\mathbb{R}\)-vector space generated by (the restrictions to \(X^{\operatorname{div}}\) of) all functions \(\log|\mathfrak{b}|\) with \(\mathfrak{b}\subset\mathcal{O}_{X}\) a nonzero ideal. This isomorphism restricts to a bijection_
\[\operatorname{Car}^{+}_{\operatorname{b}}(X)_{\mathbb{R}}\stackrel{{ \sim}}{{\to}}\operatorname{PL}^{+}_{\hom}(X)_{\mathbb{R}}.\]
Proof.: The first point is a consequence of [1, Theorem 6.32], which also yields a bijection
\[\operatorname{Car}^{+}_{\operatorname{b}}(X)_{\mathbb{Q}}\stackrel{{\sim}}{{\to}}\operatorname{PL}^{+}_{\hom}(X),\]
where \(\operatorname{Car}^{+}_{\operatorname{b}}(X)_{\mathbb{Q}}\) denotes the cone of antieffective, relatively semiample divisors in \(\operatorname{Car}_{\operatorname{b}}(X)_{\mathbb{Q}}\). Since the right-hand side generates the convex cone \(\operatorname{PL}^{+}_{\hom}(X)_{\mathbb{R}}\), it suffices to show that the convex cone of antieffective and relatively semiample divisors in \(\operatorname{Car}_{\operatorname{b}}(X)_{\mathbb{R}}\) is generated by
antieffective and relatively semiample divisors in \(\operatorname{Car}_{\mathrm{b}}(X)_{\mathbb{Q}}\). By definition of a relatively semiample \(\mathbb{R}\)-Cartier \(b\)-divisor, we have \(B=\sum_{i}t_{i}B_{i}\) with \(t_{i}>0\) and \(B_{i}\in\operatorname{Car}_{\mathrm{b}}(X)_{\mathbb{Q}}\) relatively semiample. By the Negativity Lemma (see [1, Proposition 2.12]), \(B^{\prime}_{i}:=B_{i}-\overline{B_{i,X}}\) is antieffective, and still relatively semiample. Denoting by \(B_{X}=-\sum_{\alpha}c_{\alpha}E_{\alpha}\) the irreducible decomposition of the antieffective \(\mathbb{R}\)-divisor \(B_{X}\), we infer
\[B=\sum_{i}t_{i}B^{\prime}_{i}+\sum_{\alpha}c_{\alpha}(-\overline{E_{\alpha}})\]
where \(-\overline{E_{\alpha}}\in\operatorname{Car}_{\mathrm{b}}(X)_{\mathbb{Q}}\) is antieffective and relatively semiample. The result follows.
### Numerical \(b\)-divisor classes
The space of _numerical \(b\)-divisor classes_ is defined as
\[\operatorname{N}_{\mathrm{b}}^{1}(X):=\varprojlim_{Y}\operatorname{N}^{1}(Y),\]
equipped with the inverse limit topology (each finite dimensional \(\mathbb{R}\)-vector space \(\operatorname{N}^{1}(Y)\) being endowed with its canonical topology).
Any \(b\)-divisor defines a numerical \(b\)-divisor class. This yields a natural quotient map
\[\operatorname{Z}_{\mathrm{b}}^{1}(X)_{\mathbb{R}}\to\operatorname{N}_{\mathrm{b}}^{1}(X),\quad B\mapsto[B].\]
One should be wary of the fact that this map is _not_ continuous with respect to the topology of pointwise convergence on \(\operatorname{Z}_{\mathrm{b}}^{1}(X)_{\mathbb{R}}\). However, we observe:
**Lemma 1.8**.: _For any finite set \(\mathcal{E}\) of prime divisors on \(X\), the quotient map \(B\mapsto[B]\) is continuous on the subspace \(\operatorname{Z}_{\mathrm{b}}^{1}(X)_{\mathbb{R},\mathcal{E}}\) of \(b\)-divisors \(B\) such that \(B_{X}\) is supported by \(\mathcal{E}\)._
Proof.: For any model \(\pi\colon Y\to X\), each \(B_{Y}\) with \(B\in\operatorname{Z}_{\mathrm{b}}^{1}(X)_{\mathbb{R},\mathcal{E}}\) lives in the finite dimensional vector space generated by the strict transforms of the elements of \(\mathcal{E}\) and the \(\pi\)-exceptional prime divisors. Thus \(B\mapsto[B_{Y}]\in\operatorname{N}^{1}(Y)\) is continuous on \(\operatorname{Z}_{\mathrm{b}}^{1}(X)_{\mathbb{R},\mathcal{E}}\), and the result follows.
The set of numerical classes of \(\mathbb{R}\)-Cartier \(b\)-divisors can be identified with the direct limit
\[\varinjlim_{Y}\operatorname{N}^{1}(Y)\subset\operatorname{N}_{\mathrm{b}}^{1}(X).\]
In particular, any numerical class \(\theta\in\operatorname{N}^{1}(X)\) defines a numerical \(b\)-divisor class \(\overline{\theta}=(\theta_{Y})_{Y}\in\operatorname{N}_{\mathrm{b}}^{1}(X)\), where \(\theta_{Y}\) is the pullback of \(\theta\).
**Definition 1.9**.: _The cone of nef \(b\)-divisor classes_
\[\operatorname{Nef}_{\mathrm{b}}(X)\subset\operatorname{N}_{\mathrm{b}}^{1}(X)\]
_is defined as the closed convex cone generated by all classes of nef \(\mathbb{R}\)-Cartier \(b\)-divisors._
The following characterization is essentially formal (see [1, Lemma 2.10]).
**Lemma 1.10**.: _A \(b\)-divisor \(B\in\operatorname{Z}_{\mathrm{b}}^{1}(X)_{\mathbb{R}}\) is nef iff \(B_{Y}\) is movable for all birational models \(Y\to X\). In other words, \(\operatorname{Nef}_{\mathrm{b}}(X)=\varprojlim_{Y}\operatorname{Mov}(Y)\)._
We finally record the following version of the Negativity Lemma (see [1, Proposition 2.12]).
**Lemma 1.11**.: _If \(B\in\operatorname{Z}_{\mathrm{b}}^{1}(X)_{\mathbb{R}}\) is nef, then \(B\leq\overline{B_{Y}}\) for any birational model \(Y\to X\)._
### Plurisubharmonic functions
We use [11, §4] as a reference. Given a \(\mathbb{Q}\)-line bundle \(L\in\operatorname{Pic}(X)_{\mathbb{Q}}\) and a numerical class \(\theta\in\operatorname{N}^{1}(X)\), we denote by
* \(\mathcal{H}^{\operatorname{gf}}(L)=\mathcal{H}^{\operatorname{gf}}_{\mathbb{Q}}(L)\) the set of _generically finite Fubini-Study_ functions for \(L\), i.e. functions \(\varphi\colon X^{\operatorname{an}}\to\mathbb{R}\cup\{-\infty\}\) of the form \[\varphi=m^{-1}\max_{i}\{\log|s_{i}|+\lambda_{i}\},\] where \(m\in\mathbb{Z}_{>0}\) is sufficiently divisible, \((s_{i})\) is a finite set of nonzero sections of \(mL\), and \(\lambda_{i}\in\mathbb{Q}\);
* \(\mathcal{H}_{\operatorname{hom}}(L)\subset\mathcal{H}^{\operatorname{gf}}(L)\) the set of _homogeneous Fubini-Study functions_, for which the \(\lambda_{i}\) can be chosen to be \(0\);
* \(\operatorname{PSH}(\theta)\) the set of \(\theta\)_-psh_ functions \(\varphi\colon X^{\operatorname{an}}\to\mathbb{R}\cup\{-\infty\}\), \(\varphi\not\equiv-\infty\), obtained as limits of decreasing nets \((\varphi_{i})\) of generically finite Fubini-Study functions \(\varphi_{i}\) for \(\mathbb{Q}\)-line bundles \(L_{i}\) such that \(c_{1}(L_{i})\to\theta\);
* \(\operatorname{PSH}_{\operatorname{hom}}(\theta)\subset\operatorname{PSH}(\theta)\) the subset of homogeneous \(\theta\)-psh functions, that is, functions \(\varphi\in\operatorname{PSH}(\theta)\) such that \(\varphi(tv)=t\varphi(v)\) for \(v\in X^{\operatorname{an}}\) and \(t\in\mathbb{R}_{>0}\).
All functions in \(\operatorname{PSH}(\theta)\) are finite valued on the set \(X^{\operatorname{div}}\subset X^{\operatorname{an}}\) of divisorial valuations, and we endow \(\operatorname{PSH}(\theta)\) with the topology of pointwise convergence on \(X^{\operatorname{div}}\). For all \(\varphi,\psi\in\operatorname{PSH}(\theta)\), we further have
\[\varphi\leq\psi\text{ on }X^{\operatorname{div}}\Longleftrightarrow\varphi\leq\psi \text{ on }X^{\operatorname{an}}.\]
In particular, the topology of \(\operatorname{PSH}(\theta)\) is Hausdorff. The set of \(\theta\)-psh functions is preserved by the action of \(\mathbb{R}_{>0}\) given by \((t,\varphi)\mapsto t\cdot\varphi\), where \((t\cdot\varphi)(v):=t\varphi(t^{-1}v)\).
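For instance (our own remark, implicitly used in the proof of Proposition 4.1 below): a function \(\varphi\in\operatorname{PSH}(\theta)\) is fixed by this action iff it is homogeneous, since
\[(t\cdot\varphi)(v)=t\,\varphi(t^{-1}v)=\varphi(v)\ \text{for all}\ t>0,\,v\in X^{\operatorname{an}}\Longleftrightarrow\varphi(sv)=s\,\varphi(v)\ \text{for all}\ s>0,\,v\in X^{\operatorname{an}}.\]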
**Lemma 1.12**.: _For any \(\theta\in\operatorname{N}^{1}(X)\) we have:_
1. \(\operatorname{PSH}(\theta)\neq\emptyset\Rightarrow\theta\in\operatorname{Psef}(X)\)_;_
2. \(0\in\operatorname{PSH}(\theta)\Leftrightarrow\theta\in\operatorname{Nef}(X)\)_;_
3. \(\theta\in\operatorname{Big}(X)\Rightarrow\operatorname{PSH}(\theta)\neq\emptyset\)_._
As we shall see in Proposition 4.1, (i) is in fact an equivalence, rendering (iii) redundant.
Proof.: For (i) and (ii) see [11, (4.1),(4.3)]. If \(\theta\) is big, we find a big \(\mathbb{Q}\)-line bundle \(L\) such that \(\theta-c_{1}(L)\) is nef. Then \(\operatorname{PSH}(\theta)\supset\operatorname{PSH}(L)\supset\mathcal{H}^{ \operatorname{gf}}(L)\neq\emptyset\), which proves (iii).
**Example 1.13**.: _For any effective \(\mathbb{R}\)-divisor \(D\), \(\psi_{D}:=\psi_{\overline{D}}\) satisfies \(-\psi_{D}\in\operatorname{PSH}_{\operatorname{hom}}([D])\)._
Our assumption that \(X\) is smooth and \(k\) is of characteristic zero implies that the _envelope property_ holds for any class \(\theta\in\operatorname{N}^{1}(X)\), see [11, Theorem A]. This means that if \((\varphi_{\alpha})_{\alpha}\) is any family in \(\operatorname{PSH}(\theta)\) that is uniformly bounded above, and \(\varphi:=\sup_{\alpha}\varphi_{\alpha}\), then the usc regularization \(\varphi^{\star}\) is \(\theta\)-psh.
The envelope property has many favorable consequences, as discussed in [11, §5]. For example, for any birational model \(\pi\colon Y\to X\) and any \(\theta\in\operatorname{N}^{1}(X)\) we have
\[\operatorname{PSH}(\pi^{\star}\theta)=\pi^{\star}\operatorname{PSH}(\theta); \tag{1.2}\]
see [11, Lemma 5.13].
### The homogeneous decomposition of a psh function
We refer to [11, §6.3] for details on what follows. Fix \(\theta\in\mathrm{N}^{1}(X)\). For any \(\varphi\in\mathrm{PSH}(\theta)\) and \(\lambda\leq\sup\varphi\), setting
\[\widehat{\varphi}^{\lambda}:=\inf_{t>0}\{t\cdot\varphi-t\lambda\} \tag{1.3}\]
defines a homogeneous \(\theta\)-psh function \(\widehat{\varphi}^{\lambda}\in\mathrm{PSH}_{\mathrm{hom}}(\theta)\). The family \((\widehat{\varphi}^{\lambda})_{\lambda\leq\sup\varphi}\) is further concave, decreasing, and continuous for the topology of \(\mathrm{PSH}_{\mathrm{hom}}(\theta)\) (i.e. pointwise convergence on \(X^{\mathrm{div}}\)), and it gives rise to the _homogeneous decomposition_
\[\varphi=\sup_{\lambda\leq\sup\varphi}\{\widehat{\varphi}^{\lambda}+\lambda\}. \tag{1.4}\]
For \(\lambda=\sup\varphi=\varphi(v_{\mathrm{triv}})\), the function \(\widehat{\varphi}^{\max}:=\widehat{\varphi}^{\sup\varphi}\) computes the directional derivatives of \(\varphi\) at \(v_{\mathrm{triv}}\), i.e.
\[\widehat{\varphi}^{\max}(v)=\lim_{t\to 0_{+}}\frac{\varphi(tv)-\varphi(v_{ \mathrm{triv}})}{t} \tag{1.5}\]
for \(v\in X^{\mathrm{an}}\). The limit exists as the function \(t\mapsto\varphi(tv)\) on \((0,\infty)\) is convex and decreasing, see [11, Proposition 4.12].
**Example 1.14**.: _Assume \(\varphi=\varphi_{\mathfrak{a}}\) for a flag ideal \(\mathfrak{a}=\sum_{\lambda\in\mathbb{Z}}\mathfrak{a}_{\lambda}\varpi^{-\lambda}\) on \(X\times\mathbb{A}^{1}\). Then \(\widehat{\varphi}^{\max}=\log|\mathfrak{a}_{\lambda_{\max}}|\) where \(\lambda_{\max}:=\max\{\lambda\in\mathbb{Z}\mid\mathfrak{a}_{\lambda}\neq 0\}\) (see Example 6.28 in [11])._
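To make (1.5) concrete, consider the toy flag ideal \(\mathfrak{a}=\mathcal{O}_{X\times\mathbb{A}^{1}}+\mathfrak{b}\varpi^{-1}\) from our illustration in §1.3, for which \(\varphi_{\mathfrak{a}}=\max\{0,\log|\mathfrak{b}|+1\}\). Here \(\sup\varphi_{\mathfrak{a}}=\varphi_{\mathfrak{a}}(v_{\operatorname{triv}})=1\), and for any \(v\) with \(v(\mathfrak{b})<\infty\),
\[\widehat{\varphi_{\mathfrak{a}}}^{\max}(v)=\lim_{t\to 0_{+}}\frac{\max\{0,\,1-t\,v(\mathfrak{b})\}-1}{t}=-v(\mathfrak{b})=\log|\mathfrak{b}|(v),\]
in agreement with Example 1.14, since here \(\lambda_{\max}=1\) and \(\mathfrak{a}_{\lambda_{\max}}=\mathfrak{b}\).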
## 2. Psh functions and families of \(b\)-divisors
We work with a fixed numerical class \(\theta\in\mathrm{N}^{1}(X)\).
### Homogeneous psh functions and \(b\)-divisors
Recall that a function \(\psi\in\mathrm{PSH}_{\mathrm{hom}}(\theta)\) is uniquely determined by its values on \(X^{\mathrm{div}}\). We say that \(\psi\) is of divisorial type if its restriction to \(X^{\mathrm{div}}\) is of divisorial type, that is, \(\psi(\mathrm{ord}_{E})=0\) for all but finitely many prime divisors \(E\subset X\).
Slightly generalizing [11, Theorem 6.40], we show:
**Proposition 2.1**.: _The map \(B\mapsto\psi_{B}\) in §1.4 sets up a 1-1 correspondence between:_
* _the set of_ \(b\)_-divisors_ \(B\in\mathrm{Z}^{1}_{\mathrm{b}}(X)_{\mathbb{R}}\) _such that_ \(B\leq 0\) _and_ \(\overline{\theta}+[B]\in\mathrm{N}^{1}_{\mathrm{b}}(X)\) _is nef;_
* _the set of_ \(\theta\)_-psh homogeneous functions_ \(\psi\in\mathrm{PSH}_{\mathrm{hom}}(\theta)\) _of divisorial type._
Proof.: Pick \(B\) as in (i). On the one hand, \(\psi_{\overline{B_{X}}}\in\mathrm{PSH}_{\mathrm{hom}}(-B_{X})\), see Example 1.13. On the other hand, since \(\overline{\theta}+[B]=\overline{(\theta+[B_{X}])}+([B]-\overline{[B_{X}]})\) is nef, it follows from [11, Theorem 6.40] that \(\psi_{B-\overline{B_{X}}}=\psi_{B}-\psi_{\overline{B_{X}}}\) lies in \(\mathrm{PSH}_{\mathrm{hom}}(\theta+B_{X})\). Thus
\[\psi_{B}\in\mathrm{PSH}(\theta+B_{X})+\mathrm{PSH}(-B_{X})\subset\mathrm{PSH }(\theta).\]
Conversely, pick \(\psi\) as in (ii), so that \(\psi=\psi_{B}\) with \(0\geq B\in\mathrm{Z}^{1}_{\mathrm{b}}(X)_{\mathbb{R}}\). By [11, Corollary 6.17], we can write \(\psi\) as the pointwise limit of a decreasing net \((\psi_{i})\) such that \(\psi_{i}\in\mathcal{H}_{\mathrm{hom}}(L_{i})\) with \(L_{i}\in\mathrm{Pic}(X)_{\mathbb{Q}}\) and \(\lim_{i}c_{1}(L_{i})=\theta\). Then \(\psi_{i}=\psi_{B_{i}}\) for a Cartier \(b\)-divisor \(0\geq B_{i}\in\mathrm{Car}_{\mathrm{b}}(X)_{\mathbb{Q}}\) such that \(\overline{L_{i}}+B_{i}\) is semiample (see [11, Lemma 6.34]), and hence \(\overline{c_{1}(L_{i})}+[B_{i}]\in\mathrm{N}^{1}_{\mathrm{b}}(X)\) is nef. Further, \(B_{i}\searrow B\) in \(\mathrm{Z}^{1}_{\mathrm{b}}(X)_{\mathbb{R}}\), and hence \([B_{i}]\to[B]\) in \(\mathrm{N}^{1}_{\mathrm{b}}(X)\) (see Lemma 1.8). Since \(\overline{c_{1}(L_{i})}+[B_{i}]\) is nef for all \(i\), we conclude, as desired, that \(\overline{\theta}+[B]\) is nef.
### Rees valuations
In order to formulate a version of Proposition 2.1 for general \(\theta\)-psh functions, the following notion will be useful.
**Definition 2.2**.: _Given any effective \(\mathbb{R}\)-divisor \(D\) on \(X\), with irreducible decomposition \(D=\sum_{\alpha}c_{\alpha}E_{\alpha}\), we denote by \(\Gamma_{D}\subset X_{\mathbb{R}}^{\operatorname{div}}\) the set of Rees valuations of \(D\), defined as the real divisorial valuations \(v_{\alpha}:=c_{\alpha}^{-1}\operatorname{ord}_{E_{\alpha}}\)._
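For instance (our own numerical example): if \(D=2E_{1}+\tfrac{1}{3}E_{2}\) for distinct prime divisors \(E_{1},E_{2}\subset X\), then
\[\Gamma_{D}=\left\{\tfrac{1}{2}\operatorname{ord}_{E_{1}},\ 3\operatorname{ord}_{E_{2}}\right\},\]
and indeed \(v_{1}(D)=\tfrac{1}{2}\cdot 2=1\) and \(v_{2}(D)=3\cdot\tfrac{1}{3}=1\).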
Note that \(v_{\alpha}(D)=1\) for all \(\alpha\). We can now state a variant of [1, Theorem 6.21]:
**Proposition 2.3**.: _Pick \(\psi\in\operatorname{PSH}_{\operatorname{hom}}(\theta)\), and an effective \(\mathbb{R}\)-divisor \(D\) on \(X\). Then_
\[\max_{\Gamma_{D}}\psi\leq-1\Longleftrightarrow\psi+\psi_{D}\in\operatorname{ PSH}_{\operatorname{hom}}(\theta-D).\]
Recall that \(0\geq-\psi_{D}\in\operatorname{PSH}_{\operatorname{hom}}([D])\).
Proof.: If \(\psi+\psi_{D}\in\operatorname{PSH}_{\operatorname{hom}}(\theta-D)\), then \(\psi\leq-\psi_{D}\), and hence \(\max_{\Gamma_{D}}\psi\leq-1\), since \(\psi_{D}\equiv 1\) on \(\Gamma_{D}\). Conversely, assume \(\max_{\Gamma_{D}}\psi\leq-1\). Consider first the case where \(\theta=c_{1}(L)\) for a \(\mathbb{Q}\)-line bundle \(L\) and \(\psi\in\mathcal{H}_{\operatorname{hom}}(L)\). For any \(m\) sufficiently divisible we thus have \(\psi=\frac{1}{m}\max_{i}\log|s_{i}|\) for a finite set of nonzero sections \(s_{i}\in\operatorname{H}^{0}(X,mL)\). Using the notation of Definition 2.2, we get for all \(i\) and all \(\alpha\)
\[c_{\alpha}^{-1}\operatorname{ord}_{E_{\alpha}}(s_{i})=-\log|s_{i}|(v_{\alpha} )\geq m,\]
and hence \(\operatorname{ord}_{E_{\alpha}}(s_{i})\geq\lceil mc_{\alpha}\rceil\). This implies \(s_{i}=s_{i}^{\prime}s_{D_{m}}\) for some \(s_{i}^{\prime}\in\operatorname{H}^{0}(X,m(L-D_{m}))\), where
\[D_{m}:=m^{-1}\lceil mD\rceil=\sum_{\alpha}m^{-1}\lceil mc_{\alpha}\rceil E_{\alpha}\]
and \(s_{D_{m}}\in\operatorname{H}^{0}(X,mD_{m})\) is the canonical section. Since \(m\psi_{D_{m}}=-\log|s_{D_{m}}|\), we infer
\[\psi+\psi_{D_{m}}=\tfrac{1}{m}\max_{i}\log|s_{i}^{\prime}|\in\mathcal{H}_{ \operatorname{hom}}(L-D_{m})\subset\operatorname{PSH}_{\operatorname{hom}}(L- D_{m}).\]
When \(m\to\infty\), \(\psi_{D_{m}}\) decreases to \(\psi_{D}\), and \([D_{m}]\to[D]\) in \(\operatorname{N}^{1}(X)\), and we infer \(\psi+\psi_{D}\in\operatorname{PSH}_{\operatorname{hom}}(L-D)\).
In the general case, \(\psi\) can be written as the pointwise limit of a decreasing net \(\psi_{i}\in\mathcal{H}_{\operatorname{hom}}(L_{i})\), where \(L_{i}\in\operatorname{Pic}(X)_{\mathbb{Q}}\) satisfies that \(c_{1}(L_{i})-\theta\) is nef and tends to \(0\) (see [1, Corollary 6.17]). Pick \(t\in(0,1)\). For all \(i\) large enough and all \(\alpha\), we then have \(c_{\alpha}^{-1}\psi_{i}(\operatorname{ord}_{E_{\alpha}})\leq-t\), and hence
\[\psi_{i}+t\psi_{D}\in\operatorname{PSH}_{\operatorname{hom}}(L_{i}-tD)\]
by the previous step of the proof. Since \(\psi_{i}+t\psi_{D}\) decreases to \(\psi+t\psi_{D}\) and \(L_{i}-tD\to\theta-tD\) in \(\operatorname{N}^{1}(X)\), we infer \(\psi+t\psi_{D}\in\operatorname{PSH}_{\operatorname{hom}}(\theta-tD)\) (see [1, Theorem 4.5]). Pick any \(\omega\in\operatorname{Amp}(X)\). Then \(\psi+t\psi_{D}\in\operatorname{PSH}_{\operatorname{hom}}(\theta-D+\omega)\) for all \(t\in(0,1)\) close to \(1\), so by the envelope property (see [1, Theorem 5.11]), we get \(\psi+\psi_{D}\in\operatorname{PSH}_{\operatorname{hom}}(\theta-D+\omega)\). As this is true for all \(\omega\in\operatorname{Amp}(X)\), we conclude \(\psi+\psi_{D}\in\operatorname{PSH}_{\operatorname{hom}}(\theta-D)\) (again see [1, Theorem 4.5]).
### Psh functions and families of b-divisors
We now extend Proposition 2.1 to general \(\theta\)-psh functions. We say that \(\varphi\in\operatorname{PSH}(\theta)\) is of divisorial type if the homogeneous psh function \(\widehat{\varphi}^{\max}\in\operatorname{PSH}_{\operatorname{hom}}(\theta)\) is of divisorial type, see §1.7. By (1.5), this is equivalent to \(\varphi(\operatorname{ord}_{E})=\sup\varphi\) for all but finitely many prime divisors \(E\subset X\).
**Theorem 2.4**.: _There is a 1-1 correspondence between:_
1. _the set of_ \(\theta\)_-psh functions_ \(\varphi\in\operatorname{PSH}(\theta)\) _of divisorial type;_
2. _the set of continuous, concave, decreasing families_ \((B_{\lambda})_{\lambda\leq\lambda_{\max}}\) _of_ \(b\)_-divisors, for some_ \(\lambda_{\max}\in\mathbb{R}\)_, such that_ \(B_{\lambda}\leq 0\) _and_ \(\overline{\theta}+[B_{\lambda}]\in\mathrm{N}^{1}_{\mathrm{b}}(X)\) _is nef for all_ \(\lambda\leq\lambda_{\max}\)_._
_The correspondence is given by_
\[\varphi=\sup_{\lambda\leq\lambda_{\max}}\{\psi_{B_{\lambda}}+\lambda\},\quad \psi_{B_{\lambda}}=\widehat{\varphi}^{\lambda}. \tag{2.1}\]
_In particular, we have \(\lambda_{\max}=\sup\varphi\) and \(\widehat{\varphi}^{\max}=\psi_{B_{\lambda_{\max}}}\)._
Proof.: Pick a family \((B_{\lambda})_{\lambda\leq\lambda_{\max}}\) as in (ii). By Proposition 2.1, setting \(\psi_{\lambda}:=\psi_{B_{\lambda}}\) defines a continuous, concave and decreasing family \((\psi_{\lambda})_{\lambda\leq\lambda_{\max}}\) in \(\operatorname{PSH}_{\operatorname{hom}}(\theta)\). Since \(\theta\) has the envelope property, the usc upper envelope \(\varphi:=\sup_{\lambda\leq\lambda_{\max}}^{\star}(\psi_{\lambda}+\lambda)\) lies in \(\operatorname{PSH}(\theta)\). On \(X^{\operatorname{div}}\), \(\varphi\) coincides with \(\sup_{\lambda\leq\lambda_{\max}}(\psi_{\lambda}+\lambda)\) (see [1, Theorem 5.6]). By Legendre duality, we further have \(\psi_{\lambda}=\widehat{\varphi}^{\lambda}\) for \(\lambda<\lambda_{\max}\) (see [1, Theorem 6.24]), and hence also for \(\lambda=\lambda_{\max}\), by continuity of both sides on \((-\infty,\lambda_{\max}]\).
Conversely, pick \(\varphi\) as in (i), so that \(\widehat{\varphi}^{\max}\in\mathrm{PSH}_{\mathrm{hom}}(\theta)\) is of divisorial type. For each \(\lambda\leq\sup\varphi\) we then have \(0\geq\widehat{\varphi}^{\lambda}\geq\widehat{\varphi}^{\max}\), which shows that \(\widehat{\varphi}^{\lambda}\in\mathrm{PSH}_{\mathrm{hom}}(\theta)\) is also of divisorial type. By Proposition 2.1, we thus have \(\widehat{\varphi}^{\lambda}=\psi_{B_{\lambda}}\) for a \(b\)-divisor \(B_{\lambda}\leq 0\) such that \(\overline{\theta}+[B_{\lambda}]\) is nef, and the family \((B_{\lambda})_{\lambda\leq\sup\varphi}\) is concave, decreasing and continuous, since so is \((\widehat{\varphi}^{\lambda})_{\lambda\leq\sup\varphi}\).
**Remark 2.5**.: _Not every \(\theta\)-psh function is of divisorial type. For example, assume \(\dim X=1\), and pick a sequence \((p_{j})\) of distinct closed points on \(X\), with corresponding ideals \(\mathfrak{m}_{j}\subset\mathcal{O}_{X}\), and a sequence \((\varepsilon_{j})\) in \(\mathbb{R}_{>0}\) such that \(\sum_{j}\varepsilon_{j}\leq\deg\theta\). Then \(\varphi:=\sum_{j}\varepsilon_{j}\log|\mathfrak{m}_{j}|\in\operatorname{PSH}(\theta)\), and \(\varphi(\operatorname{ord}_{p_{j}})=-\varepsilon_{j}<\sup\varphi=0\) for all \(j\) (see [1, Example 4.13])._
## 3. The center of a \(\theta\)-psh function
In this section we introduce the notion of the center of a \(\theta\)-psh function. This is a subset of \(X\) defined in terms of the locus on \(X^{\mathrm{an}}\) where \(\varphi\) is smaller than its maximum.
### The center map
For any \(v\in X^{\mathrm{an}}\), we denote by \(c_{X}(v)\in X\) its center, and by
\[Z_{X}(v):=\overline{\{c_{X}(v)\}}\subset X\]
the corresponding subvariety. The center map \(c_{X}\colon X^{\mathrm{an}}\to X\) is surjective and anticontinuous, i.e. the preimage of a closed subset is open. In particular, any subvariety \(Z\subset X\) is of the form \(Z=Z_{X}(v)\) for some \(v\); we can simply take \(v=\mathrm{ord}_{Z}\).
More generally, for any subset \(S\subset X^{\mathrm{an}}\) we set
\[Z_{X}(S):=\bigcup_{v\in S}Z_{X}(v). \tag{3.1}\]
This is the smallest subset of \(X\) that contains \(c_{X}(S)\) and is closed under specialization.
### The center of a \(\theta\)-psh function
We can now introduce
**Definition 3.1**.: _We define the center on \(X\) of any \(\theta\)-psh function \(\varphi\in\mathrm{PSH}(\theta)\) as_
\[Z_{X}(\varphi):=Z_{X}(\{\varphi<\sup\varphi\}).\]
**Example 3.2**.: _For any nonzero ideal \(\mathfrak{b}\subset\mathcal{O}_{X}\), the function \(\psi=\log|\mathfrak{b}|\) is \(\theta\)-psh if \(\theta\) is sufficiently ample, and then \(Z_{X}(\psi)=V(\mathfrak{b})\). More generally, if \(\varphi=\sum_{i}t_{i}\log|\mathfrak{b}_{i}|\) with \(t_{i}\in\mathbb{R}_{>0}\) and \(\mathfrak{b}_{i}\subset\mathcal{O}_{X}\) a nonzero ideal, then \(Z_{X}(\varphi)=\bigcup_{i}V(\mathfrak{b}_{i})\)._
Recall that to any \(\theta\)-psh function \(\varphi\in\operatorname{PSH}(\theta)\) we can associate a homogeneous \(\theta\)-psh function \(\widehat{\varphi}^{\max}\in\operatorname{PSH}_{\hom}(\theta)\), see §1.7.
**Lemma 3.3**.: _For any \(\varphi\in\operatorname{PSH}(\theta)\) we have \(\{\varphi<\sup\varphi\}=\{\widehat{\varphi}^{\max}<0\}\). As a consequence, \(Z_{X}(\varphi)=Z_{X}(\widehat{\varphi}^{\max})\). Moreover, the following conditions are equivalent:_
1. \(\varphi\) _is of divisorial type;_
2. \(\widehat{\varphi}^{\max}\) _is of divisorial type;_
3. \(Z_{X}(\varphi)=Z_{X}(\widehat{\varphi}^{\max})\) _contains at most finitely many prime divisors_ \(E\subset X\)_._
Proof.: Pick any \(v\in X^{\operatorname{an}}\). By (1.5) and the fact that \(t\mapsto\varphi(tv)\) is decreasing and convex, it follows that \(\varphi(v)<\sup\varphi\) iff \(\widehat{\varphi}^{\max}(v)<0\). Thus \(Z_{X}(\varphi)=Z_{X}(\widehat{\varphi}^{\max})\) since \(\sup\widehat{\varphi}^{\max}=0\).
Now the equivalence (i)\(\Leftrightarrow\)(ii) holds by definition, and (ii)\(\Leftrightarrow\)(iii) is clear since a prime divisor \(E\subset X\) belongs to \(Z_{X}(\widehat{\varphi}^{\max})\) iff \(\widehat{\varphi}^{\max}(\operatorname{ord}_{E})<0\).
Together with Example 1.14, Lemma 3.3 implies
**Corollary 3.4**.: _If \(\varphi=\varphi_{\mathfrak{a}}\) for a flag ideal \(\mathfrak{a}=\sum_{\lambda\in\mathbb{Z}}\mathfrak{a}_{\lambda}\varpi^{-\lambda}\) on \(X\times\mathbb{A}^{1}\), then \(Z_{X}(\varphi_{\mathfrak{a}})=V(\mathfrak{a}_{\lambda_{\max}})\), where \(\lambda_{\max}:=\max\{\lambda\in\mathbb{Z}\mid\mathfrak{a}_{\lambda}\neq 0\}\)._
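Continuing our toy example \(\mathfrak{a}=\mathcal{O}_{X\times\mathbb{A}^{1}}+\mathfrak{b}\varpi^{-1}\) from §1.3: there \(\lambda_{\max}=1\) and \(\mathfrak{a}_{\lambda_{\max}}=\mathfrak{b}\), so Corollary 3.4 gives
\[Z_{X}(\varphi_{\mathfrak{a}})=V(\mathfrak{b}),\]
consistent with the computation \(\widehat{\varphi_{\mathfrak{a}}}^{\max}=\log|\mathfrak{b}|\) in §1.7 and with Example 3.2.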
**Theorem 3.5**.: _For any \(\varphi\in\operatorname{PSH}(\theta)\), the center \(Z_{X}(\varphi)\) is a strict subset of \(X\), and an at most countable union of (strict) subvarieties. Moreover, we have \(c_{X}^{-1}(Z_{X}(\varphi))=\{\varphi<\sup\varphi\}\)._
Proof.: Note that \(Z_{X}(\varphi)\) does not contain the generic point of \(X\), so \(Z_{X}(\varphi)\neq X\). Also note that by Lemma 3.3 we may assume that \(\varphi\) is homogeneous.
If \(\varphi\in\mathcal{H}_{\hom}(L)\) for a \(\mathbb{Q}\)-line bundle \(L\), so that \(\varphi=\frac{1}{m}\max_{i}\log|s_{i}|\) for a finite set of nonzero sections \(s_{i}\in\operatorname{H}^{0}(X,mL)\), then \(Z_{X}(\varphi)=\bigcap_{i}(s_{i}=0)\), which is Zariski closed. In general, \(\varphi\) can be written as the limit of a decreasing sequence \(\varphi_{m}\in\mathcal{H}_{\hom}(L_{m})\) with \(L_{m}\in\operatorname{Pic}(X)_{\mathbb{Q}}\) such that \(c_{1}(L_{m})\to\theta\) (see [1, Remark 6.18]). For any \(v\in X^{\operatorname{div}}\) we then have
\[c_{X}(v)\in Z_{X}(\varphi)\Leftrightarrow\varphi(v)<0\Leftrightarrow\varphi_ {m}(v)<0\text{ for some }m,\]
i.e. \(Z_{X}(\varphi)=\bigcup_{m}Z_{X}(\varphi_{m})\), an at most countable union of strict subvarieties.
Next pick \(v\in X^{\operatorname{an}}\), and set \(Z=Z_{X}(v)\). By [1, Proposition 4.12], \(\varphi(tv)=t\varphi(v)\) converges to \(\varphi(v_{Z,\operatorname{triv}})=\sup_{Z^{\operatorname{an}}}\varphi\) as \(t\to+\infty\), and hence \(\varphi(v)<0\Leftrightarrow\varphi\equiv-\infty\) on \(Z^{\operatorname{an}}\). By definition of the center, if \(c_{X}(v)\) lies in \(Z_{X}(\varphi)\), then we can find \(w\in X^{\operatorname{an}}\) such that \(\varphi(w)<0\) and \(c_{X}(v)\in Z_{X}(w)\), i.e. \(Z\subset Z_{X}(w)\). Then \(\varphi\equiv-\infty\) on \(Z_{X}(w)\supset Z\), which yields \(\varphi(v)<0\). Conversely, assume \(\varphi(v)<0\), and hence \(\varphi\equiv-\infty\) on \(Z^{\operatorname{an}}\). We can find \(w\in X^{\operatorname{div}}\) such that \(Z=Z_{X}(w)\). Since \(\varphi\equiv-\infty\) on \(Z^{\operatorname{an}}=Z_{X}(w)^{\operatorname{an}}\), we get \(\varphi(w)<0\), and hence \(c_{X}(v)\in Z_{X}(w)\subset Z_{X}(\varphi)\).
For later use we record
**Lemma 3.6**.: _If \(\varphi_{i}\in\operatorname{PSH}(\theta_{i})\), \(i=1,2\), then \(Z_{X}(\varphi_{1}+\varphi_{2})=Z_{X}(\varphi_{1})\cup Z_{X}(\varphi_{2})\)._
### Centers of PL functions
The following result will play a crucial role in what follows.
**Lemma 3.7**.: _If \(\varphi\in\operatorname{PSH}(\theta)\) lies in \(\mathbb{RPL}^{+}(X)\) (resp. \(\mathbb{RPL}(X)\)), then \(Z_{X}(\varphi)\) is Zariski closed (resp. not Zariski dense) in \(X\)._
Proof.: Assume first \(\varphi\in\mathbb{RPL}^{+}(X)\), and write \(\varphi=\max_{i}\{\psi_{i}+\lambda_{i}\}\) for a finite set \(\psi_{i}\in\operatorname{PL}^{+}_{\hom}(X)_{\mathbb{R}}\) and \(\lambda_{i}\in\mathbb{R}\). As in Example 1.14, we then have \(\max_{i}\lambda_{i}=\sup\varphi\), and \(\widehat{\varphi}^{\max}=\max_{\lambda_{i}=\sup\varphi}\psi_{i}\). This shows that
\[Z_{X}(\varphi)=Z_{X}(\widehat{\varphi}^{\max})=\bigcap_{\lambda_{i}=\sup\varphi} Z_{X}(\psi_{i})\]
is Zariski closed (see Example 3.2). Assume next \(\varphi\in\mathbb{R}\mathrm{PL}(X)\) and write \(\varphi=\varphi_{1}-\varphi_{2}\) with \(\varphi_{1},\varphi_{2}\in\mathbb{R}\mathrm{PL}^{+}(X)\). After replacing \(\theta\) with a sufficiently ample class, we may assume that \(\varphi_{1},\varphi_{2}\) are \(\theta\)-psh. By (1.5) we have \(\widehat{\varphi}^{\mathrm{max}}=\widehat{\varphi}_{1}^{\mathrm{max}}- \widehat{\varphi}_{2}^{\mathrm{max}}\), and hence
\[Z_{X}(\varphi)=Z_{X}(\widehat{\varphi}^{\mathrm{max}})\subset Z_{X}(\widehat{ \varphi}_{1}^{\mathrm{max}})\cup Z_{X}(\widehat{\varphi}_{2}^{\mathrm{max}})=Z _{X}(\varphi_{1})\cup Z_{X}(\varphi_{2}),\]
which cannot be Zariski dense, since \(Z_{X}(\varphi_{1})\) and \(Z_{X}(\varphi_{2})\) are both Zariski closed strict subsets by the first part of the proof.
## 4. Extremal functions and minimal vanishing orders
Next we define a trivially valued analogue of an important construction in the complex analytic case.
### Extremal functions
For any \(\theta\in\mathrm{N}^{1}(X)\), we define the _extremal function_\(V_{\theta}\colon X^{\mathrm{an}}\to[-\infty,0]\) as the pointwise envelope
\[V_{\theta}:=\sup\left\{\varphi\in\mathrm{PSH}(\theta)\mid\varphi\leq 0\right\}. \tag{4.1}\]
**Proposition 4.1**.: _For any \(\theta\in\mathrm{N}^{1}(X)\) we have_
\[\theta\in\operatorname{Psef}(X) \Rightarrow V_{\theta}\in\operatorname{PSH}_{\operatorname{hom}}(\theta);\] \[\theta\notin\operatorname{Psef}(X) \Rightarrow V_{\theta}\equiv-\infty;\] \[\theta\in\operatorname{Nef}(X) \Leftrightarrow V_{\theta}\equiv 0.\]
_In particular, \(\mathrm{PSH}(\theta)\) is nonempty iff \(\theta\) is pseudoeffective. For any \(\omega\in\mathrm{Amp}(X)\), we further have_
\[V_{\theta+\varepsilon\omega}\searrow V_{\theta}\text{ as }\varepsilon\searrow 0. \tag{4.2}\]
Proof.: Since the action \((t,\varphi)\mapsto t\cdot\varphi\) of \(\mathbb{R}_{>0}\) preserves the set of candidate functions \(\varphi\) in (4.1), \(V_{\theta}\) is necessarily fixed by the action, and hence homogeneous. If \(\theta\) is not psef, then \(\mathrm{PSH}(\theta)\) is empty (see Lemma 1.12), and hence \(V_{\theta}\equiv-\infty\). By Lemma 1.12, we also have \(V_{\theta}\equiv 0\) iff \(\theta\) is nef.
Next, assume \(\theta\in\mathrm{Big}(X)\). Then \(\mathrm{PSH}(\theta)\) is non-empty (see Lemma 1.12), and the envelope property implies that \(V_{\theta}^{\star}\) is \(\theta\)-psh and nonpositive. It is thus a candidate in (4.1), and hence \(V_{\theta}^{\star}\leq V_{\theta}\), which shows that \(V_{\theta}^{\star}=V_{\theta}\) is \(\theta\)-psh.
Assume now \(\theta\in\operatorname{Psef}(X)\), and pick \(\omega\in\mathrm{Amp}(X)\). For each \(\varepsilon>0\), the previous step yields \(V_{\varepsilon}:=V_{\theta+\varepsilon\omega}\in\mathrm{PSH}_{\mathrm{hom}}(\theta+\varepsilon\omega)\). For \(0<\varepsilon<\delta\) we further have \(\mathrm{PSH}(\theta)\subset\mathrm{PSH}(\theta+\varepsilon\omega)\subset\mathrm{PSH}(\theta+\delta\omega)\), and hence \(V_{\delta}\geq V_{\varepsilon}\geq V_{\theta}\). Set \(V:=\lim_{\varepsilon}V_{\varepsilon}\). For any \(\delta>0\) fixed, we have \(V_{\varepsilon}\in\mathrm{PSH}_{\mathrm{hom}}(\theta+\delta\omega)\) for \(\varepsilon\leq\delta\), and \(V_{\varepsilon}\searrow V\) as \(\varepsilon\to 0\). Thus \(V\in\mathrm{PSH}_{\mathrm{hom}}(\theta+\delta\omega)\) for all \(\delta>0\), and hence \(V\in\mathrm{PSH}_{\mathrm{hom}}(\theta)\). Since \(V\) is a candidate in (4.1), we get \(V\leq V_{\theta}\), and hence \(V_{\theta}=V=\lim_{\varepsilon}V_{\varepsilon}\). This proves that \(V_{\theta}\) is \(\theta\)-psh, as well as (4.2).
### Minimal vanishing orders
For \(\theta\in\mathrm{PSef}(X)\), the function \(V_{\theta}\in\mathrm{PSH}_{\mathrm{hom}}(\theta)\) is uniquely determined by its restriction to \(X^{\mathrm{div}}\), where it is furthermore finite valued. For any \(v\in X^{\mathrm{div}}\) we set
\[v(\theta):=-V_{\theta}(v)=\inf\{-\varphi(v)\mid\varphi\in\mathrm{PSH}(\theta),\,\varphi\leq 0\}\in\mathbb{R}_{\geq 0}. \tag{4.3}\]
Note that
\[v(\theta)=\sup_{\varepsilon>0}v(\theta+\varepsilon\omega) \tag{4.4}\]
for any \(\omega\in\mathrm{Amp}(X)\), by (4.2). As we next show, these invariants coincide with the minimal/asymptotic vanishing orders studied in [16, 11, 12].
**Proposition 4.2**.: _Pick \(v\in X^{\rm div}\). Then:_
* _the function_ \(\theta\mapsto v(\theta)\) _is homogeneous, convex and lsc on_ \({\rm Psef}(X)\)_; in particular, it is continuous on_ \({\rm Big}(X)\)_;_
* _for any_ \(\theta\in{\rm Psef}(X)\) _we have_ \[v(\theta)\leq\inf\left\{v(D)\mid D\equiv\theta\text{ effective $\mathbb{R}$-divisor}\right\},\] (4.5) _and equality holds when_ \(\theta\) _is big._
Note that equality in (4.5) fails in general when \(\theta\) is not big, as there might not even exist any effective \(\mathbb{R}\)-divisor \(D\) in the class of \(\theta\).
Proof.: Using \({\rm PSH}(\theta)+{\rm PSH}(\theta^{\prime})\subset{\rm PSH}(\theta+\theta^{ \prime})\) and \({\rm PSH}(t\theta)=t\,{\rm PSH}(\theta)\) for \(\theta,\theta^{\prime}\in{\rm Psef}(X)\) and \(t>0\), it is straightforward to see that \(\theta\mapsto v(\theta)\) is convex and homogeneous on \({\rm Psef}(X)\). Being also finite valued, it is automatically continuous on the interior \({\rm Big}(X)\). For any \(\omega\in{\rm Amp}(X)\) and \(\varepsilon>0\), \(\theta\mapsto v(\theta+\varepsilon\omega)\) is thus continuous on \({\rm Psef}(X)\), and (4.4) thus shows that \(\theta\mapsto v(\theta)\) is lsc, which proves (i).
Next pick \(\theta\in{\rm Psef}(X)\). For each effective \(\mathbb{R}\)-divisor \(D\equiv\theta\), the function \(-\psi_{D}\in{\rm PSH}_{\rm hom}(\theta)\), see Example 1.13, is a competitor in (4.1). Thus \(-v(D)=-\psi_{D}(v)\leq V_{\theta}(v)=-v(\theta)\), which proves the first half of (ii). Now assume \(\theta\) is big, and denote by \(v^{\prime}(\theta)\) the right-hand side of (4.5). Both \(v(\theta)\) and \(v^{\prime}(\theta)\) are (finite valued) convex functions of \(\theta\in{\rm Big}(X)\). They are therefore continuous, and it is thus enough to prove the equality \(v(\theta)=v^{\prime}(\theta)\) when \(\theta=c_{1}(L)\) with \(L\in{\rm Pic}(X)_{\mathbb{Q}}\) big. To this end, pick an ample \(\mathbb{Q}\)-line bundle \(A\), and set \(\omega:=c_{1}(A)\). By [1, Theorem 4.15], for any \(\varepsilon>0\) we can find \(\varphi\in\mathcal{H}^{\rm gf}(L+A)\) such that \(\varphi\geq V_{\theta}\) and \(\varphi(v_{\rm triv})=\sup\varphi\leq\varepsilon\). By definition, we have \(\varphi=m^{-1}\max_{i}\{\log|s_{i}|+\lambda_{i}\}\) with \(m\) sufficiently divisible and a finite family of nonzero sections \(s_{i}\in\mathrm{H}^{0}(X,m(L+A))\) and constants \(\lambda_{i}\in\mathbb{Q}\). Then \(\max_{i}\lambda_{i}=m\sup\varphi\leq m\varepsilon\), and \(m^{-1}v(s_{i})=v(D_{i})\) with \(D_{i}:=m^{-1}{\rm div}(s_{i})\equiv\theta+\omega\), and hence \(m^{-1}v(s_{i})\geq v^{\prime}(\theta+\omega)\). Thus
\[-v(\theta)=V_{\theta}(v)\leq\varphi(v)=m^{-1}\max_{i}\{\lambda_{i}-v(s_{i})\}\leq-v^{\prime}(\theta+\omega)+\varepsilon.\]
This shows \(v^{\prime}(\theta)\geq v(\theta)\geq v^{\prime}(\theta+\omega)\), and hence \(v^{\prime}(\theta)=v(\theta)\), since \(\lim_{\omega\to 0}v^{\prime}(\theta+\omega)=v^{\prime}(\theta)\) by continuity on the big cone.
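As a simple illustration of (4.5), recorded here only for orientation, let \(X\) be the blowup of \(\mathbb{P}^{2}\) at a point, with exceptional curve \(E\), and let \(H\) denote the pullback of a line, so that \(H^{2}=1\), \(H\cdot E=0\) and \(E^{2}=-1\). The class \(\theta:=[H]+t[E]\) with \(t>0\) is big (the sum of a big and an effective class), and any effective \(\mathbb{R}\)-divisor \(D\equiv\theta\) may be written \(D=C+cE\) with \(c\geq 0\) and \(C\) effective without \(E\) as a component, so that \(C\equiv H+(t-c)E\) and

\[0\leq C\cdot E=(H+(t-c)E)\cdot E=c-t.\]

Thus \(\operatorname{ord}_{E}(D)=c\geq t\), with equality for \(D=H^{\prime}+tE\) and \(H^{\prime}\) the strict transform of a general line, and (4.5) yields \(\operatorname{ord}_{E}(\theta)=t\).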
**Remark 4.3**.: _If \(L\in{\rm Pic}(X)\) is big, then [1, Corollary 2.7] (or, alternatively, a small variant of the above argument) shows that \(v(c_{1}(L))\) is also equal to the asymptotic vanishing order_
\[v(\|L\|):=\lim_{m\to\infty}\tfrac{1}{m}\min\left\{v(s)\mid s\in\mathrm{H}^{0}(X,mL)\setminus\{0\}\right\}=\inf\left\{v(D)\mid D\sim_{\mathbb{Q}}L\text{ effective $\mathbb{Q}$-divisor}\right\}.\]
**Remark 4.4**.: _Continuity of minimal vanishing orders beyond the big cone is a subtle issue. For any \(v\in X^{\rm div}\), the function \(\theta\mapsto v(\theta)\), being convex and lsc on \({\rm Psef}(X)\), is automatically continuous on any polyhedral subcone (cf. [10]). When \(\dim X=2\), it is in fact continuous on the whole of \({\rm Psef}(X)\), but this fails in general when \(\dim X\geq 3\) (see respectively Proposition III.1.19 and Example IV.2.8 in [14])._
### The center of an extremal function
The following fact plays a key role in what follows.
**Theorem 4.5**.: _For any \(\theta\in{\rm Psef}(X)\), the function \(V_{\theta}\in{\rm PSH}_{\rm hom}(\theta)\) is of divisorial type (see Definition 1.4). Further, its center \(Z_{X}(V_{\theta})\) coincides with the diminished base locus \(\mathbb{B}_{-}(\theta)\) (see §1.1)._
The proof relies on the next result, which corresponds to [14, Corollary III.1.11] (see also [15, Theorem 3.12] in the analytic context).
**Lemma 4.6**.: _Pick \(\theta\in\operatorname{Psef}(X)\), and assume \(E_{1},\dots,E_{r}\subset X\) are prime divisors such that \(\operatorname{ord}_{E_{i}}(\theta)>0\) for all \(i\). Then \([E_{1}],\dots,[E_{r}]\) are linearly independent in \(\operatorname{N}^{1}(X)\). In particular, \(r\leq\rho(X)=\dim\operatorname{N}^{1}(X)\)._
Proof.: We reproduce the simple argument of [1, Theorem 3.5 (v)] for the convenience of the reader. By (4.4), after adding to \(\theta\) a small enough ample class we may assume that \(\theta\) is big. Suppose \(\sum_{i}c_{i}[E_{i}]=0\) with \(c_{i}\in\mathbb{R}\), so that \(G:=\sum_{i}c_{i}E_{i}\) is numerically equivalent to \(0\), and choose \(0<\varepsilon\ll 1\) such that \(\operatorname{ord}_{E_{i}}(\theta)+\varepsilon c_{i}>0\) for all \(i\). Pick any effective \(\mathbb{R}\)-divisor \(D\equiv\theta\) and set \(D^{\prime}:=D+\varepsilon G\). Then \(D^{\prime}\) is effective, since
\[\operatorname{ord}_{E_{i}}(D^{\prime})=\operatorname{ord}_{E_{i}}(D)+ \varepsilon c_{i}\geq\operatorname{ord}_{E_{i}}(\theta)+\varepsilon c_{i}>0\]
for all \(i\). Since \(G\equiv 0\), we also have \(D^{\prime}\equiv\theta\), and (4.5) thus yields for each \(i\)
\[\operatorname{ord}_{E_{i}}(\theta)\leq\operatorname{ord}_{E_{i}}(D^{\prime})= \operatorname{ord}_{E_{i}}(D)+\varepsilon c_{i}.\]
Taking the infimum over \(D\) we get \(\operatorname{ord}_{E_{i}}(\theta)\leq\operatorname{ord}_{E_{i}}(\theta)+\varepsilon c_{i}\) (see Proposition 4.2 (ii)), i.e. \(c_{i}\geq 0\) for all \(i\). Thus \(G\geq 0\), and hence \(G=0\), since \(G\equiv 0\). This proves \(c_{i}=0\) for all \(i\), which shows, as desired, that the \([E_{i}]\) are linearly independent.
Proof of Theorem 4.5.: By (4.3), the first assertion means that there are only finitely many prime divisors \(E\subset X\) such that \(\operatorname{ord}_{E}(\theta)>0\), and is thus a direct consequence of Lemma 4.6. Pick \(v\in X^{\operatorname{div}}\). The second point is equivalent to \(v(\theta)>0\Leftrightarrow c_{X}(v)\in\mathbb{B}_{-}(\theta)\). When \(\theta\) is big, this is the content of [1, Theorem B]. In the general case, pick \(\omega\in\operatorname{Amp}(X)\). Then \(v(\theta)>0\) iff \(v(\theta+\varepsilon\omega)>0\) for \(0<\varepsilon\ll 1\), by (4.4), while \(c_{X}(v)\in\mathbb{B}_{-}(\theta)\) iff \(c_{X}(v)\in\mathbb{B}_{-}(\theta+\varepsilon\omega)\) for \(0<\varepsilon\ll 1\), by (1.1). The result follows.
For later use, we also note:
**Lemma 4.7**.: _For any polyhedral subcone \(C\subset\operatorname{Psef}(X)\), we have:_
1. \(\theta\mapsto v(\theta)\) _is continuous on_ \(C\) _for all_ \(v\in X^{\operatorname{div}}\)_;_
2. _the set of prime divisors_ \(E\subset X\) _such that_ \(\operatorname{ord}_{E}(\theta)>0\) _for some_ \(\theta\in C\) _is finite._
Proof.: As mentioned in Remark 4.4, any convex, \(\operatorname{lsc}\) function on a polyhedral cone is continuous (see [15]), and (i) follows. To see (ii), pick a finite set of generators \((\theta_{i})\) of \(C\). Each \(\theta\in C\) can be written as \(\theta=\sum_{i}t_{i}\theta_{i}\) with \(t_{i}\geq 0\). By convexity and homogeneity of minimal vanishing orders, this implies \(\operatorname{ord}_{E}(\theta)\leq\sum_{i}t_{i}\operatorname{ord}_{E}(\theta_ {i})\), so that \(\operatorname{ord}_{E}(\theta)>0\) implies \(\operatorname{ord}_{E}(\theta_{i})>0\) for some \(i\). The result now follows from Lemma 4.6.
## 5. Zariski decompositions
Next we study the close relationship between the extremal function of §4 and the various versions of the Zariski decomposition of a psef numerical class.
### The \(b\)-divisorial Zariski decomposition
Pick a psef class \(\theta\in\operatorname{N}^{1}(X)\). By Theorem 4.5, the function \(X^{\operatorname{div}}\ni v\mapsto v(\theta)=-V_{\theta}(v)\) is of divisorial type. We denote by
\[\operatorname{N}(\theta)\in\operatorname{Z}^{1}_{\operatorname{b}}(X)_{ \mathbb{R}}\]
the corresponding effective \(b\)-divisor, which thus satisfies
\[\psi_{\operatorname{N}(\theta)}(v)=v(\operatorname{N}(\theta))=v(\theta)=-V_{ \theta}(v)\]
for all \(v\in X^{\operatorname{div}}\). This \(b\)-divisor is characterized as follows:
**Theorem 5.1**.: _For any \(\theta\in\operatorname{Psef}(X)\), the \(b\)-divisor class_
\[\operatorname{P}(\theta):=\overline{\theta}-[\operatorname{N}(\theta)]\in \operatorname{N}_{\mathrm{b}}^{1}(X)\]
_is nef, and \(\operatorname{N}(\theta)\) is the smallest effective \(b\)-divisor with this property. Moreover,_
\[\operatorname{N}(\theta)\geq\overline{\operatorname{N}(\theta)_{Y}} \tag{5.1}\]
_for all birational models \(Y\to X\)._
We call \(\overline{\theta}=\operatorname{P}(\theta)+[\operatorname{N}(\theta)]\) the \(b\)_-divisorial Zariski decomposition_ of \(\theta\). At least when \(\theta\) is big, this construction is basically equivalent to [13, Theorem D], and to the case \(p=1\) of [1, §2.2].
Note that the \(b\)-divisorial Zariski decomposition is birationally invariant:
**Lemma 5.2**.: _For any \(\theta\in\operatorname{Psef}(X)\) and any birational model \(\pi\colon Y\to X\), we have_
\[\operatorname{N}(\pi^{\star}\theta)=\operatorname{N}(\theta)\quad\text{and} \quad\operatorname{P}(\pi^{\star}\theta)=\operatorname{P}(\theta)\]
_in \(\operatorname{Z}_{\mathrm{b}}^{1}(X)_{\mathbb{R}}=\operatorname{Z}_{\mathrm{ b}}^{1}(Y)_{\mathbb{R}}\) and \(\operatorname{N}_{\mathrm{b}}^{1}(X)_{\mathbb{R}}=\operatorname{N}_{\mathrm{ b}}^{1}(Y)_{\mathbb{R}}\), respectively._
Proof.: Since \(\operatorname{PSH}(\pi^{\star}\theta)=\pi^{\star}\operatorname{PSH}(\theta)\), see (1.2), we have \(V_{\pi^{\star}\theta}=\pi^{\star}V_{\theta}\), and the result follows.
Proof of Theorem 5.1.: Since \(\psi_{-\operatorname{N}(\theta)}=V_{\theta}\) is \(\theta\)-psh, Proposition 2.1 shows that \(\overline{\theta}-[\operatorname{N}(\theta)]\) is nef, which yields the last point, by the Negativity Lemma (see Lemma 1.11). Conversely, if \(E\in\operatorname{Z}_{\mathrm{b}}^{1}(X)_{\mathbb{R}}\) is effective with \(\overline{\theta}-[E]\) nef, then \(-\psi_{E}\in\operatorname{PSH}_{\mathrm{hom}}(\theta)\), again by Proposition 2.1. Thus \(-\psi_{E}\leq V_{\theta}=-\psi_{\operatorname{N}(\theta)}\), and hence \(E\geq\operatorname{N}(\theta)\).
As a consequence of Proposition 4.2, we get
**Corollary 5.3**.: _The map \(\operatorname{Psef}(X)\ni\theta\mapsto\operatorname{N}(\theta)\in\operatorname {Z}_{\mathrm{b}}^{1}(X)\) is homogeneous, lsc, and convex._
### The divisorial Zariski decomposition
For any \(\theta\in\operatorname{Psef}(X)\), we denote by \(\operatorname{N}_{X}(\theta):=\operatorname{N}(\theta)_{X}\) the incarnation of \(\operatorname{N}(\theta)\in\operatorname{Z}_{\mathrm{b}}^{1}(X)_{\mathbb{R}}\) on \(X\), which thus satisfies
\[\operatorname{N}_{X}(\theta)=\sum_{E\subset X}\operatorname{ord}_{E}(\theta)E \tag{5.2}\]
with \(E\) ranging over all prime divisors of \(X\), and \(\operatorname{ord}_{E}(\theta)=0\) for all but finitely many \(E\).
For any effective \(\mathbb{R}\)-divisor \(D\) on \(X\) with numerical class \([D]\in\operatorname{Psef}(X)\), (4.5) yields
\[\operatorname{N}_{X}(D):=\operatorname{N}_{X}([D])\leq D. \tag{5.3}\]
More generally, the following variational characterization holds.
**Theorem 5.4**.: _For any \(\theta\in\operatorname{Psef}(X)\), the class_
\[\operatorname{P}_{X}(\theta):=\theta-[\operatorname{N}_{X}(\theta)]\in \operatorname{N}^{1}(X)\]
_is movable, and \(\operatorname{N}_{X}(\theta)\) is the smallest effective \(\mathbb{R}\)-divisor on \(X\) with this property._
Following [1], we call the decomposition
\[\theta=\operatorname{P}_{X}(\theta)+[\operatorname{N}_{X}(\theta)]\]
the _divisorial Zariski decomposition_ of \(\theta\). It coincides with the _\(\sigma\)-decomposition_ of [21].
Proof of Theorem 5.4.: By definition, \(\mathrm{P}_{X}(\theta)\) is the incarnation on \(X\) of \(\overline{\theta}-[\mathrm{N}(\theta)]\). By Theorem 5.1, the latter class is nef, and \(\mathrm{P}_{X}(\theta)\) is thus movable, by Lemma 1.10.
To prove the converse, assume first that \(\theta\) is movable. We then need to show \(\mathrm{N}_{X}(\theta)=0\), i.e. \(\mathrm{ord}_{E}(\theta)=0\) for each \(E\subset X\) prime (see (5.2)). By (4.5), this is clear if \(\theta=c_{1}(L)\) for a big line bundle \(L\) with base locus of codimension at least \(2\). Since the movable cone \(\mathrm{Mov}(X)\) is generated by the classes of such line bundles, the continuity of \(\theta\mapsto\mathrm{ord}_{E}(\theta)\) on the big cone yields the result when \(\theta\) is further big, and the case of an arbitrary movable class follows by (4.4).
Finally, consider any \(\theta\in\mathrm{Psef}(X)\) and any effective \(\mathbb{R}\)-divisor \(D\) on \(X\) such that \(\theta-[D]\) is movable. For any \(E\subset X\) prime we then have \(\mathrm{ord}_{E}(\theta-[D])=0\) by the previous step, and \(\mathrm{ord}_{E}([D])\leq\mathrm{ord}_{E}(D)\) by (5.3). Thus
\[\mathrm{ord}_{E}(\theta)\leq\mathrm{ord}_{E}(\theta-[D])+\mathrm{ord}_{E}(D)= \mathrm{ord}_{E}(D).\]
This shows \(\mathrm{N}_{X}(\theta)\leq D\), which concludes the proof.
**Remark 5.5**.: _Theorem 5.4 implies the following converse of Lemma 1.10: a class \(\theta\in\mathrm{N}^{1}(X)\) is movable iff \(\theta=\alpha_{X}\) for a nef \(b\)-divisor class \(\alpha\in\mathrm{Nef}_{\mathrm{b}}(X)\)._
**Corollary 5.6**.: _Pick \(\theta\in\mathrm{Psef}(X)\) and a prime divisor \(E\subset X\). Then \((\theta-\mathrm{ord}_{E}(\theta)E)|_{E}\in\mathrm{N}^{1}(E)\) is pseudoeffective._
Proof.: We have \(\theta-\mathrm{ord}_{E}(\theta)[E]=\mathrm{P}_{X}(\theta)+\sum_{F\neq E} \mathrm{ord}_{F}(\theta)[F]\), where \(F\) ranges over all prime divisors of \(X\) distinct from \(E\). Since \(\mathrm{P}_{X}(\theta)\) is movable, \(\mathrm{P}_{X}(\theta)|_{E}\) is psef. On the other hand, \([F]|_{E}\) is psef for any \(F\neq E\), and the result follows.
**Lemma 5.7**.: _For any \(\theta\in\mathrm{Psef}(X)\) and any birational model \(\pi\colon Y\to X\), the incarnation of \(\mathrm{N}(\theta)\) on \(Y\) coincides with \(\mathrm{N}_{Y}(\pi^{\star}\theta)\). Further, the following are equivalent:_
1. _the_ \(b\)_-divisor_ \(\mathrm{N}(\theta)\) _is_ \(\mathbb{R}\)_-Cartier, and determined on_ \(Y\)_;_
2. \(\mathrm{P}_{Y}(\pi^{\star}\theta)\) _is nef._
Proof.: The first point follows from Lemma 5.2. If (i) holds then the nef \(b\)-divisor \(\overline{\theta}-\mathrm{N}(\theta)\) is \(\mathbb{R}\)-Cartier and determined on \(Y\). Thus \((\overline{\theta}-\mathrm{N}(\theta))_{Y}=\pi^{\star}\theta-\mathrm{N}_{Y}( \pi^{\star}\theta)=\mathrm{P}_{Y}(\pi^{\star}\theta)\) is nef, and hence (i)\(\Rightarrow\)(ii).
Conversely, assume (ii). Then \(\overline{\mathrm{N}(\theta)_{Y}}=\overline{\mathrm{N}_{Y}(\pi^{\star}\theta)}\) is an effective \(b\)-divisor, and the \(b\)-divisor class \(\overline{\theta}-[\overline{\mathrm{N}(\theta)_{Y}}]=\overline{\mathrm{P}_{Y }(\pi^{\star}\theta)}\) is nef. By Theorem 5.1 this implies \(\mathrm{N}(\theta)\leq\overline{\mathrm{N}(\theta)_{Y}}\), while \(\mathrm{N}(\theta)\geq\overline{\mathrm{N}(\theta)_{Y}}\) always holds (see (5.1)). This proves (ii)\(\Rightarrow\)(i).
Since any movable class on a surface is nef, we get:
**Corollary 5.8**.: _If \(\dim X=2\) then \(\mathrm{N}(\theta)=\overline{\mathrm{N}_{X}(\theta)}\) for all \(\theta\in\mathrm{Psef}(X)\)._
In contrast, see [12, Theorem IV.2.10] for an example of a big line bundle \(L\) on a \(4\)-fold \(X\) such that the \(b\)-divisor \(\mathrm{N}(L)\) is not \(\mathbb{R}\)-Cartier, i.e. \(\mathrm{P}_{Y}(\pi^{\star}L)\) is not nef for any model \(\pi\colon Y\to X\).
### Zariski exceptional divisors and faces
This section revisits [1, §3.3].
**Definition 5.9**.: _We say that:_
1. _an effective_ \(\mathbb{R}\)_-divisor_ \(D\) _on_ \(X\) _is_ Zariski exceptional _if_ \(\mathrm{N}_{X}(D)=D\)_, or equivalently,_ \(\mathrm{P}_{X}([D])=0\)_;_
2. _a finite family_ \((E_{i})\) _of prime divisors_ \(E_{i}\subset X\) _is_ Zariski exceptional _if every effective_ \(\mathbb{R}\)_-divisor supported in the_ \(E_{i}\)_'s is Zariski exceptional._
_We also define a Zariski exceptional face \(F\) of \(\operatorname{Psef}(X)\) as an extremal subcone such that \(\operatorname{P}_{X}|_{F}\equiv 0\)._
Here a closed subcone \(C\subset\operatorname{Psef}(X)\) is extremal iff \(\alpha,\beta\in\operatorname{Psef}(X)\), \(\alpha+\beta\in C\) implies \(\alpha,\beta\in C\).
We first note:
**Lemma 5.10**.: _An effective \(\mathbb{R}\)-divisor \(D\) is Zariski exceptional iff \(\operatorname{N}(D)=\overline{D}\)._
Proof.: Assume \(\operatorname{N}_{X}(D)=D\). Then \(\operatorname{N}(D)\leq\overline{D}\), by Theorem 5.1, and \(\operatorname{N}(D)\geq\overline{\operatorname{N}_{X}(D)}=\overline{D}\) (see (5.1)). The result follows.
The above notions are related as follows:
**Theorem 5.11**.: _The following properties hold:_
* _if_ \(E\subset X\) _is a prime divisor, then_ \(E\) _is either movable (in which case_ \(E|_{E}\) _is psef), or it is Zariski exceptional;_
* _the set of Zariski exceptional families of prime divisors on_ \(X\) _is at most countable;_
* _for any_ \(\theta\in\operatorname{Psef}(X)\)_, the irreducible components of_ \(\operatorname{N}_{X}(\theta)\) _form a Zariski exceptional family; in particular,_ \(\operatorname{N}_{X}(\theta)\) _is Zariski exceptional;_
* _each Zariski exceptional family_ \((E_{i})\) _is linearly independent in_ \(\operatorname{N}^{1}(X)\)_, and generates a Zariski exceptional face_ \(F:=\sum_{i}\mathbb{R}_{\geq 0}[E_{i}]\) _of_ \(\operatorname{Psef}(X)\)_;_
* _conversely, each Zariski exceptional face_ \(F\) _of_ \(\operatorname{Psef}(X)\) _arises as in (iv)._
Proof.: Assume \(E\subset X\) is prime. Then \(\operatorname{N}_{X}(E)\leq E\) (see (5.3)), and hence \(\operatorname{N}_{X}(E)=cE\) with \(c\in[0,1]\). If \(c=1\), then \(E\) is Zariski exceptional. Otherwise,
\[E=(1-c)^{-1}(E-\operatorname{N}_{X}(E))\equiv(1-c)^{-1}\operatorname{P}_{X}(E)\]
is movable. This proves (i).
To see (ii), note that a Zariski exceptional prime divisor satisfies \(E=\operatorname{N}_{X}(E):=\operatorname{N}_{X}([E])\), and hence is uniquely determined by its numerical class \([E]\in\operatorname{N}^{1}(X)_{\mathbb{Q}}\). As a consequence, the set of Zariski exceptional primes is at most countable, and hence so is the set of Zariski exceptional families.
Pick \(\theta\in\operatorname{Psef}(X)\). We first claim that \(D:=\operatorname{N}_{X}(\theta)\) is Zariski exceptional. Since \(\operatorname{P}_{X}(\theta)=\theta-[D]\) and \(\operatorname{P}_{X}(D)=[D-\operatorname{N}_{X}(D)]\) are both movable, \(\theta-[\operatorname{N}_{X}(D)]\) is movable as well. Theorem 5.4 thus yields \(\operatorname{N}_{X}(D)\geq\operatorname{N}_{X}(\theta)=D\), which proves the claim in view of (5.3). Denote by \(D=\sum_{i=1}^{r}c_{i}E_{i}\) the irreducible decomposition of \(D\), and set \(f_{i}(x):=\operatorname{ord}_{E_{i}}(\sum_{j}x_{j}E_{j})\) for \(1\leq i\leq r\). This defines a convex function \(f_{i}\colon\mathbb{R}_{\geq 0}^{r}\to\mathbb{R}_{\geq 0}\) which satisfies \(f_{i}(x)\leq x_{i}\) for all \(x\), by (5.3). Since equality holds at the interior point \(x=c\in\mathbb{R}_{>0}^{r}\), we necessarily have \(f_{i}(x)=x_{i}\) for all \(x\in\mathbb{R}_{\geq 0}^{r}\), which proves (iii).
Next pick a Zariski exceptional family \((E_{i})\). By Lemma 4.6, the \([E_{i}]\) are linearly independent in \(\operatorname{N}^{1}(X)\). By definition, we have \(\operatorname{P}_{X}\equiv 0\) on \(F:=\sum_{i}\mathbb{R}_{\geq 0}[E_{i}]\). To see that \(F\) is an extremal face of \(\operatorname{Psef}(X)\), pick \(D:=\sum_{i}c_{i}E_{i}\) with \(c_{i}\geq 0\), and assume \([D]=\alpha+\beta\) with \(\alpha,\beta\in\operatorname{Psef}(X)\). We need to show that both \(\alpha\) and \(\beta\) lie in \(F\). By Definition 5.9 we have \(D=\operatorname{N}_{X}(D)\leq\operatorname{N}_{X}(\alpha)+\operatorname{N}_{X}(\beta)\), and hence
\[[\operatorname{N}_{X}(\alpha)]+[\operatorname{N}_{X}(\beta)]\leq\operatorname{P}_{X}(\alpha)+\operatorname{P}_{X}(\beta)+[\operatorname{N}_{X}(\alpha)]+[\operatorname{N}_{X}(\beta)]\\ =\alpha+\beta=[D]\leq[\operatorname{N}_{X}(\alpha)]+[\operatorname{N}_{X}(\beta)], \tag{5.4}\]
with respect to the psef order on \(\operatorname{N}^{1}(X)\). Since \(\operatorname{Psef}(X)\) is strict, we infer \(\operatorname{P}_{X}(\alpha)=\operatorname{P}_{X}(\beta)=0\) and \([D]=[\operatorname{N}_{X}(\alpha)]+[\operatorname{N}_{X}(\beta)]\). Since \(\operatorname{N}_{X}(\alpha)+\operatorname{N}_{X}(\beta)-D\) is effective, it follows that
\(\operatorname{N}_{X}(\alpha)+\operatorname{N}_{X}(\beta)=D\). This implies that \(\operatorname{N}_{X}(\alpha)\) and \(\operatorname{N}_{X}(\beta)\) are supported in the \(E_{i}\)'s, which proves, as desired, that \(\alpha=[\operatorname{N}_{X}(\alpha)]\) and \(\beta=[\operatorname{N}_{X}(\beta)]\) both lie in \(F\). Thus (iv) holds.
Conversely, assume that \(F\subset\operatorname{Psef}(X)\) is a Zariski exceptional face, and pick a class \(\theta\) in its relative interior \(\mathring{F}\). By (iii), the components \((E_{i})\) of \(\operatorname{N}_{X}(\theta)\) form a Zariski exceptional family, which thus generates a Zariski exceptional face \(F^{\prime}:=\sum_{i}\mathbb{R}_{\geq 0}[E_{i}]\). Since \(F\) and \(F^{\prime}\) are both extremal faces containing \(\theta\) in their relative interior, we conclude \(F=F^{\prime}\), which proves (v).
As a result, Zariski exceptional families are in 1-1 correspondence with Zariski exceptional faces, which are rational simplicial cones generated by Zariski exceptional primes.
For surfaces, we recover the classical picture (see e.g. [1, §4]):
**Theorem 5.12**.: _Assume \(\dim X=2\). Then:_
1. _a finite family_ \((E_{i})\) _of prime divisors on_ \(X\) _is Zariski exceptional iff the intersection matrix_ \((E_{i}\cdot E_{j})\) _is negative definite;_
2. _for any_ \(\theta\in\operatorname{Psef}(X)\)_,_ \(\theta=\operatorname{P}_{X}(\theta)+[\operatorname{N}_{X}(\theta)]\) _coincides with the classical Zariski decomposition,_ i.e. \(\operatorname{P}_{X}(\theta)\) _is nef,_ \(\operatorname{N}_{X}(\theta)\) _is Zariski exceptional, and_ \(\operatorname{P}_{X}(\theta)\cdot\operatorname{N}_{X}(\theta)=0\)_._
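For a concrete illustration, keep the blowup \(X\) of \(\mathbb{P}^{2}\) at a point and the notation \(H,E\) from the computation after Proposition 4.2, and take \(\theta=[H]+t[E]\) with \(t>0\). Since \(\operatorname{ord}_{E}(\theta)=t\), while \(\operatorname{ord}_{F}(\theta)=0\) for every other prime divisor \(F\) (vary the line \(H^{\prime}\)), (5.2) gives

\[\operatorname{N}_{X}(\theta)=tE,\qquad\operatorname{P}_{X}(\theta)=[H],\]

and all the properties in (ii) are visible directly: \([H]\) is nef, the intersection matrix \((E\cdot E)=(-1)\) is negative definite, and \(\operatorname{P}_{X}(\theta)\cdot\operatorname{N}_{X}(\theta)=t\,(H\cdot E)=0\).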
### Piecewise linear Zariski decompositions
We introduce the following terminology:
**Definition 5.13**.: _Given a convex subcone \(C\subset\operatorname{Psef}(X)\), we say that the Zariski decomposition is piecewise linear (PL for short) on \(C\) if the map \(\operatorname{N}\colon C\to\operatorname{Z}^{1}_{\operatorname{b}}(X)_{ \mathbb{R}}\) extends to a PL map \(\operatorname{N}^{1}(X)\to\operatorname{Z}^{1}_{\operatorname{b}}(X)_{ \mathbb{R}}\), i.e. a map that is linear on each cone of some finite fan decomposition of \(\operatorname{N}^{1}(X)\). If the fan and the linear maps on its cones can further be chosen rational, then we say that the Zariski decomposition is \(\mathbb{Q}\)-PL on \(C\)._
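For instance, on the blowup \(X\) of \(\mathbb{P}^{2}\) at a point considered above, \(\operatorname{Psef}(X)\) is the union of the two rational subcones \(\operatorname{Nef}(X)=\mathbb{R}_{\geq 0}[H]+\mathbb{R}_{\geq 0}[H-E]\) and \(\mathbb{R}_{\geq 0}[H]+\mathbb{R}_{\geq 0}[E]\), and \(\operatorname{N}\) is linear on each of them: \(\operatorname{N}\equiv 0\) on the first, while \(\operatorname{N}(a[H]+b[E])=b\overline{E}\) on the second, by the computation after Theorem 5.12 together with Corollary 5.8. The Zariski decomposition is thus \(\mathbb{Q}\)-PL on \(\operatorname{Psef}(X)\), as also follows from Proposition 5.16 below.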
**Lemma 5.14**.: _Let \(C\subset\operatorname{Psef}(X)\) be a convex cone, and assume that \(C\) is written as the union of finitely many convex subcones \(C_{i}\). Then the Zariski decomposition is PL (resp. \(\mathbb{Q}\)-PL) on \(C\) iff it is PL (resp. \(\mathbb{Q}\)-PL) on each \(C_{i}\)._
Proof.: The 'only if' part is clear. Conversely, assume the Zariski decomposition is PL (resp. \(\mathbb{Q}\)-PL) on each \(C_{i}\). After further subdividing each \(C_{i}\) according to a fan decomposition of \(\operatorname{N}^{1}(X)\), we may assume that there exists a linear (resp. rational linear) map \(L_{i}\colon\operatorname{N}^{1}(X)\to\operatorname{Z}^{1}_{\operatorname{b}}(X )_{\mathbb{R}}\) that coincides with \(\operatorname{N}\) on \(C_{i}\). If \(C_{i}\) has nonempty interior in \(C\), then \(L_{i}|_{\operatorname{Vect}C}\) is uniquely determined as the derivative of \(\operatorname{N}\) at any interior point of \(C_{i}\), and we have \(\operatorname{N}\geq L_{i}\) on \(C\) by convexity of \(\operatorname{N}\), see Corollary 5.3. Set \(F:=\max_{i}L_{i}\), where the maximum is over all \(C_{i}\) with nonempty interior in \(C\). Then \(F\colon\operatorname{N}^{1}(X)\to\operatorname{Z}^{1}_{\operatorname{b}}(X)_{ \mathbb{R}}\) is PL (resp. \(\mathbb{Q}\)-PL), \(\operatorname{N}\geq F\) on \(C\), and equality holds outside the union \(A\) of all \(C_{i}\) with empty interior in \(C\). Since \(A\) has zero measure, its complement is dense in \(C\). Since \(\operatorname{N}-F\) is lsc, see Corollary 5.3, we infer \(\operatorname{N}\leq F\) on \(C\), which proves the 'if' part.
As a consequence of [1, Theorem 4.1], we have:
**Example 5.15**.: _If \(X\) is a Mori dream space (e.g. of log Fano type), then:_
* _for each_ \(\theta\in\operatorname{Psef}(X)\)_, the_ \(b\)_-divisor_ \(\operatorname{N}(\theta)\) _is_ \(\mathbb{R}\)_-Cartier;_
* \(\operatorname{Psef}(X)\) _is a rational polyhedral cone;_
* _the Zariski decomposition is_ \(\mathbb{Q}\)_-PL on_ \(\operatorname{Psef}(X)\)_._
The next result is closely related to the theory of Zariski chambers studied in [1].
**Proposition 5.16**.: _If \(\dim X=2\), then the Zariski decomposition is \(\mathbb{Q}\)-PL on any convex cone \(C\subset\operatorname{Psef}(X)\) with the property that the set of prime divisors \(E\subset X\) with \(\operatorname{ord}_{E}(\theta)>0\) for some \(\theta\in C\) is finite._
By Lemma 4.7 (ii), the finiteness condition on \(C\) is satisfied as soon as \(C\) is polyhedral.
Proof.: For each Zariski exceptional face \(F\) of \(\operatorname{Psef}(X)\) with relative interior \(\mathring{F}\), set \(Z_{F}:=\operatorname{N}_{X}^{-1}(\mathring{F})\). Thus \(\theta\in\operatorname{Psef}(X)\) lies in \(Z_{F}\) iff the irreducible components of \(\operatorname{N}_{X}(\theta)\) are precisely the generators of \(F\). By Theorem 5.12 (ii), \(Z_{F}\) is a convex subcone of \(\operatorname{Psef}(X)\) (whose intersection with \(\operatorname{Big}(X)\) is a Zariski chamber in the sense of [1]); further, \(\operatorname{N}_{X}|_{Z_{F}}:Z_{F}\to\mathring{F}\) is the restriction of the orthogonal projection onto \(\operatorname{Vect}F\), which is a rational linear map. By Corollary 5.8, the Zariski decomposition is thus \(\mathbb{Q}\)-PL on \(Z_{F}\). Finally, the finiteness assumption guarantees that \(C\) meets only finitely many \(Z_{F}\)'s, and the result is thus a consequence of Lemma 5.14.
We conclude this section with a higher-dimensional situation in which Zariski decompositions can be analyzed. Assuming now that \(\dim X\) is arbitrary, consider a \(2\)-dimensional cone \(C\subset\operatorname{N}^{1}(X)\) generated by two classes \(\theta,\alpha\in\operatorname{N}^{1}(X)\) such that \(\theta\in\operatorname{Nef}(X)\) and \(\alpha\notin\operatorname{Psef}(X)\). Set
\[C_{\operatorname{nef}}:=C\cap\operatorname{Nef}(X)\subset C_{\operatorname{ psef}}:=C\cap\operatorname{Psef}(X)\subset C,\]
and introduce the thresholds
\[\lambda_{\operatorname{nef}}:=\sup\{\lambda\geq 0\mid\theta+\lambda\alpha \in\operatorname{Nef}(X)\},\quad\lambda_{\operatorname{psef}}:=\sup\{\lambda \geq 0\mid\theta+\lambda\alpha\in\operatorname{Psef}(X)\},\]
so that \(C_{\operatorname{nef}}\) (resp. \(C_{\operatorname{psef}}\)) is generated by \(\theta\) and \(\theta_{\operatorname{nef}}:=\theta+\lambda_{\operatorname{nef}}\alpha\) (resp. \(\theta_{\operatorname{psef}}:=\theta+\lambda_{\operatorname{psef}}\alpha\)).
The next result is basically contained in [10, §6.5].
**Proposition 5.17**.: _With the above notation, suppose that \(C\) contains the class of a prime divisor \(S\subset X\) such that \(\operatorname{Nef}(S)=\operatorname{Psef}(S)\) and \(S|_{S}\) is not nef. Then:_
1. \(\theta_{\operatorname{psef}}=t[S]\) _with_ \(t>0\)_;_
2. \(\lambda_{\operatorname{nef}}=\lambda_{\operatorname{nef}}^{S}:=\sup\left\{ \lambda\geq 0\mid(\theta+\lambda\alpha)|_{S}\in\operatorname{Nef}(S)\right\}\)_;_
3. _the Zariski decomposition is PL on_ \(C_{\operatorname{psef}}\)_, with_ \[\operatorname{N}\equiv 0\text{ on }C_{\operatorname{nef}},\quad \operatorname{N}(a\theta_{\operatorname{nef}}+b[S])=b\overline{S}\text{ for all }a,b\geq 0.\]
Proof.: The assumptions imply that \(S|_{S}\) is not psef. By Theorem 5.11 (i), \(S\) is thus Zariski exceptional, and \([S]\) generates an extremal ray of \(\operatorname{Psef}(X)\). This ray is also extremal in \(C_{\operatorname{psef}}\), which proves (i).
Next, note that \(\lambda_{\operatorname{nef}}\leq\lambda_{\operatorname{nef}}^{S}\leq\lambda_{ \operatorname{psef}}\), by (i). Pick a curve \(\gamma\subset X\). We need to show \((\theta+\lambda_{\operatorname{nef}}^{S}\alpha)\cdot\gamma\geq 0\). This is clear if \(\gamma\subset S\) (since \((\theta+\lambda_{\operatorname{nef}}^{S}\alpha)|_{S}\) is nef), or if \(\alpha\cdot\gamma\geq 0\) (since \(\theta\cdot\gamma\geq 0\) and \(\lambda_{\operatorname{nef}}^{S}\geq 0\)). Otherwise, we have \(S\cdot\gamma\geq 0\) and \(\alpha\cdot\gamma\leq 0\), and we get again \((\theta+\lambda_{\operatorname{nef}}^{S}\alpha)\cdot\gamma\geq 0\) since
\[\theta+\lambda_{\operatorname{nef}}^{S}\alpha\equiv\theta_{\operatorname{ psef}}+(\lambda_{\operatorname{nef}}^{S}-\lambda_{\operatorname{psef}}) \alpha=t[S]+(\lambda_{\operatorname{nef}}^{S}-\lambda_{\operatorname{psef}})\alpha\]
with \(\lambda_{\operatorname{nef}}^{S}-\lambda_{\operatorname{psef}}\leq 0\). This proves (ii).
For (iii), note that \(\operatorname{N}\equiv 0\) on \(\operatorname{Nef}(X)\supset C_{\operatorname{nef}}\) (see Theorem 5.1). Further, \(\operatorname{N}([S])=\overline{S}\) (see Lemma 5.10), and hence \(\operatorname{N}(a\theta_{\operatorname{nef}}+b[S])\leq b\overline{S}\) for \(a,b\geq 0\). In particular, \(c:=\operatorname{ord}_{S}(a\theta_{\operatorname{nef}}+b[S])\leq b\). On the other hand, (5.1) yields
\[\operatorname{N}(a\theta_{\operatorname{nef}}+b[S])\geq\overline{\operatorname{N}_{X}(a\theta_{\operatorname{nef}}+b[S])}\geq c\overline{S},\]
and it thus remains to see \(c=b\). By Corollary 5.6, \(((a\theta_{\operatorname{nef}}+b[S])-c[S])\,|_{S}\) lies in \(\operatorname{Psef}(S)=\operatorname{Nef}(S)\). By (ii), we infer \(a\theta_{\operatorname{nef}}+(b-c)[S]\in C_{\operatorname{nef}}\), and hence \(b-c=0\), since \(C_{\operatorname{nef}}=\mathbb{R}_{\geq 0}\theta+\mathbb{R}_{\geq 0}\theta_{\operatorname{nef}}\) intersects \(\mathbb{R}_{\geq 0}\theta_{\operatorname{nef}}+\mathbb{R}_{\geq 0}[S]\) only along \(\mathbb{R}_{\geq 0}\theta_{\operatorname{nef}}\).
## 6. Green's functions and Zariski decompositions
In this section we fix an ample class \(\omega\in\operatorname{Amp}(X)\).
### Green's functions and equilibrium measures
A subset \(\Sigma\subset X^{\operatorname{an}}\) is _pluripolar_ if \(\Sigma\subset\{\varphi=-\infty\}\) for some \(\varphi\in\operatorname{PSH}(\omega)\). By [1, Theorem 4.5], \(\Sigma\) is nonpluripolar iff
\[\operatorname{T}(\Sigma):=\sup_{\varphi\in\operatorname{PSH}(\omega)}(\sup \varphi-\sup_{\Sigma}\varphi)\in[0,+\infty]\]
is finite. The invariant \(\operatorname{T}(\Sigma)\), which plays an important role in [1, 1], is modeled on the Alexander-Taylor capacity (which corresponds to \(e^{-\operatorname{T}(\Sigma)}\)) in complex analysis.
**Definition 6.1**.: _For any subset \(\Sigma\subset X^{\operatorname{an}}\) we set_
\[\varphi_{\Sigma}=\varphi_{\omega,\Sigma}:=\sup\{\varphi\in\operatorname{PSH}( \omega)\mid\varphi|_{\Sigma}\leq 0\}. \tag{6.1}\]
Note that \(\varphi_{\Sigma}(v_{\operatorname{triv}})=\sup\varphi_{\Sigma}=\operatorname{ T}(\Sigma)\), and hence
\[\varphi_{\Sigma}\in\operatorname{PL}(X)\Longrightarrow\operatorname{T}( \Sigma)\in\mathbb{Q}. \tag{6.2}\]
**Theorem 6.2**.: _For any compact subset \(\Sigma\subset X^{\operatorname{an}}\), the following holds:_
1. \(\varphi_{\Sigma}=\sup\{\varphi\in\operatorname{CPSH}(\omega)\mid\varphi|_{ \Sigma}\leq 0\}\)_; in particular,_ \(\varphi_{\Sigma}\) _is lsc;_
2. _if_ \(\Sigma\) _is pluripolar then_ \(\varphi_{\Sigma}^{\star}\equiv+\infty\)_;_
3. _if_ \(\Sigma\) _is nonpluripolar, then_ \(\varphi_{\Sigma}^{\star}\) _is_ \(\omega\)_-psh and nonnegative; further,_ \(\mu_{\Sigma}:=\operatorname{MA}(\varphi_{\Sigma}^{\star})\) _is supported in_ \(\Sigma\)_,_ \(\int\varphi_{\Sigma}^{\star}\,\mu_{\Sigma}=0\)_, and_ \(\mu_{\Sigma}\) _is characterized as the unique minimizer of the energy_ \(\|\mu\|\) _over all Radon probability measures_ \(\mu\) _with support in_ \(\Sigma\)_._
Since the energy of a Radon probability measure \(\mu\) appears only in this statement, we simply recall here that it is defined as
\[\|\mu\|=\sup_{\varphi\in\mathcal{E}^{1}(\omega)}\left(\operatorname{E}( \varphi)-\int\varphi\,\mu\right)\in[0,+\infty], \tag{6.3}\]
and refer to [1, §9.1] for more details.
**Definition 6.3**.: _Assuming \(\Sigma\) is nonpluripolar, we call \(\mu_{\Sigma}\) its equilibrium measure, and \(\varphi_{\Sigma}^{\star}\) its Green's function._
The latter is characterized as the normalized potential of \(\mu_{\Sigma}\) (in the terminology of [1, §1.6]), i.e. the unique \(\varphi\in\mathcal{E}^{1}(\omega)\) such that \(\operatorname{MA}(\varphi)=\mu_{\Sigma}\) and \(\int\varphi\,\mu_{\Sigma}=0\).
Proof of Theorem 6.2.: Denote by \(\varphi_{\Sigma}^{\prime}\) the right-hand side in (i), which obviously satisfies \(\varphi_{\Sigma}^{\prime}\leq\varphi_{\Sigma}\). Pick \(\varphi\in\operatorname{PSH}(\omega)\) with \(\varphi|_{\Sigma}\leq 0\), and write \(\varphi\) as the limit of a decreasing net \((\varphi_{i})\) in \(\operatorname{CPSH}(\omega)\). For any \(\varepsilon>0\), a Dini type argument shows that \(\varphi_{i}<\varepsilon\) on \(\Sigma\) for \(i\) large enough. Thus \(\varphi_{i}\leq\varphi_{\Sigma}^{\prime}+\varepsilon\), and hence \(\varphi\leq\varphi_{\Sigma}^{\prime}+\varepsilon\). This shows \(\varphi_{\Sigma}\leq\varphi_{\Sigma}^{\prime}\), which proves (i).
Next, (ii) and the first half of (iii) follow from [1, Lemma 13.15]. Since the negligible set \(\{\varphi_{\Sigma}<\varphi_{\Sigma}^{\star}\}\) is pluripolar (see [1, Theorem 13.17]), it has zero measure for any measure \(\mu\) of finite energy [1, Lemma 9.2]. If \(\mu\) has support in \(\Sigma\), this yields \(\int\varphi_{\Sigma}^{\star}\,\mu=\int\varphi_{\Sigma}\,\mu=0\). By (6.3) we infer \(\|\mu\|\geq\operatorname{E}(\varphi_{\Sigma}^{\star})=\|\mu_{\Sigma}\|\). This proves that \(\mu_{\Sigma}\) minimizes the energy, while uniqueness follows from the strict convexity of the energy [1, Proposition 10.10].
Further mimicking classical terminology in the complex analytic setting, we introduce:
**Definition 6.4**.: _We say that a compact subset \(\Sigma\subset X^{\operatorname{an}}\) is regular if \(\varphi_{\Sigma}\in\operatorname{CPSH}(\omega)\)._
In particular, \(\Sigma\) is nonpluripolar (see Theorem 6.2).
**Lemma 6.5**.: _For any compact subset \(\Sigma\subset X^{\mathrm{an}}\), the following hold:_
1. \(\Sigma\) _is regular iff_ \(\varphi_{\Sigma}^{\star}\leq 0\) _on_ \(\Sigma\)_;_
2. _the regularity of_ \(\Sigma\) _is independent of_ \(\omega\in\mathrm{Amp}(X)\)_;_
3. _if_ \(\Sigma\subset X^{\mathrm{lin}}\) _then_ \(\Sigma\) _is regular._
Proof.: If \(\Sigma\) is regular, then \(\varphi_{\Sigma}^{\star}=\varphi_{\Sigma}\) vanishes on \(\Sigma\). Conversely, assume \(\varphi_{\Sigma}^{\star}\leq 0\) on \(\Sigma\). By (ii) and (iii) of Theorem 6.2, \(\Sigma\) is necessarily nonpluripolar, and \(\varphi_{\Sigma}^{\star}\) is \(\omega\)-psh. It is thus a competitor in (6.1), which implies that \(\varphi_{\Sigma}=\varphi_{\Sigma}^{\star}\) is \(\omega\)-psh, and also continuous by Theorem 6.2 (i).
Assume \(\Sigma\) is regular for \(\omega\), and pick \(\omega^{\prime}\in\mathrm{Amp}(X)\). Then \(t\omega-\omega^{\prime}\) is nef for \(t\gg 1\), and hence \(\mathrm{PSH}(\omega^{\prime})\subset t\,\mathrm{PSH}(\omega)\). This implies \(\varphi_{\omega^{\prime},\Sigma}\leq t\varphi_{\omega,\Sigma}\), and hence \(\varphi_{\omega^{\prime},\Sigma}^{\star}\leq t\varphi_{\omega,\Sigma}\). In particular, \(\varphi_{\omega^{\prime},\Sigma}^{\star}|_{\Sigma}\leq 0\), which proves that \(\Sigma\) is regular for \(\omega^{\prime}\), by (i).
Finally, assume \(\Sigma\subset X^{\mathrm{lin}}\). Since \(\{\varphi_{\Sigma}<\varphi_{\Sigma}^{\star}\}\) is pluripolar (see [1, Theorem 13.17]), it is disjoint from \(X^{\mathrm{lin}}\). As a result, \(\varphi_{\Sigma}^{\star}\in\mathrm{PSH}(\omega)\) vanishes on \(\Sigma\), and it again follows from (i) that \(\Sigma\) is regular.
### The Green's function of a real divisorial set
In what follows, we consider a _real divisorial set_, by which we mean a finite set \(\Sigma\subset X^{\mathrm{div}}_{\mathbb{R}}\) of real divisorial valuations. By Lemma 6.5 (iii), \(\Sigma\subset X^{\mathrm{lin}}\) is regular, i.e. \(\varphi_{\Sigma}\in\mathrm{CPSH}(\omega)\). When \(\Sigma=\{v\}\) for a single \(v\in X^{\mathrm{div}}_{\mathbb{R}}\), we simply write \(\varphi_{v}:=\varphi_{\Sigma}\).
**Example 6.6**.: _Assume \(\omega=c_{1}(L)\) with \(L\in\mathrm{Pic}(X)_{\mathbb{Q}}\) ample and \(v\in X^{\mathrm{div}}\). Then \(v\) is dreamy (with respect to \(L\)) in the sense of K. Fujita iff \(\varphi_{v}\in\mathcal{H}(L)\); see [1, §1.7, Appendix A]._
If \(v_{\mathrm{triv}}\in\Sigma\), then \(\varphi_{\Sigma}\equiv 0\), and we henceforth assume \(v_{\mathrm{triv}}\notin\Sigma\). Pick a smooth birational model \(\pi\colon Y\to X\) which extracts each \(v\in\Sigma\), i.e. \(v=t_{v}\,\mathrm{ord}_{E_{v}}\) for a prime divisor \(E_{v}\subset Y\) and \(t_{v}\in\mathbb{R}_{>0}\). We then introduce the effective \(\mathbb{R}\)-divisor on \(Y\)
\[D:=\sum_{v\in\Sigma}t_{v}^{-1}E_{v},\]
whose set of Rees valuations \(\Gamma_{D}\) coincides with \(\Sigma\) (see Definition 2.2).
**Theorem 6.7**.: _With the above notation, the following holds:_
1. \(\sup\varphi_{\Sigma}=\mathrm{T}(\Sigma)\) _coincides with the pseudoeffective threshold_ \[\lambda_{\mathrm{psef}}:=\max\left\{\lambda\geq 0\mid\pi^{\star}\omega-\lambda D\in\mathrm{Psef}(Y)\right\};\]
2. \(\varphi_{\Sigma}\in\mathrm{CPSH}(\omega)\) _is of divisorial type, and the associated family of_ \(b\)_-divisors_ \((B_{\lambda})_{\lambda\leq\lambda_{\mathrm{psef}}}\) _(see Theorem_ 2.4_) is given by_ \[-B_{\lambda}=\left\{\begin{array}{ll}\mathrm{N}(\pi^{\star}\omega-\lambda D )+\lambda\overline{D}&\text{for }\lambda\in[0,\lambda_{\mathrm{psef}}]\\ 0&\text{for }\lambda\leq 0.\end{array}\right.\]
Proof.: Pick \(\lambda\in\mathbb{R}\). For any \(\psi\in\mathrm{PSH}(\omega)\), we have \(\psi+\lambda\leq\varphi_{\Sigma}\Leftrightarrow\psi|_{\Sigma}\leq-\lambda\), and hence
\[\widehat{\varphi}_{\Sigma}^{\lambda}=\sup\{\psi\in\mathrm{PSH}_{\mathrm{ hom}}(\omega)\mid\psi|_{\Sigma}\leq-\lambda\}.\]
When \(\lambda\leq 0\) this yields \(\widehat{\varphi}_{\Sigma}^{\lambda}=0\). Assume now \(\lambda>0\). Using Proposition 2.3 and \(\mathrm{PSH}_{\mathrm{hom}}(\pi^{\star}\omega)=\pi^{\star}\,\mathrm{PSH}_{ \mathrm{hom}}(\omega)\), we get
\[\pi^{\star}\widehat{\varphi}_{\Sigma}^{\lambda}=\sup\{\tau\in\mathrm{PSH}_{ \mathrm{hom}}(\pi^{\star}\omega-\lambda D)\}-\lambda\psi_{D}=V_{\pi^{\star}\omega -\lambda D}-\lambda\psi_{D}. \tag{6.4}\]
Now the left-hand side is not identically \(-\infty\) iff \(\lambda\leq\sup\varphi_{\Sigma}\), while for the right-hand side this holds iff \(\lambda\leq\lambda_{\mathrm{psef}}\), by Proposition 4.1. This proves (i), and also (ii), by Theorem 4.5.
**Corollary 6.8**.: _The center of \(\varphi_{\Sigma}\) satisfies_
\[Z_{X}(\varphi_{\Sigma})=\pi\left(\mathbb{B}_{-}(\pi^{\star}\omega-\lambda_{ \mathrm{psef}}D)\right)\cup Z_{X}(\Sigma).\]
_In particular, \(Z_{X}(\varphi_{\Sigma})\) is Zariski dense in \(X\) iff \(\mathbb{B}_{-}(\pi^{\star}\omega-\lambda_{\mathrm{psef}}D)\) is Zariski dense in \(Y\)._
Proof.: By Lemma 3.3, we have
\[Z_{X}(\varphi_{\Sigma})=Z_{X}(\widehat{\varphi}_{\Sigma}^{\mathrm{max}})=\pi (Z_{Y}(\pi^{\star}\widehat{\varphi}_{\Sigma}^{\mathrm{max}})).\]
It follows from Theorem 6.7 and its proof that
\[\pi^{\star}\widehat{\varphi}_{\Sigma}^{\mathrm{max}}=V_{\pi^{\star}\omega- \lambda_{\mathrm{psef}}D}-\lambda_{\mathrm{psef}}\psi_{D}.\]
Now \(Z_{Y}(V_{\pi^{\star}\omega-\lambda_{\mathrm{psef}}D})=\mathbb{B}_{-}(\pi^{ \star}\omega-\lambda_{\mathrm{psef}}D)\) by Theorem 4.5, whereas we see from Example 3.2 that \(Z_{Y}(-\lambda_{\mathrm{psef}}\psi_{D})=Z_{Y}(\Sigma)\), so we conclude using Lemma 3.6.
### Dimension one and two
In this section we consider the case \(\dim X\leq 2\).
**Proposition 6.9**.: _If \(\dim X=1\), then for any real divisorial set \(\Sigma\subset X_{\mathbb{R}}^{\mathrm{div}}\), we have \(\varphi_{\Sigma}\in\mathbb{R}\mathrm{PL}^{+}(X)\). If \(\omega\) is rational and \(\Sigma\subset X^{\mathrm{div}}\), then we further have \(\varphi_{\Sigma}\in\mathrm{PL}^{+}(X)\)._
Proof.: We may assume \(v_{\mathrm{triv}}\not\in\Sigma\), or else \(\varphi_{\Sigma}\equiv 0\). Thus assume \(\Sigma=\{v_{i}\}_{i\in I}\), where \(v_{i}=t_{i}\operatorname{ord}_{p_{i}}\), \(t_{i}\in\mathbb{R}_{>0}\), and \(p_{i}\in X\) is a closed point. We may assume \(p_{i}\neq p_{j}\) for \(i\neq j\), or else \(\varphi_{\Sigma}=\varphi_{\Sigma^{\prime}}\) for \(\Sigma^{\prime}=\{v_{i}\}_{i\in I^{\prime}}\), where \(I^{\prime}\subset I\) is defined by \(i\in I^{\prime}\) iff for all \(j\neq i\), either \(p_{j}\neq p_{i}\) or \(t_{j}>t_{i}\). Under these assumptions,
\[\varphi_{\Sigma}=A\max\{1+\sum_{i}t_{i}^{-1}\log|\mathfrak{m}_{p_{i}}|,0\},\]
where \(A>0\) satisfies \(A\sum_{i}t_{i}^{-1}=\deg\omega\), see [1, Example 3.19]. Thus \(\varphi_{\Sigma}\in\mathbb{R}\mathrm{PL}^{+}(X)\). Further, if \(\Sigma\subset X^{\mathrm{div}}\), then \(t_{i}\in\mathbb{Q}_{>0}\) for all \(i\), so if \(\omega\) is rational, then \(A\in\mathbb{Q}_{>0}\), and hence \(\varphi_{\Sigma}\in\mathrm{PL}^{+}(X)\).
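For instance, if \(X=\mathbb{P}^{1}\), \(\deg\omega=1\) and \(\Sigma=\{\operatorname{ord}_{p}\}\) for a closed point \(p\in X\), the above specializes to \(A=1\) and

\[\varphi_{\Sigma}=\max\{1+\log|\mathfrak{m}_{p}|,0\}.\]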
**Theorem 6.10**.: _If \(\dim X=2\), then for any real divisorial set \(\Sigma\subset X_{\mathbb{R}}^{\mathrm{div}}\), we have \(\varphi_{\Sigma}\in\mathbb{R}\mathrm{PL}^{+}(X)\). If \(\omega\) is rational and \(\Sigma\subset X^{\mathrm{div}}\), then we further have_
\[\varphi_{\Sigma}\in\mathrm{PL}(X)\Leftrightarrow\varphi_{\Sigma}\in\mathrm{ PL}^{+}(X)\Leftrightarrow\mathrm{T}(\Sigma)\in\mathbb{Q}. \tag{6.5}\]
We will see in Example 7.2 that \(\mathrm{T}(\Sigma)\) can be irrational.
**Lemma 6.11**.: _Assume \(\dim X\leq 2\), and pick \(B\in\mathrm{Car}_{\mathrm{b}}(X)_{\mathbb{R}}\). Then \(B\) is relatively nef iff it is relatively semiample._
Proof.: Assume \(B\) is relatively nef, and pick a determination \(\pi\colon Y\to X\) of \(B\). The relatively nef cone of \(\mathrm{N}^{1}(Y/X)\) is dual to the cone generated by the (finite) set of \(\pi\)-exceptional prime divisors, and is thus a rational polyhedral cone. As a consequence, we can write \(B_{Y}=\sum_{i}t_{i}D_{i}\) with \(t_{i}>0\) and \(D_{i}\in\mathrm{Div}(Y)_{\mathbb{Q}}\) relatively nef. By [1, Theorem 12.1 (ii)], each \(D_{i}\) is relatively semiample, and the result follows.
Proof of Theorem 6.10.: Use the notation of Theorem 6.7. By Proposition 5.16, the Zariski decomposition is \(\mathbb{Q}\)-PL on the cone
\[C=(\mathbb{R}_{+}\pi^{\star}\omega+\mathbb{R}_{+}[-D])\cap\mathrm{Psef}(Y)=\mathbb{R}_{+}\pi^{\star}\omega+\mathbb{R}_{+}(\pi^{\star}\omega-\lambda_{\mathrm{psef}}[D]).\]
We can thus find \(0=\lambda_{1}<\lambda_{2}<\cdots<\lambda_{N}=\lambda_{\rm{psef}}\) such that
\[\lambda\mapsto B_{\lambda}=-({\rm N}(\pi^{\star}\omega-\lambda[D])+\lambda \overline{D})\]
is affine linear on \([\lambda_{i},\lambda_{i+1}]\) for \(1\leq i<N\). Setting \(B_{i}:=B_{\lambda_{i}}\), it follows that
\[\varphi_{\Sigma}=\sup_{\lambda\in[0,\lambda_{\rm{psef}}]}\{\psi_{B_{\lambda}} +\lambda\}=\max_{1\leq i\leq N}\{\psi_{B_{i}}+\lambda_{i}\}.\]
Since \(\overline{\omega}+[B_{i}]\) is nef, the antieffective divisor \(B_{i}\) is relatively nef, and hence relatively semiample (see Lemma 6.11). By Proposition 1.7, we infer \(\psi_{B_{i}}\in{\rm PL}^{+}_{\rm hom}(X)_{\mathbb{R}}\), and hence \(\varphi_{\Sigma}\in\mathbb{R}{\rm PL}^{+}(X)\).
Now assume \(\omega\) and \({\rm T}(\Sigma)=\lambda_{\rm{psef}}\) are both rational, and that \(\Sigma\subset X^{\rm div}\). Then \(D\) is rational as well, and \(C\) is thus a rational polyhedral cone. Since the Zariski decomposition on \(C\) is the restriction of a \(\mathbb{Q}\)-PL map on \({\rm N}^{1}(Y)\), this implies that the \(\lambda_{i}\) above can be chosen rational. Using again that the Zariski decomposition is \(\mathbb{Q}\)-PL on \(C\), we infer that \(B_{i}\) is a \(\mathbb{Q}\)-divisor, hence \(\psi_{B_{i}}\in{\rm PL}^{+}_{\rm hom}(X)\), which shows \(\varphi_{\Sigma}\in{\rm PL}^{+}(X)\). The rest follows from (6.2).
## 7. Examples of Green's functions
We now exhibit examples of Green's functions with various types of behavior. These examples serve as the underpinnings of Theorems A and B of the introduction.
### Divisors on abelian varieties
As a direct application of Theorem 6.7, we show:
**Proposition 7.1**.: _Assume \({\rm Nef}(X)={\rm Psef}(X)\). Consider a real divisorial set \(\Sigma=\{v_{\alpha}\}\subset X_{\mathbb{R}}^{\rm div}\) with \(v_{\alpha}=t_{\alpha}\,{\rm ord}_{E_{\alpha}}\) for \(E_{\alpha}\subset X\) prime, and set \(D:=\sum_{\alpha}t_{\alpha}^{-1}E_{\alpha}\). Then_
\[{\rm T}(\Sigma)=\lambda_{\rm{psef}}=\sup\left\{\lambda\geq 0\mid\omega- \lambda D\in{\rm Psef}(X)\right\}\]
_and_
\[\varphi_{\Sigma}={\rm T}(\Sigma)\max\left\{0,1-\psi_{D}\right\}.\]
_In particular, \(\varphi_{\Sigma}\in\mathbb{R}{\rm PL}^{+}(X)\). If we further assume \(\Sigma\subset X^{\rm div}\), then_
\[\varphi_{\Sigma}\in{\rm PL}(X)\Leftrightarrow\varphi_{\Sigma}\in{\rm PL}^{+}( X)\Leftrightarrow{\rm T}(\Sigma)\in\mathbb{Q}. \tag{7.1}\]
Proof.: Using the notation of Theorem 6.7 (applied with \(\pi=\operatorname{id}\)), we have \({\rm N}(\omega-\lambda D)=0\) for \(\lambda\leq\lambda_{\rm psef}={\rm T}(\Sigma)\), since \(\omega-\lambda D\) is then psef, and hence nef. Thus \(\widehat{\varphi}_{\Sigma}^{\lambda}=-\lambda\psi_{D}\), and hence
\[\varphi_{\Sigma}=\sup_{0\leq\lambda\leq\lambda_{\rm{psef}}}\{\lambda-\lambda \psi_{D}\}=\lambda_{\rm{psef}}\max\left\{0,1-\psi_{D}\right\}.\]
Since \(-\psi_{D}=\sum_{\alpha}t_{\alpha}^{-1}\log|\mathcal{O}_{X}(-E_{\alpha})|\) lies in \({\rm PL}^{+}(X)_{\mathbb{R}}\), it follows that \(\varphi_{\Sigma}\in\mathbb{R}{\rm PL}^{+}(X)\). If \(\Sigma\subset X^{\rm div}\), then \(D\) is a \(\mathbb{Q}\)-divisor, and hence \(-\psi_{D}\in{\rm PL}^{+}_{\rm hom}(X)\). If we further assume \({\rm T}(\Sigma)\in\mathbb{Q}\), we get \(\varphi_{\Sigma}\in{\rm PL}^{+}(X)\), and the remaining implication follows from (6.2).
**Example 7.2**.: _Suppose \(X\) is an abelian surface, \(\omega=c_{1}(L)\) with \(L\in{\rm Pic}(X)_{\mathbb{Q}}\) ample, and \(v={\rm ord}_{E}\) with \(E\subset X\) a prime divisor. Then \({\rm Nef}(X)={\rm Psef}(X)\), and \({\rm T}(v)=\lambda_{\rm{psef}}\) is the smallest root of the quadratic equation \((L-\lambda E)^{2}=0\). If \(X\) has Picard number \(\rho(X)\geq 2\), then \(\lambda_{\rm{psef}}\) is irrational for a typical choice of \(L\) and \(E\), and hence \(\varphi_{v}\notin{\rm PL}(X)\). In particular, \(v\) is not dreamy (with respect to \(L\)) in the sense of Fujita, see Example 6.6._
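Explicitly, assuming \(E^{2}>0\) (when \(E^{2}=0\) the equation becomes linear, with rational root \(L^{2}/2(L\cdot E)\)), expanding \((L-\lambda E)^{2}=L^{2}-2\lambda(L\cdot E)+\lambda^{2}E^{2}\) yields

\[\operatorname{T}(v)=\frac{(L\cdot E)-\sqrt{(L\cdot E)^{2}-L^{2}E^{2}}}{E^{2}},\]

where the discriminant is nonnegative by the Hodge index theorem; \(\operatorname{T}(v)\) is thus irrational precisely when \((L\cdot E)^{2}-L^{2}E^{2}\) fails to be the square of a rational number.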
### The Cutkosky example
Building on a construction of Cutkosky [13] and Proposition 5.17 (itself based on [12, §6.5]), we provide an example of a divisorial valuation on \(\mathbb{P}^{3}\) for which (6.5) fails. This relies on the following general result.
**Proposition 7.3**.: _Consider a flag of smooth subvarieties \(Z\subset S\subset X\) with \(\operatorname{codim}S=1\), \(\operatorname{codim}Z=2\) and ideals \(\mathfrak{b}_{S}\subset\mathfrak{b}_{Z}\subset\mathcal{O}_{X}\), and assume that_
1. \(S\equiv\omega\)_;_
2. \(\operatorname{Nef}(S)=\operatorname{Psef}(S)\)_;_
3. \(\omega|_{S}-[Z]\) _is not nef on_ \(S\)_, i.e._ \(\lambda_{\operatorname{nef}}^{S}:=\sup\{\lambda\geq 0\mid\omega|_{S}-\lambda[Z]\in\operatorname{Nef}(S)\}<1\)_._
_The Green's function of \(v:=\operatorname{ord}_{Z}\in X^{\operatorname{div}}\) is then given by_
\[\varphi_{v}=\max\left\{0,\lambda_{\operatorname{nef}}^{S}(\log|\mathfrak{b}_{ Z}|+1),\log|\mathfrak{b}_{S}|+1\right\}.\]
_In particular, \(\operatorname{T}(v)=1\), \(\varphi_{v}\in\mathbb{R}\mathrm{PL}^{+}(X)\), and_
\[\varphi_{v}\in\operatorname{PL}(X)\Leftrightarrow\varphi_{v}\in\operatorname {PL}^{+}(X)\Leftrightarrow\lambda_{\operatorname{nef}}^{S}\in\mathbb{Q}.\]
Proof.: Let \(\pi\colon Y\to X\) be the blowup along \(Z\), with exceptional divisor \(E\), and denote by \(S^{\prime}=\pi^{\star}S-E\) the strict transform of \(S\). Since \(Z\) has codimension \(1\) on \(S\), \(\pi\) maps \(S^{\prime}\) isomorphically onto \(S\), and takes \(S^{\prime}|_{S^{\prime}}=\pi^{\star}S|_{S^{\prime}}-E|_{S^{\prime}}\) to \(S|_{S}-Z\equiv\omega|_{S}-[Z]\). By (ii) and (iii), we thus have \(\operatorname{Nef}(S^{\prime})=\operatorname{Psef}(S^{\prime})\), and \(S^{\prime}|_{S^{\prime}}\) is not nef.
Consider the cone \(C\subset\operatorname{N}^{1}(Y)\) generated by \(\theta:=\pi^{\star}\omega\in\operatorname{Nef}(Y)\) and \(\alpha:=-[E]\notin\operatorname{Psef}(Y)\). Since \(C\) contains the class of \(S^{\prime}\), it follows from Proposition 5.17 that
\[1=\lambda_{\operatorname{psef}}:=\sup\{\lambda\geq 0\mid\pi^{\star}\omega- \lambda[E]\in\operatorname{Psef}(Y)\}\]
and \(\lambda\mapsto\operatorname{N}(\pi^{\star}\omega-\lambda E)\) vanishes on \([0,\lambda_{\operatorname{nef}}^{S}]\), and is affine linear on \([\lambda_{\operatorname{nef}}^{S},1]\), with value \(\overline{S^{\prime}}\) at \(\lambda=1\). By Theorem 6.7, the concave family \((B_{\lambda})_{\lambda\leq 1}\) of \(b\)-divisors associated to \(\varphi_{v}\) is affine linear on \((-\infty,0]\), \([0,\lambda_{\operatorname{nef}}^{S}]\) and \([\lambda_{\operatorname{nef}}^{S},1]\), with value
\[B_{\lambda}=0,\quad-\lambda_{\operatorname{nef}}^{S}\overline{E}\quad\text{and}\quad-\overline{S^{\prime}+E}=-\overline{S}\]
at \(\lambda=0\), \(\lambda_{\operatorname{nef}}^{S}\) and \(1\), respectively. By (2.1), the result follows, since \(-\psi_{\overline{E}}=\log|\mathfrak{b}_{Z}|\) and \(-\psi_{\overline{S}}=\log|\mathfrak{b}_{S}|\).
**Example 7.4**.: _Assume \(k=\mathbb{C}\), and set \((X,L)=(\mathbb{P}^{3},\mathcal{O}(4))\). By [13], there exists a smooth quartic surface \(S\subset X\) without \((-2)\)-curves, and hence such that \(\operatorname{Nef}(S)=\operatorname{Psef}(S)\), containing a smooth curve \(Z\) such that \(\lambda_{\operatorname{nef}}^{S}\) is irrational and less than \(1\). By Proposition 7.3, we infer \(\operatorname{T}(v)=1\) and \(\varphi_{v}\in\mathbb{R}\mathrm{PL}^{+}(X)\setminus\operatorname{PL}(X)\) (in contrast with (6.5))._
### The Lesieutre example
Based on an example by Lesieutre [11], we now exhibit a Green's function that is not \(\mathbb{R}\)-PL. This forms the basis for Theorem B in the introduction.
**Proposition 7.5**.: _Suppose that \(X\) admits a class \(\theta\in\operatorname{Psef}(X)\) whose diminished base locus \(\mathbb{B}_{-}(\theta)\) is Zariski dense. Then there exist \(\omega\in\operatorname{Amp}(X)\) and \(v\in X^{\operatorname{div}}\) such that \(Z_{X}(\varphi_{\omega,v})\) is Zariski dense in \(X\). In particular, \(\varphi_{\omega,v}\notin\mathbb{R}\mathrm{PL}(X)\)._
Proof.: Note first that \(\theta\) cannot be big. Otherwise, there would exist an effective \(\mathbb{R}\)-divisor \(D\equiv\theta\), and hence \(\mathbb{B}_{-}(\theta)\) would be contained in \(\operatorname{supp}D\). Pick an ample prime divisor \(E\) on \(X\), choose \(c\in\mathbb{Q}_{>0}\) large enough such that \(\omega:=\theta+c[E]\) is ample, and set \(v:=c^{-1}\operatorname{ord}_{E}\in X^{\operatorname{div}}\). Since \(\omega\) is ample and \(\omega-c[E]=\theta\) lies on the boundary of \(\operatorname{Psef}(X)\), the threshold \(\lambda_{\operatorname{psef}}=\sup\{\lambda\geq 0\mid\omega-\lambda[E]\in \operatorname{Psef}(X)\}\) is equal to \(c\). Thus \(\mathbb{B}_{-}(\omega-\lambda_{\operatorname{psef}}[E])\) is Zariski dense, and hence so is \(Z_{X}(\varphi_{\omega,v})\), by Corollary 6.8. The last point follows from Lemma 3.7.
**Example 7.6**.: _By [14, Theorem 1.1], the assumptions in Proposition 7.5 are satisfied when \(k=\mathbb{C}\) and \(X\) is the blowup of \(\mathbb{P}^{3}\) at nine sufficiently general points._
If \(\theta\) in Proposition 7.5 is rational, then the proof shows that \(\omega\) can be taken rational as well, i.e. \(\omega=c_{1}(L)\) for an ample \(\mathbb{Q}\)-line bundle. While no such rational example appears to be known at present, we can nevertheless exploit the structure of Lesieutre's example to get:
**Proposition 7.7**.: _Set \((X,L):=(\mathbb{P}^{3},\mathcal{O}(1))\). Then there exists a finite set \(\Sigma\subset X_{\mathbb{R}}^{\mathrm{div}}\) such that \(Z_{X}(\varphi_{L,\Sigma})\) is Zariski dense in \(X\), and hence \(\varphi_{L,\Sigma}\notin\mathbb{R}\mathrm{PL}(X)\)._
Proof.: Let \(\pi\colon Y\to X\) be the blowup at nine sufficiently general points, and denote by \(\sum_{i=1}^{9}E_{i}\) the exceptional divisor. By [14, Remark 4.5, Lemma 5.2], we can pick \(D=\sum_{i}c_{i}E_{i}\) with \(c_{i}\in\mathbb{R}_{>0}\) such that the diminished base locus of \(\pi^{\star}L-D\) is Zariski dense. As above, this implies that this class lies on the boundary of the psef cone (it even generates an extremal ray, see [14, Lemma 5.1]), and the psef threshold
\[\lambda_{\mathrm{psef}}=\sup\{\lambda\geq 0\mid\pi^{\star}L-\lambda D\in \mathrm{Psef}(Y)\}\]
is thus equal to \(1\). The result now follows from Corollary 6.8, with \(\Sigma=\{c_{i}^{-1}\operatorname{ord}_{E_{i}}\}_{1\leq i\leq 9}\).
It is natural to ask:
**Question 7.8**.: _Can an example as in Proposition 7.7 be found with \(\Sigma\subset X^{\mathrm{div}}\)?_
## 8. The non-trivially valued case
In this section, we work over the non-Archimedean field \(K=k(\!(\varpi)\!)\) of formal Laurent series, with valuation ring \(K^{\circ}:=k[\![\varpi]\!]\). We use [13] as our main reference.
Thus \(X\) now denotes a smooth projective variety of dimension \(n\) over \(K\). (In §9, it will be obtained as the base change of a smooth projective \(k\)-variety.) Working 'additively', we view the elements of the analytification \(X^{\mathrm{an}}\) as valuations \(w\colon K(Y)^{\times}\to\mathbb{R}\) for subvarieties \(Y\subset X\), restricting to the given valuation on \(K\).
### Models
We define a _model_ of \(X\) to be a normal, flat, projective \(K^{\circ}\)-scheme \(\mathcal{X}\) together with the data of an isomorphism \(\mathcal{X}_{K}\simeq X\). The _special fiber_ of \(\mathcal{X}\) is the projective \(k\)-scheme \(\mathcal{X}_{0}:=\mathcal{X}\times_{\operatorname{Spec}K^{\circ}}\operatorname{Spec}k\). Each \(w\in X^{\mathrm{an}}\) can be viewed as a semivaluation on \(\mathcal{X}\), whose center is denoted by \(\operatorname{red}_{\mathcal{X}}(w)\in\mathcal{X}_{0}\). This defines a surjective, anticontinuous _reduction map_ \(\operatorname{red}_{\mathcal{X}}\colon X^{\mathrm{an}}\to\mathcal{X}_{0}\). For each \(w\in X^{\mathrm{an}}\) we also set
\[Z_{\mathcal{X}}(w):=\overline{\{\operatorname{red}_{\mathcal{X}}(w)\}}\subset \mathcal{X}_{0}.\]
The preimage under \(\operatorname{red}_{\mathcal{X}}\) of the set of generic points of \(\mathcal{X}_{0}\) is finite. We denote it by \(\Gamma_{\mathcal{X}}\subset X^{\mathrm{an}}\), and call its elements the _Shilov points_ of \(\mathcal{X}\). As \(\mathcal{X}\) is normal, each irreducible component \(E\) of \(\mathcal{X}_{0}\) defines a _divisorial valuation_ \(w_{E}\in X^{\mathrm{an}}\) given by
\[w_{E}:=b_{E}^{-1}\operatorname{ord}_{E},\,b_{E}:=\operatorname{ord}_{E}(\varpi);\]
it is the unique preimage under \(\operatorname{red}_{\mathcal{X}}\) of the generic point of \(E\), and the Shilov points of \(\mathcal{X}\) are exactly these valuations \(w_{E}\).
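For instance, if \(\mathcal{X}\) is smooth over \(K^{\circ}\) with irreducible special fiber, then \(\mathcal{X}_{0}=E\) is reduced, so that \(b_{E}=\operatorname{ord}_{E}(\varpi)=1\), and \(w_{E}=\operatorname{ord}_{E}\) is the unique Shilov point of \(\mathcal{X}\).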
One says that another model \(\mathcal{X}^{\prime}\)_dominates_\(\mathcal{X}\) if the canonical birational map \(\mathcal{X}^{\prime}\dashrightarrow\mathcal{X}\) extends to a morphism (necessarily unique, by separatedness). In that case, \(\operatorname{red}_{\mathcal{X}}\) is the
composition of \(\operatorname{red}_{\mathcal{X}^{\prime}}\) with the induced projective morphism \(\mathcal{X}^{\prime}_{0}\to\mathcal{X}_{0}\). The set of models forms a filtered poset with respect to domination. The set
\[X^{\operatorname{div}}=\bigcup_{\mathcal{X}}\Gamma_{\mathcal{X}}\]
of all divisorial valuations is a dense subset of \(X^{\operatorname{an}}\).
### Piecewise linear functions
A \(\mathbb{Q}\)-Cartier \(\mathbb{Q}\)-divisor \(D\) on a model \(\mathcal{X}\) of \(X\) is _vertical_ if it is supported in \(\mathcal{X}_{0}\); it then defines a continuous function on \(X^{\operatorname{an}}\) called a _model function_. The \(\mathbb{Q}\)-vector space \(\operatorname{PL}(X)\) of such functions is stable under \(\max\), and dense in \(\operatorname{C}^{0}(X^{\operatorname{an}})\).
**Definition 8.1**.: _We define the space \(\mathbb{R}\mathrm{PL}(X)\) of real piecewise linear functions on \(X^{\operatorname{an}}\) (\(\mathbb{R}\)-\(\operatorname{PL}\) functions for short) as the smallest \(\mathbb{R}\)-linear subspace of \(\operatorname{C}^{0}(X^{\operatorname{an}})\) that is stable under max (and hence also min) and contains \(\operatorname{PL}(X)\)._
Fix a model \(\mathcal{X}\). An ideal \(\mathfrak{a}\subset\mathcal{O}_{\mathcal{X}}\) is _vertical_ if its zero locus \(V(\mathfrak{a})\) is contained in \(\mathcal{X}_{0}\). This defines a nonpositive function \(\log|\mathfrak{a}|\in\operatorname{PL}(X)\), determined by minus the exceptional divisor of the blowup of \(\mathcal{X}\) along \(\mathfrak{a}\), and such that
\[\log|\mathfrak{a}|(w)<0\Longleftrightarrow Z_{\mathcal{X}}(w)\subset V( \mathfrak{a}). \tag{8.1}\]
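For instance, the vertical ideal \(\mathfrak{a}=(\varpi)\subset\mathcal{O}_{\mathcal{X}}\) satisfies \(V(\mathfrak{a})=\mathcal{X}_{0}\) and \(\log|\mathfrak{a}|(w)=-w(\varpi)=-1\) for all \(w\in X^{\operatorname{an}}\), consistently with (8.1).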
Functions of the form \(\log|\mathfrak{a}|\) for a vertical ideal \(\mathfrak{a}\subset\mathcal{O}_{\mathcal{X}}\) span the \(\mathbb{Q}\)-vector space \(\operatorname{PL}(X)\) (see [1, Proposition 2.2]). As in §1.3, it follows that any function in \(\mathbb{R}\mathrm{PL}(X)\) can be written as a difference of finite maxima of \(\mathbb{R}_{+}\)-linear combinations of functions of the form \(\log|\mathfrak{a}|\).
### Dual complexes and retractions
We use [15, 1] as references.
An _snc model_\(\mathcal{X}\) is a regular model \(\mathcal{X}\) such that the Cartier divisor \(\mathcal{X}_{0}\) has simple normal crossing support. Denote by \(\mathcal{X}_{0}=\sum_{i\in I}b_{i}E_{i}\) its irreducible decomposition. A _stratum_ of \(\mathcal{X}_{0}\) is defined as a non-empty irreducible component of \(E_{J}:=\bigcap_{j\in J}E_{j}\) for some \(J\subset I\). By resolution of singularities, the set of snc models is cofinal in the poset of all models.
The _dual complex_ \(\Delta_{\mathcal{X}}\) of an snc model \(\mathcal{X}\) is defined as the dual intersection complex of \(\mathcal{X}_{0}\). Its faces are in 1-1 correspondence with the strata of \(\mathcal{X}_{0}\), and further come with a natural integral affine structure. In particular, the vertices of \(\Delta_{\mathcal{X}}\) are in 1-1 correspondence with the \(E_{i}\)'s, and admit a natural realization in \(X^{\operatorname{an}}\) as the set \(\Gamma_{\mathcal{X}}\) of Shilov points \(w_{E_{i}}\).
This extends to a canonical embedding \(\Delta_{\mathcal{X}}\hookrightarrow X^{\operatorname{an}}\) onto the set of monomial points with respect to \(\sum_{i}E_{i}\). The reduction \(\operatorname{red}_{\mathcal{X}}(w)\in\mathcal{X}_{0}\) of a point \(w\in\Delta_{\mathcal{X}}\subset X^{\operatorname{an}}\) is the generic point of the stratum of \(\mathcal{X}_{0}\) associated with the unique simplex of \(\Delta_{\mathcal{X}}\) containing \(w\) in its relative interior. In particular, \(Z_{\mathcal{X}}(w)\) is a stratum of \(\mathcal{X}_{0}\). This embedding is further compatible with the PL structures, in the sense that the \(\mathbb{Q}\)-vector space \(\operatorname{PL}(\Delta_{\mathcal{X}})\) of piecewise rational affine functions on \(\Delta_{\mathcal{X}}\) is precisely the image of \(\operatorname{PL}(X)\) under restriction.
If another snc model \(\mathcal{X}^{\prime}\) dominates \(\mathcal{X}\), then \(\Delta_{\mathcal{X}}\) is contained in \(\Delta_{\mathcal{X}^{\prime}}\), and \(\operatorname{PL}(\Delta_{\mathcal{X}^{\prime}})\) restricts to \(\operatorname{PL}(\Delta_{\mathcal{X}})\). Furthermore, the set
\[X^{\operatorname{qm}}:=\bigcup_{\mathcal{X}}\Delta_{\mathcal{X}}\subset X^{ \operatorname{an}}\]
of _quasimonomial valuations_ coincides with the set of Abhyankar points of \(X\), see [1, Remark 3.8] and [15, Proposition 3.7], while the subset of rational points \(\bigcup_{\mathcal{X}}\Delta_{\mathcal{X}}(\mathbb{Q})\) coincides with the set \(X^{\operatorname{div}}\) of divisorial valuations. For later use, we also note:
**Lemma 8.2**.: _If \(\mathcal{X}\) is an snc model, then the image \(\operatorname{red}_{\mathcal{X}^{\prime}}(\Delta_{\mathcal{X}})\subset\mathcal{X }^{\prime}_{0}\) of the dual complex of \(\mathcal{X}\) under the reduction map of any other model \(\mathcal{X}^{\prime}\) is finite._
Proof.: Pick an snc model \(\mathcal{X}^{\prime\prime}\) that dominates both \(\mathcal{X}\) and \(\mathcal{X}^{\prime}\). Then \(\Delta_{\mathcal{X}}\) is contained in \(\Delta_{\mathcal{X}^{\prime\prime}}\), and \(\operatorname{red}_{\mathcal{X}^{\prime}}(\Delta_{\mathcal{X}})\) is thus contained in the image of \(\operatorname{red}_{\mathcal{X}^{\prime\prime}}(\Delta_{\mathcal{X}^{\prime \prime}})\) under the induced morphism \(\mathcal{X}^{\prime\prime}_{0}\to\mathcal{X}_{0}\). After replacing both \(\mathcal{X}\) and \(\mathcal{X}^{\prime}\) with \(\mathcal{X}^{\prime\prime}\), we may thus assume without loss that \(\mathcal{X}=\mathcal{X}^{\prime}\). For any \(w\in\Delta_{\mathcal{X}}\), \(\operatorname{red}_{\mathcal{X}}(w)\) is then the generic point of some stratum of \(\mathcal{X}_{0}\), and \(\operatorname{red}_{\mathcal{X}}(\Delta_{\mathcal{X}})\) is thus a finite set.
Dually, each snc model \(\mathcal{X}\) comes with a canonical _retraction_\(p_{\mathcal{X}}\colon X^{\operatorname{an}}\to\Delta_{\mathcal{X}}\) that takes \(w\in X^{\operatorname{an}}\) to the unique monomial valuation \(w^{\prime}=p_{\mathcal{X}}(w)\) such that
* \(Z_{\mathcal{X}}(w^{\prime})\) is the minimal stratum containing \(Z_{\mathcal{X}}(w)\);
* \(w\) and \(w^{\prime}\) take the same values on the \(E_{i}\)'s.
This induces a homeomorphism \(X^{\operatorname{an}}\stackrel{{\sim}}{{\to}}\varprojlim_{ \mathcal{X}}\Delta_{\mathcal{X}}\), which is compatible with the PL structures in the sense that
\[\operatorname{PL}(X)=\bigcup_{\mathcal{X}}p_{\mathcal{X}}^{*}\operatorname{ PL}(\Delta_{\mathcal{X}}). \tag{8.2}\]
This implies
\[\mathbb{R}\operatorname{PL}(X)=\bigcup_{\mathcal{X}}p_{\mathcal{X}}^{*} \operatorname{\mathbb{R}\operatorname{PL}}(\Delta_{\mathcal{X}}), \tag{8.3}\]
where \(\mathbb{R}\operatorname{PL}(\Delta_{\mathcal{X}})\) is the space of \(\mathbb{R}\)-PL functions on \(\Delta_{\mathcal{X}}\), i.e. functions that are real affine linear on a sufficiently fine decomposition of each face into real simplices.
### Psh functions and Monge-Ampère measures
We use [1, 1, 2, 1] as references.
A _closed \((1,1)\)-form_ \(\theta\in\mathcal{Z}^{1,1}(X)\) in the sense of [1, §4.2] is represented by a relative numerical equivalence class on some model \(\mathcal{X}\), called a _determination_ of \(\theta\). It induces a numerical class \([\theta]\in\operatorname{N}^{1}(X)\). We say that \(\theta\) is _semipositive_, written \(\theta\geq 0\), if \(\theta\) is determined by a nef numerical class on some model. In that case, \([\theta]\) is nef as well.
To each tuple \(\theta_{1},\ldots,\theta_{n}\) in \(\mathcal{Z}^{1,1}(X)\) is associated a signed Radon measure \(\theta_{1}\wedge\cdots\wedge\theta_{n}\) on \(X^{\operatorname{an}}\) of total mass \([\theta_{1}]\cdot\ldots\cdot[\theta_{n}]\), with finite support in \(X^{\operatorname{div}}\). More precisely, if all \(\theta_{i}\) are determined by a normal model \(\mathcal{X}\), then \(\theta_{1}\wedge\cdots\wedge\theta_{n}\) has support in \(\Gamma_{\mathcal{X}}\) (see [1, §2.7]).
Each \(\varphi\in\operatorname{PL}(X)\) is determined by a vertical \(\mathbb{Q}\)-Cartier divisor \(D\) on some model \(\mathcal{X}\), whose numerical class defines a closed \((1,1)\)-form \(\operatorname{dd}^{\mathrm{c}}\varphi\in\mathcal{Z}^{1,1}(X)\). We say that \(\varphi\) is \(\theta\)_-psh_ for a given \(\theta\in\mathcal{Z}^{1,1}(X)\) if \(\theta+\operatorname{dd}^{\mathrm{c}}\varphi\geq 0\).
From now on, we fix a semipositive form \(\omega\in\mathcal{Z}^{1,1}(X)\) such that \([\omega]\) is ample. A function \(\varphi\colon X^{\operatorname{an}}\to\mathbb{R}\cup\{-\infty\}\) is _\(\omega\)-plurisubharmonic_ (_\(\omega\)-psh_ for short) if \(\varphi\not\equiv-\infty\) and \(\varphi\) can be written as the pointwise limit of a decreasing net of \(\omega\)-psh PL functions. The space \(\operatorname{PSH}(\omega)\) is closed under max and under decreasing limits.
By Dini's lemma, the space \(\operatorname{CPSH}(\omega)\) of continuous \(\omega\)-psh functions coincides with the closure in \(\operatorname{C}^{0}(X)\) (with respect to uniform convergence) of the space of \(\omega\)-psh PL functions.
Each \(\varphi\in\operatorname{PSH}(\omega)\) satisfies the 'maximum principle'
\[\sup_{X}\varphi=\max_{\Gamma_{\mathcal{X}}}\varphi \tag{8.4}\]
for any model \(\mathcal{X}\) determining \(\omega\) (see [1, Proposition 4.22]). For snc models, [1, §7.1] more precisely yields:
**Lemma 8.3**.: _Pick \(\varphi\in\operatorname{PSH}(\omega)\) and an snc model \(\mathcal{X}\) on which \(\omega\) is determined. Then:_
1. _the restriction of_ \(\varphi\) _to any face of_ \(\Delta_{\mathcal{X}}\) _is continuous and convex;_
2. _the net_ \((\varphi\circ p_{\mathcal{X}})_{\mathcal{X}}\) _is decreasing and converges pointwise to_ \(\varphi\)_;_
3. \(\varphi\leq\varphi\circ p_{\mathcal{X}}\)_._
**Remark 8.4**.: _The definition of \(\mathrm{PSH}(\omega)\) given here differs from the one in [1], but Theorem 8.7 in loc. cit. implies that the two definitions are equivalent._
To each continuous \(\omega\)-psh function \(\varphi\) (or, more generally, any \(\omega\)-psh function of finite energy) is associated its _Monge-Ampère measure_ \(\mathrm{MA}(\varphi)=\mathrm{MA}_{\omega}(\varphi)\), a Radon probability measure on \(X^{\mathrm{an}}\) uniquely determined by the following properties:
* \(\varphi\mapsto\mathrm{MA}(\varphi)\) is continuous along decreasing nets;
* if \(\varphi\) is PL, then \(\mathrm{MA}(\varphi)=V^{-1}(\omega+\mathrm{dd}^{\mathrm{c}}\varphi)^{n}\) with \(V:=[\omega]^{n}\).
By the main result of [1], any Radon probability measure \(\mu\) with support in the dual complex \(\Delta_{\mathcal{X}}\) of some snc model can be written as \(\mu=\mathrm{MA}(\varphi)\) for some \(\varphi\in\mathrm{CPSH}(\omega)\), unique up to an additive constant.
### Green's functions
As in the trivially valued case, we can consider the Green's function associated to a nonpluripolar set \(\Sigma\subset X^{\mathrm{an}}\). Here we will only consider the following case. Suppose \(w\in X^{\mathrm{div}}\) is a divisorial point, and define
\[\varphi_{w}:=\varphi_{\omega,w}:=\sup\{\varphi\in\mathrm{PSH}(\omega)\mid \varphi(w)\leq 0\}.\]
It follows from [1, §8.4] that \(\varphi_{w}\in\mathrm{CPSH}(\omega)\) satisfies \(\mathrm{MA}(\varphi_{w})=\delta_{w}\) and \(\varphi_{w}(w)=0\).
**Proposition 8.5**.: _If \(\dim X=1\) and \([\omega]\) is a rational class, then \(\varphi_{w}\in\mathrm{PL}(X)\)._
Proof.: This follows from Proposition 3.3.7 in [15], and can also be deduced from properties of the intersection form on \(\mathcal{X}_{0}\) for any snc model \(\mathcal{X}\), as in [11, Theorem 7.17].
This proves part (i) of Theorem A in the introduction. We will prove (ii) in §9.5.
### Invariance under retraction
It will be convenient to introduce the following terminology:
**Definition 8.6**.: _We say that a function \(\varphi\) on \(X^{\mathrm{an}}\) is invariant under retraction if \(\varphi=\varphi\circ p_{\mathcal{X}}\) for some (and hence any sufficiently high) snc model \(\mathcal{X}\) of \(X\)._
**Example 8.7**.: _By (8.2) and (8.3), a function \(\varphi\in\mathrm{C}^{0}(X^{\mathrm{an}})\) lies in \(\mathrm{PL}(X)\) (resp. \(\mathbb{R}\mathrm{PL}(X)\)) iff \(\varphi\) is invariant under retraction and restricts to a \(\mathbb{Q}\)-PL (resp. \(\mathbb{R}\)-PL) function on the dual complex associated to any (equivalently, any sufficiently high) snc model._
**Remark 8.8**.: _The condition \(\varphi=\varphi\circ p_{\mathcal{X}}\) in Definition 8.6 is stronger than the 'comparison property' of [11, Definition 3.11], which merely requires \(\varphi=\varphi\circ p_{\mathcal{X}}\) to hold on the preimage under \(p_{\mathcal{X}}\) of the \(n\)-dimensional open faces of some dual complex \(\Delta_{\mathcal{X}}\), i.e. the preimage of the \(0\)-dimensional strata of \(\mathcal{X}_{0}\) under the reduction map._
**Proposition 8.9**.: _If \(\varphi\in\mathrm{PSH}(\omega)\) is invariant under retraction, then \(\varphi\in\mathrm{CPSH}(\omega)\), and \(\mathrm{MA}(\varphi)\) is supported in some dual complex._
The first point is a direct consequence of Lemma 8.3, while the second one is a special case of the following more precise result. Recall first that the _\(\omega\)-psh envelope_ of \(f\in\mathrm{C}^{0}(X^{\mathrm{an}})\) is defined as
\[\mathrm{P}(f)=\mathrm{P}_{\omega}(f):=\sup\{\varphi\in\mathrm{PSH}(\omega)\mid \varphi\leq f\}.\]
By [1], it lies in \(\mathrm{CPSH}(\omega)\).
**Theorem 8.10**.: _For any \(\varphi\in\mathrm{CPSH}(\omega)\) and any snc model \(\mathcal{X}\) on which \(\omega\) is determined, the following properties are equivalent:_
1. \(\mathrm{MA}(\varphi)\) _is supported in_ \(\Delta_{\mathcal{X}}\)_;_
2. \(\operatorname{P}(\varphi\circ p_{\mathcal{X}})=\varphi\)_._
Proof.: For any \(\psi\in\operatorname{PSH}(\omega)\), we have \(\psi\leq\psi\circ p_{\mathcal{X}}\) (see Lemma 8.3 (iii)), and hence
\[\operatorname{P}(\varphi\circ p_{\mathcal{X}})=\sup\left\{\psi\in\operatorname{PSH }(\omega)\mid\psi\leq\varphi\text{ on }\Delta_{\mathcal{X}}\right\}. \tag{8.5}\]
Assume (i). By the domination principle (see [1, Lemma 8.4]), any \(\psi\in\operatorname{PSH}(\omega)\) such that \(\psi\leq\varphi\) on \(\operatorname{supp\,MA}(\varphi)\subset\Delta_{\mathcal{X}}\) satisfies \(\psi\leq\varphi\) on \(X\). In view of (8.5) this yields (ii). Conversely, assume (ii). For any finite set of rational points \(\Sigma\subset\Delta_{\mathcal{X}}(\mathbb{Q})\subset X^{\operatorname{div}}\), consider the envelope
\[\varphi_{\Sigma}:=\sup\{\psi\in\operatorname{PSH}(\omega)\mid\psi\leq\varphi \text{ on }\Sigma\}.\]
Then \(\varphi_{\Sigma}\) lies in \(\operatorname{CPSH}(\omega)\), and \(\operatorname{MA}(\varphi_{\Sigma})\) is supported in \(\Sigma\) (see [1, Lemma 8.5]). The net \((\varphi_{\Sigma})\), indexed by the filtered poset of finite subsets \(\Sigma\subset\Delta_{\mathcal{X}}(\mathbb{Q})\), is clearly decreasing, and bounded below by \(\varphi\). Its limit \(\psi:=\lim_{\Sigma}\varphi_{\Sigma}\) is thus \(\omega\)-psh, and we claim that it coincides with \(\varphi\). Indeed, we have \(\psi\leq\varphi\) on \(\bigcup_{\Sigma}\Sigma=\Delta_{\mathcal{X}}(\mathbb{Q})\), and hence on \(\Delta_{\mathcal{X}}\), where both \(\psi\) and \(\varphi\) are continuous. By (8.5), this yields \(\psi\leq\operatorname{P}(\varphi\circ p_{\mathcal{X}})=\varphi\). By continuity of the Monge-Ampère operator along decreasing nets, we infer \(\operatorname{MA}(\varphi_{\Sigma})\to\operatorname{MA}(\varphi)\) weakly on \(X\), which yields (i) since each \(\operatorname{MA}(\varphi_{\Sigma})\) is supported in \(\Delta_{\mathcal{X}}\).
In view of Proposition 8.9 and Example 8.7, it is natural to conversely ask:
**Question 8.11**.: _If the Monge-Ampère measure \(\operatorname{MA}_{\omega}(\varphi)\) of \(\varphi\in\operatorname{CPSH}(\omega)\) is supported in some dual complex, is \(\varphi\) invariant under retraction?_
This question appears as [1, Question 2], and is equivalent to asking whether \(\varphi\circ p_{\mathcal{X}}\) is \(\omega\)-psh for some high enough model \(\mathcal{X}\), by Theorem 8.10. In Example 9.11 below (see also Theorem A) we show that the answer is negative. In this example, the support of \(\operatorname{MA}_{\omega}(\varphi)\) is even a finite set. One can nevertheless ask:
**Question 8.12**.: _Assume that \(\varphi\in\operatorname{CPSH}(\omega)\) is such that the support of the Monge-Ampère measure \(\operatorname{MA}_{\omega}(\varphi)\) is a finite set contained in some dual complex._
* _is_ \(\varphi\)__\(\mathbb{R}\)_-PL on each dual complex?_
* _if_ \(\omega\) _is rational, is_ \(\varphi\)__\(\mathbb{Q}\)_-PL on each dual complex?_
Example 9.11 below provides a negative answer to (ii). Indeed, the function \(\varphi\) in this example is \(\mathbb{R}\)-PL but not \(\mathbb{Q}\)-PL, and by (8.2), (8.3), this implies that \(\varphi\) fails to be \(\mathbb{Q}\)-PL on some dual complex \(\Delta_{\mathcal{X}}\). The answer to (i) is also likely negative in general, as suggested by Nakayama's counterexample to the existence of Zariski decompositions on certain toric bundles over an abelian surface [12, IV.2.10].
**Question 8.13**.: _Suppose \(X\) is a toric variety, and let \(\varphi\in\operatorname{CPSH}(\omega)\) be a torus-invariant \(\omega\)-psh function such that \(\operatorname{MA}_{\omega}(\varphi)\) is supported on a compact subset of \(N_{\mathbb{R}}\subset X^{\operatorname{an}}\). Is \(\varphi\) invariant under retraction?_
**Question 8.14**.: _If \(\varphi\in\operatorname{CPSH}(\omega)\) is invariant under retraction, is the same true for \(\varphi|_{Z^{\operatorname{an}}}\), if \(Z\subset X\) is a smooth subvariety?_
### The center of a plurisubharmonic function
We end this section with a version of Theorem 3.5. In analogy with (3.1), for any subset \(S\subset X^{\operatorname{an}}\) and any model \(\mathcal{X}\) we set
\[Z_{\mathcal{X}}(S):=\bigcup_{w\in S}Z_{\mathcal{X}}(w).\]
This is thus the smallest subset of \(\mathcal{X}_{0}\) that is invariant under specialization and contains the image \(\operatorname{red}_{\mathcal{X}}(S)\) of \(S\) under the reduction map \(\operatorname{red}_{\mathcal{X}}\colon X^{\operatorname{an}}\to\mathcal{X}_{0}\). For any higher model \(\mathcal{X}^{\prime}\), the induced proper morphism \(\mathcal{X}_{0}^{\prime}\to\mathcal{X}_{0}\) maps \(Z_{\mathcal{X}^{\prime}}(S)\) onto \(Z_{\mathcal{X}}(S)\).
We say that \(S\subset X^{\operatorname{an}}\) is _invariant under retraction_ if \(p_{\mathcal{X}}^{-1}(S)=S\) for some (and hence any sufficiently high) snc model \(\mathcal{X}\).
**Lemma 8.15**.: _If \(S\subset X^{\operatorname{an}}\) is invariant under retraction, then \(Z_{\mathcal{X}}(S)\) is Zariski closed for any model \(\mathcal{X}\)._
Proof.: Pick an \(\operatorname{snc}\) model \(\mathcal{X}^{\prime}\) dominating \(\mathcal{X}\) such that \(S=p_{\mathcal{X}^{\prime}}^{-1}(S)\). Since \(Z_{\mathcal{X}}(S)\) is the image of \(Z_{\mathcal{X}^{\prime}}(S)\) under the proper morphism \(\mathcal{X}_{0}^{\prime}\to\mathcal{X}_{0}\), we may replace \(\mathcal{X}\) with \(\mathcal{X}^{\prime}\) and assume without loss that \(\mathcal{X}=\mathcal{X}^{\prime}\). The set \(Z_{\mathcal{X}}(S)\) obviously contains \(Z_{\mathcal{X}}(S\cap\Delta_{\mathcal{X}})\), which is Zariski closed since \(Z_{\mathcal{X}}(w)\) is a stratum of \(\mathcal{X}_{0}\) for any \(w\in\Delta_{\mathcal{X}}\). Conversely, pick \(w\in S\), and set \(y:=p_{\mathcal{X}}(w)\in\Delta_{\mathcal{X}}\). Then \(y\in p_{\mathcal{X}}^{-1}(S)=S\), and \(Z_{\mathcal{X}}(w)\subset Z_{\mathcal{X}}(y)\) since it follows from the definition of \(p_{\mathcal{X}}\) that \(\operatorname{red}_{\mathcal{X}}(w)\) is a specialization of \(\operatorname{red}_{\mathcal{X}}(y)\). This shows, as desired, that \(Z_{\mathcal{X}}(S)=Z_{\mathcal{X}}(S\cap\Delta_{\mathcal{X}})\) is Zariski closed.
**Definition 8.16**.: _Given \(\varphi\in\operatorname{PSH}(\omega)\) and a model \(\mathcal{X}\), we define the center of \(\varphi\) on \(\mathcal{X}\) as_
\[Z_{\mathcal{X}}(\varphi):=Z_{\mathcal{X}}(\{\varphi<\sup\varphi\})=\bigcup\{Z_{\mathcal{X}}(w)\mid w\in X^{\operatorname{an}},\,\varphi(w)<\sup\varphi\}.\]
**Example 8.17**.: _If \(\varphi=\log|\mathfrak{a}|\) for a vertical ideal \(\mathfrak{a}\subset\mathcal{O}_{\mathcal{X}}\), then \(Z_{\mathcal{X}}(\varphi)=V(\mathfrak{a})\)._
**Theorem 8.18**.: _For any \(\varphi\in\operatorname{PSH}(\omega)\) and any model \(\mathcal{X}\), the following holds:_
1. \(Z_{\mathcal{X}}(\varphi)\) _is an at most countable union of subvarieties of_ \(\mathcal{X}_{0}\)_;_
2. _if_ \(\varphi\) _is invariant under retraction, then_ \(Z_{\mathcal{X}}(\varphi)\) _is Zariski closed;_
3. \(Z_{\mathcal{X}}(\varphi)=\operatorname{red}_{\mathcal{X}}(\{\varphi<\sup \varphi\})\)_;_
4. _if_ \(\mathcal{X}\) _determines_ \(\omega\)_, then_ \(Z_{\mathcal{X}}(\varphi)\) _is a strict subset of_ \(\mathcal{X}_{0}\)_._
**Question 8.19**.: _Is it true that \(\{\varphi<\sup\varphi\}=\operatorname{red}_{\mathcal{X}}^{-1}(Z_{\mathcal{X}}( \varphi))\) as in Theorem 3.5?_
Proof.: By [1, Proposition 4.7], \(\varphi\) can be written as the pointwise limit of a decreasing sequence \((\varphi_{m})_{m\in\mathbb{N}}\) of \(\omega\)-psh PL functions. Since each \(\varphi_{m}\) is in particular invariant under retraction (see Example 8.7), Lemma 8.15 implies that \(Z_{\mathcal{X}}(\{\varphi_{m}<\sup\varphi\})\) is Zariski closed for each \(m\). On the other hand, since \(\varphi_{m}\searrow\varphi\) pointwise on \(X^{\operatorname{an}}\), we have \(\{\varphi<\sup\varphi\}=\bigcup_{m}\{\varphi_{m}<\sup\varphi\}\), and hence \(Z_{\mathcal{X}}(\varphi)=\bigcup_{m}Z_{\mathcal{X}}(\{\varphi_{m}<\sup\varphi\})\). This proves (i), while (ii) is a direct consequence of Lemma 8.15.
Pick \(w\in X^{\operatorname{an}}\) such that \(\varphi(w)<\sup\varphi\). To prove (iii), we need to show that any \(\xi\in Z_{\mathcal{X}}(w)\) lies in \(\operatorname{red}_{\mathcal{X}}(\{\varphi<\sup\varphi\})\). By Lemma 8.3, we can find a high enough snc model \(\mathcal{X}^{\prime}\) such that \(w^{\prime}:=p_{\mathcal{X}^{\prime}}(w)\) satisfies \(\varphi(w^{\prime})<\sup\varphi\). By properness of \(\mathcal{X}_{0}^{\prime}\to\mathcal{X}_{0}\), \(Z_{\mathcal{X}}(w)\) is the image of \(Z_{\mathcal{X}^{\prime}}(w)\), which is itself contained in \(Z_{\mathcal{X}^{\prime}}(w^{\prime})\). After replacing \(\mathcal{X}\) with \(\mathcal{X}^{\prime}\) and \(w\) with \(w^{\prime}\), we may thus assume without loss that \(\mathcal{X}\) is snc and \(w\) lies in \(\Delta_{\mathcal{X}}\). Pick \(y\in X^{\operatorname{an}}\) with \(\operatorname{red}_{\mathcal{X}}(y)=\xi\) (which exists by surjectivity of the reduction map, see [1, Lemma 4.12]). Set \(y^{\prime}:=p_{\mathcal{X}}(y)\), and denote by \(\sigma\) the unique face of \(\Delta_{\mathcal{X}}\) that contains \(y^{\prime}\) in its relative interior, the corresponding stratum of \(\mathcal{X}_{0}\) being the smallest one containing \(\xi\). Since the latter lies on the stratum \(Z_{\mathcal{X}}(w)\), it follows that \(\sigma\) contains \(w\) (possibly on its boundary). Since \(\varphi\) is convex and continuous on \(\sigma\) (see Lemma 8.3), it can only achieve its supremum at the interior point \(y^{\prime}\) if it is constant on \(\sigma\). As \(w\in\sigma\) satisfies \(\varphi(w)<\sup\varphi\), it follows that \(\varphi(y^{\prime})<\sup\varphi\) as well. Since \(y^{\prime}=p_{\mathcal{X}}(y)\), this implies \(\varphi(y)\leq\varphi(y^{\prime})<\sup\varphi\) (again by Lemma 8.3). Thus \(\xi=\operatorname{red}_{\mathcal{X}}(y)\in\operatorname{red}_{\mathcal{X}}(\{\varphi<\sup\varphi\})\), which proves (iii).
Finally, assume that \(\mathcal{X}\) is normal and determines \(\omega\). By (8.4), we can find an irreducible component \(E\) of \(\mathcal{X}_{0}\) whose corresponding Shilov point \(w_{E}\in\Gamma_{\mathcal{X}}\) satisfies \(\varphi(w_{E})=\sup\varphi\). Since \(w_{E}\) is the only point of \(X^{\mathrm{an}}\) whose reduction on \(\mathcal{X}_{0}\) is the generic point of \(E\), it follows that the latter does not belong to \(Z_{\mathcal{X}}(\varphi)\), which is thus a strict subset of \(\mathcal{X}_{0}\).
## 9. The isotrivial case
We now consider the _isotrivial_ case, in which the variety over \(K=k(\!(\varpi)\!)\) is the base change \(X_{K}\) of a smooth projective variety \(X\) over the (trivially valued) field \(k\).
### Ground field extension
We have a natural projection
\[\pi\colon X_{K}^{\mathrm{an}}\to X^{\mathrm{an}},\]
while Gauss extension provides a continuous section
\[\sigma\colon X^{\mathrm{an}}\hookrightarrow X_{K}^{\mathrm{an}}\]
onto the set of \(k^{\times}\)-invariant points (see [11, Proposition 1.6]). By [11, Corollary 1.5], we further have:
**Lemma 9.1**.: _If \(v\in X^{\mathrm{an}}\) is divisorial (resp. real divisorial) then \(\sigma(v)\in X_{K}^{\mathrm{an}}\) is divisorial (resp. quasimonomial)._
The base change of \(X\) to \(K^{\circ}\) defines the _trivial model_
\[\mathcal{X}_{\mathrm{triv}}:=X_{K^{\circ}}\]
of \(X_{K}\), whose central fiber will be identified with \(X\). More generally, each _test configuration_ \(\mathcal{X}\to\mathbb{A}^{1}\) for \(X\) induces via base change \(\operatorname{Spec}K^{\circ}=\operatorname{Spec}k[\![\varpi]\!]\to\mathbb{A}^{1}=\operatorname{Spec}k[\varpi]\) a \(k^{\times}\)-invariant model of \(X_{K}\), that shares the same vertical ideals and vertical divisors as \(\mathcal{X}\), and will simply be denoted by \(\mathcal{X}\).
### Psh functions
For any \(\theta\in\mathrm{N}^{1}(X)\), we denote by \(\pi^{\star}\theta\in\mathcal{Z}^{1,1}(X_{K})\) the induced closed \((1,1)\)-form, determined by \(\theta\) on the trivial model. If \(\omega\in\operatorname{Amp}(X)\), then \([\pi^{\star}\omega]\in\mathrm{N}^{1}(X_{K})\) coincides with the base change of \(\omega\), and hence is ample.
**Theorem 9.2**.: _Pick \(\omega\in\operatorname{Amp}(X)\) and \(\varphi\in\operatorname{PSH}(\omega)\). Then:_
1. \(\pi^{\star}\varphi\in\operatorname{PSH}(\pi^{\star}\omega)\)_;_
2. _if_ \(\varphi\) _is further continuous, then_ \(\operatorname{MA}_{\pi^{\star}\omega}(\pi^{\star}\varphi)=\sigma_{\star} \operatorname{MA}_{\omega}(\varphi)\)_._
**Lemma 9.3**.: _For any \(\varphi\in\operatorname{PL}(X)\) and \(\theta\in\mathrm{N}^{1}(X)\), the following holds:_
1. \(\pi^{\star}\varphi\in\operatorname{PL}(X_{K})\)_;_
2. \((\pi^{\star}\theta+\mathrm{dd}^{\mathrm{c}}\pi^{\star}\varphi)^{n}=\sigma_{ \star}(\theta+\mathrm{dd}^{\mathrm{c}}\varphi)^{n}\)_;_
3. \(\varphi\) _is_ \(\theta\)_-psh iff_ \(\pi^{\star}\varphi\) _is_ \(\pi^{\star}\theta\)_-psh._
Proof.: The function \(\varphi\) is determined by a vertical \(\mathbb{Q}\)-Cartier divisor \(D\) on a test configuration \(\mathcal{X}\), that may be taken to dominate the trivial one (see [11, Theorem 2.7]). The induced vertical divisor on the induced model of \(X_{K}\) then determines \(\pi^{\star}\varphi\). This proves (i), and also (ii), by comparing [1, (2.2)] and [11, (3.6)]. Finally, denote by \(\theta_{\mathcal{X}}\) the pullback of \(\theta\) to \(\mathrm{N}^{1}(\mathcal{X}/\mathbb{A}^{1})\). Then \(\varphi\) is \(\theta\)-psh iff \((\theta_{\mathcal{X}}+[D])|_{\mathcal{X}_{0}}\) is nef, which is also equivalent to \(\pi^{\star}\varphi\) being \(\pi^{\star}\theta\)-psh. This proves (iii).
Proof of Theorem 9.2.: Write \(\varphi\) as the limit on \(X^{\mathrm{an}}\) of a decreasing net of \(\omega\)-psh PL functions \(\varphi_{i}\). By Lemma 9.3, \(\pi^{\star}\varphi_{i}\) is PL and \(\pi^{\star}\omega\)-psh. Since it decreases pointwise on \(X^{\mathrm{an}}_{K}\) to \(\pi^{\star}\varphi\), the latter is \(\pi^{\star}\omega\)-psh, which proves (i). For each \(i\), Lemma 9.3 (ii) further implies \(\operatorname{MA}_{\pi^{\star}\omega}(\pi^{\star}\varphi_{i})=\sigma_{\star} \operatorname{MA}_{\omega}(\varphi_{i})\). If \(\varphi\) is continuous, then \(\operatorname{MA}_{\omega}(\varphi)\) and \(\operatorname{MA}_{\pi^{\star}\omega}(\pi^{\star}\varphi)\) are both defined, and are the limits of \(\operatorname{MA}_{\omega}(\varphi_{i})\) and \(\operatorname{MA}_{\pi^{\star}\omega}(\pi^{\star}\varphi_{i})\), respectively. This proves (ii).
### PL structures
As a direct consequence of Lemma 9.3, the projection \(\pi\colon X^{\mathrm{an}}_{K}\to X^{\mathrm{an}}\) is compatible with the PL structures:
**Corollary 9.4**.: _We have \(\pi^{\star}\operatorname{PL}(X)\subset\operatorname{PL}(X_{K})\) and \(\pi^{\star}\operatorname{\mathbb{R}PL}(X)\subset\operatorname{\mathbb{R}PL}( X_{K})\)._
As we next show, this is also the case for Gauss extension.
**Theorem 9.5**.: _We have \(\sigma^{\star}\operatorname{PL}(X_{K})=\operatorname{PL}(X)\) and \(\sigma^{\star}\operatorname{\mathbb{R}PL}(X_{K})=\operatorname{\mathbb{R}PL}( X)\)._
Any vertical ideal \(\mathfrak{a}\) on \(\mathcal{X}_{\mathrm{triv}}\), being trivial outside the central fiber, can be viewed as a vertical ideal on \(X\times\mathbb{A}^{1}\), and \(\widetilde{\mathfrak{a}}:=\mathbb{G}_{\mathrm{m}}\cdot\mathfrak{a}\) is then the smallest flag ideal containing \(\mathfrak{a}\).
**Lemma 9.6**.: _With the above notation we have \(\sigma^{\star}\log|\mathfrak{a}|=\varphi_{\widetilde{\mathfrak{a}}}\)._
Proof.: Pick an ample line bundle \(L\) on \(X\), and denote by \(\mathcal{L}_{\mathrm{triv}}\) the trivial model of \(L_{K}\), i.e. the pullback of \(L\) to the trivial model \(\mathcal{X}_{\mathrm{triv}}=X_{K^{\circ}}\). After replacing \(L\) with a large enough multiple, we may assume \(\mathcal{L}_{\mathrm{triv}}\otimes\mathfrak{a}\) is generated by finitely many sections \(s_{i}\in\operatorname{H}^{0}(\mathcal{X}_{\mathrm{triv}},\mathcal{L}_{ \mathrm{triv}})\). Then \(\log|\mathfrak{a}|=\max_{i}\log|s_{i}|\), where \(|s_{i}|\) denotes the pointwise length of \(s_{i}\) in the model metric induced by \(\mathcal{L}_{\mathrm{triv}}\). For each \(i\) write \(s_{i}=\sum_{\lambda\in\mathbb{Z}}s_{i,\lambda}\varpi^{\lambda}\) where \(s_{i,\lambda}\in\operatorname{H}^{0}(X,L)\), and denote by \(\mathfrak{b}_{\lambda}\subset\mathcal{O}_{X}\) the ideal locally generated by \((s_{i,\lambda})_{i}\). Then \(\widetilde{\mathfrak{a}}=\sum_{\lambda\in\mathbb{Z}}\mathfrak{b}_{\lambda} \varpi^{\lambda}\). By definition of Gauss extension, we have for any \(v\in X^{\mathrm{an}}\)
\[\log|s_{i}|(\sigma(v))=\max_{\lambda\in\mathbb{Z}}\{\log|s_{i,\lambda}|(v)-\lambda\}.\]
Thus \(\sigma^{\star}\log|\mathfrak{a}|=\max_{\lambda\in\mathbb{Z}}\{\psi_{\lambda} -\lambda\}\) with \(\psi_{\lambda}:=\max_{i}\log|s_{i,\lambda}|=\log|\mathfrak{b}_{\lambda}|\), and hence \(\sigma^{\star}\log|\mathfrak{a}|=\max_{\lambda}\{\log|\mathfrak{b}_{\lambda}|- \lambda\}=\varphi_{\widetilde{\mathfrak{a}}}\).
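As an illustration, take \(\mathfrak{a}=\mathfrak{b}+(\varpi)\) for an ideal \(\mathfrak{b}\subset\mathcal{O}_{X}\), the shape of ideal used in the proof of Lemma 9.7 below. This ideal is already \(\mathbb{G}_{\mathrm{m}}\)-invariant, so \(\widetilde{\mathfrak{a}}=\mathfrak{a}\), with \(\mathfrak{b}_{0}=\mathfrak{b}\) and \(\mathfrak{b}_{1}=\mathcal{O}_{X}\) in the notation of the proof above, and Lemma 9.6 gives
\[\sigma^{\star}\log|\mathfrak{a}|=\max\{\log|\mathfrak{b}|,\,-1\}=\varphi_{\widetilde{\mathfrak{a}}},\]
in agreement with the direct computation of \(\log|\mathfrak{a}|\) performed in the proof of Lemma 9.7.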
Proof of Theorem 9.5.: By Corollary 9.4 we have \(\pi^{\star}\operatorname{PL}(X)\subset\operatorname{PL}(X_{K})\). Since \(\operatorname{PL}(X_{K})\) is generated by functions of the form \(\log|\mathfrak{a}|\) for a vertical ideal \(\mathfrak{a}\subset\mathcal{O}_{\mathcal{X}_{\mathrm{triv}}}\), Lemma 9.6 yields \(\sigma^{\star}\operatorname{PL}(X_{K})\subset\operatorname{PL}(X)\), and hence also \(\sigma^{\star}\operatorname{\mathbb{R}PL}(X_{K})\subset\operatorname{ \mathbb{R}PL}(X)\). This completes the proof, since \(\sigma^{\star}\pi^{\star}=\operatorname{id}\).
### Centers
Next we study the relationships between the two center maps \(Z_{X}\colon X^{\mathrm{an}}\to X\) and \(Z_{\mathcal{X}_{\mathrm{triv}}}\colon X^{\mathrm{an}}_{K}\to\mathcal{X}_{ \mathrm{triv},0}=X\).
**Lemma 9.7**.: _For all \(w\in X^{\mathrm{an}}_{K}\) and \(v\in X^{\mathrm{an}}\) we have_
\[Z_{\mathcal{X}_{\mathrm{triv}}}(w)\subset Z_{X}(\pi(w)),\quad Z_{X}(v)=Z_{ \mathcal{X}_{\mathrm{triv}}}(\sigma(v)).\]
Proof.: Denote by \(\mathfrak{b}\subset\mathcal{O}_{X}\) the ideal of the subvariety \(Z_{X}(\pi(w))\). Then \(\mathfrak{a}:=\mathfrak{b}+(\varpi)\) is a vertical ideal on \(\mathcal{X}_{\mathrm{triv}}\) such that \(V(\mathfrak{a})=V(\mathfrak{b})=Z_{X}(\pi(w))\) under the identification \(\mathcal{X}_{\mathrm{triv},0}=X\). Further,
\[\log|\mathfrak{a}|(w)=\max\{\log|\mathfrak{b}|(\pi(w)),-1\}<0,\]
and hence \(Z_{\mathcal{X}_{\mathrm{triv}}}(w)\subset V(\mathfrak{a})=Z_{X}(\pi(w))\), see (8.1).
In particular, \(Z_{\mathcal{X}_{\mathrm{triv}}}(\sigma(v))\subset Z_{X}(v)\). Conversely, denote by \(\mathfrak{a}\subset\mathcal{O}_{\mathcal{X}_{\mathrm{triv}}}\) the ideal of \(Z_{\mathcal{X}_{\mathrm{triv}}}(\sigma(v))\). Since \(\sigma(v)\) is \(k^{\times}\)-invariant, \(\mathfrak{a}=\sum_{\lambda\in\mathbb{Z}}\mathfrak{a}_{\lambda}\varpi^{-\lambda}\) is (induced by) a flag ideal. Further, \(\varphi_{\mathfrak{a}}(v)=\log|\mathfrak{a}|(\sigma(v))<0\), and hence \(Z_{X}(v)\subset Z_{X}(\varphi_{\mathfrak{a}})\). By Example 1.14 we have \(Z_{X}(\varphi_{\mathfrak{a}})=V(\mathfrak{a}_{0})\). The latter is also equal to the zero locus of \(\mathfrak{a}_{0}+(\varpi)\) on \(\mathcal{X}_{\mathrm{triv}}\), which is
contained in \(V(\mathfrak{a})=Z_{\mathcal{X}_{\mathrm{triv}}}(\sigma(v))\) since \(\mathfrak{a}\subset\mathfrak{a}_{0}+(\varpi)\). Thus \(Z_{X}(v)\subset Z_{\mathcal{X}_{\mathrm{triv}}}(\sigma(v))\), which concludes the proof.
**Proposition 9.8**.: _If \(\omega\in\mathrm{Amp}(X)\) and \(\varphi\in\mathrm{PSH}(\omega)\), then \(Z_{\mathcal{X}_{\mathrm{triv}}}(\pi^{\star}\varphi)=Z_{X}(\varphi)\)._
Proof.: Pick \(v\in X^{\mathrm{an}}\) such that \(\varphi(v)<\sup\varphi\), and set \(w:=\sigma(v)\). Then \(\pi^{\star}\varphi(w)=\varphi(v)\) and \(\sup\pi^{\star}\varphi=\sup\varphi\), so \(w\) lies in \(\{\pi^{\star}\varphi<\sup\pi^{\star}\varphi\}\), and hence \(Z_{X}(v)=Z_{\mathcal{X}_{\mathrm{triv}}}(w)\subset Z_{\mathcal{X}_{\mathrm{ triv}}}(\pi^{\star}\varphi)\) by Lemma 9.7. This implies \(Z_{X}(\varphi)\subset Z_{\mathcal{X}_{\mathrm{triv}}}(\pi^{\star}\varphi)\). Conversely, assume \(w\in X_{K}^{\mathrm{an}}\) satisfies \(\pi^{\star}\varphi(w)<\sup\pi^{\star}\varphi\). Then \(v:=\pi(w)\) lies in \(\{\varphi<\sup\varphi\}\), and hence \(Z_{X}(v)\subset Z_{X}(\varphi)\). In view of Lemma 9.7, this implies \(Z_{\mathcal{X}_{\mathrm{triv}}}(w)\subset Z_{X}(\varphi)\), and hence \(Z_{\mathcal{X}_{\mathrm{triv}}}(\pi^{\star}\varphi)\subset Z_{X}(\varphi)\).
Combining Proposition 9.8 and Theorem 8.18, we obtain
**Corollary 9.9**.: _Let \(\varphi\in\mathrm{PSH}(\omega)\), where \(\omega\in\mathrm{Amp}(X)\), and suppose that \(\pi^{\star}\varphi\in\mathrm{PSH}(\pi^{\star}\omega)\) is invariant under retraction. Then \(Z_{X}(\varphi)\subset X\) is a Zariski closed proper subset of \(X\)._
### Examples
We are now ready to prove Theorems A and B in the introduction, and also provide additional examples. As in the previous section, \(X\) denotes a smooth projective variety over \(k\). Pick a class \(\omega\in\mathrm{Amp}(X)\), a \(k^{\times}\)-invariant divisorial point \(w\in X_{K}^{\mathrm{div}}\), and denote as in §8.5 by \(\varphi_{w}\in\mathrm{CPSH}(\pi^{\star}\omega)\) the Green's function associated to \(w\); this is the unique solution to the Monge-Ampère equation
\[\mathrm{MA}_{\pi^{\star}\omega}(\varphi_{w})=\delta_{w}\quad\text{and}\quad \varphi_{w}(w)=0.\]
By Lemma 9.1, we have \(w=\sigma(v)\) with \(v:=\pi(w)\in X^{\mathrm{div}}\). If \(\varphi_{v}\in\mathrm{CPSH}(\omega)\) denotes the Green's function of \(\{v\}\), see §6.1, then we have
\[\varphi_{w}=\pi^{\star}\varphi_{v}.\]
Indeed, \(\pi^{\star}\varphi_{v}(w)=\varphi_{v}(v)=0\), and by Theorem 9.2, we have \(\mathrm{MA}_{\pi^{\star}\omega}(\pi^{\star}\varphi_{v})=\sigma_{\star}\delta_{ v}=\delta_{w}\).
Our goal is to investigate the regularity of \(\varphi_{w}\).
**Corollary 9.10**.: _If \(\dim X=1\), then \(\varphi_{w}\in\mathrm{PL}(X_{K})\). If \(\dim X=2\), then \(\varphi_{w}\in\mathbb{R}\mathrm{PL}(X_{K})\)._
Proof.: The first statement follows from Proposition 8.5. Now suppose \(\dim X=2\). By Theorem 6.10, \(\varphi_{v}\in\mathbb{R}\mathrm{PL}(X)\), so that \(\varphi_{w}\in\mathbb{R}\mathrm{PL}(X_{K})\), see Corollary 9.4.
However, even when \(\omega\) is rational, \(\varphi_{w}\) is in general not \(\mathbb{Q}\)-PL:
**Example 9.11**.: _Example 7.2 gives an example of an abelian surface \(X\), a rational class \(\omega\in\mathrm{Amp}(X)\), and a divisorial valuation \(v\in X^{\mathrm{div}}\) such that \(\varphi_{v}\in\mathbb{R}\mathrm{PL}(X)\setminus\mathrm{PL}(X)\). If \(w=\sigma(v)\), then \(\varphi_{w}:=\pi^{\star}\varphi_{v}\in\mathbb{R}\mathrm{PL}(X_{K})\setminus \mathrm{PL}(X_{K})\), by Theorem 9.5._
**Example 9.12**.: _Similarly, Example 7.4 gives an example of a divisorial valuation \(v\in\mathbb{P}^{3,\mathrm{div}}\) such that if we set \(\omega=c_{1}(\mathcal{O}(4))\), then \(\varphi_{v}:=\varphi_{\omega,v}\in\mathbb{R}\mathrm{PL}(X)\setminus\mathrm{PL} (X)\). If \(w=\sigma(v)\), then \(\varphi_{w}:=\pi^{\star}\varphi_{v}\in\mathbb{R}\mathrm{PL}(X_{K})\setminus \mathrm{PL}(X_{K})\), by Theorem 9.5._
Examples 9.11 and 9.12 establish Theorem A (ii). They also provide a negative answer to Question 8.12 (ii). Indeed, a function \(\varphi\in\mathrm{C}^{0}(X_{K}^{\mathrm{an}})\) lies in \(\mathbb{R}\mathrm{PL}(X_{K})\) (resp. \(\mathrm{PL}(X_{K})\)) iff \(\varphi\) is invariant under retraction and restricts to an \(\mathbb{R}\)-PL (resp. \(\mathbb{Q}\)-PL) function on each dual complex, see Example 8.7.
As the next example shows, if \(\dim X=3\), then \(\varphi_{w}\) need not be \(\mathbb{R}\)-PL. In fact, it may not even be invariant under retraction.
**Example 9.13**.: _Example 7.6 shows that we may have \(\dim X=3\) and \(Z_{X}(\varphi_{v})\) Zariski dense in \(X\). It follows that \(Z_{\mathcal{X}_{\rm triv}}(\varphi_{w})\) is Zariski dense in \(\mathcal{X}_{\rm triv,0}=X\), see Proposition 9.8. Thus Theorem 8.18 (ii) shows that \(\varphi_{w}\) cannot be invariant under retraction._
It could, however, a priori be the case that the restriction of \(\varphi_{w}\) to any dual complex is \(\mathbb{R}\)-PL, see Question 8.12 (i).
In Example 9.13, based on Lesieutre's work, the class \(\omega\) is irrational. We do not know of an example for which the class \(\omega\) is rational. However, the following example provides a proof of Theorem B in the introduction.
**Example 9.14**.: _Set \(X=\mathbb{P}^{3}_{k}\) and \(\omega:=c_{1}(\mathcal{O}(1))\in\mathrm{N}^{1}(X)\). By Proposition 7.7, there exists \(\psi\in\mathrm{CPSH}(\omega)\) such that \(\mathrm{MA}_{\omega}(\psi)\) is supported in a finite subset \(\Sigma\subset X^{\rm div}_{\mathbb{R}}\), and \(Z_{X}(\psi)\) is Zariski dense in \(X\). Theorem 9.2 then shows that \(\varphi:=\pi^{\star}\psi\) lies in \(\mathrm{CPSH}(\pi^{\star}\omega)\), \(\mathrm{MA}_{\pi^{\star}\omega}(\varphi)=\sigma_{\star}\,\mathrm{MA}_{\omega} (\psi)\) has finite support in some dual complex (see Lemma 9.1), and the center of \(\varphi\) on the trivial model of \(X^{\rm an}_{K}\) is Zariski dense. By Theorem 8.18, it follows that \(\varphi\) cannot be invariant under retraction._
|
2306.17527 | Temporal network-based analysis of fluid flow with applications to
marine ecology | In this report we present the work carried out during the Complexity72h
workshop, held at IFISC in Palma de Mallorca, Spain, 26-30 June 2023. We
describe a temporal network-theoretic approach to study fluid flows with
applications to marine ecology. The network representation is derived from the
Lagrangian fluid dynamics and represents fluid transportation between patches
of the sea. It is a directed, weighted and time-dependent network. This
approach enables us to use advanced network-theoretic tools for analysis and
modeling. A common approximation adopted in the literature consists in using an
aggregated time-independent network representation of the fluid flow. In this
report we focus in particular on the role played by the temporal component and
to the information loss related to neglecting that dimension and inspect the
role played by seasonal or long time-period variations. We conduct an analysis
of basic network features of the aggregated and temporal graphs, we analyze
their community structure and we model population dynamics of marine lives
driven by the flow. Ultimately, we determine that time-independent
approximations can effectively represent long-term transportation evolution
spanning multiple years. However, for an accurate depiction of transportation
within a single year, it is necessary to incorporate explicit time-dependence
in the transport matrix to account for seasonality. | Kishor Acharya, Javier Aguilar, Lorenzo Dall'Amico, Kyriacos Nicolaou, Johnny Tong, Enrico Ser-Giacomi | 2023-06-30T10:30:01Z | http://arxiv.org/abs/2306.17527v1 | **Temporal network-based analysis of oceanic flow with applications to marine ecology**
## Abstract
In this report we present the work carried out during the Complexity72h workshop, held at IFISC in Palma de Mallorca, Spain, 26-30 June 2023. We describe a temporal network-theoretic approach to study fluid flows with applications to marine ecology. The network representation is derived from the Lagrangian fluid dynamics and represents fluid transportation between patches of the sea. It is a directed, weighted and time-dependent network. This approach enables us to use advanced network-theoretic tools for analysis and modeling. A common approximation adopted in the literature consists in using an aggregated time-independent network representation of the fluid flow. In this report we focus in particular on the role played by the temporal component and on the information loss related to neglecting that dimension, and we inspect the role played by seasonal or long time-period variations. We conduct an analysis of basic network features of the aggregated and temporal graphs, we analyze their community structure and we model population dynamics of marine life driven by the flow. Ultimately, we determine that time-independent approximations can effectively represent long-term transportation evolution spanning multiple years. To capture seasonal variations within a single year, on the other hand, it is necessary to incorporate explicit time dependence in the transport matrix.
## Introduction
Marine ecosystems play an important role in our society, influencing, for instance, land ecosystems [1], economics [2], as well as public health [3]. The structure and function of marine ecosystems respond drastically to seasonal changes and climatic variations [4]. Environmental fluctuations can thus affect the plankton community structure, as well as the spatial distribution of fish and invertebrates, the recruitment success of pelagic fish and even the mortality of birds and mammals [5]. Marine ecosystems are embedded in patches of water that are continuously moving, stretching, and diluting. These processes drive inhomogeneities across a wide span of scales, and global patterns of marine population dynamics are largely determined by the dispersion of oceanic currents, with implications for the integrated ecosystem properties [6]. Due to the intrinsic challenges
of researching the marine environment and the limited availability of spatial data, it is hard to track marine populations directly. However, recent advances in satellite imaging and hydrodynamic modelling have made oceanic flow data openly accessible [7]. One of the approaches used to characterize flow-driven dispersal of oceanic currents and propagules is to simulate the flow of synthetic particles using the hydrodynamic data [8]. Lagrangian particle trajectories can then be translated into transport matrices, which can be interpreted as weighted, temporal, directed networks. This allows one to rigorously investigate maritime flows with graph-theoretical tools [9].
We here consider a snapshot temporal network [10], collected in the Mediterranean Sea and corresponding to one-month windows of the same year (2002) and to the same month (July) across 10 years (from 2002 to 2011). A common approach to deal with these data (see for instance [9]) is to consider a time-aggregated network obtained by averaging all the snapshots across the years. To the best of our knowledge, the explicit time dependence of these networks has not been considered in previous studies and is the focus of this report. By exploiting network-theoretic techniques, we quantify the differences between the temporal and aggregated graphs in terms of link density, degree and weight distributions, and community structure. These metrics reveal the topological structure of the transport networks and allow us to qualitatively evaluate how physical behaviors such as dispersion and mixing depend on seasons and climate. Time-independent networks exhibit significant differences in the distribution of degrees and weights compared to the time-dependent instances across all datasets. However, notable changes in communities are observed only when considering snapshots within the same year, while minimal evolution is observed when examining the same month across different years. This suggests that structural changes are primarily induced by seasonal effects.
In order to assess the effect of time-dependent transportation on marine populations, we formulate the marine population dynamics with two advection-reaction-like models on top of the flow network. We compare the results obtained when using the temporal networks against the aggregated networks to characterize how the oceanic flow seasonally and climatically affects the population distribution of marine life and its robustness against abrupt events. We find that the time-independent description tends to generate smoother patterns compared to the ones obtained with the faithful time-dependent representation.
While our results were primarily conceived to apply to marine sedentary populations, whose dispersal is mediated by marine currents, our conclusions may potentially relate to how air-borne dispersal of sedentary terrestrial populations is evaluated as well.
## Methods
This section delineates the methodology adopted to obtain the main results and provides a description of the data our analysis relies upon. We first describe how the temporal network is obtained from the original data. We then outline some basics of community detection, used to study the meso-scale structure of the so-obtained network, and we describe a biophysical model for the population dynamics of marine organisms passively transported by the currents.
### Network construction
We exploit the data taken from the Mediterranean Forecasting System (MFS) based on NEMO-OPA (Nucleus for European Modelling of the Ocean - Océan PArallélisé, version 3.2 [7]). This data-assimilative operational model has been implemented in the Mediterranean at
\(1/16^{\circ}\) horizontal regular resolution and 72 unevenly spaced vertical levels [11]. We use the physics reanalysis products for the years spanning 2002-2011, downloaded from the Marine Copernicus website marine.copernicus.eu.
Following the procedure described in [9], we discretize the Mediterranean basin on a grid of 8196 patches of \(0.25^{\circ}\) resolution. We create a network, representing each patch as a node, and we add directed edges encoding the exchange of fluid that moved from one patch to another in a given time interval. Each edge has an associated weight proportional to the amount of fluid transported. This quantity is obtained from Lagrangian dynamics by following trajectories of ideal fluid particles and keeping record of their initial and final positions (i.e. starting and ending nodes) during the time interval considered. More specifically, we integrate for a fixed time \(\tau\) the equation of motion for each particle, from the initial condition \(\mathbf{x}_{0}\) at time \(t_{0}\) until the final position at \(t_{0}+\tau\), using a velocity field \(\mathbf{v}(\mathbf{x},t)\), defining the flow map \(\Phi_{t_{0}}^{\tau}\):
\[\mathbf{x}\left(t_{0}+\tau\right)=\Phi_{t_{0}}^{\tau}\left(\mathbf{x}_{0} \right),\]
which determines the motion of single fluid particles. By considering the action of the flow map on all the points contained in a fluid region \(A\) we define the action of
Figure 1: **Transport matrix construction from the hydrodynamical data.** In (a), the transport matrix is constructed from tracer advection following Eq. (1). In (b), the Mediterranean Sea is discretized into N = 3270 equal-area boxes and the pipeline of (a) is applied to obtain the transport matrices over 12 months.
\(\Phi_{t_{0}}^{\tau}\) on whole sets: \(A\left(t_{0}+\tau\right)=\Phi_{t_{0}}^{\tau}\left(A\left(t_{0}\right)\right)\). We then place \(100\) particles per node to initialize the system. In practice, trajectories are simulated by integrating the velocity field, bilinearly interpolated using a Runge-Kutta 4 algorithm with a time step of \(0.3\ h\), fulfilling the Courant-Friedrichs-Lewy condition [12]. We assume that the transport we deal with is primarily in two dimensions and neglect transport in the vertical dimension. In particular, we use the MFS horizontal velocity fields of the 3rd and 17th vertical levels, which correspond to about \(12\ m\) for the shallow coastal habitat and \(102\ m\) for the neritic shelf habitat, respectively.
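For illustration, a single step of this trajectory integration can be sketched as follows; `velocity(x, t)` stands for the bilinearly interpolated velocity field and is an assumed callable, not part of the dataset itself:

```python
def rk4_step(x, t, dt, velocity):
    """One Runge-Kutta 4 step for the particle trajectory dx/dt = v(x, t)."""
    k1 = velocity(x, t)
    k2 = velocity(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = velocity(x + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = velocity(x + dt * k3, t + dt)
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```

Iterating this step with `dt` equal to the 0.3 h time step mentioned above, from \(t_{0}\) to \(t_{0}+\tau\), yields the flow map \(\Phi_{t_{0}}^{\tau}\) for each particle.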
Applying the flow map to the discrete boxes, we have an estimation of the flow among each pair of nodes. Given the collection of boxes \(\left\{B_{i},i=1,\ldots,N\right\}\), we represent the transport between them by the discrete version of the Perron-Frobenius operator \(\mathbf{P}\left(t_{0},\tau\right)\), obtained within the Ulam approach [13], whose matrix elements are given by:
\[\mathbf{P}\left(t_{0},\tau\right)_{ij}=\frac{m\left(B_{i}\cap\Phi_{t_{0}+\tau }^{-\tau}\left(B_{j}\right)\right)}{m\left(B_{i}\right)} \tag{1}\]
where \(m(A)\) is a measure assigned to the set \(A\). In our case, it is proportional to the amount of fluid it contains, i.e. simply its area. A probabilistic interpretation of Eq. (1) is that \(\mathbf{P}\left(t_{0},\tau\right)_{ij}\) is the probability for a particle to reach the box \(B_{j}\), under the condition that it started from a uniformly random position within box \(B_{i}\). Other measures referring for example to heat or salt content could be implemented for future applications. Eq. (1) states that the flow from box \(B_{i}\) to box \(B_{j}\) is the fraction of the contents of \(B_{i}\) which is mapped into \(B_{j}\). If a nonuniform distribution of some conserved tracer is initially released in the system such that \(\left\{p_{i}\left(t_{0}\right),i=1,\ldots,N\right\}\) is the amount of such tracer in each box \(\left\{B_{i}\right\}\) at the initial instant \(t_{0}\), the matrix \(\mathbf{P}\left(t_{0},\tau\right)\) gives the evolution of this distribution after a time \(\tau\) as \(p_{j}\left(t_{0}+\tau\right)=\sum_{i=1}^{N}p_{i}\left(t_{0}\right)\mathbf{P} \left(t_{0},\tau\right)_{ij}\). Writing the \(\left\{p_{i}\right\}\) as row vectors:
\[p\left(t_{0}+\tau\right)=p\left(t_{0}\right)\mathbf{P}\left(t_{0},\tau\right).\]
The matrix \(\mathbf{P}\left(t_{0},\tau\right)\) is row-stochastic, i.e. it has non-negative elements and \(\sum_{j=1}^{N}\mathbf{P}\left(t_{0},\tau\right)_{ij}=1\), but it is column-stochastic only if the flow \(\mathbf{v}(\mathbf{x},t)\) is incompressible. The quantity \(\sum_{i=1}^{N}\mathbf{P}\left(t_{0},\tau\right)_{ij}\) measures the ratio of fluid present in box \(B_{j}\) after a time \(\tau\) with respect to its initial content at time \(t_{0}\). This ratio will be unity when the matrix is doubly stochastic (incompressible flow).
As a standard way to evaluate numerically \(\mathbf{P}(t_{0},\tau)\), we apply the Lagrangian map to a large number of particles released uniformly inside each of the boxes \(\left\{B_{i},i=1,\ldots,N\right\}\) (Fig. 1). The initial number of particles in each box (\(\Omega_{i}\)) is a proxy of the amount of fluid it contains and should be proportional to its measure \(m\left(B_{i}\right)\). Therefore, since we work with boxes of equal area, we seed the same number of particles in each box (\(\Omega_{i}=\Omega\ \ \forall i\)). The number of particles transported from box \(B_{i}\) to box \(B_{j}\) gives an estimation of the flow among these boxes. A numerical approximation to Eq. (1) is:
\[\mathbf{P}\left(t_{0},\tau\right)_{ij}\approx\frac{\text{number of particles from box }i\text{ to box }j}{\Omega}\]
Because of the time-dependence of the velocity field, the results of the Lagrangian simulations will depend on both the initial time \(t_{0}\) and the duration of the simulation \(\tau\). Once these parameters are fixed, we can build a network described by a transport matrix \(\mathbf{P}\left(t_{0},\tau\right)\) that characterizes the connections among each pair of nodes from initial time \(t_{0}\) to final time \(t_{0}+\tau\). We interpret \(\mathbf{P}\left(t_{0},\tau\right)\) as the adjacency matrix of a weighted and directed network, so that \(\mathbf{P}\left(t_{0},\tau\right)_{ij}\) is the weight of the link from node \(i\) to node \(j\).
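For concreteness, this estimator can be written in a few lines. The sketch below is ours: `start_box` and `end_box` are hypothetical integer arrays giving, for each simulated particle, the index of its seeding box at \(t_{0}\) and of the box containing its final position at \(t_{0}+\tau\) (negative values marking particles that left the domain):

```python
import numpy as np
from scipy.sparse import csr_matrix

def transport_matrix(start_box, end_box, n_boxes, n_particles_per_box):
    """Ulam estimate of P(t0, tau) from Lagrangian particle trajectories."""
    keep = end_box >= 0                   # drop particles that left the domain
    i, j = start_box[keep], end_box[keep]
    # Duplicate (i, j) pairs are summed by csr_matrix, so this counts the
    # particles transported from box i to box j; dividing by Omega
    # normalizes each row as in the approximation above.
    counts = np.ones(i.size)
    P = csr_matrix((counts, (i, j)), shape=(n_boxes, n_boxes))
    return P / n_particles_per_box
```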
The network constructed in this way characterizes the final locations of all fluid elements a time \(\tau\) after their release at time \(t_{0}\), but gives no information on particle
locations at intermediate times. Also, since each of the matrices \(\mathbf{P}\left(t_{0}+k\tau,\tau\right)\), for \(k=0,1,\ldots,n-1\), where \(n\tau\) is the total time elapsed, is a stochastic matrix, one can consider the discrete time Markov chain in which an initial vector giving occupation probabilities \(p\left(t_{0}\right)=\left(p_{1}\left(t_{0}\right),\ldots,p_{N}\left(t_{0} \right)\right)\) for the different boxes is evolved in time as \(p\left(t_{n}\right)=p\left(t_{0}\right)\mathbf{P}\left(t_{0},\tau\right) \mathbf{P}\left(t_{1},\tau\right)\ldots\mathbf{P}\left(t_{n-1},\tau\right)\), where \(t_{k}=t_{0}+k\tau\). This time evolution will not be exactly equal to the true evolution \(p\left(t_{n}\right)=p\left(t_{0}\right)\mathbf{P}\left(t_{0},n\tau\right)\), but a Markovian approximation to it in which the memory of the particle positions is lost after a time \(\tau\). The Markovian approximation may be reasonable in some circumstances and in fact, it has been successfully used in geophysical flow problems [14].
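In code, this Markovian evolution amounts to chaining the single-window matrices; a minimal sketch, with `matrices` a hypothetical list holding \(\mathbf{P}(t_{0},\tau),\mathbf{P}(t_{1},\tau),\ldots\):

```python
def evolve(p0, matrices):
    """Markovian evolution p(t_n) = p(t_0) P(t_0,tau) ... P(t_{n-1},tau)."""
    p = p0.copy()
    for P in matrices:
        p = P.T @ p  # (P^T p)_j = sum_i p_i P_ij: one window of advection
    return p
```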
Apart from the Markov assumption, replacing the continuous flow system with a discrete network introduces discretization errors. Even if the integration is done accurately, the initial and final locations of the transported particles are only specified up to a precision \(\Delta\), given by the linear side of the boxes. This implies that our network approach does not explicitly resolve fluid structures smaller than the box length-scale \(\Delta\).
Given the temporal matrix, let us now formally define its aggregated counterpart. The element \(\mathbf{P}\left(t_{0},\tau\right)_{i,j}\) accounts for the fraction of tracers found in node \(j\) at time \(t_{0}+\tau\) among those that departed from node \(i\) at time \(t_{0}\). One commonly used approximation when evaluating advection over different periods of fixed duration \(\tau\) is the _aggregated_ matrix. Within this approximation, the dependence of the transport matrix on the initial time \(t\) is neglected. In particular, given an ensemble of \(T\) transport matrices \(\mathbf{P}\left(t_{x},\tau\right)\) with \(t_{1}<\cdots<t_{T}\), the aggregated transport matrix is defined element-wise as:
\[\hat{\mathbf{P}}_{i,j}^{\tau}:=\frac{1}{T}\sum_{x=1}^{T}\mathbf{P}\left(t_{x},\tau\right)_{i,j}. \tag{2}\]
Then, the fully time-dependent matrices are replaced with the aggregated matrix:
\[\mathbf{P}\left(t_{0},\tau\right)\approx\hat{\mathbf{P}}^{\tau}\quad\forall t_{0}\in[t_{1},t_{T}].\]
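In code, the aggregated matrix of Eq. (2) is just the entrywise mean of the snapshots; a one-line sketch consistent with the helpers above:

```python
def aggregate(matrices):
    """Time-aggregated transport matrix, Eq. (2): entrywise mean of snapshots."""
    return sum(matrices) / len(matrices)
```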
### Dynamical community detection
Because marine currents are not isotropic, some sea areas are better connected than others. Therefore, we expect different causal relations between different patches, resulting in a stronger influence of some areas over others. In the graph representation paradigm, this corresponds to a community structure. We here investigate such mesoscale structure and compare the results of community detection on the temporal and on the aggregated graphs.
Community detection [15] is a commonly studied inference problem that consists in partitioning the nodes of a network into tightly connected, non-overlapping groups. Formally, for each node \(i\), one wants to define a mapping \(i\rightarrow\ell(i)\in\{1,\ldots,k\}\), where \(k\) is the number of communities. A practical challenge in this setting is related to the complexity of the network structure, which is weighted, directed, and temporal. We use a dynamical spectral clustering algorithm inspired by [16], adapted to weighted and directed graphs.
The core idea of spectral clustering is to represent each node of the network as a vector in a low-dimensional space, using the eigenvectors of a suited graph matrix representation. The vectors can then be divided into groups with, for instance, the _k-means_ algorithm [17] or _expectation-maximization_ [18], as we do in the following. For static, weighted and directed networks, [19] showed that, even in the sparse regime in which typically many spectral algorithms tend to fail, one can obtain this embedding by computing the eigenvectors associated with the \(k\) largest eigenvalues of the weighted adjacency matrix, storing them in the columns of an embedding matrix \(X\in R^{N\times k}\), and interpreting its rows as embedding vectors. We
use this method to obtain the communities from the aggregated (static) graph. For the temporal one, instead, we generalize the construction by building the following matrix \(\mathbf{M}\in R^{NT\times NT}\):
\[\mathbf{M}=\begin{pmatrix}\mathbf{P}(t_{1},\tau)&hI_{N}&0&\ldots&0\\ hI_{N}&\mathbf{P}(t_{2},\tau)&hI_{N}&\ldots&0\\ 0&hI_{N}&\mathbf{P}(t_{3},\tau)&\ldots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\ldots&\mathbf{P}(t_{T},\tau)\end{pmatrix},\]
where \(I_{N}\) is the identity matrix of size \(N\) and \(h>0\) is a regularization parameter that imposes that the community label of each node must change slowly across time. In our simulations we set this regularizer value to \(h/p=0.2\), where \(p\) is the sum of all entries of \(\mathbf{P}\). Every row of this matrix, and consequently of its eigenvectors, is associated with a node at a given time instant. The derivation of \(\mathbf{M}\) can be obtained following the steps described in [16], computing the Hessian matrix of the naive mean-field free energy (instead of the Bethe free energy) related to the same Hamiltonian.
On top of the flexibility that allows one to deal at once with weighted, directed, temporal and sparse graphs, a major advantage of this approach is that one can estimate the number of communities \(k\) in an unsupervised fashion. In fact, \(\mathbf{M}\) is non-Hermitian, hence its eigenvalues are generally complex. In a model-based approach, however, the largest eigenvalues are associated with the expectation of \(\mathbf{M}\), which is, by definition of the model, symmetric and of rank \(k\). Hence, the \(k\) largest eigenvalues of \(\mathbf{M}\) are real, and by identifying the position of the first complex eigenvalue one can also estimate the number of communities, a typically hard task to solve.
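A minimal sketch of the whole pipeline follows; it is our own illustration rather than the exact implementation of [16], and the function name, the `k_max` cutoff and the numerical tolerance used to decide whether an eigenvalue is real are assumptions:

```python
import numpy as np
from scipy.sparse import bmat, identity
from scipy.sparse.linalg import eigs
from sklearn.cluster import KMeans

def dynamic_spectral_clustering(matrices, h, k_max=20):
    """Temporal communities from the block matrix M described above.

    matrices: list of T sparse N x N transport matrices;
    h: coupling between consecutive time slices.
    Returns an array of community labels of shape (T, N).
    """
    T, N = len(matrices), matrices[0].shape[0]
    hI = h * identity(N, format="csr")
    # Block-tridiagonal M: snapshots on the diagonal, h*I off the diagonal.
    blocks = [[None] * T for _ in range(T)]
    for t in range(T):
        blocks[t][t] = matrices[t]
        if t + 1 < T:
            blocks[t][t + 1] = hI
            blocks[t + 1][t] = hI
    M = bmat(blocks, format="csr")
    # Leading eigenvalues of M; the position of the first complex one
    # estimates the number of communities k.
    vals, vecs = eigs(M, k=k_max, which="LR")
    order = np.argsort(-vals.real)
    vals, vecs = vals[order], vecs[:, order]
    complex_idx = np.nonzero(np.abs(vals.imag) > 1e-8)[0]
    k = int(complex_idx[0]) if complex_idx.size else k_max
    k = max(k, 2)  # guard against degenerate spectra
    X = vecs[:, :k].real  # one embedding row per (node, time) pair
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(X)
    return labels.reshape(T, N)
```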
### Ecological modeling
A general model for the population dynamics of marine organisms that are affected by ocean currents can be written in the form,
\[\partial_{t}\mathbf{n}(x,t)=f(\mathbf{n}(x,t),x)+T(\mathbf{n}(x,t),x,t),\]
where \(\mathbf{n}\) is a vector whose components are the fish populations at point \(x\) and time \(t\), \(f\) encodes the local population dynamics within and between species and \(T\) encodes the transport by the currents.
The Lagrangian approach for modeling currents can be used as a backbone for modeling and simulating such marine environments, as the dynamics of ideal tracers can be used as a proxy for real population transport. When the time step between the successive transport networks \([\mathbf{P}(t_{1},\tau),\ldots,\mathbf{P}(t_{T},\tau)]\) is small compared to the time scale of the local population dynamics, the transport-reaction system can be decoupled and estimated as a two-step process.
The two steps of the simulation are: (i) update the populations at each node according to the local population dynamics; (ii) redistribute the populations according to the mobility matrix of the given period, as sketched below.
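A minimal sketch of this scheme, assuming the transport matrices are row-normalised with entry \(P_{ij}\) the fraction moved from node \(i\) to node \(j\) (the function names are illustrative):

```python
import numpy as np

def two_step_simulation(n0, P_list, local_update):
    """Alternate (i) local population dynamics and (ii) advection by the
    mobility matrix of each period. `local_update` is any map n -> n
    implementing the reaction step at the nodes."""
    n = np.asarray(n0, dtype=float).copy()
    for P in P_list:
        n = local_update(n)   # step (i): reaction
        n = P.T @ n           # step (ii): destination j receives sum_i n_i P_ij
    return n
```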
## Results
We here discuss the main results of our analysis of the global network properties, the community structure, and the population dynamics.
### Global network properties
#### Month by month network
One of our main objectives is to evaluate whether an aggregated matrix can be used as a proxy for the real, fully time-dependent transport matrices. In Fig. 3, we compare the global properties of 12 month-by-month matrices and their corresponding aggregated matrix. The first fact that draws our attention is that the aggregated matrix is considerably denser in links than any of the month-by-month matrices. Indeed, Fig. 3-(a) shows that the average degree of the aggregated matrix is always larger than the average degree of any of the twelve month-by-month matrices. Furthermore, a temporal pattern that we associate with seasonality can be observed. Of course, this temporal pattern is destroyed by the aggregated matrix, which is time-independent. Similarly, Fig. 3-(b) shows that the degree distribution is wider in the case of the aggregated matrix, meaning that one would expect an overall increase in the variability of the degrees. The fact that the degrees increase through the process of aggregation is expected, since the degree can only increase or stay constant under this operation [Eq. (2)]. How much the degree of a node increases depends on the variability of its links across the different instances of transport matrices used to compute the aggregated matrix. The distribution of degrees in the aggregated matrix indicates that small degrees, similar to the ones observed in the month-by-month matrices, occur with a probability similar to that of much larger degrees. Based on this observation, we infer that while the connections of some nodes change significantly between snapshots, the links of other nodes remain relatively constant.
Apart from the connectivity, it is important to assess the effect of the aggregation of matrices on the weights of the links. Every node has an associated total outflow and
Figure 2: **Conceptual illustration of the advection-reaction model of fish.** (a) Eggs are spawned as propagules and, until developed into larvae, are driven by the oceanic currents. The advective duration of one month fits the sampling time of the networks. During the growth period, the fish are less susceptible to oceanic currents, allowing them to reproduce; the reactive duration of one year fits the total sampling duration. (b) In this model, we assume that fish eggs are spawned in coastal regions. They follow the sea currents for a month (blue arrows) and settle in shallow water (purple boxes) as larvae, while those that end up in the deep sea (red boxes) cannot settle and die. The settled larvae are no longer affected by the sea currents and undergo a growth cycle of about 11 months (green arrows), turning into adult fish. They then spawn more eggs, beginning the next cycle.
inflow, defined respectively as
\[(W_{\text{out}})_{i}=\sum_{j}\mathbf{P}(t_{0},\tau)_{ij},\qquad(W_{\text{in}})_{i}= \sum_{j}\mathbf{P}(t_{0},\tau)_{ji}.\]
Since all nodes have a similar number of tracers at the initial time and all tracers must go somewhere, we expect the outflow to be close to a constant. Indeed, in Fig. 3-(c) we show that the average outflow of transport matrices corresponding to different months agrees with the average outflow of the associated aggregated matrix. Since tracers can accumulate in destinations with many incoming links, the average inflow need not be conserved. In Fig. 3-(d), it is shown that the inflow of the aggregated matrix tends to be smaller than in the time-dependent snapshots. This implies that inflows typically do not accumulate through matrix aggregation, as different snapshots often have varying destinations. Actually, all weights tend to be smaller in the aggregated matrix [Fig. 3-(e)]. Summing up the information provided in Fig. 3, the aggregated matrix tends to be more connected, but the number of tracers traveling through each link tends to be smaller. How these changes affect the properties of the advection is a non-trivial question that we address in the remainder of this work.
Figure 3: **Degrees and weights of the aggregated transport matrix versus the month-by-month transport matrices.** In (a), the average degree of the transport networks at different times \(\langle k_{i}\rangle\) is divided by the average degree of the aggregated matrix (\(\langle k\rangle\approx 250\)). It can be observed that the average degree of the aggregated matrix is close to one order of magnitude larger than the typical degree of the month-by-month matrices. Furthermore, a temporal pattern that we associate with seasonality can be observed. In (b), we provide a more detailed view by presenting the complete degree distribution for two months, as well as the aggregated mobility matrix. Both (a) and (b) clearly demonstrate significant discrepancies between the connectivity of the month-by-month transport matrices and the aggregated transport matrix. In (c), we show that the average outflow for the month-by-month matrices (dots) fluctuates about the value obtained for the aggregated matrix (horizontal dashed line). Similarly, in (d) it can be observed that the average inflow is always larger for the month-by-month matrices. In (e), we show the distribution of weights for two months (January and December) together with the one corresponding to the aggregated matrix. Whereas the distributions for January and December are in agreement, the aggregated weights tend to be smaller.
#### Year-by-year network
In this section, we replicate the analysis conducted in the section Month by month network using a different dataset, which pertains to transport data for the same month across ten different years. Through the plots in Fig. 4 we observe a behavior similar to that of the month-by-month dataset: whereas the connectivity increases through the aggregation of matrices, the weights per link tend to decrease. In the year-by-year dataset we cannot appreciate coherent oscillations in Fig. 4-(a) like the ones observed in the month-by-month dataset [Fig. 3-(a)], but rather random fluctuations.
### Community structure
We aim to investigate how the global behaviors of the oceanic flow are affected by seasons (short term) and climate (long term). We apply community detection to identify coherent regions in the sea, well mixed internally but with little exchange among them. In Fig. 5a (i-iv), we plot the community structure in color code for 4 months and find that the community structure evolves slowly over the months. In particular, the probability of the same node being assigned to the same community in two successive time steps is approximately 0.8. We remark that some communities have a rather small size (most of them reflecting shallow oceanic regions such as continental shelves), and that there is some notable inter-seasonal variability.
On the other hand, the community structure of the same month across different
Figure 4: **Degrees and weights of the aggregated transport matrix versus the year-by-year transport matrices.** In (a), the average degree of the transport networks at different times \(\langle k_{i}\rangle\) is divided by the average degree of the aggregated matrix (\(\langle k\rangle\approx 218\)). It can be observed that the average degree of the aggregated matrix is close to one order of magnitude larger than the typical degree of the year-by-year matrices. In (b), we provide a more detailed view by presenting the complete degree distribution for two years (2003 and 2010), as well as the aggregated mobility matrix. Both (a) and (b) clearly demonstrate significant discrepancies between the connectivity of the year-by-year transport matrices and the aggregated transport matrix. In (c), we show that the average outflow for the year-by-year matrices (dots) fluctuates about the value obtained for the aggregated matrix (horizontal dashed line). Similarly, in (d) it can be observed that the average inflow is always larger for the year-by-year matrices. In (e), we show the distribution of weights for two years (2003 and 2010) together with the one corresponding to the aggregated matrix. Whereas the distributions for 2003 and 2010 agree with each other, the weights for the aggregated matrix tend to be smaller.
Figure 5: **Communities of the monthly mobility network.** Community detection is performed on the transport networks and 6 communities (represented by the colors) are detected. (a.i-iv) Four monthly transport networks (Jan, Apr, Jul and Oct in 2002), corresponding to the four seasons, are presented. Community structures change over the months, and most months deviate from the communities of the network aggregated from Jan 2002 to Dec 2002, shown in (a.v). (b.i-iv) Four transport networks for July of 2003 to 2006. Community structures are similar for the same month across different years, compared with the network aggregated over the 10-year period from 2002 to 2011, shown in (b.v). The plots (vi) show the aggregated networks drawn so that nodes in the same community are close to one another.
years is very stable (see Fig. 5b (i-iv)). Indeed, the probability of being assigned to the same community in two successive time steps is larger than 0.9, suggesting that the main variability is due to a seasonal effect, as already noted in the previous section. To conclude, this implies that a dynamical process run on the network at the time scale of months is expected to show significant differences depending on whether one uses the temporal or the aggregated network. A less prominent role of time should be observed, instead, at the scale of years.
We now validate these claims by modeling two types of dynamical processes on the network: one that is mainly theoretical, in which diffusion is coupled with time-dependent birth and death rates; the other that more reliably models an ecological system.
## Modelling
### Synthetic model
In this section we describe the effect of a transport model with time-dependent birth and death rates. These rates couple with the network dynamics. Even though this model does not claim to be realistic, it shows the importance of taking the temporal dimension into account whenever a dynamical process on the network couples with the network's own dynamics. We consider the following model, in which \(p_{i}(t)\) denotes the probability of a particle to be in node \(i\) at time \(t\).
\[p_{i}(t+\tau)=\left[\mathbf{P}(t,\tau)p\right]_{i}+(\lambda_{t}-\mu_{t})p_{i} (t), \tag{3}\]
where \(\lambda_{t},\mu_{t}\) are the birth and death rates, respectively. We choose \(\lambda_{t}=te^{-t/1.5}\) and \(\mu_{t}=(T-t)e^{-(T-t)/1.5}\). This models the fact that in spring there is a blooming of births, while in winter there is a higher mortality rate. As a consequence, the population size is not constant: it peaks around the month of April and is minimal towards the end of the year. Note that \(\sum_{t}\lambda_{t}=\sum_{t}\mu_{t}\), implying that the population returns to its initial size at the end of the simulation, even though the distributions are not constant in time.
Figure 6: **Synthetic modelling simulation**. For each point of the map, we initialize the probability distribution to a Dirac delta and perform a simulation as per Equation (3) using the temporal transition matrices and the aggregated one. The color code shows the cosine similarity between the two final distributions and spans from 0 (cyan, in the Spain region) for different distributions to 1 (purple, Venice region) for equal distributions.
To test the relevance of taking the temporal dimension into account, we run several simulations: for each of them we set an initial distribution fully concentrated in a single node of the graph. We then run the diffusive process in two different ways: one in which at each time step we use the transition matrix of that time; the other in which we use the aggregated one. We then compare the resulting distributions in terms of cosine similarity.1 Figure 6 shows in color code the obtained cosine similarity for each initialization point, evidencing that in some regions considerable discrepancies between the two models are observed. Conversely, and coherently with the community detection analysis, other regions, such as the Adriatic sea, show a much larger agreement.
Footnote 1: The cosine similarity between two vectors is defined as \(s(x,y)=\frac{x^{T}y}{\|x\|_{2}\|y\|_{2}}\).
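A compact sketch of this experiment (NumPy); the matrix-vector product is applied exactly as Eq. (3) is written, so with an origin-rows/destination-columns convention one would transpose the matrices:

```python
import numpy as np

def evolve(p0, matrices, lam, mu):
    """Iterate Eq. (3): p <- P(t, tau) p + (lambda_t - mu_t) * p."""
    p = p0.copy()
    for t, P in enumerate(matrices):
        p = P @ p + (lam[t] - mu[t]) * p
    return p

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

T = 12
t = np.arange(T, dtype=float)
lam = t * np.exp(-t / 1.5)               # spring blooming of births
mu = (T - t) * np.exp(-(T - t) / 1.5)    # winter mortality
# For a delta initial condition at node i, with temporal matrices P_list
# and aggregated matrix P_agg (both assumed given):
# s = cosine(evolve(np.eye(N)[i], P_list, lam, mu),
#            evolve(np.eye(N)[i], [P_agg] * T, lam, mu))
```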
### Ecological modeling
Our two-step simulation scheme is a good representation of the dynamics of some fish larvae [20]. After hatching, fish larvae enter a phase called the pelagic larval stage, where they spend a variable amount of time drifting with ocean currents until they settle in a region [see sketch in Fig. 2-(a)]. We illustrate our two-step approach for modeling ecological systems affected by currents through a simplified model for the population of fish larvae. In our model fish can only settle and lay eggs in shallow water, which we define as depths of less than 100 m; we refer to these nodes as shallow water nodes (SWN). After hatching, larvae drift along the currents for 1 month and settle if their location corresponds to a SWN; otherwise they die. The new population of eggs in each node then updates according to a logistic growth model
\[n_{i}(t)=\frac{Kn_{i}(t-1)}{n_{i}(t-1)+(K-n_{i}(t-1))e^{-\nu\tau}} \tag{4}\]
where \(K\) is the capacity, and \(\nu\) is the growth rate which we assume to be the same for all SWN. We simulate the model in the Mediterranean Sea sequentially for the months of July over the years 2002-2011. We initialize the larvae population uniformly on the shallow water nodes and use the first years 2002-2006 as a thermalization period.
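A minimal sketch of one cycle of this model (NumPy); the names `P_july` and `swn_mask`, and the origin-rows/destination-columns convention for the transport matrix, are assumptions:

```python
import numpy as np

def yearly_cycle(n, P_july, swn_mask, K=1.0, nu=1.0, tau=1.0):
    """One cycle of the larvae model: a month of drift with the July
    transport matrix, death outside shallow water, then logistic growth
    as in Eq. (4). `swn_mask` is True on shallow water nodes (< 100 m)."""
    drifted = P_july.T @ n            # one month of advection of the eggs
    drifted[~swn_mask] = 0.0          # larvae reaching deep nodes die
    return K * drifted / (drifted + (K - drifted) * np.exp(-nu * tau))
```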
The interplay of the bathymetry landscape and the currents shapes the spatial distribution of the population into qualitatively similar larvae distributions.
Figure 7: **Result of the ecological modelling simulation in year 2011**. Population of fish larvae for the year 2011 for \(K=1\) and \(\nu=1\).
In particular, we see that the Adriatic sea and the Tunisian continental shelf regions have high populations. The population distributions in the SWN of the two regions are presented in Figure 7 for the year 2011. In this preliminary analysis we observe low variability in the population distribution in the two regions over the years, but a general spatial heterogeneity from year to year that is driven by the currents.
## Discussion
In this report we studied marine current flows with network theory tools, analyzing in particular the role played by the time component of the network evolution. A typical approach is in fact to neglect this dimension [9], and we investigated the consequences of this approximation from many different angles.
The analysis of the global properties of the transport networks reveals that the aggregated matrix has a higher number of links with relatively smaller weights per link, in comparison to the time-dependent snapshots; it thus corresponds to a more homogeneous distribution over space. In terms of simulation, we saw that a dynamical model with an explicit temporal dependence leads to a more homogeneous final distribution when adopting the aggregated network instead of the time-dependent one. On top of this "averaging" property, induced by the changes in connectivity, we also verified that the connections between nodes tend to vary among the different snapshots. These variations are observed in the local network structure and at the mesoscale community level, and are reflected in the output of the simulations as well. The time dependence of communities is easily observable when looking at the network across months, but is not so evident when looking at its evolution across years. We explain this difference between the across-month and across-year evolution as a consequence of seasonality. This result suggests that neglecting the time dependence of transport matrices can be a reasonable approximation depending on the specific time window under consideration: a proper description of within-a-year transport phenomena requires the full time-dependent transport matrices in order to capture oscillations due to seasonality. The seasonal change of the oceanic flow pattern is in accordance with previous studies [21]. Conversely, transport spanning multiple years can potentially be described adequately using aggregated time-independent matrices. Other studies provide evidence that there is no coherent trend in the Meridional Overturning Circulation due to climate change on small time scales [22]. In our case, the Mediterranean Sea is even more isolated than the Atlantic Ocean, so the variation due to short-term climate is minimal compared to seasonal effects.
We point out, however, that in all cases the aggregated matrix is denser, hence it tends to induce an artificial averaging effect. This effect has already been observed in epidemiological modeling, in which the larger degree artificially speeds up the spreading of an infectious disease [23]. The approach proposed in [23] to deal with this problem is to generate matrices that identify and preserve the most relevant ties of a temporal network without affecting the degree distribution.
This analysis evidenced the importance of the time component of the network describing marine flows. We acknowledge that these results are preliminary, since the dynamics of the network can be better constrained by observed field and satellite data. Also, a realistic ecological model would require more complex considerations. Nevertheless, we present here a feasible framework combining oceanic transport networks and ecological modeling which can already capture many important characteristics of real population dynamics. This encourages more work in the future, as marine ecology has many determinants that interact in a complex way. The role of the temporal structure in more realistic scenarios can then be envisioned as a natural continuation of this work.
## Acknowledgments
This work is the output of the Complexity72h workshop, held at IFISC in Palma de Mallorca, Spain, 26-30 June 2023 complexity72h.com. LD acknowledges the support from Fondation Botnar and from the Lagrange project of Fondazione CRT. Partial financial support has been received from the Agencia Estatal de Investigacion and Fondo Europeo de Desarrollo Regional (FEDER, UE) under project APASOS (PID2021-122256NB-C21/PID2021-122256NB-C22), the Maria de Maeztu project CEX2021-001164-M, funded by the MCIN/AEI/10.13039/501100011033, and the Conselleria d'Educacio, Universitat i Recerca of the Balearic Islands (Grant FPI FPI_006_2020).
|
2309.16094 | Visible Point Vector Partition Identities for Hyperpyramid Lattices | We set out an elementary approach to derive Visible Point Identities summed
on lattice points of inverted triangle (2D), pyramid (3D), hyperpyramid (4D, 5D
and so on) utilizing the greatest common divisor for the nD Visible Point
Vectors. This enables study of partitions in nD space into vector parts
distributed along straight lines radial from the origin in first hyperquadrant
where coordinates of lattice points are all positive integers. We also give
several new combinatorial identities for Visible Point Vector partitions. | Geoffrey B. Campbell | 2023-09-28T01:34:09Z | http://arxiv.org/abs/2309.16094v1 | # Visible Point Vector Partition Identities for Hyperpyramid Lattices
###### Abstract.
We set out an elementary approach to derive Visible Point Identities summed on lattice points of inverted triangle (2D), pyramid (3D), hyperpyramid (4D, 5D and so on) utilizing the greatest common divisor for the nD Visible Point Vectors. This enables study of partitions in nD space into vector parts distributed along straight lines radial from the origin in first hyperquadrant where coordinates of lattice points are all positive integers. We also give several new combinatorial identities for Visible Point Vector partitions.
Key words and phrases:Exact enumeration problems, generating functions. Partitions of integers. Elementary theory of partitions. Combinatorial identities, bijective combinatorics. Lattice points in specified regions.
Thanks to Professor Dr Henk Koppelaar, whose suggestions helped summarize, for this paper, parts of chapters 5, 12 and 21 of the author's draft book to appear in 2024.
**Statement 1.2**.: _For each of \(|x|,|y|,|z|<1,\)_
\[\prod_{\begin{subarray}{c}gcd(a,b,c)=1\\ a,b<c\\ a,b\geq 0,\;c>0\end{subarray}}\left(\frac{1}{1-x^{a}y^{b}z^{c}}\right)^{\frac{1 }{c}}=\left(\frac{(1-xz)(1-yz)}{(1-z)(1-xyz)}\right)^{\frac{1}{(1-x)(1-y)}} \tag{1.2}\]
\[=1+\frac{z}{1!}+\left|\begin{matrix}1&-1\\ \frac{(1-x^{2})(1-y^{2})}{(1-x)(1-y)}&1\end{matrix}\right|\frac{z^{2}}{2!}+\left|\begin{matrix}1&-1&0\\ \frac{(1-x^{2})(1-y^{2})}{(1-x)(1-y)}&1&-2\\ \frac{(1-x^{3})(1-y^{3})}{(1-x)(1-y)}&\frac{(1-x^{2})(1-y^{2})}{(1-x)(1-y)}&1\end{matrix}\right|\frac{z^{3}}{3!}\]

\[+\left|\begin{matrix}1&-1&0&0\\ \frac{(1-x^{2})(1-y^{2})}{(1-x)(1-y)}&1&-2&0\\ \frac{(1-x^{3})(1-y^{3})}{(1-x)(1-y)}&\frac{(1-x^{2})(1-y^{2})}{(1-x)(1-y)}&1&-3\\ \frac{(1-x^{4})(1-y^{4})}{(1-x)(1-y)}&\frac{(1-x^{3})(1-y^{3})}{(1-x)(1-y)}&\frac{(1-x^{2})(1-y^{2})}{(1-x)(1-y)}&1\end{matrix}\right|\frac{z^{4}}{4!}+etc.\]
**Statement 1.3**.: _For each of \(|w|,|x|,|y|,|z|<1,\)_
\[\prod_{\begin{subarray}{c}gcd(a,b,c,d)=1\\ a,b,c<d\\ a,b,c\geq 0,\;d>0\end{subarray}}\left(\frac{1}{1-w^{a}x^{b}y^{c}z^{d}}\right)^{ \frac{1}{d}}=\left(\frac{(1-wz)(1-xz)(1-yz)(1-wxyz)}{(1-z)(1-wxz)(1-wyz)(1-xyz) }\right)^{\frac{1}{(1-w)(1-x)(1-y)}}, \tag{1.3}\]
\[=1+\frac{z}{1!}+\left|\begin{matrix}1&-1\\ \frac{(1-w^{2})(1-x^{2})(1-y^{2})}{(1-w)(1-x)(1-y)}&1\end{matrix}\right|\frac{z^{2}}{2!}+\left|\begin{matrix}1&-1&0\\ \frac{(1-w^{2})(1-x^{2})(1-y^{2})}{(1-w)(1-x)(1-y)}&1&-2\\ \frac{(1-w^{3})(1-x^{3})(1-y^{3})}{(1-w)(1-x)(1-y)}&\frac{(1-w^{2})(1-x^{2})(1-y^{2})}{(1-w)(1-x)(1-y)}&1\end{matrix}\right|\frac{z^{3}}{3!}\]

\[+\left|\begin{matrix}1&-1&0&0\\ \frac{(1-w^{2})(1-x^{2})(1-y^{2})}{(1-w)(1-x)(1-y)}&1&-2&0\\ \frac{(1-w^{3})(1-x^{3})(1-y^{3})}{(1-w)(1-x)(1-y)}&\frac{(1-w^{2})(1-x^{2})(1-y^{2})}{(1-w)(1-x)(1-y)}&1&-3\\ \frac{(1-w^{4})(1-x^{4})(1-y^{4})}{(1-w)(1-x)(1-y)}&\frac{(1-w^{3})(1-x^{3})(1-y^{3})}{(1-w)(1-x)(1-y)}&\frac{(1-w^{2})(1-x^{2})(1-y^{2})}{(1-w)(1-x)(1-y)}&1\end{matrix}\right|\frac{z^{4}}{4!}+etc.\]
Each of the above identities gives us exact enumerations of certain functions of vector partitions. Suppose we say the \(z\) variable is a "vertical axis" upon which to plot the 2D, 3D, 4D graph for the type of partitions and partition functions under consideration. Then the power series in \(z\) give us exact determinant representations at each 1D, 2D or 3D layer corresponding to the powers \(z^{1},z^{2},z^{3},\ldots\).
The proofs of the power series determinant coefficient functions in (1.1) to (1.3) rely only on applying Cramer's Rule to the coefficient recurrences, as well as differentiating the logarithms of both sides of the infinite products and their closed form evaluations.
So, in ensuing pages, we give the simplest \(n\)-space hyperpyramid VPV theorem due to the author in [19]. The so-called "Skewed Hyperpyramid \(n\)-space Identities" from [19] we shall cover in a later paper. The determinant coefficient technique of our earlier work on hyperquadrant lattices is applicable here as well, bearing some resemblance to the \(q\)-binomial variants of earlier papers. Note that for most identities in this paper, the left side products are taken over a set of integer lattice points inside an inverted 2D triangle lattice, or 3D pyramid, or higher-dimensional hyperpyramid in Euclidean Cartesian space.
In the first 15 years of the 21st century, the summations found by the Borwein brothers Peter and Jonathan, their father David, and their colleagues (see [7] to [11]) renewed interest in the old Euler sums. Their results give us particular values of polylogarithms and related functions popularized by Lewin [23, 24, 25] involving the generalized harmonic numbers. This work has developed over nearly two decades, so that we now speak of the Mordell-Tornheim-Witten sums, polylogarithm generalizations that all appear applicable to the VPV identities, although that connection is not yet fully worked through in the present literature. Many of these newer results, including experimentally calculated ones, can be substituted into VPV identities to give exact results for weighted vector partitions. To make sense of these new results, we need to go back to fundamental definitions and ideas for partitions of vectors as distinct from those well considered already for integer partitions.
We examine the following correspondences. From the elementary generating function for unrestricted integer partitions we have,
\[\prod_{n=1}^{\infty}\big{(}1+x^{1n}+x^{2n}+x^{3n}+\ldots\big{)}=1+p(1)x+p(2)x^ {2}+p(3)x^{3}+\ldots. \tag{1.4}\]
So, equations (1.1) to (1.3) similarly imply
\[\prod_{\begin{subarray}{c}gcd(a,b)=1;\;a<b\\ a\geq 0,\;b\geq 1\end{subarray}}\bigg{(}1+\binom{1/b}{1}(y^{a}z^{b})+ \binom{1/b}{2}(y^{a}z^{b})^{2}+\binom{1/b}{3}(y^{a}z^{b})^{3}+\ldots\bigg{)}\] \[=\sum_{\begin{subarray}{c}a<b\\ a\geq 0,\;b\geq 1\end{subarray}}V_{2}(a,b)y^{a}z^{b}. \tag{1.5}\]
\[\prod_{\begin{subarray}{c}gcd(a,b,c)=1;\\ a,b<c\\ a,b\geq 0,\;c\geq 1\end{subarray}}\bigg{(}1+\binom{1/c}{1}(x^{a}y^{b}z^{c})+ \binom{1/c}{2}(x^{a}y^{b}z^{c})^{2}+\binom{1/c}{3}(x^{a}y^{b}z^{c})^{3}+\ldots \bigg{)}\] \[=\sum_{\begin{subarray}{c}a,b<c\\ a,b\geq 0,\;c\geq 1\end{subarray}}V_{3}(a,b,c)x^{a}y^{b}z^{c}. \tag{1.6}\]
\[\prod_{\begin{subarray}{c}gcd(a,b,c,d)=1;\\ a,b,c<d\\ a,b,c\geq 0,\;d\geq 1\end{subarray}}\bigg{(}1+\binom{1/d}{1}(w^{a}x^{b}y^{c}z^{d})+\binom{1/d}{2}(w^{a}x^{b}y^{c}z^{d})^{2}+\binom{1/d}{3}(w^{a}x^{b}y^{c}z^{d})^{3}+\ldots\bigg{)}\] \[=\sum_{\begin{subarray}{c}a,b,c<d\\ a,b,c\geq 0,\;d\geq 1\end{subarray}}V_{4}(a,b,c,d)w^{a}x^{b}y^{c}z^{d}. \tag{1.7}\]
Equations (1.5) to (1.7) ought to give us exact closed form generating functions \(V_{2}\), \(V_{3}\), \(V_{4}\), in 2D, 3D and 4D space respectively. We put this aside for now.
## 2. Vector Partitions whose parts are on ND straight lines.
From equation (1.4) we see that replacing \(x\) by \(y^{a}z^{b}\) for \(|y^{a}z^{b}|<1\) where \(a\) and \(b\) are coprime positive integers, gives the equation
\[\prod_{n=1}^{\infty}\frac{1}{1-(y^{a}z^{b})^{n}}=\prod_{n=1}^{\infty}\left(1+(y ^{a}z^{b})^{1n}+(y^{a}z^{b})^{2n}+(y^{a}z^{b})^{3n}+\ldots\right) \tag{2.1}\]
\[=1+p(1)(y^{a}z^{b})^{1}+p(2)(y^{a}z^{b})^{2}+p(3)(y^{a}z^{b})^{3}+\ldots.\]
Of course this is just a thinly disguised version of the generating function for unrestricted integer partitions. ie. one-dimensional partitions. In the 2D case we are saying that the number of partitions of an integer lattice point vector \(\langle A,B\rangle\) on the line \(z=ay/b\) for \(gcd(a,b)=1\) into vector parts also on this line, is equal to \(p(gcd(A,B))\).
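A brute-force illustration of this correspondence (sympy's `npartitions` supplies \(p(n)\); the helper below is ours, written only for this check):

```python
from math import gcd
from functools import lru_cache
from sympy import npartitions

def partitions_on_line(A, B):
    """Partitions of <A, B> into parts lying on the same line through the
    origin, i.e. parts m*(a, b) with (a, b) = (A, B)/gcd(A, B), m >= 1.
    Such a partition is determined by a partition of g = gcd(A, B)."""
    g = gcd(A, B)

    @lru_cache(maxsize=None)
    def count(n, largest):       # partitions of n with parts <= largest
        if n == 0:
            return 1
        return sum(count(n - m, m) for m in range(1, min(n, largest) + 1))
    return count(g, g)

for A, B in [(4, 6), (6, 9), (10, 15), (7, 7)]:
    assert partitions_on_line(A, B) == npartitions(gcd(A, B))
```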
So (2.1), enumerating the number of vector partitions for the lattice points along a 1D line in 2D space, applies equally to a 1D line in 3D space as follows, where \(a\), \(b\) and \(c\) are positive integers with \(gcd(a,b,c)=1\).
\[\prod_{n=1}^{\infty}\frac{1}{1-(x^{a}y^{b}z^{c})^{n}}=\prod_{n=1}^{\infty} \left(1+(x^{a}y^{b}z^{c})^{1n}+(x^{a}y^{b}z^{c})^{2n}+(x^{a}y^{b}z^{c})^{3n}+ \ldots\right) \tag{2.2}\]
\[=1+p(1)(x^{a}y^{b}z^{c})^{1}+p(2)(x^{a}y^{b}z^{c})^{2}+p(3)(x^{a}y^{b}z^{c})^{ 3}+\ldots.\]
Similarly, in the 4D case extension corresponding to the 3D equation (2.2) we have, where \(a\), \(b\), \(c\) and \(d\) are positive integers with gcd(a,b,c,d)=1,
\[\prod_{n=1}^{\infty}\frac{1}{1-(w^{a}x^{b}y^{c}z^{d})^{n}} \tag{2.3}\]
\[=\prod_{n=1}^{\infty}\left(1+(w^{a}x^{b}y^{c}z^{d})^{1n}+(w^{a}x^{b}y^{c}z^{d} )^{2n}+(w^{a}x^{b}y^{c}z^{d})^{3n}+\ldots\right)\]
\[=1+p(1)(w^{a}x^{b}y^{c}z^{d})^{1}+p(2)(w^{a}x^{b}y^{c}z^{d})^{2}+p(3)(w^{a}x^{ b}y^{c}z^{d})^{3}+\ldots.\]
We shall now do something interesting with equation (2.1). We write an "Upper VPV" identity derived from it as follows.
\[\prod_{\begin{subarray}{c}a,b,n\geq 1;\ a\leq b\\ gcd(a,b)=1\end{subarray}}\frac{1}{1-(y^{a}z^{b})^{n}} \tag{2.4}\]
\[= \frac{1}{1-y^{1}z^{1}}\] \[\times \frac{1}{1-y^{1}z^{2}}\frac{1}{1-y^{2}z^{2}}\] \[\times \frac{1}{1-y^{1}z^{3}}\frac{1}{1-y^{2}z^{3}}\frac{1}{1-y^{3}z^{3}}\] \[\times \frac{1}{1-y^{1}z^{4}}\frac{1}{1-y^{2}z^{4}}\frac{1}{1-y^{3}z^{4}}\frac{1}{1-y^{4}z^{4}}\] \[\times \frac{1}{1-y^{1}z^{5}}\frac{1}{1-y^{2}z^{5}}\frac{1}{1-y^{3}z^{5}}\frac{1}{1-y^{4}z^{5}}\frac{1}{1-y^{5}z^{5}}\] \[\times \frac{1}{1-y^{1}z^{6}}\frac{1}{1-y^{2}z^{6}}\frac{1}{1-y^{3}z^{6}}\frac{1}{1-y^{4}z^{6}}\frac{1}{1-y^{5}z^{6}}\frac{1}{1-y^{6}z^{6}}\] \[\times \mbox{etc.}\] \[=\prod_{\begin{subarray}{c}a,b,n\geq 1;\ a\leq b\\ gcd(a,b)=1\end{subarray}}\left(1+(y^{a}z^{b})^{1n}+(y^{a}z^{b})^{2n}+(y^{a}z^{b})^{3n}+\ldots\right)\] \[=\prod_{\begin{subarray}{c}a,b\geq 1;\ a\leq b\\ gcd(a,b)=1\end{subarray}}\left(1+p(1)(y^{a}z^{b})^{1}+p(2)(y^{a}z^{b})^{2}+p(3)(y^{a}z^{b})^{3}+\ldots\right)\] \[=\sum_{n_{1},n_{2},n_{3},\ldots\geq 0}p(n_{1})(y^{1}z^{1})^{n_{1}}p(n_{2})(y^{1}z^{2})^{n_{2}}p(n_{3})(y^{1}z^{3})^{n_{3}}p(n_{4})(y^{2}z^{3})^{n_{4}}\ldots\] \[=\sum_{n_{1},n_{2},n_{3},\ldots\geq 0}p(n_{1})p(n_{2})p(n_{3})p(n_{4})\cdots(y^{1}z^{1})^{n_{1}}(y^{1}z^{2})^{n_{2}}(y^{1}z^{3})^{n_{3}}(y^{2}z^{3})^{n_{4}}\ldots\] \[=\sum_{n_{1},n_{2},n_{3},\ldots\geq 0}p(n_{1})p(n_{2})p(n_{3})p(n_{4})\cdots y^{(1n_{1}+1n_{2}+1n_{3}+2n_{4}+\ldots)}z^{(1n_{1}+2n_{2}+3n_{3}+3n_{4}+\ldots)}\]
where each coefficient of \(n_{i}\) in the index sum of \(y\) is coprime to the coefficient of \(n_{i}\) in the index sum of \(z\);
\[= 1+p_{(1|1)}y^{1}z^{1}\] \[+p_{(1|2)}y^{1}z^{2}+p_{(2|2)}y^{2}z^{2}\] \[+p_{(1|3)}y^{1}z^{3}+p_{(2|3)}y^{2}z^{3}+p_{(3|3)}y^{3}z^{3}\] \[+p_{(1|4)}y^{1}z^{4}+p_{(2|4)}y^{2}z^{4}+p_{(3|4)}y^{3}z^{4}+p_{( 4|4)}y^{4}z^{4}\] \[+p_{(1|5)}y^{1}z^{5}+p_{(2|5)}y^{2}z^{5}+p_{(3|5)}y^{3}z^{5}+p_{(4 |5)}y^{4}z^{5}+p_{(5|5)}y^{5}z^{5}\] \[+etc.\]
where \(p_{(a|b)}:=p_{(a|b)}(y,z)\) is the coefficient function of \(y^{a}z^{b}\).
This enables study of partitions in \(n\)D space into vector parts distributed along straight lines radial from the origin in first hyperquadrant where coordinates of lattice points are all positive integers.
**2D Vector Partitions whose parts are on two straight lines.**
The above analysis is with respect to vector partitions in the upper half first quadrant above and including the line \(y=z\). The simplest version of this theory departing from the single radial from the origin line of lattice points, would be to state the result if dealing with 2D partitions from two such radial lines of lattice
points. For example, consider the two lines with equations \(y=z/2\) and \(y=z/3\). The lattice point vectors along these lines in the first quadrant may be listed as
\[S_{1} = \{\langle 1,2\rangle,\langle 2,4\rangle,\langle 3,6\rangle,\langle 4,8 \rangle,\langle 5,10\rangle,\ldots\};\] \[S_{2} = \{\langle 1,3\rangle,\langle 2,6\rangle,\langle 3,9\rangle, \langle 4,12\rangle,\langle 5,15\rangle,\ldots\}.\]
Following the above rationale we see that the generating function for 2D vector partitions into parts contained in \(S_{1}\) and \(S_{2}\) is
\[\frac{1}{((1-yz^{2})(1-y^{2}z^{4})(1-y^{3}z^{6})\cdots)((1-yz^{3})(1-y^{2}z^{6})(1-y^{3}z^{9})\cdots)}\]
\[=\left(1+p(1)y^{1}z^{2}+p(2)y^{2}z^{4}+p(3)y^{3}z^{6}+\ldots\right)\left(1+p(1 )y^{1}z^{3}+p(2)y^{2}z^{6}+p(3)y^{3}z^{9}+\ldots\right)\]
\[= 1+p(1)yz^{2}+p(1)yz^{3}\] \[+\ p(2)y^{2}z^{4}+p(1)^{2}y^{2}z^{5}+p(2)y^{2}z^{6}\] \[+\ p(3)y^{3}z^{6}+p(1)p(2)y^{3}z^{7}+p(1)p(2)y^{3}z^{8}+p(3)y^{3}z ^{9}\] \[+\ etc.\]
\[= 1+yz^{2}+yz^{3}+2y^{2}z^{4}+y^{2}z^{5}+y^{2}(3y+2)z^{6}\] \[+\ 2y^{3}z^{7}+y^{3}(5y+2)z^{8}+3y^{3}(y+1)z^{9}+y^{4}(7y+4)z^{10}\] \[+\ y^{4}(5y+3)z^{11}+y^{4}(11y^{2}+6y+5)z^{12}+y^{5}(7y+6)z^{13}\] \[+\ y^{5}(14y^{2}+10y+5)z^{14}+y^{5}(11y^{2}+9y+7)z^{15}\] \[+\ etc.\]
The coefficients here plot onto the grid
\[\begin{array}{c|ccccccccc}\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\\ 15&&&&&&7&9&11&\cdots\\ 14&&&&&&5&10&14&\cdots\\ 13&&&&&&6&7&&\cdots\\ 12&&&&&5&6&11&&\cdots\\ 11&&&&&3&5&&&\cdots\\ 10&&&&&4&7&&&\cdots\\ 9&&&&3&3&&&&\cdots\\ 8&&&&2&5&&&&\cdots\\ 7&&&&2&&&&&\cdots\\ 6&&&2&3&&&&&\cdots\\ 5&&&1&&&&&&\cdots\\ 4&&&2&&&&&&\cdots\\ 3&&1&&&&&&&\cdots\\ 2&&1&&&&&&&\cdots\\ 1&&&&&&&&&\cdots\\ 0&1&&&&&&&&\cdots\\ \hline z/y&0&1&2&3&4&5&6&7&\cdots\\ \end{array}\]
**Example interpretations reading from this graph.**
1. The number of partitions of vector \(\langle 7,15\rangle\) into unrestricted number of parts from \(S_{1}\) and \(S_{2}\) is 11.
2. The number of partitions of vector \(\langle 5,10\rangle\) into unrestricted number of parts from \(S_{1}\) and \(S_{2}\) is 7.
3. The number of partitions of vector \(\langle 4,9\rangle\) into unrestricted number of parts from \(S_{1}\) and \(S_{2}\) is \(3\).
4. The number of partitions of vector \(\langle 4,7\rangle\) into unrestricted number of parts from \(S_{1}\) and \(S_{2}\) is \(0\).
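As a cross-check of this grid (and of the distinct-parts variant further below), a short truncated-product computation reproduces these coefficients; the function name and layout are ours, not from the original text:

```python
import numpy as np

def vector_partition_table(lines, ymax, zmax, distinct=False):
    """Coefficient C[a, b] of y^a z^b in the generating function for
    partitions into parts m*(j, k), (j, k) in `lines`, m >= 1.
    distinct=False multiplies factors 1/(1 - y^a z^b) (unrestricted);
    distinct=True multiplies factors (1 + y^a z^b) (distinct parts)."""
    C = np.zeros((ymax + 1, zmax + 1), dtype=np.int64)
    C[0, 0] = 1
    for (j, k) in lines:
        m = 1
        while m * j <= ymax and m * k <= zmax:
            a, b = m * j, m * k
            if distinct:
                D = C.copy()
                D[a:, b:] += C[: ymax + 1 - a, : zmax + 1 - b]
                C = D
            else:  # geometric series: in-place DP in increasing order
                for u in range(a, ymax + 1):
                    for v in range(b, zmax + 1):
                        C[u, v] += C[u - a, v - b]
            m += 1
    return C

T = vector_partition_table([(1, 2), (1, 3)], 7, 15)
assert (T[7, 15], T[5, 10], T[4, 9], T[4, 7]) == (11, 7, 3, 0)
D = vector_partition_table([(1, 2), (1, 3)], 7, 15, distinct=True)
assert (D[7, 15], D[5, 10], D[4, 9], D[4, 7]) == (4, 3, 2, 0)
```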
**3D Vector Partitions whose parts are on two straight lines.**
A further simple version of the above approach allows us to work out exactly the number of partitions into vector parts that lie upon "radial from the origin" lines of lattice points. We state the following example in 3D partitions in the first 3D hyperquadrant (ie. with lattice points whose co-ordinates are triples of positive integers.) For example, consider the first line through 3D space with equation \(x=y/2=z/3\); then the second line with equation \(x=y/3=z/4\). The lattice point vectors along these lines in the first quadrant may be listed as
\[S_{3} = \{\langle 1,2,3\rangle,\langle 2,4,6\rangle,\langle 3,6,9\rangle,\langle 4,8,12\rangle,\langle 5,10,15\rangle,\ldots\};\] \[S_{4} = \{\langle 1,3,4\rangle,\langle 2,6,8\rangle,\langle 3,9,12 \rangle,\langle 4,12,16\rangle,\langle 5,15,20\rangle,\ldots\}.\]
Applying our rationale we see that the generating function for 3D vector partitions into parts contained in \(S_{3}\) and \(S_{4}\) is
\[\frac{1}{((1-xy^{2}z^{3})(1-x^{2}y^{4}z^{6})(1-x^{3}y^{6}z^{9})\cdots)((1-xy^{3}z^{4})(1-x^{2}y^{6}z^{8})(1-x^{3}y^{9}z^{12})\cdots)}\] \[=\big{(}1+p(1)xy^{2}z^{3}+p(2)x^{2}y^{4}z^{6}+p(3)x^{3}y^{6}z^{9}+\ldots\big{)}\] \[\qquad\qquad\times\big{(}1+p(1)xy^{3}z^{4}+p(2)x^{2}y^{6}z^{8}+p(3)x^{3}y^{9}z^{12}+\ldots\big{)}\] \[= 1+p(1)xy^{2}z^{3}+p(1)xy^{3}z^{4}\] \[+p(2)x^{2}y^{4}z^{6}+p(1)^{2}x^{2}y^{5}z^{7}+p(2)x^{2}y^{6}z^{8}\] \[+p(3)x^{3}y^{6}z^{9}+p(1)p(2)x^{3}y^{7}z^{10}+p(1)p(2)x^{3}y^{8}z^{11}+p(3)x^{3}y^{9}z^{12}\] \[+etc.\]
There are many extended possibilities for the above enumerations of partitions in higher cartesian space sets of lattice point vectors. There is no reason other than a formal complexity why we can't create exact enumerative formulas for "\(n\)D Vector Partitions whose parts are on \(m\) straight lines" for \(m\) and \(n\) both arbitrary fixed positive integers.
**Distinct Vector Partitions along an _n_-space line.**
Recall that Euler, in addition to giving us the "unrestricted" integer partitions generating function, also noted that for \(|x|<1\),
\[\prod_{k=1}^{\infty}(1+x^{k})=1+\sum_{n=1}^{\infty}\mathcal{D}(n)x^{n}, \tag{2.5}\]
where \(\mathcal{D}(n)\) is the number of partitions of positive integer \(n\) into distinct integer parts.
From equation (2.5) we see that replacing \(x\) by \(y^{a}z^{b}\) for \(|y^{a}z^{b}|<1\) where \(a\) and \(b\) are coprime positive integers, gives the equation
\[\prod_{n=1}^{\infty}(1+(y^{a}z^{b})^{n}) \tag{2.6}\]
\[=1+{\mathcal{D}}(1)(y^{a}z^{b})^{1}+{\mathcal{D}}(2)(y^{a}z^{b})^{2}+{ \mathcal{D}}(3)(y^{a}z^{b})^{3}+\ldots.\]
Similar to the previous rationale using "unrestricted" partitions, this is a version of the generating function for integer partitions into distinct vector parts, i.e. one-dimensional partitions into distinct parts. In the 2D case we are saying that the number of partitions of an integer lattice point vector \(\langle A,B\rangle\) on the line \(z=ay/b\) for \(gcd(a,b)=1\) into "distinct" vector parts also on this line, is equal to \({\mathcal{D}}(gcd(A,B))\).
In a further example, consider again the two lines with equations \(y=z/2\) and \(y=z/3\). The lattice point vectors along these lines in the first quadrant we list again as
\[S_{1} = \{\langle 1,2\rangle,\langle 2,4\rangle,\langle 3,6\rangle, \langle 4,8\rangle,\langle 5,10\rangle,\ldots\};\] \[S_{2} = \{\langle 1,3\rangle,\langle 2,6\rangle,\langle 3,9\rangle, \langle 4,12\rangle,\langle 5,15\rangle,\ldots\}.\]
Following the above rationale we see that the generating function for 2D vector partitions into distinct parts contained in \(S_{1}\) and \(S_{2}\) is
\[((1+yz^{2})(1+y^{2}z^{4})(1+y^{3}z^{6})\cdots)((1+yz^{3})(1+y^{2}z^{6})(1+y^{ 3}z^{9})\cdots)\]
\[=\left(1+{\mathcal{D}}(1)y^{1}z^{2}+{\mathcal{D}}(2)y^{2}z^{4}+{\mathcal{D}}( 3)y^{3}z^{6}+\ldots\right)\]
\[\times\left(1+{\mathcal{D}}(1)y^{1}z^{3}+{\mathcal{D}}(2)y^{2}z^{6}+{ \mathcal{D}}(3)y^{3}z^{9}+\ldots\right)\]
\[= 1+{\mathcal{D}}(1)yz^{2}+{\mathcal{D}}(1)yz^{3}\] \[+\ {\mathcal{D}}(2)y^{2}z^{4}+{\mathcal{D}}(1)^{2}y^{2}z^{5}+{ \mathcal{D}}(2)y^{2}z^{6}\] \[+\ {\mathcal{D}}(3)y^{3}z^{6}+{\mathcal{D}}(1){\mathcal{D}}(2)y^{3 }z^{7}+{\mathcal{D}}(1){\mathcal{D}}(2)y^{3}z^{8}+{\mathcal{D}}(3)y^{3}z^{9}\] \[+\ etc.\]
\[= 1+yz^{2}+yz^{3}+y^{2}z^{4}+y^{2}z^{5}+y^{2}(2y+1)z^{6}\] \[+\ y^{3}z^{7}+y^{3}(2y+1)z^{8}+2y^{3}(y+1)z^{9}+y^{4}(3y+1)z^{10}\] \[+\ 2y^{4}(y+1)z^{11}+2y^{4}(2y^{2}+y+1)z^{12}+y^{5}(3y+2)z^{13}\] \[+\ 2y^{5}(2y^{2}+y+1)z^{14}+y^{5}(4y^{2}+4y+3)z^{15}\] \[+\ etc.\]
We can easily plot the coefficients here onto the grid
\[\begin{array}{c|ccccccccc}\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\\ 15&&&&&&3&4&4&\cdots\\ 14&&&&&&2&2&4&\cdots\\ 13&&&&&&2&3&&\cdots\\ 12&&&&&2&2&4&&\cdots\\ 11&&&&&2&2&&&\cdots\\ 10&&&&&1&3&&&\cdots\\ 9&&&&2&2&&&&\cdots\\ 8&&&&1&2&&&&\cdots\\ 7&&&&1&&&&&\cdots\\ 6&&&1&2&&&&&\cdots\\ 5&&&1&&&&&&\cdots\\ 4&&&1&&&&&&\cdots\\ 3&&1&&&&&&&\cdots\\ 2&&1&&&&&&&\cdots\\ 1&&&&&&&&&\cdots\\ 0&1&&&&&&&&\cdots\\ \hline z/y&0&1&2&3&4&5&6&7&\cdots\\ \end{array}\]
**Example interpretations reading from this graph.**
1. The number of partitions of vector \(\langle 7,15\rangle\) using distinct parts from \(S_{1}\) and \(S_{2}\) is \(4\).

2. The number of partitions of vector \(\langle 5,10\rangle\) using distinct parts from \(S_{1}\) and \(S_{2}\) is \(3\).

3. The number of partitions of vector \(\langle 4,9\rangle\) using distinct parts from \(S_{1}\) and \(S_{2}\) is \(2\).
4. The number of partitions of vector \(\langle 4,7\rangle\) using distinct parts from \(S_{1}\) and \(S_{2}\) is \(0\).
## 3. Deriving 2D VPV identities in extended triangle regions
In this section we derive the \(2D\) Visible Point Vector identities by means of a simple summation transformation, based on the idea that the integer lattice points in the first quadrant have co-ordinates that are either coprime integer pairs, namely "lattice points visible from the origin", or integer multiples of such coprime pairs. As we did in the hyperquadrant paper, we again start with a simple \(2D\) summation. Consider
\[\sum_{n=1}^{\infty}\left(\sum_{m=1}^{n}\frac{y^{m}}{m^{a}}\right)\frac{z^{n}}{ n^{b}}\]
\[=\left(\frac{y^{1}}{1^{a}}\right)\frac{z^{1}}{1^{b}}+\left(\frac{y^{1}}{1^{a}}+ \frac{y^{2}}{2^{a}}\right)\frac{z^{2}}{2^{b}}+\left(\frac{y^{1}}{1^{a}}+\frac {y^{2}}{2^{a}}+\frac{y^{3}}{3^{a}}\right)\frac{z^{3}}{3^{b}}+\left(\frac{y^{1} }{1^{a}}+\frac{y^{2}}{2^{a}}+\frac{y^{3}}{3^{a}}+\frac{y^{4}}{4^{a}}\right) \frac{z^{4}}{4^{b}}\]
\[+\left(\frac{y^{1}}{1^{a}}+\frac{y^{2}}{2^{a}}+\frac{y^{3}}{3^{a}}+\frac{y^{4} }{4^{a}}+\frac{y^{5}}{5^{a}}\right)\frac{z^{5}}{5^{b}}+\left(\frac{y^{1}}{1^{ a}}+\frac{y^{2}}{2^{a}}+\frac{y^{3}}{3^{a}}+\frac{y^{4}}{4^{a}}+\frac{y^{5}}{5^{a}} +\frac{y^{6}}{6^{a}}\right)\frac{z^{6}}{6^{b}}+\cdots\]
\[=\frac{y^{1}z^{1}}{1^{a}1^{b}}\]
\[+\frac{y^{1}z^{2}}{1^{a}2^{b}}+\frac{y^{2}z^{2}}{2^{a}2^{b}}\]
\[+\frac{y^{1}z^{3}}{1^{a}3^{b}}+\frac{y^{2}z^{3}}{2^{a}3^{b}}+\frac{y^{3}z^{3}}{3 ^{a}3^{b}}\]
\[+\frac{y^{1}z^{4}}{1^{a}4^{b}}+\frac{y^{2}z^{4}}{2^{a}4^{b}}+\frac{y^{3}z^{4}}{ 3^{a}4^{b}}+\frac{y^{4}z^{4}}{4^{a}4^{b}}\]
\[+\frac{y^{1}z^{5}}{1^{a}5^{b}}+\frac{y^{2}z^{5}}{2^{a}5^{b}}+\frac{y^{3}z^{5}}{ 3^{a}5^{b}}+\frac{y^{4}z^{5}}{4^{a}5^{b}}+\frac{y^{5}z^{5}}{5^{a}5^{b}}\]
\[+\frac{y^{1}z^{6}}{1^{a}6^{b}}+\frac{y^{2}z^{6}}{2^{a}6^{b}}+\frac{y^{3}z^{6}} {3^{a}6^{b}}+\frac{y^{4}z^{6}}{4^{a}6^{b}}+\frac{y^{5}z^{6}}{5^{a}6^{b}}+\frac {y^{6}z^{6}}{6^{a}6^{b}}\]
\[+\frac{y^{1}z^{7}}{1^{a}7^{b}}+\frac{y^{2}z^{7}}{2^{a}7^{b}}+\frac{y^{3}z^{7}} {3^{a}7^{b}}+\frac{y^{4}z^{7}}{4^{a}7^{b}}+\frac{y^{5}z^{7}}{5^{a}7^{b}}+\frac {y^{6}z^{7}}{6^{a}7^{b}}+\frac{y^{7}z^{7}}{7^{a}7^{b}}\]
\[+\quad\vdots\quad+\quad\vdots\quad+\quad\vdots\quad+\quad\vdots\quad+\quad \vdots\quad+\quad\vdots\quad+\quad\vdots\quad\ddots\]
\[=\sum_{m,n\geq 1;m\leq n}^{\infty}\frac{y^{m}z^{n}}{m^{a}n^{b}}\]
\[=\sum_{\begin{subarray}{c}h,j,k\geq 1\\ j\leq k;\,(j,k)=1\end{subarray}}\frac{(y^{j}z^{k})^{h}}{h^{a+b}(j^{a}k^{b})}\]
\[=\sum_{\begin{subarray}{c}j,k\geq 1\\ j\leq k;\,(j,k)=1\end{subarray}}\frac{1}{(j^{a}k^{b})}\sum_{h=1}^{\infty} \frac{(y^{j}z^{k})^{h}}{h^{a+b}}\]
\[=\sum_{\begin{subarray}{c}j,k\geq 1\\ j\leq k;\,(j,k)=1\end{subarray}}\frac{1}{(j^{a}k^{b})}\log\left(\frac{1}{1-y^{ j}z^{k}}\right)\quad if\quad a+b=1.\]
Therefore, we have shown that
\[\sum_{n=1}^{\infty}\left(\sum_{m=1}^{n}\frac{y^{m}}{m^{a}}\right)\frac{z^{n}} {n^{b}}=\sum_{\begin{subarray}{c}j,k\geq 1\\ j\leq k;\,(j,k)=1\end{subarray}}\frac{1}{(j^{a}k^{b})}\log\left(\frac{1}{1-y^{ j}z^{k}}\right)\quad if\quad a+b=1.\]
Exponentiating both sides gives us the \(2D\) first extended triangle VPV identity, where in this \(2D\) case the \(nD\) pyramid reduces to the form of a triangle shaped array of lattice point vectors, and so we can state the
**Theorem 3.1**.: _The \(2D\) first quadrant triangle VPV identity. For \(|y|<1,|z|<1,\)_
\[\prod_{\begin{subarray}{c}j,k\geq 1\\ j\leq k;\,(j,k)=1\end{subarray}}\left(\frac{1}{1-y^{j}z^{k}}\right)^{\frac{1}{ j^{a}k^{b}}}=\exp\left\{\sum_{n=1}^{\infty}\left(\sum_{m=1}^{n}\frac{y^{m}}{m^{a}} \right)\frac{z^{n}}{n^{b}}\right\}\quad if\quad a+b=1. \tag{3.1}\]
As with our earlier exploits into the \(2D\) first quadrant case, for the present result we take some simple example cases where new and interesting results arise.
So, let us take the case where \(a=0,b=1\), giving us
\[\prod_{\begin{subarray}{c}j,k\geq 1\\ j\leq k;\,(j,k)=1\end{subarray}}\left(\frac{1}{1-y^{j}z^{k}}\right)^{\frac{1}{k }}=\exp\left\{\sum_{n=1}^{\infty}\left(\sum_{m=1}^{n}y^{m}\right)\frac{z^{n}}{ n}\right\}\] \[=\exp\left\{\sum_{n=1}^{\infty}\left(y\frac{1-y^{n}}{1-y}\right) \frac{z^{n}}{n}\right\}=\exp\left\{\frac{y}{1-y}\log\left(\frac{1-yz}{1-z} \right)\right\}.\]
So, we arrive then at the following pair of equivalent results,
\[\prod_{\begin{subarray}{c}j,k\geq 1\\ j\leq k;\,(j,k)=1\end{subarray}}\left(\frac{1}{1-y^{j}z^{k}}\right)^{\frac{1}{ k}}=\left(\frac{1-yz}{1-z}\right)^{\frac{y}{1-y}}, \tag{3.2}\]
and
\[\prod_{\begin{subarray}{c}j,k\geq 1\\ j\leq k;\,(j,k)=1\end{subarray}}\left(1-y^{j}z^{k}\right)^{\frac{1}{k}}=\left( \frac{1-z}{1-yz}\right)^{\frac{y}{1-y}}. \tag{3.3}\]
From here, multiply both sides of (3.2) and the case of (3.3) with \(y\mapsto y^{2}\) and \(z\mapsto z^{2}\) to get,
\[\prod_{\begin{subarray}{c}j,k\geq 1;\,j\leq k\\ gcd(j,k)=1\end{subarray}}\left(1+y^{j}z^{k}\right)^{\frac{1}{k}}=\left(\frac{1- yz}{1-z}\right)^{\frac{y}{1-y}}\left(\frac{1-z^{2}}{1-y^{2}z^{2}}\right)^{\frac{y^{2} }{1-y^{2}}}. \tag{3.4}\]
Obviously, multiplying both sides of (3.3) and (3.4) together simply restates (3.3) with \(y\mapsto y^{2}\) and \(z\mapsto z^{2}\).
Particular cases: \(y=\frac{1}{2}\) gives us from (3.3) and (3.4) the remarkable two results,
\[\prod_{\begin{subarray}{c}j,k\geq 1;\,j\leq k\\ gcd(j,k)=1\end{subarray}}\left(1-\frac{z^{k}}{2^{j}}\right)^{\frac{1}{k}}=\frac {2-2z}{2-z}=1-\frac{z}{2}-\frac{z^{2}}{4}-\frac{z^{3}}{8}-\frac{z^{4}}{16}- \frac{z^{5}}{32}-\ldots\]
\[=\left(1-\frac{z}{2}\right)\]
\[\sqrt{\left(1-\frac{z^{2}}{2^{1}}\right)}\]
\[\sqrt[3]{\left(1-\frac{z^{3}}{2^{1}}\right)\left(1-\frac{z^{3}}{2^{2}}\right)}\]

\[\sqrt[4]{\left(1-\frac{z^{4}}{2^{1}}\right)\left(1-\frac{z^{4}}{2^{3}}\right)}\]
\[\sqrt[5]{\left(1-\frac{z^{5}}{2^{1}}\right)\left(1-\frac{z^{5}}{2^{2}}\right) \left(1-\frac{z^{5}}{2^{3}}\right)\left(1-\frac{z^{5}}{2^{4}}\right)}\]
\[\sqrt[6]{\left(1-\frac{z^{6}}{2^{1}}\right)\left(1-\frac{z^{6}}{2^{5}}\right)}\]
\[\vdots,\]
\[\prod_{\begin{subarray}{c}j,k\geq 1;j\leq k\\ gcd(j,k)=1\end{subarray}}\left(1+\frac{z^{k}}{2^{j}}\right)^{\frac{1}{k}}=\frac{ 1-\frac{z}{2}}{1-z}\sqrt[3]{\frac{1-z^{2}}{1-\frac{z^{2}}{4}}}=1+\frac{z}{2}+ \frac{z^{2}}{4}+\frac{3z^{3}}{8}+\frac{z^{4}}{4}+\frac{5z^{5}}{16}+\ldots\]
\[=\left(1+\frac{z}{2}\right)\]
\[\sqrt{\left(1+\frac{z^{2}}{2^{1}}\right)}\]
\[\sqrt[3]{\left(1+\frac{z^{3}}{2^{1}}\right)\left(1+\frac{z^{3}}{2^{2}}\right)}\]
\[\sqrt[4]{\left(1+\frac{z^{4}}{2^{1}}\right)\left(1+\frac{z^{4}}{2^{3}}\right)}\]
\[\sqrt[5]{\left(1+\frac{z^{5}}{2^{1}}\right)\left(1+\frac{z^{5}}{2^{2}}\right) \left(1+\frac{z^{5}}{2^{3}}\right)\left(1+\frac{z^{5}}{2^{4}}\right)}\]
\[\sqrt[6]{\left(1+\frac{z^{6}}{2^{1}}\right)\left(1+\frac{z^{6}}{2^{5}}\right)}\]
\[\vdots.\]
These two equations can be easily verified on a calculating engine like Mathematica or WolframAlpha by expanding each side into its Taylor series around \(z=0\) and comparing coefficients of like powers of \(z\). Next, take the cases of (3.3) and (3.4) with \(y=2\), both of which converge if \(|z|<\frac{1}{2}\); then, after a slight adjustment to both sides, we have,
\[\prod_{\begin{subarray}{c}j,k\geq 1;j<k\\ gcd(j,k)=1\end{subarray}}\left(1-2^{j}z^{k}\right)^{\frac{1}{k}}=1-\frac{z^{2}} {(1-z)^{2}}=1-z^{2}-2z^{3}-3z^{4}-4z^{5}-\ldots-nz^{n+1}-\ldots\]
\[=\sqrt{\left(1-2^{1}z^{2}\right)}\]
\[\sqrt[3]{\left(1-2^{1}z^{3}\right)\left(1-2^{2}z^{3}\right)}\]
\[\sqrt[4]{\left(1-2^{1}z^{4}\right)\left(1-2^{3}z^{4}\right)}\]
\[\sqrt[5]{\left(1-2^{1}z^{5}\right)\left(1-2^{2}z^{5}\right)\left(1-2^{3}z^{5} \right)\left(1-2^{4}z^{5}\right)}\]
\[\sqrt[6]{\left(1-2^{1}z^{6}\right)\left(1-2^{5}z^{6}\right)}\]
\[\sqrt[7]{\left(1-2^{1}z^{7}\right)\left(1-2^{2}z^{7}\right)\left(1-2^{3}z^{7} \right)\left(1-2^{4}z^{7}\right)\left(1-2^{5}z^{7}\right)\left(1-2^{6}z^{7} \right)}\]
\[\vdots,\]
which is also easy to verify on a calculating engine term by term from the power series of each side. The notably simple coefficients make this result somewhat tantalizing, as there seems no obvious reason for such coefficients to come out of the products of binomial series roots.
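A sketch of such a term-by-term verification (sympy; the truncation order is arbitrary), expanding only the finite part of the product that affects coefficients below the chosen order:

```python
from math import gcd
from sympy import symbols, series, Rational

z = symbols('z')
ORDER = 8

# finite truncation of the product over visible points with j < k
lhs = 1
for k in range(2, ORDER + 1):
    for j in range(1, k):
        if gcd(j, k) == 1:
            lhs *= (1 - 2**j * z**k) ** Rational(1, k)

rhs = 1 - z**2 / (1 - z)**2
# both sides agree to the chosen order
assert series(lhs - rhs, z, 0, ORDER).removeO() == 0
```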
We remark at this juncture that equation (3.3) and its companion (3.4) are amenable to applying the limit as \(y\to 1\). In fact we have as follows that,
\[\lim_{y\to 1}\left(\frac{1-z}{1-yz}\right)^{\frac{y}{1-y}}=e^{\frac{z}{z-1}}\]
and also from considering equation (3.4) there is the limit, easily evaluated,
\[\lim_{y\to 1}\left(\frac{1-yz}{1-z}\right)^{\frac{y}{1-y}}\left(\frac{1-z^{2}}{1- y^{2}z^{2}}\right)^{\frac{y^{2}}{1-y^{2}}}=e^{\frac{z}{1-z^{2}}}.\]
Therefore, applying these two limits to equations (3.3) and (3.4) respectively we obtain the two interesting results (3.5) and (3.6) given here.
\[\prod_{k=1}^{\infty}\left(1-z^{k}\right)^{\frac{\varphi(k)}{k}}=e^{\frac{z}{z -1}}=\sum_{k=0}^{\infty}\frac{\alpha(k)z^{k}}{k!} \tag{3.5}\]
\[=1-\frac{z}{1!}-\frac{z^{2}}{2!}-\frac{z^{3}}{3!}+\frac{z^{4}}{4!}+\frac{19z^ {5}}{5!}+\frac{151z^{6}}{6!}+\frac{1091z^{7}}{7!}\]
\[+\frac{7841z^{8}}{8!}+\frac{56519z^{9}}{9!}+\frac{396271z^{10}}{10!}+O(z^{11}),\]
where \(\varphi(k)\) is the Euler totient function, the number of positive integers less than and coprime to \(k\). (3.5) demonstrates that the sequence \(\alpha(k)\) has the exponential generating function \(e^{\frac{z}{z-1}}\); the first several coefficients \(\alpha(k)\) can be read off the series above.
Amazingly \(\gcd(\alpha(k),k!)=1\), for all values of \(k\) up to \(34\), and mostly beyond that, and \(\alpha(k)\equiv 1\ or\ 9\ (mod\ 10)\), and also the recurrence relation
\[\alpha(n)+(n-1)(n-2)\,\alpha(n-2)=(2n-3)\,\alpha(n-1)\]
holds. (See OEIS sequence A293116 [26]) This recurrence relation allows us to write continued fractions for the ratios \(\alpha(n+1)/\alpha(n)\).
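A quick consistency check of the recurrence against the exponential generating function, as a sympy sketch assuming nothing beyond the EGF and the recurrence just stated:

```python
from math import factorial
from sympy import symbols, series, exp

z = symbols('z')
N = 12

# alpha(k) read off the exponential generating function e^{z/(z-1)}
egf = series(exp(z / (z - 1)), z, 0, N).removeO()
alpha = [egf.coeff(z, k) * factorial(k) for k in range(N)]

# the same sequence from the three-term recurrence
a = [1, -1]
for n in range(2, N):
    a.append((2 * n - 3) * a[n - 1] - (n - 1) * (n - 2) * a[n - 2])
assert a == alpha
```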
\[\prod_{k=1}^{\infty}\left(1+z^{k}\right)^{\frac{\varphi(k)}{k}}=e^{\frac{z}{1-z^{2}}}=\sum_{k=0}^{\infty}\frac{\beta(k)z^{k}}{k!}\]
\[=1+\frac{z}{1!}+\frac{z^{2}}{2!}+\frac{7z^{3}}{3!}+\frac{25z^{4}}{4!}+\frac{18 1z^{5}}{5!}+\frac{1201z^{6}}{6!}\]
\[+\frac{10291z^{7}}{7!}+\frac{97777z^{8}}{8!}+\frac{202709z^{9}}{9!}+O(z^{10}),\]
where again, \(\varphi(k)\) is the Euler totient function.
Next we take (3.1) with the case that \(a=1\) and \(b=0\), so then
\[\prod_{\begin{subarray}{c}j,k\geq 1\\ j\leq k,\,(j,k)=1\end{subarray}}\left(\frac{1}{1-y^{j}z^{k}}\right)^{\frac{1} {j}}=\exp\left\{\sum_{n=1}^{\infty}\left(\sum_{m=1}^{n}\frac{y^{m}}{m}\right) z^{n}\right\}\]
\[=\exp\left\{\frac{1}{1-z}\sum_{n=1}^{\infty}\frac{y^{n}z^{n}}{n}\right\}=\exp \left\{\frac{1}{1-z}\log\left(\frac{1}{1-yz}\right)\right\}.\]
This leads us to establish that
\[\prod_{\begin{subarray}{c}j,k\geq 1\\ j\leq k;\,(j,k)=1\end{subarray}}\left(\frac{1}{1-y^{j}z^{k}}\right)^{\frac{1}{j}}= \left(\frac{1}{1-yz}\right)^{\frac{1}{1-z}}, \tag{3.7}\]
which is equivalent to
\[\prod_{\begin{subarray}{c}j,k\geq 1\\ j\leq k;\,(j,k)=1\end{subarray}}\left(1-y^{j}z^{k}\right)^{\frac{1}{j}}=(1-yz) ^{\frac{1}{1-z}}\,. \tag{3.8}\]
From multiplying both sides of (3.7) in which \(y\mapsto y^{2}\) and \(z\mapsto z^{2}\) with both sides of (3.8) we obtain
\[\prod_{\begin{subarray}{c}j,k\geq 1\\ j\leq k;\,(j,k)=1\end{subarray}}\left(1+y^{j}z^{k}\right)^{\frac{1}{j}}= \frac{(1-y^{2}z^{2})^{\frac{1}{1-z^{2}}}}{(1-yz)^{\frac{1}{1-z}}}. \tag{3.9}\]
Particular cases:
\(z=\frac{1}{2}\) gives us from (3.8) and (3.9) the remarkable result that
\[\prod_{\begin{subarray}{c}j,k\geq 1;\,j\leq k\\ gcd(j,k)=1\end{subarray}}\left(1-\frac{y^{j}}{2^{k}}\right)^{\frac{1}{j}}=\left(1-\frac{y}{2}\right)^{2}=1-y+\frac{y^{2}}{4}\]
\[=\left(1-\frac{y^{1}}{2^{1}}\right)\]
\[\left(1-\frac{y^{1}}{2^{2}}\right)\]
\[\left(1-\frac{y^{1}}{2^{3}}\right)\sqrt{\left(1-\frac{y^{2}}{2^{3}}\right)}\]
\[\left(1-\frac{y^{1}}{2^{4}}\right)\sqrt[3]{\left(1-\frac{y^{3}}{2^{4}}\right)}\]
\[\left(1-\frac{y^{1}}{2^{5}}\right)\sqrt{\left(1-\frac{y^{2}}{2^{5}}\right)} \sqrt[3]{\left(1-\frac{y^{3}}{2^{5}}\right)}\sqrt[4]{\left(1-\frac{y^{4}}{2^ {5}}\right)}\]
\[\left(1-\frac{y^{1}}{2^{6}}\right)\sqrt[5]{\left(1-\frac{y^{5}}{2^{6}}\right)}\]
and the result,
\[\prod_{\begin{subarray}{c}j,k\geq 1;\,j\leq k\\ gcd(j,k)=1\end{subarray}}\left(1+\frac{y^{j}}{2^{k}}\right)^{\frac{1}{j}}= \frac{\sqrt[3]{(4-y^{2})^{4}}}{\sqrt[3]{4}(2-y)^{2}}=1+y+\frac{5y^{2}}{12}+ \frac{y^{3}}{6}+\frac{11y^{4}}{144}+\frac{5y^{5}}{144}+O(y^{6})\]
\[=\left(1+\frac{y^{1}}{2^{1}}\right)\] \[\left(1+\frac{y^{1}}{2^{2}}\right)\] \[\left(1+\frac{y^{1}}{2^{3}}\right)\sqrt{\left(1+\frac{y^{2}}{2^{3}}\right)}\] \[\left(1+\frac{y^{1}}{2^{4}}\right)\sqrt[3]{\left(1+\frac{y^{3}}{2^{4}}\right)}\] \[\left(1+\frac{y^{1}}{2^{5}}\right)\sqrt{\left(1+\frac{y^{2}}{2^{5}}\right)}\sqrt[3]{\left(1+\frac{y^{3}}{2^{5}}\right)}\sqrt[4]{\left(1+\frac{y^{4}}{2^{5}}\right)}\] \[\left(1+\frac{y^{1}}{2^{6}}\right)\sqrt[5]{\left(1+\frac{y^{5}}{2^{6}}\right)}\] \[\vdots\,.\]
These two equations can be verified on a calculating engine like Mathematica or WolframAlpha by expanding each side into its Taylor series around \(y=0\) and comparing coefficients of like powers of \(y\). However, the calculation is an infinite series for each coefficient, unlike in the previous examples, where it is a finite sum.
## 4. Deriving 3D VPV identities in square pyramid regions
We start by considering the infinite inverted pyramid with square layered arrays of lattice point vectors as per the following diagram, with VPVs bolded.
\[\text{[Diagram of the inverted square pyramid of lattice point vectors, with the VPVs in bold.]}\tag{4.1}\]
From this we create a \(3D\) summation over integer co-ordinates in the above lattice point vectors. We consider the sum,
\[\sum_{n=1}^{\infty}\left(\sum_{l=1}^{n}\frac{x^{l}}{l^{a}}\right) \left(\sum_{m=1}^{n}\frac{y^{m}}{m^{b}}\right)\frac{z^{n}}{n^{c}}\] \[\qquad=\left(\frac{x^{1}}{1^{a}}\right)\left(\frac{y^{1}}{1^{b}} \right)\frac{z^{1}}{1^{c}}\] \[\qquad+\left(\frac{x^{1}}{1^{a}}+\frac{x^{2}}{2^{a}}\right) \left(\frac{y^{1}}{1^{b}}+\frac{y^{2}}{2^{b}}\right)\frac{z^{2}}{2^{c}}\] \[\qquad+\left(\frac{x^{1}}{1^{a}}+\frac{x^{2}}{2^{a}}+\frac{x^{3}} {3^{a}}\right)\left(\frac{y^{1}}{1^{b}}+\frac{y^{2}}{2^{b}}+\frac{y^{3}}{3^{b} }\right)\frac{z^{3}}{3^{c}}\]
\[+\frac{x^{1}y^{1}z^{2}}{1^{a}1^{b}2^{c}}+\frac{x^{1}y^{2}z^{2}}{1^{a}2^{b}2^{c}}+\frac{x^{2}y^{1}z^{2}}{2^{a}1^{b}2^{c}}+\frac{x^{2}y^{2}z^{2}}{2^{a}2^{b}2^{c}}+\cdots\]
\[=\sum_{\begin{subarray}{c}l,m,n\geq 1\\ l,m\leq n;\gcd(l,m,n)=1\end{subarray}}\frac{1}{(l^{a}m^{b}n^{c})}\sum_{h=1}^{\infty}\frac{(x^{l}y^{m}z^{n})^{h}}{h^{a+b+c}}\] \[=\sum_{\begin{subarray}{c}l,m,n\geq 1\\ l,m\leq n;\gcd(l,m,n)=1\end{subarray}}\frac{1}{(l^{a}m^{b}n^{c})}\log\left(\frac{1}{1-x^{l}y^{m}z^{n}}\right)\quad if\quad a+b+c=1.\]
Therefore, we have shown that if \(a+b+c=1\) then
\[\sum_{n=1}^{\infty}\left(\sum_{l=1}^{n}\frac{x^{l}}{l^{a}}\right)\left(\sum_{m=1}^{n}\frac{y^{m}}{m^{b}}\right)\frac{z^{n}}{n^{c}}=\sum_{\begin{subarray}{c}l,m,n\geq 1\\ l,m\leq n;\gcd(l,m,n)=1\end{subarray}}\frac{1}{(l^{a}m^{b}n^{c})}\log\left(\frac{1}{1-x^{l}y^{m}z^{n}}\right).\]
Exponentiating both sides gives us the \(3D\) "pyramid VPV identity".
The identity is summarized in the
**Theorem 4.1**.: _The \(3D\) first hyperquadrant pyramid VPV identity. If \(|x|,|y|,|z|<1\), with \(a+b+c=1\),_
\[\prod_{\begin{subarray}{c}l,m,n\geq 1\\ l,m\leq n;\gcd(l,m,n)=1\end{subarray}}\left(\frac{1}{1-x^{l}y^{m}z^{n}}\right)^{\frac{1}{l^{a}m^{b}n^{c}}}=\exp\left\{\sum_{n=1}^{\infty}\left(\sum_{l=1}^{n}\frac{x^{l}}{l^{a}}\right)\left(\sum_{m=1}^{n}\frac{y^{m}}{m^{b}}\right)\frac{z^{n}}{n^{c}}\right\}. \tag{4.2}\]
As we did for the \(2D\) particular cases, we can examine some obvious example corollaries arising from this theorem. Firstly, take the case where \(a=b=0,c=1\), so then,
\[\prod_{\begin{subarray}{c}l,m,n\geq 1\\ l,m\leq n;\gcd(l,m,n)=1\end{subarray}}\left(\frac{1}{1-x^{l}y^{m}z^{n}}\right) ^{\frac{1}{n}}=\exp\left\{\sum_{n=1}^{\infty}\left(\sum_{l=1}^{n}x^{l}\right) \left(\sum_{m=1}^{n}y^{m}\right)\frac{z^{n}}{n}\right\}\]
\[=\exp\left\{\sum_{n=1}^{\infty}xy\left(\frac{1-x^{n}}{1-x}\right)\left(\frac{ 1-y^{n}}{1-y}\right)\frac{z^{n}}{n}\right\}\]
\[=\exp\left\{\frac{xy}{(1-x)(1-y)}\log\left(\frac{(1-xz)(1-yz)}{(1-z)(1-xyz)} \right)\right\},\]
which brings us after exponentiating both sides to a set of \(3D\) infinite products. So, we have
\[\prod_{\begin{subarray}{c}l,m,n\geq 1\\ l,m\leq n;\gcd(l,m,n)=1\end{subarray}}\left(\frac{1}{1-x^{l}y^{m}z^{n}}\right) ^{\frac{1}{n}}=\left(\frac{(1-xz)(1-yz)}{(1-z)(1-xyz)}\right)^{\frac{xy}{(1-x)( 1-y)}}, \tag{4.3}\]
and the equivalent identity,
\[\prod_{\begin{subarray}{c}l,m,n\geq 1\\ l,m\leq n;\gcd(l,m,n)=1\end{subarray}}\left(1-x^{l}y^{m}z^{n}\right)^{\frac{ 1}{n}}=\left(\frac{(1-z)(1-xyz)}{(1-xz)(1-yz)}\right)^{\frac{xy}{(1-x)(1-y)}}. \tag{4.4}\]
We see that (4.3) and (4.4) are generalizations of the 2D identities (3.2) and (3.3) from the previous section. Writing (4.4) in longhand, referencing the diagram (4.1), we see that
\[\left(\frac{(1-z)(1-xyz)}{(1-xz)(1-yz)}\right)^{\frac{xy}{(1-x)(1-y)}}\] \[=(1-xyz)\] \[\sqrt{(1-xyz^{2})(1-xy^{2}z^{2})(1-x^{2}yz^{2})}\] \[\sqrt[3]{(1-xyz^{3})(1-x^{2}yz^{3})(1-x^{3}yz^{3})(1-xy^{2}z^{3})}\] \[\sqrt[3]{(1-x^{2}y^{2}z^{3})(1-x^{3}y^{2}z^{3})(1-xy^{3}z^{3})(1-x^{2}y^{3}z^{3})}\] \[\sqrt[4]{(1-xyz^{4})(1-x^{2}yz^{4})(1-x^{3}yz^{4})(1-x^{4}yz^{4})}\] \[\sqrt[4]{(1-xy^{2}z^{4})(1-x^{3}y^{2}z^{4})}\] \[\sqrt[4]{(1-xy^{3}z^{4})(1-x^{2}y^{3}z^{4})(1-x^{3}y^{3}z^{4})(1-x^{4}y^{3}z^{4})}\] \[\sqrt[4]{(1-xy^{4}z^{4})(1-x^{3}y^{4}z^{4})}\] \[\sqrt[5]{(1-xyz^{5})(1-x^{2}yz^{5})(1-x^{3}yz^{5})(1-x^{4}yz^{5})(1-x^{5}yz^{5})}\] \[\sqrt[5]{(1-xy^{2}z^{5})(1-x^{2}y^{2}z^{5})(1-x^{3}y^{2}z^{5})(1-x^{4}y^{2}z^{5})(1-x^{5}y^{2}z^{5})}\] \[\sqrt[5]{(1-xy^{3}z^{5})(1-x^{2}y^{3}z^{5})(1-x^{3}y^{3}z^{5})(1-x^{4}y^{3}z^{5})(1-x^{5}y^{3}z^{5})}\] \[\sqrt[5]{(1-xy^{4}z^{5})(1-x^{2}y^{4}z^{5})(1-x^{3}y^{4}z^{5})(1-x^{4}y^{4}z^{5})(1-x^{5}y^{4}z^{5})}\] \[\sqrt[5]{(1-xy^{5}z^{5})(1-x^{2}y^{5}z^{5})(1-x^{3}y^{5}z^{5})(1-x^{4}y^{5}z^{5})}\] etc.
This is easily verified on a calculating application if expanded on both sides as power series in \(z\).
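For instance, the following Mathematica sketch (the names `nmax`, `lhs`, `rhs` are ours, not from the original text) compares the two sides of (4.3); truncating the pyramid product at \(n\leq 4\) is exact to \(O(z^{4})\), since the factor attached to \(n\) first contributes at order \(z^{n}\):

```
(* Sketch: truncated VPV pyramid product versus the closed form (4.3). *)
nmax = 4;
lhs = Product[If[l <= n && m <= n && GCD[l, m, n] == 1,
    (1 - x^l y^m z^n)^(-1/n), 1], {n, 1, nmax}, {l, 1, nmax}, {m, 1, nmax}];
rhs = ((1 - x z) (1 - y z)/((1 - z) (1 - x y z)))^(x y/((1 - x) (1 - y)));
Simplify[Normal[Series[lhs - rhs, {z, 0, nmax}]]]  (* expect 0 *)
```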
## 5. VPV identities in nD first hyperquadrant hyperpyramid regions
The \(n\) dimensional first hyperquadrant hyperpyramid VPV Identity is encoded in the following
**Theorem 5.1**.: _The \(nD\) first hyperquadrant hyperpyramid VPV identity. If \(i=1,2,3,...,n\) then for each \(x_{i}\in\mathbb{C}\) such that \(|x_{i}|<1\) and \(b_{i}\in\mathbb{C}\) such that \(\sum_{i=1}^{n}b_{i}=1\),_
\[\prod_{\begin{subarray}{c}\gcd(a_{1},a_{2},...,a_{n})=1\\ a_{1},a_{2},...,a_{n-1}\leq a_{n}\\ a_{1},a_{2},...,a_{n}\geq 1\end{subarray}}\left(\frac{1}{1-{x_{1}}^{a_{1}}{x_{2}}^{a_{2}}{x_{3}}^{a_{3}}\cdots{x_{n}}^{a_{n}}}\right)^{\frac{1}{a_{1}^{b_{1}}a_{2}^{b_{2}}a_{3}^{b_{3}}\cdots a_{n}^{b_{n}}}}\] \[=\exp\left\{\sum_{k=1}^{\infty}\left(\sum_{j=1}^{k}\frac{{x_{1}}^{j}}{j^{b_{1}}}\right)\left(\sum_{j=1}^{k}\frac{{x_{2}}^{j}}{j^{b_{2}}}\right)\left(\sum_{j=1}^{k}\frac{{x_{3}}^{j}}{j^{b_{3}}}\right)\cdots\left(\sum_{j=1}^{k}\frac{{x_{n-1}}^{j}}{j^{b_{n-1}}}\right)\frac{{x_{n}}^{k}}{k^{b_{n}}}\right\}. \tag{5.1}\]
This result is quite straightforward to prove using the technique of our two previous sections. It was also given in Campbell [19] by summing on the VPVs in the \(n\)-space hyperpyramid, defined by the inequalities
\[x_{1}<x_{n},x_{2}<x_{n},x_{3}<x_{n},...,x_{n-1}<x_{n} \tag{5.2}\]
in the first \(n\)-space hyperquadrant, and applying the following
**Lemma 5.1**.: _Consider an infinite region raying out of the origin in any Euclidean vector space. The set of all lattice point vectors apart from the origin in that region is precisely the set of positive integer multiples of the VPVs in that region._
The corresponding theorem from Campbell [14] was summed very simply over all lattice point vectors in the first hyperquadrant.
Further consequences of the above theorem are given as follows.
The 2D case of theorem 5.1 is
**Corollary 5.1**.: _If \(|yz|\) and \(|z|<1\) and \(s+t=1\) then,_
\[\prod_{\begin{subarray}{c}(a,b)=1\\ a<b\\ a\geq 0,b\geq 1\end{subarray}}\left(\frac{1}{1-y^{a}z^{b}}\right)^{\frac{1}{a^{s}b^{t}}} \tag{5.3}\]
\[=\exp\left\{\frac{z^{1}}{1^{t}}+\left(1+\frac{y^{1}}{1^{s}}\right)\frac{z^{2} }{2^{t}}+\left(1+\frac{y^{1}}{1^{s}}+\frac{y^{2}}{2^{s}}\right)\frac{z^{3}}{3^ {t}}+\cdots\right\}\]
The 3D case of theorem 5.1 is
**Corollary 5.2**.: _If \(|xyz|\), \(|yz|\) and \(|z|<1\) and \(r+s+t=1\) then,_
\[\prod_{\begin{subarray}{c}(a,b,c)=1\\ a,b<c\\ a,b\geq 0,c\geq 1\end{subarray}}\left(\frac{1}{1-x^{a}y^{b}z^{c}}\right)^{ \frac{1}{a^{r}b^{s}c^{t}}} \tag{5.4}\]
\[=\exp\left\{\frac{z^{1}}{1^{t}}+\left(1+\frac{x^{1}}{1^{r}}\right)\left(1+ \frac{y^{1}}{1^{s}}\right)\frac{z^{2}}{2^{t}}+\left(1+\frac{x^{1}}{1^{r}}+ \frac{x^{2}}{2^{r}}\right)\left(1+\frac{y^{1}}{1^{s}}+\frac{y^{2}}{2^{s}} \right)\frac{z^{3}}{3^{t}}+\cdots\right\}\]
The 4D case of theorem 5.1 is
**Corollary 5.3**.: _If \(|wxyz|\), \(|xyz|\), \(|yz|\) and \(|z|<1\) and \(r+s+t+u=1\) then,_
\[\prod_{\begin{subarray}{c}(a,b,c,d)=1\\ a,b,c<d\\ a,b,c\geq 0,d\geq 1\end{subarray}}\left(\frac{1}{1-w^{a}x^{b}y^{c}z^{d}}\right)^{\frac{1}{a^{r}b^{s}c^{t}d^{u}}}=\exp\left\{\mathrm{P}_{3}(r,w;s,x;t,y;u,z)\right\} \tag{5.5}\]
_where \(\mathrm{P}_{3}\), is a 4D hyperpyramid function,_
\[\mathrm{P}_{3}(r,w;s,x;t,y;u,z)=\frac{z^{1}}{1^{u}}+\left(1+ \frac{w^{1}}{1^{r}}\right)\left(1+\frac{x^{1}}{1^{s}}\right)\left(1+\frac{y^{ 1}}{1^{t}}\right)\frac{z^{2}}{2^{u}}\\ +\left(1+\frac{w^{1}}{1^{r}}+\frac{w^{2}}{2^{r}}\right)\left(1+ \frac{x^{1}}{1^{s}}+\frac{x^{2}}{2^{s}}\right)\left(1+\frac{y^{1}}{1^{t}}+ \frac{y^{2}}{2^{t}}\right)\frac{z^{3}}{3^{u}}+\cdots\]
The approach we adopt to give the reader an intuitive sense for these identities is to state corollaries and then examples from them. The 2D case through to the 5D case of (5.1) are given in the following examples of the _square hyperpyramid identity_.
**Corollary 5.4**.: _For \(|y|,|z|<1\),_
\[\prod_{\begin{subarray}{c}(a,b)=1\\ a<b\\ a\geq 0,b\geq 1\end{subarray}}\left(\frac{1}{1-y^{a}z^{b}}\right)^{\frac{1}{b}}= \left(\frac{1-yz}{1-z}\right)^{\frac{1}{1-y}} \tag{5.6}\]
\[=1+\frac{z}{1!}+\left|\begin{matrix}1&-1\\ \frac{1-y^{2}}{1-y}&1\end{matrix}\right|\frac{z^{2}}{2!}+\left|\begin{matrix}1&-1&0\\ \frac{1-y^{2}}{1-y}&1&-2\\ \frac{1-y^{3}}{1-y}&\frac{1-y^{2}}{1-y}&1\end{matrix}\right|\frac{z^{3}}{3!}+\left|\begin{matrix}1&-1&0&0\\ \frac{1-y^{2}}{1-y}&1&-2&0\\ \frac{1-y^{3}}{1-y}&\frac{1-y^{2}}{1-y}&1&-3\\ \frac{1-y^{4}}{1-y}&\frac{1-y^{3}}{1-y}&\frac{1-y^{2}}{1-y}&1\end{matrix}\right|\frac{z^{4}}{4!}+etc.\]
In this case it is fairly easy to find the Taylor coefficients for the right side function of (5.6). Hence we get a closed form evaluation of the determinant coefficients. In Mathematica and WolframAlpha one easily sees that the Taylor series is
\[\left(\frac{1-yz}{1-z}\right)^{\frac{1}{1-y}}=1+z+(y+2)\frac{z^{2}}{2!}+(2y^{2 }+5y+6)\frac{z^{3}}{3!}+(6y^{3}+17y^{2}+26y+24)\frac{z^{4}}{4!}\]
\[+(24y^{4}+74y^{3}+129y^{2}+154y+120)\frac{z^{5}}{5!}+O(z^{6})\]
and that the expansion is encapsulated by \(\sum_{n=0}^{\infty}c_{n}z^{n}\) where \(c_{0}=1\), \(c_{1}=1\) with the recurrence
\[nyc_{n}+(n+2)c_{n+2}=(2+n+y+ny)c_{n+1}.\]
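A quick Mathematica sketch of this recurrence (the function name `c` is our own choice):

```
(* c[n] = coefficient of z^n in ((1 - y z)/(1 - z))^(1/(1 - y)). *)
c[n_] := SeriesCoefficient[((1 - y z)/(1 - z))^(1/(1 - y)), {z, 0, n}];
Table[Simplify[n y c[n] + (n + 2) c[n + 2] - (2 + n + y + n y) c[n + 1]],
  {n, 0, 6}]  (* expect a list of zeros *)
```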
Incidentally, also in Mathematica and WolframAlpha one easily sees, for example, that the code

```
Det[{{1, -1, 0, 0},
     {(1 - y^2)/(1 - y), 1, -2, 0},
     {(1 - y^3)/(1 - y), (1 - y^2)/(1 - y), 1, -3},
     {(1 - y^4)/(1 - y), (1 - y^3)/(1 - y), (1 - y^2)/(1 - y), 1}}]
```
nicely verifies the coefficient given by
\[\left|\begin{matrix}1&-1&0&0\\ \frac{1-y^{2}}{1-y}&1&-2&0\\ \frac{1-y^{3}}{1-y}&\frac{1-y^{2}}{1-y}&1&-3\\ \frac{1-y^{4}}{1-y}&\frac{1-y^{3}}{1-y}&\frac{1-y^{2}}{1-y}&1\end{matrix}\right| =6y^{3}+17y^{2}+26y+24.\]
It is interesting to compare our identity (4.3) given earlier in this paper with the following result.
**Corollary 5.5**.: _For each of \(|x|,|y|,|z|<1,\)_
\[\prod_{\begin{subarray}{c}(a,b,c)=1\\ a,b<c\\ a,b\geq 0,c>0\end{subarray}}\left(\frac{1}{1-x^{a}y^{b}z^{c}}\right)^{\frac{1} {c}}=\left(\frac{(1-xz)(1-yz)}{(1-z)(1-xyz)}\right)^{\frac{1}{(1-x)(1-y)}} \tag{5.7}\]
\[=1+\frac{z}{1!}+\left|\begin{matrix}1&-1\\ \frac{(1-x^{2})(1-y^{2})}{(1-x)(1-y)}&1\end{matrix}\right|\frac{z^{2}}{2!}+\left|\begin{matrix}1&-1&0\\ \frac{(1-x^{2})(1-y^{2})}{(1-x)(1-y)}&1&-2\\ \frac{(1-x^{3})(1-y^{3})}{(1-x)(1-y)}&\frac{(1-x^{2})(1-y^{2})}{(1-x)(1-y)}&1\end{matrix}\right|\frac{z^{3}}{3!}\]
\[+\left|\begin{matrix}1&-1&0&0\\ \frac{(1-x^{2})(1-y^{2})}{(1-x)(1-y)}&1&-2&0\\ \frac{(1-x^{3})(1-y^{3})}{(1-x)(1-y)}&\frac{(1-x^{2})(1-y^{2})}{(1-x)(1-y)}&1&-3\\ \frac{(1-x^{4})(1-y^{4})}{(1-x)(1-y)}&\frac{(1-x^{3})(1-y^{3})}{(1-x)(1-y)}&\frac{(1-x^{2})(1-y^{2})}{(1-x)(1-y)}&1\end{matrix}\right|\frac{z^{4}}{4!}+etc.\]
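One may sketch in Mathematica that these determinants reproduce the Taylor coefficients of the closed form in (5.7); the abbreviation `r` for the repeated matrix entry is our own:

```
(* Determinant coefficients of (5.7) against direct series coefficients. *)
r[k_] := (1 - x^k) (1 - y^k)/((1 - x) (1 - y));
rhs = ((1 - x z) (1 - y z)/((1 - z) (1 - x y z)))^(1/((1 - x) (1 - y)));
{Simplify[Det[{{1, -1}, {r[2], 1}}] - 2! SeriesCoefficient[rhs, {z, 0, 2}]],
 Simplify[Det[{{1, -1, 0}, {r[2], 1, -2}, {r[3], r[2], 1}}] -
   3! SeriesCoefficient[rhs, {z, 0, 3}]]}  (* expect {0, 0} *)
```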
**Corollary 5.6**.: _For each of \(|w|,|x|,|y|,|z|<1,\)_
\[\prod_{\begin{subarray}{c}(a,b,c,d)=1\\ a,b,c<d\\ a,b,c\geq 0,d>0\end{subarray}}\left(\frac{1}{1-w^{a}x^{b}y^{c}z^{d}}\right)^{ \frac{1}{d}}=\left(\frac{(1-wz)(1-xz)(1-yz)(1-wxyz)}{(1-z)(1-wxz)(1-wyz)(1-xyz)} \right)^{\frac{1}{(1-w)(1-x)(1-y)}}, \tag{5.8}\]
\[=1+\frac{z}{1!}+\left|\begin{matrix}1&-1\\ \frac{(1-w^{2})(1-x^{2})(1-y^{2})}{(1-w)(1-x)(1-y)}&1\end{matrix}\right|\frac{ z^{2}}{2!}\]
\[+\left|\begin{matrix}1&-1&0\\ \frac{(1-w^{2})(1-x^{2})(1-y^{2})}{(1-w)(1-x)(1-y)}&1&-2\\ \frac{(1-w^{3})(1-x^{3})(1-y^{3})}{(1-w)(1-x)(1-y)}&\frac{(1-w^{2})(1-x^{2})(1 -y^{2})}{(1-w)(1-x)(1-y)}&1\end{matrix}\right|\frac{z^{3}}{3!}\]
\[+\left|\begin{matrix}1&-1&0&0\\ \frac{(1-w^{2})(1-x^{2})(1-y^{2})}{(1-w)(1-x)(1-y)}&1&-2&0\\ \frac{(1-w^{3})(1-x^{3})(1-y^{3})}{(1-w)(1-x)(1-y)}&\frac{(1-w^{2})(1-x^{2})(1 -y^{2})}{(1-w)(1-x)(1-y)}&1&-3\\ \frac{(1-w^{4})(1-x^{4})(1-y^{4})}{(1-w)(1-x)(1-y)}&\frac{(1-w^{3})(1-x^{3})(1 -y^{3})}{(1-w)(1-x)(1-y)}&\frac{(1-w^{2})(1-x^{2})(1-y^{2})}{(1-w)(1-x)(1-y)}& 1\end{matrix}\right|\frac{z^{4}}{4!}+etc.\]
**Corollary 5.7**.: _For each of \(|v|,|w|,|x|,|y|,|z|<1,\)_
\[\prod_{\begin{subarray}{c}(a,b,c,d,e)=1\\ a,b,c,d<e\\ a,b,c,d\geq 0,e>0\end{subarray}}\left(\frac{1}{1-v^{a}w^{b}x^{c}y^{d}z^{e}}\right) ^{\frac{1}{e}} \tag{5.9}\]
\[=\left(\frac{(1-vz)(1-wz)(1-xz)(1-yz)}{(1-z)(1-vwz)(1-vxz)(1-vyz)}\right)^{ \frac{1}{(1-v)(1-w)(1-x)(1-y)}}\]
\[\times\left(\frac{(1-vwxz)(1-vwyz)(1-vxyz)(1-wxyz)}{(1-wxz)(1-wyz)(1-xyz)(1-vw xyz)}\right)^{\frac{1}{(1-v)(1-w)(1-x)(1-y)}}.\]
\[=1+\frac{z}{1!}+\left|\begin{matrix}1&-1\\ \frac{(1-v^{2})(1-w^{2})(1-x^{2})(1-y^{2})}{(1-v)(1-w)(1-x)(1-y)}&1\end{matrix} \right|\frac{z^{2}}{2!}\]
\[+\left|\begin{matrix}1&-1&0\\ \frac{(1-v^{2})(1-w^{2})(1-x^{2})(1-y^{2})}{(1-v)(1-w)(1-x)(1-y)}&1&-2\\ \frac{(1-v^{3})(1-w^{3})(1-x^{3})(1-y^{3})}{(1-v)(1-w)(1-x)(1-y)}&\frac{(1-v^ {2})(1-w^{2})(1-x^{2})(1-y^{2})}{(1-v)(1-w)(1-x)(1-y)}&1\end{matrix}\right| \frac{z^{3}}{3!}\]
\[+\left|\begin{matrix}1&-1&0&0\\ \frac{(1-v^{2})(1-w^{2})(1-x^{2})(1-y^{2})}{(1-v)(1-w)(1-x)(1-y)}&1&-2&0\\ \frac{(1-v^{3})(1-w^{3})(1-x^{3})(1-y^{3})}{(1-v)(1-w)(1-x)(1-y)}&\frac{(1-v^{2})(1-w^{2})(1-x^{2})(1-y^{2})}{(1-v)(1-w)(1-x)(1-y)}&1&-3\\ \frac{(1-v^{4})(1-w^{4})(1-x^{4})(1-y^{4})}{(1-v)(1-w)(1-x)(1-y)}&\frac{(1-v^{3})(1-w^{3})(1-x^{3})(1-y^{3})}{(1-v)(1-w)(1-x)(1-y)}&\frac{(1-v^{2})(1-w^{2})(1-x^{2})(1-y^{2})}{(1-v)(1-w)(1-x)(1-y)}&1\end{matrix}\right|\frac{z^{4}}{4!}\]
\[+etc.\]
## 6. 2D VPV identities for a z-axis symmetric extended triangle lattice
As we did in Section 3 of this paper, we again start with a simple \(2D\) summation. Consider an infinite extension of the inverted triangle \(2D\) lattice point vectors with the Visible Point Vectors bolded,
\[\begin{array}{ccccccccccc}\langle-5,5\rangle&\langle\mathbf{-4,5}\rangle&\langle\mathbf{-3,5}\rangle&\langle\mathbf{-2,5}\rangle&\langle\mathbf{-1,5}\rangle&\langle 0,5\rangle&\langle\mathbf{1,5}\rangle&\langle\mathbf{2,5}\rangle&\langle\mathbf{3,5}\rangle&\langle\mathbf{4,5}\rangle&\langle 5,5\rangle\\ &\langle-4,4\rangle&\langle\mathbf{-3,4}\rangle&\langle-2,4\rangle&\langle\mathbf{-1,4}\rangle&\langle 0,4\rangle&\langle\mathbf{1,4}\rangle&\langle 2,4\rangle&\langle\mathbf{3,4}\rangle&\langle 4,4\rangle&\\ &&\langle-3,3\rangle&\langle\mathbf{-2,3}\rangle&\langle\mathbf{-1,3}\rangle&\langle 0,3\rangle&\langle\mathbf{1,3}\rangle&\langle\mathbf{2,3}\rangle&\langle 3,3\rangle&&\\ &&&\langle-2,2\rangle&\langle\mathbf{-1,2}\rangle&\langle 0,2\rangle&\langle\mathbf{1,2}\rangle&\langle 2,2\rangle&&&\\ &&&&\langle\mathbf{-1,1}\rangle&\langle\mathbf{0,1}\rangle&\langle\mathbf{1,1}\rangle&&&&\\ &&&&&\langle 0,0\rangle&&&&&\end{array} \tag{6.1}\]
Next we create the following summation with the sum covering the above coordinates in infinite extended form.
\[\sum_{n=1}^{\infty}\left(\sum_{m=-n}^{n}\frac{y^{m}}{m^{a}}\right)\frac{z^{n} }{n^{b}}\]
\[=\left(\frac{y^{-1}}{(-1)^{a}}+1+\frac{y^{1}}{1^{a}}\right)\frac{z^{1}}{1^{b}}\]
\[+\left(\frac{y^{-2}}{(-2)^{a}}+\frac{y^{-1}}{(-1)^{a}}+1+\frac{y^{1}}{1^{a}}+ \frac{y^{2}}{2^{a}}\right)\frac{z^{2}}{2^{b}}\]
\[+\left(\frac{y^{-3}}{(-3)^{a}}+\frac{y^{-2}}{(-2)^{a}}+\frac{y^{-1}}{(-1)^{a}} +1+\frac{y^{1}}{1^{a}}+\frac{y^{2}}{2^{a}}+\frac{y^{3}}{3^{a}}\right)\frac{z^{ 3}}{3^{b}}\]
\[+\left(\frac{y^{-4}}{(-4)^{a}}+\frac{y^{-3}}{(-3)^{a}}+\frac{y^{-2}}{(-2)^{a}} +\frac{y^{-1}}{(-1)^{a}}+1+\frac{y^{1}}{1^{a}}+\frac{y^{2}}{2^{a}}+\frac{y^{3} }{3^{a}}+\frac{y^{4}}{4^{a}}\right)\frac{z^{4}}{4^{b}}\]
\[+\;etc.\]
\[=\frac{y^{-1}z^{1}}{(-1)^{a}1^{b}}+\frac{y^{0}z^{1}}{1\times 1^{b}}+\frac{y^{1}z^{1}}{1^{a}1^{b}}\]
\[+\frac{y^{-2}z^{2}}{(-2)^{a}2^{b}}+\frac{y^{-1}z^{2}}{(-1)^{a}2^{b}}+\frac{y^{0}z^{2}}{1\times 2^{b}}+\frac{y^{1}z^{2}}{1^{a}2^{b}}+\frac{y^{2}z^{2}}{2^{a}2^{b}}\]
\[+\frac{y^{-3}z^{3}}{(-3)^{a}3^{b}}+\frac{y^{-2}z^{3}}{(-2)^{a}3^{b}}+\frac{y^{-1}z^{3}}{(-1)^{a}3^{b}}+\frac{y^{0}z^{3}}{1\times 3^{b}}+\frac{y^{1}z^{3}}{1^{a}3^{b}}+\frac{y^{2}z^{3}}{2^{a}3^{b}}+\frac{y^{3}z^{3}}{3^{a}3^{b}}\]
\[+\frac{y^{-4}z^{4}}{(-4)^{a}4^{b}}+\frac{y^{-3}z^{4}}{(-3)^{a}4^{b}}+\frac{y^{-2}z^{4}}{(-2)^{a}4^{b}}+\frac{y^{-1}z^{4}}{(-1)^{a}4^{b}}+\frac{y^{0}z^{4}}{1\times 4^{b}}+\frac{y^{1}z^{4}}{1^{a}4^{b}}+\frac{y^{2}z^{4}}{2^{a}4^{b}}+\frac{y^{3}z^{4}}{3^{a}4^{b}}+\frac{y^{4}z^{4}}{4^{a}4^{b}}\]
\[+\;etc.\]
\[=\sum_{\begin{subarray}{c}|m|,n\geq 1;\,|m|\leq n\end{subarray}}\frac{y^{m}z^{n}}{m^{a}n^{b}}\]
\[=\sum_{\begin{subarray}{c}h,|j|,k\geq 1\\ |j|\leq k;\,(j,k)=1\end{subarray}}\frac{(y^{j}z^{k})^{h}}{h^{a+b}(j^{a}k^{b})}\]
\[=\sum_{\begin{subarray}{c}|j|,k\geq 1\\ |j|\leq k;\,(j,k)=1\end{subarray}}\frac{1}{(j^{a}k^{b})}\sum_{h=1}^{\infty} \frac{(y^{j}z^{k})^{h}}{h^{a+b}}\]
\[=\sum_{\begin{subarray}{c}|j|,k\geq 1\\ |j|\leq k;\,(j,k)=1\end{subarray}}\frac{1}{(j^{a}k^{b})}\log\left(\frac{1}{1-y^ {j}z^{k}}\right)\quad if\quad a+b=1.\]
Therefore, we have shown that
\[\sum_{n=1}^{\infty}\left(\sum_{m=-n}^{n}\frac{y^{m}}{m^{a}}\right)\frac{z^{n}}{n^{b}}=\sum_{\begin{subarray}{c}|j|,k\geq 1\\ |j|\leq k;\,(j,k)=1\end{subarray}}\frac{1}{(j^{a}k^{b})}\log\left(\frac{1}{1-y^{j}z^{k}}\right)\quad if\quad a+b=1.\]
Exponentiating both sides (and swapping sides) gives us the \(2D\) first extended inverted symmetric triangle VPV identity, where in this \(2D\) case the \(nD\) pyramid reduces to the form of a triangle shaped array of lattice point vectors having the \(z\)-axis as the axis of symmetry, and so we can state the
**Theorem 6.1**.: _The_ **2D** _vertical symmetry extended triangle VPV identity. For \(0<|yz|<1\), \(0<|z/y|<1\), \(0<|z|<1,\) with \(a+b=1\),_
\[\prod_{\begin{subarray}{c}|j|,k\geq 1\\ |j|\leq k;\,(j,k)=1\end{subarray}}\left(\frac{1}{1-y^{j}z^{k}}\right)^{\frac{ 1}{j^{a}k^{b}}}=\exp\left\{\sum_{n=1}^{\infty}\left(\sum_{m=-n}^{n}\frac{y^{m }}{m^{a}}\right)\frac{z^{n}}{n^{b}}\right\}\quad if\quad a+b=1. \tag{6.2}\]
As with our earlier exploits into the \(2D\) first quadrant case, for the present result we take some simple example cases where new and interesting results arise.
So, let us take the case where \(a=0,b=1\), giving us for \(0<|yz|<1\), \(0<|z/y|<1\), \(0<|z|<1\),
\[\prod_{\begin{subarray}{c}|j|,k\geq 1\\ |j|\leq k;\,(j,k)=1\end{subarray}}\left(\frac{1}{1-y^{j}z^{k}}\right)^{\frac{ 1}{k}}=\exp\left\{\sum_{n=1}^{\infty}\left(\sum_{m=-n}^{n}y^{m}\right)\frac{z ^{n}}{n}\right\}\]
\[=\exp\left\{\sum_{n=1}^{\infty}\left(\frac{y^{2n+1}-1}{y^{n}(y-1)}\right) \frac{z^{n}}{n}\right\}=\exp\left\{\frac{1}{1-y}\log\left(\frac{(1-yz)^{y}}{1 -z/y}\right)\right\}.\]
So, we arrive then at the following pair of equivalent results, for \(0<|yz|<1\), \(0<|z/y|<1\), \(0<|z|<1\),
\[\prod_{\begin{subarray}{c}|j|,k\geq 1\\ |j|\leq k;\,(j,k)=1\end{subarray}}\left(\frac{1}{1-y^{j}z^{k}}\right)^{\frac{ 1}{k}}=\left(\frac{(1-yz)^{y}}{1-z/y}\right)^{\frac{1}{1-y}}, \tag{6.3}\]
and
\[\prod_{\begin{subarray}{c}|j|,k\geq 1\\ |j|\leq k;\,(j,k)=1\end{subarray}}\left(1-y^{j}z^{k}\right)^{\frac{1}{k}}= \left(\frac{1-z/y}{(1-yz)^{y}}\right)^{\frac{1}{1-y}}. \tag{6.4}\]
From here, multiply both sides of (6.3) and the case of (6.4) with \(y\mapsto y^{2}\) and \(z\mapsto z^{2}\) to get,
\[\prod_{\begin{subarray}{c}|j|,k\geq 1\\ |j|\leq k;\,(j,k)=1\end{subarray}}\left(1+y^{j}z^{k}\right)^{\frac{1}{k}}= \left(\frac{(1-yz)^{y}}{1-z/y}\right)^{\frac{1}{1-y}}\left(\frac{1-(z/y)^{2}} {(1-(yz)^{2})^{y^{2}}}\right)^{\frac{1}{1-y^{2}}}. \tag{6.5}\]
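These can be sketched in Mathematica as follows (names `kmax`, `lhs`, `rhs` are ours). Note that the VPV \(\langle 0,1\rangle\) of diagram (6.1) contributes the \((1-z)^{-1}\) factor, which the condition `GCD[j, k] == 1` picks up automatically:

```
(* Sketch: truncated symmetric-triangle product against (6.3). *)
kmax = 5;
lhs = Product[If[Abs[j] <= k && GCD[j, k] == 1, (1 - y^j z^k)^(-1/k), 1],
   {k, 1, kmax}, {j, -kmax, kmax}];
rhs = ((1 - y z)^y/(1 - z/y))^(1/(1 - y));
Simplify[Normal[Series[lhs - rhs, {z, 0, kmax}]]]  (* expect 0 *)
```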
Particular cases:
\(y=\frac{1}{2}\) gives us from (6.4) and (6.5) the two results that
\[\prod_{\begin{subarray}{c}|j|,k\geq 1\\ |j|\leq k;\,(j,k)=1\end{subarray}}\left(1-\frac{z^{k}}{2^{j}}\right)^{\frac{-1}{k}}=\frac{1-z/2}{(1-2z)^{2}}\sqrt[4]{\left(\frac{1-4z^{2}}{1-z^{2}/4}\right)^{3}}\] \[=1+\frac{7z}{2}+\frac{19z^{2}}{4}+\frac{61z^{3}}{8}+\frac{117z^{4}}{8}+\frac{423z^{5}}{16}+\frac{4861z^{6}}{96}+\frac{18259z^{7}}{192}\] \[\qquad\qquad+\frac{140867z^{8}}{768}+\frac{538373z^{9}}{1536}+\frac{696379z^{10}}{1024}+O(z^{11})\]
\[=\frac{1}{\left(1-\frac{z}{2}\right)}\] \[\frac{1}{\sqrt{\left(1-2^{1}z^{2}\right)\left(1-\frac{z^{2}}{2^{1}}\right)}}\] \[\frac{1}{\sqrt[3]{\left(1-2^{2}z^{3}\right)\left(1-2^{1}z^{3}\right)\left(1-\frac{z^{3}}{2^{1}}\right)\left(1-\frac{z^{3}}{2^{2}}\right)}}\] \[\frac{1}{\sqrt[4]{\left(1-2^{3}z^{4}\right)\left(1-2^{1}z^{4}\right)\left(1-\frac{z^{4}}{2^{1}}\right)\left(1-\frac{z^{4}}{2^{3}}\right)}}\] \[\qquad\qquad\vdots\,,\]
\[\prod_{\begin{subarray}{c}|j|,k\geq 1\\ |j|\leq k;\,(j,k)=1\end{subarray}}\left(1+\frac{z^{k}}{2^{j}}\right)^{\frac{1}{k}}=\frac{2-z}{2-2z}\sqrt[3]{\frac{4-z^{2}}{4-4z^{2}}}\] \[=1+\frac{z}{2}+\frac{3z^{2}}{4}+\frac{5z^{3}}{8}+\frac{13z^{4}}{16}+\frac{23z^{5}}{32}+\frac{167z^{6}}{192}\] \[\qquad\qquad+\frac{305z^{7}}{384}+\frac{59z^{8}}{64}+\frac{659z^{9}}{768}+O(z^{10})\]
\[=\left(1+2z\right)\left(1+\frac{z}{2}\right)\] \[\sqrt{\left(1+2^{1}z^{2}\right)\left(1+\frac{z^{2}}{2^{1}}\right)}\] \[\sqrt[3]{\left(1+2^{2}z^{3}\right)\left(1+2^{1}z^{3}\right)\left(1+\frac{z^{3}}{2^{1}}\right)\left(1+\frac{z^{3}}{2^{2}}\right)}\] \[\sqrt[4]{\left(1+2^{3}z^{4}\right)\left(1+2^{1}z^{4}\right)\left(1+\frac{z^{4}}{2^{1}}\right)\left(1+\frac{z^{4}}{2^{3}}\right)}\]
\[\sqrt[5]{\left(1+2^{4}z^{5}\right)\left(1+2^{3}z^{5}\right)\left(1+2^{2}z^{5}\right)\left(1+2^{1}z^{5}\right)\left(1+\frac{z^{5}}{2^{1}}\right)\left(1+\frac{z^{5}}{2^{2}}\right)\left(1+\frac{z^{5}}{2^{3}}\right)\left(1+\frac{z^{5}}{2^{4}}\right)}\]
\[\sqrt[6]{\left(1+2^{5}z^{6}\right)\left(1+2^{1}z^{6}\right)\left(1+\frac{z^{6}}{2^{1}}\right)\left(1+\frac{z^{6}}{2^{5}}\right)}\] \[\qquad\qquad\vdots\]
These two equations can be easily verified on a calculating engine like Mathematica or WolframAlpha by expanding each side into its Taylor series around \(z=0\) and comparing coefficients of like powers of \(z\). Next, take the cases of (6.4) and (6.5) with \(y=2\), both of which converge if \(|z|<2\); then, after a slight adjustment to both sides by a factor of \(1-2z\), a further companion pair of identities follows.
We remark at this juncture that equation (6.4) and its reciprocal equation (6.5) are amenable to applying the limit as \(y\) approaches 1. In fact we have as follows that,
\[\lim_{y\to 1}\left(\frac{1-z}{1-yz}\right)^{\frac{y}{1-y}}=e^{\frac{z}{z-1}}\]
and also from considering equation (6.5) there is the limit, easily evaluated,
\[\lim_{y\to 1}\left(\frac{1-yz}{1-z}\right)^{\frac{y}{1-y}}\left(\frac{1-z^{2}}{1 -y^{2}z^{2}}\right)^{\frac{y^{2}}{1-y^{2}}}=e^{\frac{z}{1-z^{2}}}.\]
Therefore, applying these two limits to equations (6.4) and (6.5) respectively we obtain the two interesting results (6.6) and (6.7) given here.
\[\prod_{k=1}^{\infty}\left(1-z^{k}\right)^{\frac{\varphi(k)}{k}}=e^{ \frac{z}{z-1}}=\sum_{k=0}^{\infty}\frac{\alpha(k)z^{k}}{k!} \tag{6.6}\]
\[=1-\frac{z}{1!}-\frac{z^{2}}{2!}-\frac{z^{3}}{3!}+\frac{z^{4}}{4!}+\frac{19z^{ 5}}{5!}+\frac{151z^{6}}{6!}+\frac{1091z^{7}}{7!}\]
\[+\frac{7841z^{8}}{8!}+\frac{56519z^{9}}{9!}+\frac{396271z^{10}}{10!}+O(z^{11}),\]
demonstrating that the sequence \(\alpha(k)\) has the exponential generating function \(e^{\frac{z}{z-1}}\). Amazingly \(\gcd(\alpha(k),k!)=1\) for all values of \(k\) up to 34, and mostly beyond that; moreover \(\alpha(k)\equiv 1\) or \(9\)\ (\(mod\)\ \(10\)), and also the recurrence relation
\[\alpha(n)+(n-1)(n-2)\,\alpha(n-2)=(2n-3)\,\alpha(n-1)\]
holds (see OEIS sequence A293116 [26]). This recurrence relation allows us to write continued fractions for the ratios \(\alpha(n+1)/\alpha(n)\).
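Both the recurrence and the coprimality observation are quickly sketched in Mathematica (the name `alpha` is our own):

```
(* alpha[n] = n! times the n-th Taylor coefficient of Exp[z/(z - 1)]. *)
alpha[n_] := n! SeriesCoefficient[Exp[z/(z - 1)], {z, 0, n}];
Table[alpha[n] + (n - 1) (n - 2) alpha[n - 2] - (2 n - 3) alpha[n - 1],
  {n, 2, 12}]                         (* expect all zeros *)
Table[GCD[alpha[k], k!], {k, 2, 12}]  (* expect all ones *)
```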
\[\prod_{k=1}^{\infty}\left(1+z^{k}\right)^{\frac{\varphi(k)}{k}}=e^{\frac{z}{1-z^{2}}}=\sum_{k=0}^{\infty}\frac{\beta(k)z^{k}}{k!} \tag{6.7}\]
\[=1+\frac{z}{1!}+\frac{z^{2}}{2!}+\frac{7z^{3}}{3!}+\frac{25z^{4}}{4!}+\frac{18 1z^{5}}{5!}+\frac{1201z^{6}}{6!}\]
\[+\frac{10291z^{7}}{7!}+\frac{97777z^{8}}{8!}+\frac{1013545z^{9}}{9!}+O(z^{10}),\]
where \(\varphi(k)\) is the Euler totient function, the number of positive integers less than and coprime to \(k\).
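Both totient products are easy to sketch in Mathematica; truncating at \(k\leq\) `kmax` is exact to \(O(z^{\mathrm{kmax}})\), since the \(k\)th factor first contributes at order \(z^{k}\) (all names here are ours):

```
(* Sketch of (6.6) and (6.7) as z-series. *)
kmax = 8;
{Normal[Series[Product[(1 - z^k)^(EulerPhi[k]/k), {k, 1, kmax}] -
    Exp[z/(z - 1)], {z, 0, kmax}]],
 Normal[Series[Product[(1 + z^k)^(EulerPhi[k]/k), {k, 1, kmax}] -
    Exp[z/(1 - z^2)], {z, 0, kmax}]]}  (* expect {0, 0} *)
```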
Next we take the case \(a=1\) and \(b=0\), now summing only over the \(j\geq 1\) half of the lattice (that is, the first-quadrant triangle of Section 3), so then
\[\prod_{\begin{subarray}{c}j,k\geq 1\\ j\leq k;\,(j,k)=1\end{subarray}}\left(\frac{1}{1-y^{j}z^{k}}\right)^{\frac{1}{ j}}=\exp\left\{\sum_{n=1}^{\infty}\left(\sum_{m=1}^{n}\frac{y^{m}}{m}\right)z^{n}\right\}\] \[=\exp\left\{\frac{1}{1-z}\sum_{n=1}^{\infty}\frac{y^{n}z^{n}}{n} \right\}=\exp\left\{\frac{1}{1-z}\log\left(\frac{1}{1-yz}\right)\right\}.\]
This leads us to establish that
\[\prod_{\begin{subarray}{c}j,k\geq 1\\ j\leq k;\,(j,k)=1\end{subarray}}\left(\frac{1}{1-y^{j}z^{k}}\right)^{\frac{1} {j}}=\left(\frac{1}{1-yz}\right)^{\frac{1}{1-z}}, \tag{6.8}\]
which is equivalent to
\[\prod_{\begin{subarray}{c}j,k\geq 1\\ j\leq k;\,(j,k)=1\end{subarray}}\left(1-y^{j}z^{k}\right)^{\frac{1}{j}}=(1-yz )^{\frac{1}{1-z}}\,. \tag{6.9}\]
From multiplying both sides of (6.9) in which \(y\mapsto y^{2}\) and \(z\mapsto z^{2}\) with both sides of (6.8) we obtain
\[\prod_{\begin{subarray}{c}j,k\geq 1\\ j\leq k;\,(j,k)=1\end{subarray}}\left(1+y^{j}z^{k}\right)^{\frac{1}{j}}=\frac {(1-y^{2}z^{2})^{\frac{1}{1-z^{2}}}}{(1-yz)^{\frac{1}{1-z}}}. \tag{6.10}\]
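A Mathematica sketch of (6.8) and (6.10) as formal \(z\)-series, with our own names:

```
(* Truncated first-quadrant triangle products against (6.8), (6.10). *)
kmax = 6;
p68  = Product[If[j <= k && GCD[j, k] == 1, (1 - y^j z^k)^(-1/j), 1],
   {k, 1, kmax}, {j, 1, kmax}];
p610 = Product[If[j <= k && GCD[j, k] == 1, (1 + y^j z^k)^(1/j), 1],
   {k, 1, kmax}, {j, 1, kmax}];
Simplify[{Normal[Series[p68 - (1 - y z)^(-1/(1 - z)), {z, 0, kmax}]],
  Normal[Series[p610 - (1 - y^2 z^2)^(1/(1 - z^2)) (1 - y z)^(-1/(1 - z)),
    {z, 0, kmax}]]}]  (* expect {0, 0} *)
```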
Particular cases:
\(z=\frac{1}{2}\) gives us from (6.9) and (6.10) the remarkable result that
\[\prod_{\begin{subarray}{c}j,k\geq 1;\,j\leq k\\ \gcd(j,k)=1\end{subarray}}\left(1-\frac{y^{j}}{2^{k}}\right)^{\frac{1}{j}}=\left(1-\frac{y}{2}\right)^{2}=1-y+\frac{y^{2}}{4}\]
\[=\left(1-\frac{y^{1}}{2^{1}}\right)\]
\[\left(1-\frac{y^{1}}{2^{2}}\right)\]
\[\left(1-\frac{y^{1}}{2^{3}}\right)\sqrt{\left(1-\frac{y^{2}}{2^{3}}\right)}\]
\[\left(1-\frac{y^{1}}{2^{4}}\right)\sqrt[3]{\left(1-\frac{y^{3}}{2^{4}}\right)}\]
\[\left(1-\frac{y^{1}}{2^{5}}\right)\sqrt{\left(1-\frac{y^{2}}{2^{5}}\right)}\sqrt[3]{\left(1-\frac{y^{3}}{2^{5}}\right)}\sqrt[4]{\left(1-\frac{y^{4}}{2^{5}}\right)}\]
\[\left(1-\frac{y^{1}}{2^{6}}\right)\sqrt[5]{\left(1-\frac{y^{5}}{2^{6}}\right)}\]
\[\vdots,\]
and the result,
\[\prod_{\begin{subarray}{c}j,k\geq 1:j\leq k\\ \gcd(j,k)=1\end{subarray}}\left(1+\frac{y^{j}}{2^{k}}\right)^{\frac{1}{j}}= \frac{\sqrt[3]{(4-y^{2})^{4}}}{\sqrt[3]{4}(2-y)^{2}}=1+y+\frac{5y^{2}}{12}+ \frac{y^{3}}{6}+\frac{11y^{4}}{144}+\frac{5y^{5}}{144}+O(y^{6})\]
\[=\left(1+\frac{y^{1}}{2^{1}}\right)\]
\[\left(1+\frac{y^{1}}{2^{2}}\right)\]
\[\left(1+\frac{y^{1}}{2^{3}}\right)\sqrt{\left(1+\frac{y^{2}}{2^{3}}\right)}\]
\[\left(1+\frac{y^{1}}{2^{4}}\right)\sqrt[3]{\left(1+\frac{y^{3}}{2^{4}}\right)}\]
\[\left(1+\frac{y^{1}}{2^{5}}\right)\sqrt{\left(1+\frac{y^{2}}{2^{5}}\right)} \sqrt[3]{\left(1+\frac{y^{3}}{2^{5}}\right)}\sqrt[4]{\left(1+\frac{y^{4}}{2^{ 5}}\right)}\]
\[\left(1+\frac{y^{1}}{2^{6}}\right)\sqrt[5]{\left(1+\frac{y^{5}}{2^{6}}\right)}\]
\[\vdots.\]
These two equations can be verified on a calculating engine like Mathematica or WolframAlpha by expanding each side into its Taylor series around \(y=0\) and comparing coefficients of like powers of \(y\). However, the calculation is an infinite series for each coefficient, unlike in the previous examples, where it is a finite sum.
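Because each \(y\)-coefficient is an infinite series over \(k\), a numeric sketch is the practical check here. With our own sample value \(y=1/3\) and a cutoff \(k\leq 40\), the truncated product should match the closed form of the first result to roughly \(2^{-40}\):

```
(* Numeric sketch of the z = 1/2 specialisation of (6.9). *)
y0 = 1/3; kmax = 40;
lhs = N[Product[If[j <= k && GCD[j, k] == 1, (1 - y0^j/2^k)^(1/j), 1],
    {k, 1, kmax}, {j, 1, kmax}], 30];
N[lhs - (1 - y0/2)^2, 10]  (* tiny, of order 2^-kmax *)
```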
## 7. 3D VPV identities for a right square pyramid lattice
As we did in section 4, we start with a simple \(3D\) summation. We work on a \(3D\) inverted pyramid shaped lattice that extends infinitely and occupies four adjacent of the eight hyperquadrants that comprise the \(X\)-\(Y\)-\(Z\) \(3\)-space.
We may depict this infinite inverted pyramid with square layered arrays of lattice point vectors, in analogy with the triangle diagram (6.1): the \(n\)th layer is the \((2n+1)\times(2n+1)\) square of vectors \(\langle l,m,n\rangle\) with \(|l|,|m|\leq n\), and the VPVs are those with \(\gcd(l,m,n)=1\).
So now, we consider the sum, whose shape is an inverted \(3D\) right pyramid whose apex is at the origin \(\langle 0,0,0\rangle\), given by (for \(0<\) each of \(|xz|,|z/x|,|yz|,|z/y|,|z|<1\))
\[\sum_{n=1}^{\infty}\left(\sum_{l=-n}^{n}\frac{x^{l}}{l^{a}}\right)\left(\sum_{ m=-n}^{n}\frac{y^{m}}{m^{b}}\right)\frac{z^{n}}{n^{c}}\]
\[=\left(\frac{x^{-1}}{(-1)^{a}}+1+\frac{x^{1}}{1^{a}}\right)\left(\frac{y^{-1}}{(-1)^{b}}+1+\frac{y^{1}}{1^{b}}\right)\frac{z^{1}}{1^{c}}\]
\[+\left(\frac{x^{-2}}{(-2)^{a}}+\frac{x^{-1}}{(-1)^{a}}+1+\frac{x^{1}}{1^{a}}+ \frac{x^{2}}{2^{a}}\right)\left(\frac{y^{-2}}{(-2)^{b}}+\frac{y^{-1}}{(-1)^{b} }+1+\frac{y^{1}}{1^{b}}+\frac{y^{2}}{2^{b}}\right)\frac{z^{2}}{2^{c}}\]
\[+\left(\frac{x^{-3}}{(-3)^{a}}+\frac{x^{-2}}{(-2)^{a}}+\frac{x^{-1}}{(-1)^{a}} +1+\frac{x^{1}}{1^{a}}+\frac{x^{2}}{2^{a}}+\frac{x^{3}}{3^{a}}\right)\]
\[\left(\frac{y^{-3}}{(-3)^{b}}+\frac{y^{-2}}{(-2)^{b}}+\frac{y^{-1}}{(-1)^{b}}+1+\frac{y^{1}}{1^{b}}+\frac{y^{2}}{2^{b}}+\frac{y^{3}}{3^{b}}\right)\frac{z^{3}}{3^{c}}\]
\[+\left(\frac{x^{-4}}{(-4)^{a}}+\frac{x^{-3}}{(-3)^{a}}+\frac{x^{-2}}{(-2)^{a}} +\frac{x^{-1}}{(-1)^{a}}+1+\frac{x^{1}}{1^{a}}+\frac{x^{2}}{2^{a}}+\frac{x^{3} }{3^{a}}+\frac{x^{4}}{4^{a}}\right)\]
\[\left(\frac{y^{-4}}{(-4)^{b}}+\frac{y^{-3}}{(-3)^{b}}+\frac{y^{-2}}{(-2)^{b}}+\frac{y^{-1}}{(-1)^{b}}+1+\frac{y^{1}}{1^{b}}+\frac{y^{2}}{2^{b}}+\frac{y^{3}}{3^{b}}+\frac{y^{4}}{4^{b}}\right)\frac{z^{4}}{4^{c}}\]
\[+\]
\[\vdots\]
\[=\frac{x^{-1}y^{1}z^{1}}{(-1)^{a}\ 1^{b}\ 1^{c}} + \frac{x^{0}y^{1}z^{1}}{1\times 1^{b}\ 1^{c}} + \frac{x^{1}y^{1}z^{1}}{1^{a}\ 1^{b}\ 1^{c}}\] \[+\frac{x^{-1}y^{0}z^{1}}{(-1)^{a}\times 1\times 1^{c}}+\frac{x^{0}y ^{0}z^{1}}{1\times 1\times 1^{c}}+\frac{x^{1}y^{0}z^{1}}{1^{a}\times 1\times 1^{c}}\]
\[+\frac{x^{-1}y^{-1}z^{1}}{(-1)^{a}\;(-1)^{b}\;1^{c}}+\frac{x^{0}y^{-1}z^{1}}{1 \times(-1)^{b}\;1^{c}}+\frac{x^{1}y^{-1}z^{1}}{1^{a}\;(-1)^{b}\;1^{c}}\]
\[+\frac{x^{-2}y^{2}z^{2}}{(-2)^{a}2^{b}2^{c}} + \frac{x^{-1}y^{2}z^{2}}{(-1)^{a}2^{b}2^{c}} + \frac{x^{0}y^{2}z^{2}}{1\times 2^{b}2^{c}} + \frac{x^{1}y^{2}z^{2}}{1^{a}2^{b}2^{c}} + \frac{x^{2}y^{2}z^{2}}{2^{a}2^{b}2^{c}}\] \[+\frac{x^{-2}y^{1}z^{2}}{(-2)^{a}1^{b}2^{c}} + \frac{x^{-1}y^{1}z^{2}}{(-1)^{a}1^{b}2^{c}} + \frac{x^{0}y^{1}z^{2}}{1\times 1^{b}2^{c}} + \frac{x^{1}y^{1}z^{2}}{1^{a}1^{b}2^{c}} + \frac{x^{2}y^{1}z^{2}}{2^{a}1^{b}2^{c}}\] \[+\frac{x^{-2}y^{0}z^{2}}{(-2)^{a}\times 1\times 2^{c}}+\frac{x^{- 1}y^{0}z^{2}}{(-1)^{a}\times 1\times 2^{c}}+\frac{x^{0}y^{0}z^{2}}{1\times 1 \times 2^{c}}+\frac{x^{1}y^{0}z^{2}}{1^{a}\times 1\times 2^{c}}+\frac{x^{2}y^{0}z^{2}}{ 2^{a}\times 1\times 2^{c}}\] \[+\frac{x^{-2}y^{-1}z^{2}}{(-2)^{a}(-1)^{b}2^{c}}+\frac{x^{-1}y^{- 1}z^{2}}{(-1)^{a}(-1)^{b}2^{c}}+\frac{x^{0}y^{-1}z^{2}}{1\times(-1)^{b}2^{c}} +\frac{x^{1}y^{-1}z^{2}}{1^{a}(-1)^{b}2^{c}}+\frac{x^{2}y^{-1}z^{2}}{2^{a}(-1) ^{b}2^{c}}\] \[+\frac{x^{-2}y^{-2}z^{2}}{(-2)^{a}(-2)^{b}2^{c}}+\frac{x^{-1}y^{- 2}z^{2}}{(-1)^{a}(-2)^{b}2^{c}} +\frac{x^{0}y^{-2}z^{2}}{1\times(-2)^{b}2^{c}}+\frac{x^{1}y^{- 2}z^{2}}{1^{a}(-2)^{b}2^{c}}+\frac{x^{2}y^{-2}z^{2}}{2^{a}(-2)^{b}2^{c}}\] \[+\] \[\vdots\]
\[=\sum_{|l|,|m|,n\geq 1;\;|l|,|m|\leq n}\frac{x^{l}y^{m}z^{n}}{l^{a}m^{b}n^{c}}\]
\[=\sum_{\begin{subarray}{c}h,|l|,|m|,n\geq 1\\ |l|,|m|\leq n;\;\gcd(l,m,n)=1\end{subarray}}\frac{(x^{l}y^{m}z^{n})^{h}}{h^{a +b+c}(l^{a}m^{b}n^{c})}\]
\[=\sum_{\begin{subarray}{c}|l|,|m|,n\geq 1\\ |l|,|m|\leq n;\;\gcd(l,m,n)=1\end{subarray}}\frac{1}{(l^{a}m^{b}n^{c})}\sum_{h=1}^{\infty}\frac{(x^{l}y^{m}z^{n})^{h}}{h^{a+b+c}}\]
\[=\sum_{\begin{subarray}{c}|l|,|m|,n\geq 1\\ |l|,|m|\leq n;\;\gcd(l,m,n)=1\end{subarray}}\frac{1}{(l^{a}m^{b}n^{c})}\log\left(\frac{1}{1-x^{l}y^{m}z^{n}}\right)\quad if\quad a+b+c=1.\]
Therefore, we have shown that if \(a+b+c=1\), and for \(0<\) each of \(|xz|,|z/x|,|yz|,|z/y|,|z|<1\), then
\[\sum_{n=1}^{\infty}\left(\sum_{l=-n}^{n}\frac{x^{l}}{l^{a}}\right)\left(\sum_{m=-n}^{n}\frac{y^{m}}{m^{b}}\right)\frac{z^{n}}{n^{c}}=\sum_{\begin{subarray}{c}|l|,|m|,n\geq 1\\ |l|,|m|\leq n;\;\gcd(l,m,n)=1\end{subarray}}\frac{1}{(l^{a}m^{b}n^{c})}\log\left(\frac{1}{1-x^{l}y^{m}z^{n}}\right).\]
Exponentiating both sides gives us the \(3D\) "right square pyramid VPV identity", where in this \(3D\) case the pyramid takes the form of layered square shaped arrays of lattice point vectors as shown in the above workings, with the central axis rising vertically from the apex point \(\langle 0,0,0\rangle\).
The identity is summarized in the following
**Theorem 7.1**.: _The \(3D\) right square pyramid VPV identity. For \(0<\) each of \(|xz|,|z/x|,|yz|,|z/y|,|z|<1\), with \(a+b+c=1\),_
\[\prod_{\begin{subarray}{c}|l|,|m|,n\geq 1\\ |l|,|m|\leq n;\,\gcd(l,m,n)=1\end{subarray}}\left(\frac{1}{1-x^{l}y^{m}z^{n}}\right)^{\frac{1}{l^{a}m^{b}n^{c}}}=\exp\left\{\sum_{n=1}^{\infty}\left(\sum_{l=-n}^{n}\frac{x^{l}}{l^{a}}\right)\left(\sum_{m=-n}^{n}\frac{y^{m}}{m^{b}}\right)\frac{z^{n}}{n^{c}}\right\}. \tag{7.2}\]
As we did for the \(2D\) particular cases, we can examine some obvious example corollaries arising from this theorem. Firstly, take the case where \(a=b=0,c=1\), so then,
\[\prod_{\begin{subarray}{c}|l|,|m|,n\geq 1\\ |l|,|m|\leq n;\,\gcd(l,m,n)=1\end{subarray}}\left(\frac{1}{1-x^{l}y^{m}z^{n} }\right)^{\frac{1}{n}}=\exp\left\{\sum_{n=1}^{\infty}\left(\sum_{l=-n}^{n}x^{ l}\right)\left(\sum_{m=-n}^{n}y^{m}\right)\frac{z^{n}}{n}\right\}\] \[=\exp\left\{\sum_{n=1}^{\infty}\left(\frac{x^{2n+1}-1}{x^{n}(x-1) }\right)\left(\frac{y^{2n+1}-1}{y^{n}(y-1)}\right)\frac{z^{n}}{n}\right\}\] \[=\exp\left\{\frac{1}{(1-x)(1-y)}\sum_{n=1}^{\infty}\left((xy)^{n +1}+\left(\frac{1}{xy}\right)^{n}-x\left(\frac{x}{y}\right)^{n}-y\left(\frac{ y}{x}\right)^{n}\right)\frac{z^{n}}{n}\right\}\] \[=\exp\left\{\frac{1}{(1-x)(1-y)}\log\left(\frac{(1-xz/y)^{x}(1-yz/ x)^{y}}{(1-xyz)^{xy}(1-z/(xy))}\right)\right\},\] \[=\left\{\frac{(1-xz/y)^{x}(1-yz/x)^{y}}{(1-xyz)^{xy}(1-z/(xy))} \right\}^{\frac{1}{(1-x)(1-y)}}.\]
This then implies
**Corollary 7.1**.: _For \(0<\) each of \(|xz|,|z/x|,|yz|,|z/y|,|z|<1\),_
\[\prod_{\begin{subarray}{c}|l|,|m|,n\geq 1\\ |l|,|m|\leq n;\,\gcd(l,m,n)=1\end{subarray}}\left(\frac{1}{1-x^{l}y^{m}z^{n}}\right)^{\frac{1}{n}}=\left\{\frac{(1-xz/y)^{x}(1-yz/x)^{y}}{(1-xyz)^{xy}(1-z/(xy))}\right\}^{\frac{1}{(1-x)(1-y)}}, \tag{7.3}\]
_and its reciprocal identity,_
\[\prod_{\begin{subarray}{c}|l|,|m|,n\geq 1\\ |l|,|m|\leq n;\,\gcd(l,m,n)=1\end{subarray}}\left(1-x^{l}y^{m}z^{n}\right)^{ \frac{1}{n}}=\left\{\frac{(1-xyz)^{xy}(1-z/(xy))}{(1-xz/y)^{x}(1-yz/x)^{y}} \right\}^{\frac{1}{(1-x)(1-y)}}. \tag{7.4}\]
We see that (7.3) and (7.4) are generalizations of the 2D identities (3.2) and (3.3) from an earlier section.
## 8. VPV identities in nD right-square hyperpyramid regions
The \(n\) dimensional square hyperpyramid VPV Identity is encoded in the following
**Theorem 8.1**.: _The \(nD\) right-square hyperpyramid VPV identity. If \(i=1,2,3,...,n\) then for each \(x_{i}\in\mathbb{C}\) such that \(|x_{i}|<1\) and \(b_{i}\in\mathbb{C}\) such that \(\sum_{i=1}^{n}b_{i}=1\),_
(8.1) \[\prod_{\begin{subarray}{c}\gcd(|a_{1}|,|a_{2}|,...,|a_{n-1}|,a_{n})=1\\ |a_{1}|,|a_{2}|,...,|a_{n-1}|\leq a_{n}\\ |a_{1}|,|a_{2}|,...,|a_{n-1}|,a_{n}\geq 1\end{subarray}}\left(\frac{1}{1-{x_{1}}^{a_{1}}{x_{2}}^{a_{2}}{x_{3}}^{a_{3}}\cdots{x_{n}}^{a_{n}}}\right)^{\frac{1}{a_{1}^{b_{1}}a_{2}^{b_{2}}a_{3}^{b_{3}}\cdots a_{n}^{b_{n}}}}\]
\[=\exp\left\{\sum_{k=1}^{\infty}\left(\sum_{j=-k}^{k}\frac{{x_{1}}^{j}}{j^{b_{1}}} \right)\left(\sum_{j=-k}^{k}\frac{{x_{2}}^{j}}{j^{b_{2}}}\right)\left(\sum_{j=-k }^{k}\frac{{x_{3}}^{j}}{j^{b_{3}}}\right)\cdots\left(\sum_{j=-k}^{k}\frac{{x_{ n-1}}^{j}}{j^{b_{n-1}}}\right)\frac{{x_{n}}^{k}}{k^{b_{n}}}\right\}.\]
This result is quite straightforward to prove using the technique of our two previous sections. This methodology was also given in Campbell [19], but has not been worked through for corollaries of Theorem 8.1 over the past 24 years.
Since in the previous section we gave the 3D example of Theorem 8.1, we state the 4D case of it now.
**Corollary 8.1**.: _If \(|wxyz|\), \(|xyz|\), \(|yz|\) and \(|z|<1\) and \(r+s+t+u=1\) then,_
\[\prod_{\begin{subarray}{c}\gcd(|a|,|b|,|c|,d)=1\\ |a|,|b|,|c|\leq d\\ d\geq 1\end{subarray}}\left(\frac{1}{1-w^{a}x^{b}y^{c}z^{d}}\right)^{\frac{1}{a^{r}b^{s}c^{t}d^{u}}}=\exp\left\{\mathrm{P}(r,w;s,x;t,y;u,z)\right\} \tag{8.2}\]
_where_
\[\mathrm{P}(r,w;s,x;t,y;u,z)=\sum_{n=1}^{\infty}\left(\sum_{k=-n}^{n}\frac{w^{ k}}{k^{r}}\right)\left(\sum_{k=-n}^{n}\frac{x^{k}}{k^{s}}\right)\left(\sum_{k=-n }^{n}\frac{y^{k}}{k^{t}}\right)\frac{z^{n}}{n^{u}}.\]
Take the case where \(r=s=t=0,u=1\), so then,
\[\prod_{\begin{subarray}{c}|h|,|i|,|j|,k\geq 1\\ |h|,|i|,|j|\leq k;\gcd(h,i,j,k)=1\end{subarray}}\left(\frac{1}{1-w^{h}x^{i}y^ {j}z^{k}}\right)^{\frac{1}{k}}\] \[=\exp\left\{\sum_{n=1}^{\infty}\left(\sum_{k=-n}^{n}w^{k}\right) \left(\sum_{k=-n}^{n}x^{k}\right)\left(\sum_{k=-n}^{n}y^{k}\right)\frac{z^{n} }{n}\right\}\] \[=\exp\left\{\sum_{n=1}^{\infty}\left(\frac{w^{2n+1}-1}{w^{n}(w-1) }\right)\left(\frac{x^{2n+1}-1}{x^{n}(x-1)}\right)\left(\frac{y^{2n+1}-1}{y^{ n}(y-1)}\right)\frac{z^{n}}{n}\right\}\]
\[=\left\{\frac{(1-wxyz)^{wxy}(1-wz/(xy))^{w}(1-xz/(wy))^{x}(1-yz/(wx))^{y}}{(1 -wxz/y)^{wx}(1-wyz/x)^{wy}(1-xyz/w)^{xy}(1-z/(wxy))}\right\}^{\frac{1}{(1-w)(1- x)(1-y)}}.\]
This then implies
**Corollary 8.2**.: _For \(0<\text{ each of }|wz|,|z/w|,|xz|,|z/x|,|yz|,|z/y|,|z|<1\),_
\[\prod_{\begin{subarray}{c}|h|,|i|,|j|,k\geq 1\\ |h|,|i|,|j|\leq k;\gcd(h,i,j,k)=1\end{subarray}}\left(\frac{1}{1-w^{h}x^{i}y^{j}z^{k}}\right)^{\frac{1}{k}} \tag{8.3}\]
\[=\left\{\frac{(1-wxyz)^{wxy}(1-wz/(xy))^{w}(1-xz/(wy))^{x}(1-yz/(wx))^{y}}{(1 -wxz/y)^{wx}(1-wyz/x)^{wy}(1-xyz/w)^{xy}(1-z/(wxy))}\right\}^{\frac{1}{(1-w)(1- x)(1-y)}},\]
_and its reciprocal identity,_
\[\prod_{\begin{subarray}{c}|h|,|i|,|j|,k\geq 1\\ |h|,|i|,|j|\leq k;\gcd(h,i,j,k)=1\end{subarray}}\left(1-w^{h}x^{i}y^{j}z^{k}\right)^{\frac{1}{k}} \tag{8.4}\]
\[=\left\{\frac{(1-wxz/y)^{wx}(1-wyz/x)^{wy}(1-xyz/w)^{xy}(1-z/(wxy))}{(1-wxyz)^{ wxy}(1-wz/(xy))^{w}(1-xz/(wy))^{x}(1-yz/(wx))^{y}}\right\}^{\frac{1}{(1-w)(1-x)(1-y)}}.\]
Note that both sides of (8.3) are formally equivalent to the power series
\[1+\frac{(1-w^{3})(1-x^{3})(1-y^{3})}{w^{1}x^{1}y^{1}(1-w)(1-x)(1-y)}\frac{z}{1!}+\left|\begin{matrix}\frac{(1-w^{3})(1-x^{3})(1-y^{3})}{w^{1}x^{1}y^{1}(1-w)(1-x)(1-y)}&-1\\ \frac{(1-w^{5})(1-x^{5})(1-y^{5})}{w^{2}x^{2}y^{2}(1-w)(1-x)(1-y)}&\frac{(1-w^{3})(1-x^{3})(1-y^{3})}{w^{1}x^{1}y^{1}(1-w)(1-x)(1-y)}\end{matrix}\right|\frac{z^{2}}{2!}\]
\[+\left|\begin{matrix}\frac{(1-w^{3})(1-x^{3})(1-y^{3})}{w^{1}x^{1}y^{1}(1-w)(1-x)(1-y)}&-1&0\\ \frac{(1-w^{5})(1-x^{5})(1-y^{5})}{w^{2}x^{2}y^{2}(1-w)(1-x)(1-y)}&\frac{(1-w^{3})(1-x^{3})(1-y^{3})}{w^{1}x^{1}y^{1}(1-w)(1-x)(1-y)}&-2\\ \frac{(1-w^{7})(1-x^{7})(1-y^{7})}{w^{3}x^{3}y^{3}(1-w)(1-x)(1-y)}&\frac{(1-w^{5})(1-x^{5})(1-y^{5})}{w^{2}x^{2}y^{2}(1-w)(1-x)(1-y)}&\frac{(1-w^{3})(1-x^{3})(1-y^{3})}{w^{1}x^{1}y^{1}(1-w)(1-x)(1-y)}\end{matrix}\right|\frac{z^{3}}{3!}+etc.\]
## 9. Envoi: Research Exercises
Study 1. **2D first quadrant Upper VPV combinatorial sum**.
For each of \(|y|,|z|<1\), show that
\[\sum_{\begin{subarray}{c}\gcd(a,b)=1\\ 0\leq a<b\end{subarray}}\frac{y^{a}\,z^{b}}{1-y^{a}\,z^{b}}=\frac{z}{(1-z)(1-yz)}. \tag{9.1}\]
For this example we work it through in more detail.
Proof: Starting with the left side of (9.1)
\[LHS = \frac{y^{0}z^{1}}{1-y^{0}z^{1}}\] \[+ \frac{y^{0}z^{2}}{1-y^{0}z^{2}}+\frac{y^{1}z^{2}}{1-y^{1}z^{2}}\] \[+ \frac{y^{0}z^{3}}{1-y^{0}z^{3}}+\frac{y^{1}z^{3}}{1-y^{1}z^{3}}+\frac{y^{2}z^{3}}{1-y^{2}z^{3}}\] \[+ \frac{y^{0}z^{4}}{1-y^{0}z^{4}}+\frac{y^{1}z^{4}}{1-y^{1}z^{4}}+\frac{y^{3}z^{4}}{1-y^{3}z^{4}}\] \[+ \frac{y^{0}z^{5}}{1-y^{0}z^{5}}+\frac{y^{1}z^{5}}{1-y^{1}z^{5}}+\frac{y^{2}z^{5}}{1-y^{2}z^{5}}+\frac{y^{3}z^{5}}{1-y^{3}z^{5}}+\frac{y^{4}z^{5}}{1-y^{4}z^{5}}\] \[+ \frac{y^{0}z^{6}}{1-y^{0}z^{6}}+\frac{y^{1}z^{6}}{1-y^{1}z^{6}}+\frac{y^{5}z^{6}}{1-y^{5}z^{6}}\] \[+ etc.\]
\[=\sum_{\begin{subarray}{c}\gcd(j,k)=1\\ 0\leq j<k\end{subarray}}\sum_{a=1}^{\infty}(y^{aj}z^{ak})=\sum_{a=1}^{\infty}\sum_{\begin{subarray}{c}\gcd(j,k)=1\\ 0\leq j<k\end{subarray}}(y^{aj}z^{ak})\]
\[= y^{0}z^{1}\] \[+ (y^{0}+y^{1})z^{2}\] \[+ (y^{0}+y^{1}+y^{2})z^{3}\] \[+ (y^{0}+y^{1}+y^{2}+y^{3})z^{4}\] \[+ (y^{0}+y^{1}+y^{2}+y^{3}+y^{4})z^{5}\] \[+ (y^{0}+y^{1}+y^{2}+y^{3}+y^{4}+y^{5})z^{6}\] \[+ etc.\]
\[=\frac{1-y}{1-y}z+\frac{1-y^{2}}{1-y}z^{2}+\frac{1-y^{3}}{1-y}z^{3}+\frac{1-y^ {4}}{1-y}z^{4}+\cdots\]
\[=\frac{1}{1-y}\left(\frac{z}{1-z}-\frac{yz}{1-yz}\right)\]
\[=\frac{z}{(1-z)(1-yz)}.\hskip 28.452756pt\blacksquare\]
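A numeric sketch of (9.1) in Mathematica, with our own sample values \(y=1/2\), \(z=1/3\):

```
(* Truncated coprime sum against the closed form of (9.1). *)
y0 = 1/2; z0 = 1/3; bmax = 60;
lhs = N[Sum[If[GCD[a, b] == 1, y0^a z0^b/(1 - y0^a z0^b), 0],
    {b, 1, bmax}, {a, 0, b - 1}], 20];
N[lhs - z0/((1 - z0) (1 - y0 z0)), 10]  (* ~ 0 up to the z0^bmax tail *)
```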
Study 2. **2D first quadrant Upper VPV zeta function analogy for the previous exercise**.
Show that for \(\Re y>1\), \(\Re z>1\),
\[\sum_{\begin{subarray}{c}\gcd(a,b)=1\\ a,b\geq 1\end{subarray}}\frac{1}{a^{y}b^{z}}=\frac{\zeta(y)\zeta(z)}{\zeta(y+z)}. \tag{9.2}\]
Proof: By Lemma 5.1, every lattice point \(\langle m,n\rangle\) with \(m,n\geq 1\) is a unique positive integer multiple \(h\langle j,k\rangle\) of a visible point vector \(\langle j,k\rangle\) with \(\gcd(j,k)=1\). Hence, for \(\Re y>1\) and \(\Re z>1\),
\[\zeta(y)\zeta(z)=\sum_{m,n\geq 1}\frac{1}{m^{y}n^{z}}=\sum_{\begin{subarray}{c}\gcd(j,k)=1\\ j,k\geq 1\end{subarray}}\sum_{h=1}^{\infty}\frac{1}{h^{y+z}\,j^{y}k^{z}}=\zeta(y+z)\sum_{\begin{subarray}{c}\gcd(j,k)=1\\ j,k\geq 1\end{subarray}}\frac{1}{j^{y}k^{z}},\]
and dividing both sides by \(\zeta(y+z)\) gives (9.2). \(\blacksquare\)
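A quick numeric sketch of (9.2) in Mathematica, at our own sample exponents \(y=2\), \(z=3\); the truncated coprime sum converges slowly, so only a few decimals agree at this cutoff:

```
(* Truncated coprime double sum against Zeta[2] Zeta[3]/Zeta[5]. *)
amax = 500;
lhs = N[Sum[If[GCD[a, b] == 1, 1./(a^2 b^3), 0], {a, 1, amax}, {b, 1, amax}]];
{lhs, N[Zeta[2] Zeta[3]/Zeta[5]]}  (* agree to a few decimal places *)
```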
Infer from (9.1) and (9.2) that they encode two similar statements about partitions into lattice point vectors \(\langle a,b\rangle\) with \(gcd(a,b)=1\) and for (9.1) we have non-negative integers \(a\) and positive integers \(b\); whereas for (9.2) we have positive integers for both \(a\) and \(b\).
Study 3. **Find the particular cases of (9.1)**.
Prove the following:
\[\sum_{\begin{subarray}{c}\gcd(a,b)=1\\ 0\leq a<b\end{subarray}}\frac{1}{3^{an}\,2^{(b-a)n}-1} = \frac{2^{-n}}{(1-3^{-n})(1-2^{-n})}\quad(for\ \Re n>1),\] \[= \frac{1}{2^{n}}\left(1+\frac{1}{2^{n}}+\frac{1}{3^{n}}+\frac{1}{4^{n}}+\frac{1}{6^{n}}+\frac{1}{8^{n}}+\frac{1}{9^{n}}+\frac{1}{12^{n}}+\cdots\right);\] \[\sum_{\begin{subarray}{c}\gcd(a,b)=1\\ 0\leq a<b\end{subarray}}\frac{z^{a+b}}{1-z^{a+b}} = \frac{z}{(1-z)(1-z^{2})},\quad|z|<1;\] \[\sum_{\begin{subarray}{c}\gcd(a,b)=1\\ 0\leq a<b\end{subarray}}\frac{\tan^{2(a+b)}(\theta)}{1-\tan^{2(a+b)}(\theta)} = \frac{\tan^{2}(\theta)}{(1-\tan^{2}(\theta))(1-\tan^{4}(\theta))},\quad|\tan(\theta)|<1.\]
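The first case follows from (9.1) with \(y=(2/3)^{n}\), \(z=2^{-n}\), and it is easy to sketch numerically in Mathematica (our own names, with \(n=2\)):

```
(* First case of Study 3 at n = 2: coprime sum against the 3-smooth series. *)
n0 = 2; bmax = 40;
lhs = N[Sum[If[GCD[a, b] == 1, 1/(3^(a n0) 2^((b - a) n0) - 1), 0],
    {b, 1, bmax}, {a, 0, b - 1}], 20];
N[lhs - 2^-n0/((1 - 3^-n0) (1 - 2^-n0)), 10]  (* negligible truncation tail *)
```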
Study 4. **Find the particular cases of (9.2)**.
Prove the following:
\[\sum_{\begin{subarray}{c}\gcd(a,b)=1\\ a,b\geq 1\end{subarray}}\frac{1}{a^{2}b^{2}} = \frac{5}{2},\] \[\sum_{\begin{subarray}{c}\gcd(a,b)=1\\ a,b\geq 1\end{subarray}}\frac{1}{a^{2}b^{3}} = \frac{\pi^{2}\zeta(3)}{6\zeta(5)},\] \[\sum_{\begin{subarray}{c}\gcd(a,b)=1\\ a,b\geq 1\end{subarray}}\frac{1}{a^{3}b^{5}} = \frac{9450\zeta(3)\zeta(5)}{\pi^{8}}.\]
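In Mathematica the gcd condition can be removed by Moebius inversion over the common divisor \(d\), reducing the first case to the zeta ratio directly (a step we supply here as a sketch, not taken from the original text):

```
(* The coprime sum equals Zeta[2]^2 Sum[MoebiusMu[d]/d^4]. *)
Zeta[2]^2 Sum[MoebiusMu[d]/d^4, {d, 1, Infinity}]  (* should return 5/2 *)
```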
Study 5. **3D first hyperquadrant sums using gcd**.
For each of \(|x|,|y|,|z|<1\), show that
\[\sum_{\begin{subarray}{c}\gcd(a,b,c)=1\\ a,b\geq 0,c>0\end{subarray}}\frac{x^{a}\,y^{b}\,z^{c}}{1-x^{a}\,y^{b}\,z^{c}}= \frac{z}{(1-x)(1-y)(1-z)}. \tag{9.3}\]
Similarly, show that for \(\Re x>1\), \(\Re y>1\), \(\Re z>1\),
\[\sum_{\begin{subarray}{c}\gcd(a,b,c)=1\\ a,b,c\geq 1\end{subarray}}\frac{1}{a^{x}b^{y}c^{z}}=\frac{\zeta(x)\zeta(y)\zeta (z)}{\zeta(x+y+z)}. \tag{9.4}\]
Infer from (9.3) and (9.4) that they encode two similar statements about partitions into lattice point vectors \(\langle a,b,c\rangle\) with \(gcd(a,b,c)=1\) and for (9.3) we have non-negative integers \(a\), \(b\) and positive integers \(c\); whereas for (9.4) we have positive integers for \(a\), \(b\) and \(c\).
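A numeric sketch of (9.3) in Mathematica, with our own sample values:

```
(* Truncated coprime triple sum against the closed form of (9.3). *)
x0 = 1/2; y0 = 1/3; z0 = 1/5; cmax = 25;
lhs = N[Sum[If[GCD[a, b, c] == 1, x0^a y0^b z0^c/(1 - x0^a y0^b z0^c), 0],
    {c, 1, cmax}, {a, 0, cmax}, {b, 0, cmax}], 15];
N[lhs - z0/((1 - x0) (1 - y0) (1 - z0)), 10]  (* small truncation tail *)
```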
Study 6. **4D hyperquadrant VPV sums using gcd**.
For each of \(|w|,|x|,|y|,|z|<1\), show that
\[\sum_{\begin{subarray}{c}\gcd(a,b,c,d)=1\\ a,b,c\geq 0,d>0\end{subarray}}\frac{w^{a}\,x^{b}\,y^{c}\,z^{d}}{1-w^{a}\,x^{b} \,y^{c}\,z^{d}}=\frac{z}{(1-w)(1-x)(1-y)(1-z)}. \tag{9.5}\]
Similarly, show that for \(\Re w\), \(\Re x\), \(\Re y\), and \(\Re z\) all \(>1\),
\[\sum_{\begin{subarray}{c}\gcd(a,b,c,d)=1\\ a,b,c,d\geq 1\end{subarray}}\frac{1}{a^{w}b^{x}c^{y}d^{z}}=\frac{\zeta(w)\zeta(x)\zeta(y)\zeta(z)}{\zeta(w+x+y+z)}. \tag{9.6}\]
Infer from (9.5) and (9.6) that they encode two similar statements about partitions into lattice point vectors \(\langle a,b,c,d\rangle\) with \(gcd(a,b,c,d)=1\) and for (9.5) we have non-negative integers \(a\), \(b\), \(c\) and positive integers \(d\); whereas for (9.6) we have positive integers for \(a\), \(b\), \(c\) and \(d\).
Study 7. **5D hyperquadrant VPV sums using gcd**.
For each of \(|v|,|w|,|x|,|y|,|z|<1\), show that
\[\sum_{\begin{subarray}{c}\gcd(a,b,c,d,e)=1\\ a,b,c,d\geq 0,e>0\end{subarray}}\frac{v^{a}\,w^{b}\,x^{c}\,y^{d}\,z^{e}}{1-v^{ a}\,w^{b}\,x^{c}\,y^{d}\,z^{e}}=\frac{z}{(1-v)(1-w)(1-x)(1-y)(1-z)}. \tag{9.7}\]
Similarly, show that for \(\Re v\), \(\Re w\), \(\Re x\), \(\Re y\), and \(\Re z\) all \(>1\),
\[\sum_{\begin{subarray}{c}\gcd(a,b,c,d,e)=1\\ a,b,c,d,e\geq 1\end{subarray}}\frac{1}{a^{v}b^{w}c^{x}d^{y}e^{z}}=\frac{\zeta(v) \zeta(w)\zeta(x)\zeta(y)\zeta(z)}{\zeta(v+w+x+y+z)}. \tag{9.8}\]
Infer from (9.7) and (9.8) that they encode two similar statements about partitions into lattice point vectors \(\langle a,b,c,d,e\rangle\) with \(\gcd(a,b,c,d,e)=1\) and for (9.7) we have non-negative integers \(a\), \(b\), \(c\), \(d\) and positive integers \(e\); whereas for (9.8) we have positive integers for \(a\), \(b\), \(c\), \(d\) and \(e\).
Study 8. **nD hyperquadrant sums using gcd**.
For each of \(|q_{1}|,|q_{2}|,|q_{3}|,\ldots,|q_{n}|<1\), show that
\[\sum_{\begin{subarray}{c}\gcd(a_{1},a_{2},a_{3},\ldots,a_{n})=1\\ a_{1},a_{2},\ldots,a_{n-1}\geq 0,\ a_{n}\geq 1\end{subarray}}\frac{q_{1}^{a_{1}}\,q_{2}^{a_{2}}\,q_{3}^{a_{3}}\cdots q_{n}^{a_{n}}}{1-q_{1}^{a_{1}}\,q_{2}^{a_{2}}\,q_{3}^{a_{3}}\cdots q_{n}^{a_{n}}}=\frac{q_{n}}{(1-q_{1})(1-q_{2})(1-q_{3})\cdots(1-q_{n})}. \tag{9.9}\]
Similarly, show that for \(\Re q_{1}\), \(\Re q_{2}\), \(\Re q_{3}\), \(\ldots\), and \(\Re q_{n}\) all \(>1\),
\[\sum_{\begin{subarray}{c}\gcd(a_{1},a_{2},a_{3},\ldots,a_{n})=1\\ a_{1},a_{2}\ldots a_{n}\geq 1\end{subarray}}\frac{1}{a_{1}^{q_{1}}a_{2}^{q_{2}}a_{3 }^{q_{3}}\cdots a_{n}^{q_{n}}}=\frac{\zeta(q_{1})\zeta(q_{2})\zeta(q_{3})\ldots \zeta(q_{n})}{\zeta(q_{1}+q_{2}+q_{3}+\cdots+q_{n})}. \tag{9.10}\]
Infer from (9.9) and (9.10) that they encode two similar statements about partitions into lattice point vectors \(\langle a_{1},a_{2},a_{3},\ldots,a_{n}\rangle\) with \(\gcd(a_{1},a_{2},a_{3},\ldots,a_{n})=1\) and for (9.9) we have non-negative integers \(a_{1}\), \(a_{2}\), \(\ldots\), \(a_{n-1}\) and positive integers \(a_{n}\); whereas for (9.10) we have positive integers for \(a_{1}\), \(a_{2}\), \(\ldots\), up to \(a_{n}\).
|
2305.00523 | Starobinsky-Type B-L Higgs Inflation Leading Beyond MSSM | Models of induced-gravity inflation are formulated within Supergravity
employing as inflaton the Higgs field which leads to a spontaneous breaking of
a U(1)_{B-L} symmetry at Mgut=2x10^16 GeV. We use a renormalizable
superpotential, fixed by a U(1) R symmetry, and logarithmic or semi-logarithmic
Kahler potentials with integer prefactors which exhibit a quadratic non-minimal
coupling to gravity. We find inflationary solutions of Starobinsky type in
accordance with the observations. The inflaton mass is predicted to be of the
order of 10^13 GeV. The model can be nicely linked to MSSM offering an
explanation of the magnitude of the mu parameter consistently with
phenomenological data. Also it allows for baryogenesis via non-thermal
leptogenesis, provided that the gravitino is heavier than about 10 TeV. | C. Pallis | 2023-04-30T16:46:11Z | http://arxiv.org/abs/2305.00523v1 | # Starobinsky-Type \(B-L\) Higgs Inflation Leading Beyond MSSM
###### Abstract
Models of induced-gravity inflation are formulated within Supergravity employing as inflaton the Higgs field which leads to a spontaneous breaking of a \(U(1)_{B-L}\) symmetry at \(M_{\rm GUT}=2\cdot 10^{16}\ {\rm GeV}\). We use a renormalizable superpotential, fixed by a \(U(1)\) R symmetry, and logarithmic or semi-logarithmic Kahler potentials with integer prefactors which exhibit a quadratic non-minimal coupling to gravity. We find inflationary solutions of Starobinsky type in accordance with the observations. The inflaton mass is predicted to be of the order of \(10^{13}\ {\rm GeV}\). The model can be nicely linked to MSSM offering an explanation of the magnitude of the \(\mu\) parameter consistently with phenomenological data. Also it allows for baryogenesis via non-thermal leptogenesis, provided that the gravitino is heavier than about \(10\ {\rm TeV}\).
Corfu Summer Institute 2022 "School and Workshops on Elementary Particle Physics and Gravity" August 28 - September 7, 2022, Corfu, Greece
## 1 Introduction
It is well-known [1, 2, 3] that one of the possible incarnations of Starobinsky-type inflation [4] in _Supergravity_ (SUGRA) can rely on the hypothesis of induced gravity [5, 6, 7]. According to this, inflation is driven in the presence of a non-minimal coupling between the inflaton field and the Ricci scalar curvature, \(f_{R}\), such that the reduced Planck mass \(m_{\rm P}\) is determined by a large (close to the Planckian scale \(m_{\rm P}\)) _vacuum expectation value_ (v.e.v) of the inflaton at the end of the slow roll. This is to be contrasted with the case of non-minimal [8, 9, 10] or pole-induced [11] Higgs inflation where the v.e.v of the inflaton is negligible. In this talk we focus on the implementation of this scenario employing as inflaton a Higgs field within an "elementary" _Grand Unified Theory_ (GUT) which extends the gauge symmetry of the _Standard Model_ (SM) by a \(U(1)_{B-L}\) factor [12]. In such a case, the unification condition within the _Minimal Supersymmetric SM_ (MSSM) may be employed to uniquely determine the strength of \(f_{R}\), giving rise to an economical, predictive and well-motivated setting, hereafter called _Induced-gravity Higgs inflation_ (IHI) - cf. Ref. [13].
Here, we concentrate on the simplest models of IHI introduced in Ref. [12] considering exclusively integer prefactors for the logarithms included in the Kahler potentials. The particle physics framework of our presentation is described in Sec. 2 whereas the engineering of induced-gravity hypothesis is outlined in Sec. 3. The inflationary part of this context is investigated in Sec. 4. Then, in Sec. 5, we explain how the MSSM is obtained as a low energy theory and, in Sec. 6, we outline how the observed _baryon asymmetry of the universe_ (BAU) is generated via _non-thermal leptogenesis_ (nTL). Our conclusions are summarized in Sec. 7. Throughout the text, the subscript of type \(,z\) denotes derivation _with respect to_ (w.r.t) the field \(z\) and charge conjugation is denoted by a star. Unless otherwise stated, we use units where \(m_{\rm P}=2.433\cdot 10^{18}\) GeV is taken unity.
## 2 Particle Physics Embedding
We focus on a "GUT" based on \(G_{B-L}=G_{\rm SM}\times U(1)_{B-L}\), where \(G_{\rm SM}=SU(3)_{\rm C}\times SU(2)_{\rm L}\times U(1)_{Y}\) is the gauge group of the SM and \(B\) and \(L\) denote the baryon and lepton number respectively. We below - see Secs. 2.1 and 2.2 - present the basic ingredients of our proposal.
### Superpotential
The superpotential of our model naturally splits into four parts:
\[W=W_{\rm MSSM}+W_{\rm HI}+W_{\mu}+W_{\rm RHN},\ \ {\rm where} \tag{2.1}\]
(a) \(W_{\rm MSSM}\) is the part of \(W\) which contains the usual terms - except for the \(\mu\) term - of MSSM, supplemented by Yukawa interactions among the left-handed leptons (\(L_{i}\)) and \(N_{i}^{c}\):
\[W_{\rm MSSM}=h_{ijD}d_{i}^{c}Q_{j}H_{d}+h_{ijU}u_{i}^{c}Q_{j}H_{u}+h_{ijE}e_{i}^{c}L_{j}H_{d}+h_{ijN}N_{i}^{c}L_{j}H_{u}. \tag{2.2a}\]
Here the \(i\)th generation \(SU(2)_{\rm L}\) doublet left-handed quark and lepton superfields are denoted by \(Q_{i}\) and \(L_{i}\) respectively, whereas the \(SU(2)_{\rm L}\) singlet antiquark [antilepton] superfields by \(u_{i}^{c}\) and \(d_{i}^{c}\) [\(e_{i}^{c}\) and \(N_{i}^{c}\)] respectively. The electroweak Higgs superfields which couple to the up [down] quark superfields are denoted by \(H_{u}\) [\(H_{d}\)]. Note that the introduction of three right-handed neutrinos, \(N_{i}^{c}\), is necessary to cancel the \(B-L\) gauge anomaly.
(b) \(W_{\rm HI}\) is the part of \(W\) which is relevant for IHI and takes the form \[W_{\rm HI}=\lambda S\left(\bar{\Phi}\Phi-M^{2}/4\right). \tag{2.2b}\]
The imposed \(U(1)_{R}\) symmetry ensures the linearity of \(W_{\rm HI}\) w.r.t \(S\). This fact allows us to isolate easily via its derivative the contribution of the inflaton into the F-term SUGRA potential, placing \(S\) at the origin - see Sec. 4.1. The inflaton is contained in the system \(\bar{\Phi}-\Phi\). We are obliged to restrict ourselves to subplanckian values of \(\bar{\Phi}\Phi\) since the imposed symmetries do not forbid non-renormalizable terms of the form \((\bar{\Phi}\Phi)^{p}\) with \(p>1\) - see Sec. 4.2.
(c) \(W_{\mu}\) is the part of \(W\) which is responsible for the generation of the \(\mu\) term of MSSM and takes the form \[W_{\mu}=\lambda_{\mu}SH_{u}H_{d}.\] (2.2c)
Like \(W_{\rm HI}\), \(W_{\mu}\) is also linear in \(S\), and so the imposed \(U(1)_{R}\) also plays a key role in the resolution of the \(\mu\) problem of MSSM - see Sec. 5.
(d) \(W_{\rm RHN}\) is the part of \(W\) which provides Majorana masses for neutrinos and reads
\[W_{\rm RHN}=\lambda_{iN^{c}}\bar{\Phi}{N^{c}_{i}}^{2}\,. \tag{2.2d}\]
The same term assures the decay of the inflaton to \(\widetilde{N^{c}_{i}}\), whose subsequent decay can activate nTL [14]. Here, we work in the so-called \(N^{c}_{i}\)_-basis_, where \(M_{iN^{c}}\) is diagonal, real and positive. These masses, together with the Dirac neutrino masses of the fourth term in Eq. (2.2a), lead to the light neutrino masses via the seesaw mechanism - see Sec. 6.2.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Superfields & Representations & \multicolumn{3}{c|}{Global Symmetries} \\ \cline{3-5} & under \(G_{B-L}\) & \(R\) & \(B\) & \(L\) \\ \hline \hline \multicolumn{5}{|c|}{Matter Fields} \\ \hline \(e^{c}_{i}\) & \(({\bf 1},{\bf 1},1,1)\) & \(1\) & \(0\) & \(-1\) \\ \(N^{c}_{i}\) & \(({\bf 1},{\bf 1},0,1)\) & \(1\) & \(0\) & \(-1\) \\ \(L_{i}\) & \(({\bf 1},{\bf 2},-1/2,-1)\) & \(1\) & \(0\) & \(1\) \\ \(u^{c}_{i}\) & \((\bar{\bf 3},{\bf 1},-2/3,-1/3)\) & \(1\) & \(-1/3\) & \(0\) \\ \(d^{c}_{i}\) & \((\bar{\bf 3},{\bf 1},1/3,-1/3)\) & \(1\) & \(-1/3\) & \(0\) \\ \(Q_{i}\) & \(({\bf 3},{\bf 2},1/6,1/3)\) & \(1\) & \(1/3\) & \(0\) \\ \hline \multicolumn{5}{|c|}{Higgs Fields} \\ \hline \(H_{d}\) & \(({\bf 1},{\bf 2},-1/2,0)\) & \(0\) & \(0\) & \(0\) \\ \(H_{u}\) & \(({\bf 1},{\bf 2},1/2,0)\) & \(0\) & \(0\) & \(0\) \\ \hline \(S\) & \(({\bf 1},{\bf 1},0,0)\) & \(2\) & \(0\) & \(0\) \\ \(\Phi\) & \(({\bf 1},{\bf 1},0,2)\) & \(0\) & \(0\) & \(-2\) \\ \(\bar{\Phi}\) & \(({\bf 1},{\bf 1},0,-2)\) & \(0\) & \(0\) & \(2\) \\ \hline \end{tabular}
\end{table}
Table 1: Representations under \(G_{B-L}\) and extra global charges of the superfields of our model.
### Kahler Potentials
The objectives of our model are feasible if \(W\) in Eq. (2.1) cooperates with _one_ of the following Kahler potentials:
\[K_{1} = -3\ln\left(c_{R}(F_{R}+F_{R}^{*})-\frac{|\Phi|^{2}+|\bar{\Phi}|^{2}} {3}+F_{1X}(|X|^{2})\right)\ \ \ \mbox{with}\ \ \ F_{1X}=-\ln\left(1+|X|^{2}/3\right), \tag{2.3a}\] \[K_{2} = -2\ln\left(c_{R}(F_{R}+F_{R}^{*})-\frac{|\Phi|^{2}+|\bar{\Phi}|^{2 }}{2}\right)+F_{2X}(|X|^{2})\ \ \ \mbox{with}\ \ \ F_{2X}=N_{X}\ln\left(1+|X|^{2}/N_{X}\right), \tag{2.3b}\]
where \(F_{R}=\Phi\bar{\Phi}\), \(0<N_{X}<6\), \(X^{\gamma}=S,H_{u},H_{d},\widetilde{N}_{i}^{c}\) and the complex scalar components of the superfields \(\Phi,\bar{\Phi},S,H_{u}\) and \(H_{d}\) are denoted by the same symbol whereas this of \(N_{i}^{c}\) by \(\widetilde{N}_{i}^{c}\). We assume that \(X^{\gamma}\) have identical kinetic terms expressed by the functions \(F_{IX}\) with \(l=1,2\). These functions ensures the stability and the heaviness of these modes [15] employing _exclusively_ quadratic terms. Both \(K\)'s reduce to the same \(K_{0}\) for \(X^{\alpha}=0\) with the aid of the frame function \(\Omega\) defined as
\[K_{0}=-N\ln\left(-\frac{\Omega}{N}\right)\ \ \ \mbox{with}\ \ \frac{\Omega}{N}=-c_{R}(F_{R}+F_{R}^{*})+\frac{|\Phi|^{2}+|\bar{\Phi}|^{2}}{N }\ \ \mbox{and}\ \ \ N=\begin{cases}3&\mbox{for}\ \ \ K=K_{1},\\ 2&\mbox{for}\ \ \ K=K_{2}.\end{cases} \tag{2.4}\]
Henceforth, \(N\) assists us to unify somehow the two \(K\)'s considered in Eqs. (2.3a) and (2.3b).
## 3 SUGRA Version of Induced-Gravity Conjecture
The scale \(M\) and the function \(F_{R}\) involved in Eqs. (2.2b), (2.3a) and (2.3b) assist us in the implementation of the idea of induced gravity. To explain how it works, we introduce our notation in the two relevant frames in Sec. 3.1 and then, in Sec. 3.2, we derive the SUSY vacuum which plays a key role imposing the induced-gravity condition - see Sec. 3.3.
### From Einstein to Jordan Frame
We concentrate on \(W_{\rm HI}\) and extract the part of the _Einstein frame_ (EF) action within SUGRA related to the complex scalars \(z^{\alpha}=S,\Phi,\bar{\Phi}\). This has the form [12]
\[{\sf S}=\int d^{4}x\sqrt{-\widehat{\mathfrak{g}}}\left(-\frac{1}{2}\widehat{R}+K_{\alpha\bar{\beta}}\widehat{g}^{\mu\nu}D_{\mu}z^{\alpha}D_{\nu}z^{*\bar{\beta}}-\widehat{V}_{\rm SUGRA}\right)\,, \tag{3.1}\]
where \(\widehat{R}\) is the EF Ricci scalar curvature, \(D_{\mu}\) is the gauge covariant derivative, \(K_{\alpha\bar{\beta}}=K_{z^{\alpha}z^{*\bar{\beta}}}\), and \(K^{\alpha\bar{\beta}}K_{\bar{\beta}\gamma}=\delta^{\alpha}_{\gamma}\); also \(\widehat{\mathfrak{g}}\) is the determinant of the EF metric \(\widehat{g}_{\mu\nu}\). Moreover, \(\widehat{V}_{\rm SUGRA}\) is the EF SUGRA potential which can be found in terms of \(W_{\rm HI}\) in Eq. (2.2b) and the \(K\)'s in Eqs. (2.3a) - (2.3b) via the formula
\[\widehat{V}_{\rm SUGRA}=\widehat{V}_{\rm F}+\widehat{V}_{\rm D}\ \ \ \mbox{with}\ \ \widehat{V}_{\rm F}=e^{K}\left(K^{\alpha\bar{\beta}}(D_{\alpha}W_{\rm HI})D_{ \bar{\beta}}^{*}W_{\rm HI}^{*}-3|W_{\rm HI}|^{2}\right)\ \ \ \mbox{and}\ \ \ \widehat{V}_{\rm D}=\frac{g_{BL}^{2}}{2}{\rm D}_{BL}^{2}.\] (3.2a) Here the Kahler covariant derivative reads \[D_{\alpha}W_{\rm HI}=W_{\rm HI,z^{\alpha}}+K_{z^{\alpha}}W_{\rm HI}\] whereas the D term due to \[B-L\] symmetry is found to be \[{\rm D}_{BL}=\left(|\Phi|^{2}-|\bar{\Phi}|^{2}\right)/(-\Omega/N).\] (3.2b) As induced by Eqs. ( 2.4 ) and ( 2.3b ), the field configuration \[\langle\Phi\rangle_{\rm I}=\langle\bar{\Phi}\rangle_{\rm I}\ \ \mbox{and}\ \ \ \langle X^{\alpha}\rangle_{\rm I}=0, \tag{3.3}\]
assures \(\langle\widehat{V}_{\rm D}\rangle_{\rm I}=0\), where the symbol \(\langle Q\rangle_{\rm I}\) denotes values of a quantity \(Q\) along the path of Eq. (3.3). Henceforth, we confine ourselves to this path - assuming in addition that \(\arg(\Phi)=\arg(\bar{\Phi})\) - which is an honest inflationary trajectory, supporting IHI driven exclusively by \(\widehat{V}_{\rm F}\).
The performance of a conformal transformation after defining the _Jordan Frame_ (JF) metric as
\[g^{\mu\nu}=-\frac{\Omega}{N}\widehat{g}^{\mu\nu}\ \ \ \mbox{yields via Eq. (3.1)}\ \ \ \ \mbox{${\sf S}$}=\int d^{4}x\sqrt{-\mathfrak{g}}\left(\frac{\Omega}{2N}R-\cdots\right) \tag{3.4}\]
which reveals that \(-\Omega/N\) plays the role of a (dimensionless) non-minimal coupling to gravity - here we use unhatted symbols for the JF quantities and the ellipsis includes terms irrelevant for our discussion. Comparing Eq. (2.4) with the \(K\)'s in Eqs. (2.3a) and (2.3b) we can infer that the emergence of Einstein gravity at the vacuum dictates
\[-\langle\Omega/N\rangle=2(Nc_{R}+1)\langle\Phi\rangle^{2}/N=1, \tag{3.5}\]
where we assume that \(\langle\Phi\rangle\) is included in the inflationary trough of Eq. (3.3). Its value as a function of the model parameters is calculated in the next section.
### SUSY Potential
The implementation of the IHI requires the generation of \(m_{\rm P}\) at the vacuum of the theory. It can be determined expanding \(V_{\rm SUGRA}\) in powers of \(1/m_{\rm P}\). Namely, we obtain the following low-energy effective potential which plays the role of SUSY one
\[V_{\rm SUSY}=\left\langle\widetilde{K}^{\alpha\tilde{\beta}}W_{\rm HI\alpha} W_{\rm HI\beta}^{*}\right\rangle_{\rm I}+\cdots,\] (3.6a) where the ellipsis represents terms proportional to \[W_{\rm HI}\] or \[|W_{\rm HI}|^{2}\] which obviously vanish along the path in Eq. ( 3.3 ). Also, \[\widetilde{K}\] is the limit of the \[K\]'s in Eqs. ( 2.3a ) and ( 2.3b ) for \[m_{\rm P}\to\infty\]. The absence of unity in the arguments of the logarithms multiplied by \[N\] in these \[K\]'s prevents the drastic simplification of \[\widetilde{K}\] - cf. Ref. [10]. As a consequence, the expression of the resulting \[V_{\rm SUSY}\] is rather lengthy. For this reason we confine ourselves below to \[K=K_{2}\] where \[F_{25}\] is placed outside the first logarithm in Eq. ( 2.3a ) and so \[\widetilde{K}\] can be somehow simplified. Namely, we get \[\widetilde{K}=-N\ln\left(-\Omega/N\right)+|S|^{2}\,,\] (3.6b) from which we can then compute \[\left(\langle\widetilde{K}_{\alpha\tilde{\beta}}\rangle_{\rm I}\right)={\rm diag }\left(\widetilde{M}_{\Phi\Phi},1\right)\ \ \ \mbox{with}\ \ \ \widetilde{M}_{\Phi\Phi}=\frac{2}{\langle\Omega\rangle_{\rm I}^{2}}\ \left\{\begin{array}{ll}(4c_{R}-1)|\Phi|^{2}&|2c_{R} \Phi-\Phi^{*}|^{2}\\ |2c_{R}\Phi-\Phi^{*}|^{2}&(4c_{R}-1)|\Phi|^{2}\end{array}\right\}.\] (3.7a) To compute \[V_{\rm SUSY}\] we need to know \[\langle\widetilde{K}^{\alpha\tilde{\beta}}\rangle_{\rm I}={\rm diag}\left( \widetilde{M}_{\Phi\Phi}^{-1},1\right),\ \ \ \mbox{where}\ \ \ \widetilde{M}_{\Phi\Phi}^{-1}=-\frac{\langle\Omega\rangle_{\rm I}^{2}}{2{\rm det }\widetilde{M}_{\Phi\Phi}}\ \left\{\begin{array}{ll}-(4c_{R}-1)|\Phi|^{2}&|2c_{R}\Phi-\Phi^{*}|^{2}\\ |2c_{R}\Phi-\Phi^{*}|^{2}&-(4c_{R}-1)|\Phi|^{2}\end{array}\right\},\] (3.7b) where the prefactor can be explicitly written as \[\frac{\langle\Omega\rangle_{\rm I}^{2}}{{\rm det}\widetilde{M}_{\Phi\Phi}}= \frac{|\Phi|^{2}-c_{R}(\Phi^{2}-\Phi^{*2})}{c_{R}(\Phi^{2}+\Phi^{*2}-4c_{R}| \Phi|^{2})} \tag{3.7c}\]
Upon substitution of Eq. (3.7b) into Eq. (3.6a) we obtain
\[V_{\rm SUSY}\simeq\lambda^{2}\left|\bar{\Phi}\Phi-\frac{1}{4}M^{2}\right|^{2}+\frac{\langle\Omega\rangle_{\rm I}^{2}}{\det\widetilde{M}_{\Phi\bar{\Phi}}}\lambda^{2}|S|^{2}|\Phi|^{2}\left((4c_{R}^{2}-1)|\Phi|^{2}-|\Phi-2c_{R}\Phi^{*}|^{2}\right). \tag{3.8}\]
We remark that the SUSY vacuum lies along the direction in Eq. (3.3) with
\[\langle S\rangle=0\;\;{\rm and}\;\;|\langle\Phi\rangle|=|\langle\bar{\Phi} \rangle|=M/2, \tag{3.9}\]
where \(\langle S\rangle\) may slightly deviate from its value above after the inclusion of soft SUSY breaking effects - see Sec. 5.1. The result in Eq. (3.9) holds also for \(K=K_{1}\), as we can verify after a more tedious computation. From Eq. (3.9) it is clear that \(\langle\Phi\rangle\) and \(\langle\bar{\Phi}\rangle\) spontaneously break \(U(1)_{B-L}\) down to \(\mathbb{Z}_{2}^{B-L}\). Note that \(U(1)_{B-L}\) is already broken during IHI and so no cosmic strings are formed - see Sec. 4.2.
### Induced-Gravity Requirement
Inserting Eq. (3.9) into Eq. (3.5) we deduce that the conventional Einstein gravity can be recovered at the vacuum if
\[M=\sqrt{2N/(Nc_{R}-1)}. \tag{3.10}\]
As we show in Sec. 4.3, the GUT requirement offers the prediction \(c_{R}\sim 10^{4}\). Therefore, the resulting \(M\) has a size comparable to \(m_{\rm P}\) as expected from the establishment of the theory in Sec. 2.1.
## 4 Inflationary Scenario
The salient features of our inflationary scenario are studied at tree level in Sec. 4.1 and at one-loop level in Sec. 4.2. We then present its predictions in Secs. 4.3 and 4.4.
### Inflationary Potential
If we express \(\Phi,\bar{\Phi}\) and \(X^{\gamma}=S,H_{u},H_{d},\widetilde{N}_{i}^{c}\) according to the parametrization
\[\Phi=\phi\,e^{i\theta}\cos\theta_{\Phi}/\sqrt{2},\;\;\bar{\Phi}=\phi\,e^{i\bar{\theta}}\sin\theta_{\Phi}/\sqrt{2}\;\;\;{\rm and}\;\;\;X^{\gamma}=\left(x^{\gamma}+i\bar{x}^{\gamma}\right)/\sqrt{2},\;\;{\rm where}\;\;\;0\leq\theta_{\Phi}\leq\pi/2, \tag{4.1}\]
the D-flat direction in Eq. (3.3) is now expressed as
\[x^{\gamma}=\bar{x}^{\gamma}=\theta=\bar{\theta}=H_{u}=H_{d}=\widetilde{N}_{i} ^{c}=0\;\;{\rm and}\;\;\theta_{\Phi}=\pi/4\,. \tag{4.2}\]
Along this, the only surviving term of \(\widehat{V}_{\rm SUGRA}\) in Eq. (3.2a) - after replacing \(W_{\rm HI}\) with \(W_{\rm HI}+W_{\mu}+W_{\rm RHN}\) - can be written as
\[\widehat{V}_{\rm IHI}=e^{K}K^{SS^{*}}\,|W_{{\rm HI},S}|^{2}=\frac{\lambda^{2}(\phi^{2}-M^{2})^{2}}{16f_{R}^{N}}\cdot\begin{cases}f_{R}&{\rm for}\;\;K=K_{1},\\ 1&{\rm for}\;\;K=K_{2},\end{cases}\;\;\;{\rm where}\;\;\;f_{R}=-\left\langle\frac{\Omega}{N}\right\rangle_{\rm I}=\frac{(Nc_{R}-1)\phi^{2}}{2N} \tag{4.3}\]
- see Eq. (2.4). Clearly \(\widehat{V}_{\rm IHI}\) develops an inflationary plateau as in the original Starobinsky inflationary model [1, 16]. To specify the EF canonically normalized fields, we note that, for the \(K\)'s in Eqs. (2.3a) and (2.3b), \(K_{\alpha\bar{\beta}}\) along the configuration in Eq. (4.2) takes the form
\[\langle K_{\alpha\bar{\beta}}\rangle_{\rm I}={\rm diag}\left(M_{\Phi\bar{\Phi}},\underbrace{K_{\gamma\bar{\gamma}},...,K_{\gamma\bar{\gamma}}}_{8\;\;{\rm elements}}\right)\;\;\;{\rm with}\;\;\;M_{\Phi\bar{\Phi}}=\begin{pmatrix}\kappa&\bar{\kappa}\\ \bar{\kappa}&\kappa\end{pmatrix}\;\;{\rm and}\;\;\;K_{\gamma\bar{\gamma}}=\begin{cases}f_{R}^{-1}&{\rm for}\;\;K=K_{1},\\ 1&{\rm for}\;\;K=K_{2},\end{cases} \tag{4.4}\]
where \(\kappa=(1+Nc_{R})/2f_{R}\) and \(\bar{\kappa}=N/\phi^{2}\). Upon diagonalization of \(M_{\Phi\bar{\Phi}}\) we find its eigenvalues, which are
\[\kappa_{+}=Nc_{R}/f_{R}\,\,\,\,\mbox{and}\,\,\,\kappa_{-}=1/f_{R}. \tag{4.5}\]
Note that the existence of the real terms \(|\Phi|^{2}+|\bar{\Phi}|^{2}\) in Eqs. (2.3a) and (2.3b) is vital for our models, since otherwise the off-diagonal elements of \(M_{\Phi\bar{\Phi}}\) would have been zero, one of the eigenvalues above would have vanished and so no \(M_{\Phi\bar{\Phi}}^{-1}\) could have been defined.
Inserting Eqs. (4.1) and (4.4) into the kinetic part of the action \({\sf S}\) in Eq. (3.1) we can specify the canonically normalized (hatted) fields, as follows
\[\frac{d\widehat{\phi}}{d\phi}=J,\;\;\;\widehat{\theta}_{+}=\frac{J}{\sqrt{2}}\phi\,\theta_{+},\;\;\;\widehat{\theta}_{-}=\sqrt{\frac{\kappa_{-}}{2}}\phi\,\theta_{-},\;\;\;\widehat{\theta}_{\Phi}=\sqrt{\kappa_{-}}\phi\left(\theta_{\Phi}-\frac{\pi}{4}\right)\;\;\;{\rm and}\;\;\;(\widehat{x}^{\gamma},\widehat{\bar{x}}^{\gamma})=\sqrt{K_{\gamma\bar{\gamma}}}\,(x^{\gamma},\bar{x}^{\gamma})\,, \tag{4.6}\]
where \(J=\sqrt{\kappa_{+}}\) and \(\theta_{\pm}=\left(\bar{\theta}\pm\theta\right)/\sqrt{2}\). As we show below, the masses of the scalars besides \(\widehat{\phi}\) during IHI are heavy enough such that the dependence of the hatted fields on \(\phi\) does not influence their dynamics.
### Stability and One-Loop Radiative Corrections
We can verify that the inflationary direction in Eq. (4.2) is stable w.r.t the fluctuations of the non-inflaton fields. To this end, we construct the mass-squared spectrum of the scalars taking into account the canonical normalization of the various fields in Eq. (4.6). In the limit \(c_{R}\gg 1\), we find the expressions of the masses squared \(\widehat{m}_{z^{\alpha}}^{2}\) (with \(z^{\alpha}=\theta_{+},\theta_{\Phi},x^{\gamma}\) and \(\bar{x}^{\gamma}\)) arranged in Table 2. These results approximate rather well, for \(\phi=\phi_{+}\) - see Sec. 4.2 -, the quite lengthy, exact expressions taken into account in our numerical computation. The eigenstates not specified there are defined as follows
\[\widehat{h}_{\pm}=(\widehat{h}_{u}\pm\widehat{h}_{d})/\sqrt{2},\,\,\,\,\, \widehat{\bar{h}}_{\pm}=(\widehat{\bar{h}}_{u}\pm\widehat{\bar{h}}_{d})/\sqrt {2}\,\,\,\,\,\,\mbox{and}\,\,\,\,\,\,\widehat{\psi}_{\pm}=(\widehat{\psi}_{ \Phi_{+}}\pm\widehat{\psi}_{\rm S})/\sqrt{2}, \tag{4.7a}\]
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
Fields & Eigenstates & Masses squared & \(K=K_{1}\) & \(K=K_{2}\) \\ \hline \hline
14 Real & \(\widehat{\theta}_{+}\) & \(\widehat{m}_{\theta+}^{2}\) & \(4\widehat{H}_{\rm IHI}^{2}\) & \(6\widehat{H}_{\rm IHI}^{2}\) \\
Scalars & \(\widehat{\theta}_{\Phi}\) & \(\widehat{m}_{\theta_{\Phi}}^{2}\) & \(M_{BL}^{2}\) & \(M_{BL}^{2}\) \\
 & \(\widehat{s},\widehat{\bar{s}}\) & \(\widehat{m}_{s}^{2}\) & \(\widehat{H}_{\rm IHI}^{2}(c_{R}\phi^{2}-9)\) & \(6\widehat{H}_{\rm IHI}^{2}/N_{X}\) \\
 & \(\widehat{h}_{\pm},\widehat{\bar{h}}_{\pm}\) & \(\widehat{m}_{h\pm}^{2}\) & \(3\widehat{H}_{\rm IHI}^{2}c_{R}\left(\phi^{2}/6\pm 2\lambda_{\mu}/\lambda\right)\) & \(3\widehat{H}_{\rm IHI}^{2}\left(1+1/N_{X}\pm 4\lambda_{\mu}/\lambda\phi^{2}\right)\) \\
 & \(\widehat{\tilde{\nu}}_{i}^{c},\widehat{\bar{\tilde{\nu}}}_{i}^{c}\) & \(\widehat{m}_{i\tilde{\nu}^{c}}^{2}\) & \(3\widehat{H}_{\rm IHI}^{2}c_{R}\left(\phi^{2}/6+8\lambda_{iN^{c}}^{2}/\lambda^{2}\right)\) & \(3\widehat{H}_{\rm IHI}^{2}\left(1+1/N_{X}+16\lambda_{iN^{c}}^{2}/\lambda^{2}\phi^{2}\right)\) \\ \hline
1 Gauge Boson & \(A_{BL}\) & \(M_{BL}^{2}\) & \multicolumn{2}{c|}{\(2Ng^{2}/(Nc_{R}-1)\)} \\ \hline
7 Weyl & \(\widehat{\psi}_{\pm}\) & \(\widehat{m}_{\psi\pm}^{2}\) & \multicolumn{2}{c|}{\(12\widehat{H}_{\rm IHI}^{2}/c_{R}^{2}\phi^{4}\)} \\
Spinors & \(\lambda_{BL},\widehat{\psi}_{\Phi-}\) & \(M_{BL}^{2}\) & \multicolumn{2}{c|}{\(2Ng^{2}/(Nc_{R}-1)\)} \\
 & \(\widehat{N}_{i}^{c}\) & \(\widehat{m}_{iN^{c}}^{2}\) & \multicolumn{2}{c|}{\(48\widehat{H}_{\rm IHI}^{2}c_{R}\lambda_{iN^{c}}^{2}/\lambda^{2}\phi^{2}\)} \\ \hline
\end{tabular}
\end{table}
Table 2: The mass-squared spectrum of our models along the path in Eq. (4.2) for \(\phi\ll 1\) and the \(N\)’s defined in Eq. (2.4).
where the (unhatted) spinors \(\psi_{\Phi}\) and \(\psi_{\bar{\Phi}}\) associated with the superfields \(\Phi\) and \(\bar{\Phi}\) are related to the normalized (hatted) ones in Table 2 as follows
\[\widehat{\psi}_{\Phi\pm}=\sqrt{\kappa_{\pm}}\psi_{\Phi\pm}\ \ \ {\rm with}\ \ \ \psi_{\Phi\pm}=(\psi_{\Phi}\pm\psi_{\bar{\Phi}})/\sqrt{2}\,. \tag{4.7b}\]
From Table 2 it is evident that \(0<N_{X}\leq 6\) assists us to achieve \(\widehat{m}_{s}^{2}>\widehat{H}_{\rm IHI}^{2}=\widehat{V}_{\rm IHI}/3\) - in accordance with the results of Ref. [15] - and also enhances the ratios \(\widehat{m}_{X^{\gamma}}^{2}/\widehat{H}_{\rm IHI}^{2}\) for \(X^{\gamma}=h_{+},\tilde{\nu}_{i}^{c}\) w.r.t the values that we would have obtained, if we had used just canonical terms in the \(K\)'s. On the other hand, \(\widehat{m}_{h-}^{2}>0\) implies
\[\lambda_{\mu}\lesssim\lambda\phi^{2}/4N\ \ \ {\rm for}\ \ \ K=K_{1}\ \ \ {\rm and}\ \ \ \lambda_{\mu}\lesssim\lambda\phi^{2}(1+1/N_{X})/4\ \ \ {\rm for}\ \ \ K=K_{2}\,. \tag{4.8}\]
In both cases, the quantity on the right-hand side of the inequalities takes its minimal value at \(\phi=\phi_{\rm f}\) - see Eq. (4.13) - and is numerically equal to \(2\cdot 10^{-5}-5\cdot 10^{-6}\). In Table 2 we also display the mass \(M_{BL}\) of the gauge boson \(A_{BL}\), which becomes massive having 'eaten' the Goldstone boson \(\theta_{-}\). This signals the fact that \(U(1)_{B-L}\) is broken during IHI and so no cosmological defects are produced. Also, we can verify [12] that radiative corrections à la Coleman-Weinberg can be kept under control, provided that we conveniently select the relevant renormalization mass scale involved.
### SUSY Gauge Coupling Unification
The value of \(M_{BL}\) in Table 2 computed at the vacuum of Eq. (3.9), \(\langle M_{BL}\rangle\), may, in principle, be unconstrained, since \(U(1)_{B-L}\) does not disturb the unification of the MSSM gauge coupling constants. To be more specific, though, we prefer to determine \(M_{BL}\) by requiring that it takes, at the vacuum of Eq. (3.9), the value \(M_{\rm GUT}\) dictated by this unification. Namely, we impose
\[\langle M_{BL}\rangle=M_{\rm GUT}\simeq 2\cdot 10^{16}\,{\rm GeV}/m_{\rm P}=8.22\cdot 10^{-3}\,. \tag{4.9}\]
This simple principle has an important consequence for IHI, since it implies via the findings of Table 2
\[c_{R}=\frac{1}{N}+\frac{2g_{BL}^{2}}{M_{\rm GUT}^{2}}\simeq 1.451\cdot 10^{4}\,, \tag{4.10}\]
leading to \(M\simeq 0.0117\) via Eq. (3.10). Here we take \(g_{BL}\simeq 0.7\) which is the value of the unified coupling constant within MSSM.
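Both numbers are easily verified; note that, since \(Nc_{R}\gg 1\), Eq. (3.10) is essentially insensitive to \(N\):
\[c_{R}\simeq\frac{2\cdot 0.7^{2}}{(8.22\cdot 10^{-3})^{2}}\simeq 1.45\cdot 10^{4}\;\;\;\mbox{and}\;\;\;M=\sqrt{\frac{2N}{Nc_{R}-1}}\simeq\sqrt{\frac{2}{c_{R}}}\simeq 0.0117.\]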
Although \(c_{R}\) above is very large, there is no problem with the validity of the effective theory, in accordance with the results of earlier works [1, 3, 7]. To clarify further this point, we have to identify the ultraviolet cut-off scale \(\Lambda_{\rm UV}\) of the theory by analyzing the small-field behavior of our models. Indeed, expanding about \(\langle\phi\rangle=M\) - see Eq. (3.10) - the second term in the r.h.s of Eq. (3.1) for \(\mu=\nu=0\) and \(\widehat{V}_{\rm IHI}\) in Eq. (4.3), we obtain
\[J^{2}\dot{\phi}^{2}\simeq\left(1-\sqrt{\frac{2}{N}}\,\widehat{\delta\phi}+\frac{3}{2N}\widehat{\delta\phi}^{2}-\sqrt{\frac{2}{N^{3}}}\,\widehat{\delta\phi}^{3}+\cdots\right)\dot{\widehat{\delta\phi}}^{2}\,, \tag{4.11a}\]
where \(\widehat{\delta\phi}\) is the canonically normalized inflaton at the vacuum - see Sec. 6.1 - and
\[\widehat{V}_{\rm IHI}\simeq\frac{\lambda^{2}\widehat{\delta\phi}^{2}}{2Nc_{R}^{2}}\left(1-\frac{2N-1}{\sqrt{2N}}\widehat{\delta\phi}+\frac{8N^{2}-4N+1}{8N}\widehat{\delta\phi}^{2}+\cdots\right). \tag{4.11b}\]
These expressions indicate that \(\Lambda_{\rm UV}=m_{\rm P}\), since \(c_{R}\) does not appear in any of their numerators.
### Inflationary Observables
A period of slow-roll IHI is controlled by the strength of the slow-roll parameters
\[\widehat{\epsilon}=\frac{1}{2}\left(\frac{\widehat{V}_{\rm IHI,\widehat{\phi}}}{\widehat{V}_{\rm IHI}}\right)^{2}\simeq 16\frac{f_{\rm W}^{2}}{Nc_{R}^{4}\phi^{8}}\quad\mbox{and}\quad\widehat{\eta}=\frac{\widehat{V}_{\rm IHI,\widehat{\phi}\widehat{\phi}}}{\widehat{V}_{\rm IHI}}\simeq 8\frac{2-f_{\rm W}}{Nf_{\rm W}^{2}}\ \ \mbox{with}\ \ f_{\rm W}=c_{R}\phi^{2}-2. \tag{4.12}\]
Expanding \(\widehat{\epsilon}\) and \(\widehat{\eta}\) for \(\phi\ll 1\) we can find that IHI terminates for \(\phi=\phi_{\rm f}\) such that
\[\mbox{max}\{\widehat{\epsilon}(\phi_{\rm f}),|\widehat{\eta}(\phi_{\rm f})|\} =1\ \ \Rightarrow\ \ \phi_{\rm f}\simeq\mbox{max}\left(\frac{2}{\sqrt{c_{R}\sqrt{N}}},2\sqrt{ \frac{2}{Nc_{R}}}\right). \tag{4.13}\]
The number of e-foldings, \(\widehat{N}_{\star}\), that the pivot scale \(k_{\star}=0.05/\mbox{Mpc}\) suffers during IHI can be calculated through the relation
\[\widehat{N}_{\star}=\int_{\widehat{\phi}_{\rm f}}^{\widehat{\phi}_{\star}}d\widehat{\phi}\;\frac{\widehat{V}_{\rm IHI}}{\widehat{V}_{\rm IHI,\widehat{\phi}}}\simeq\frac{Nc_{R}}{8}\phi_{\star}^{2}\;\Rightarrow\;\phi_{\star}\simeq 2\left(\frac{2\widehat{N}_{\star}}{Nc_{R}}\right)^{1/2}\simeq\begin{cases}0.11,&K=K_{1},\\ 0.13,&K=K_{2},\end{cases} \tag{4.14}\]
where \(\widehat{\phi}_{\star}\) [\(\phi_{\star}\)] with \(\phi_{\star}\gg\phi_{\rm f}\) is the value of \(\widehat{\phi}\) [\(\phi\)] when \(k_{\star}\) crosses the inflationary horizon. Thanks to large \(c_{R}\) in Eq. (4.10), \(\phi_{\star}\ll 1\) and therefore, our proposal is automatically well stabilized against corrections from higher order terms of the form \((\Phi\bar{\Phi})^{p}\) with \(p>1\) in \(W_{\rm HI}\) - see Eq. (2.2b).
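For orientation - adopting, as a working assumption for this estimate, \(N=2\) for \(K=K_{2}\) and \(\widehat{N}_{\star}\simeq 60\) - Eqs. (4.13) and (4.14) give
\[\phi_{\rm f}\simeq 2\sqrt{\frac{2}{2\cdot 1.45\cdot 10^{4}}}\simeq 0.017\;\;\;\mbox{and}\;\;\;\phi_{\star}\simeq 2\left(\frac{2\cdot 60}{2\cdot 1.45\cdot 10^{4}}\right)^{1/2}\simeq 0.13,\]
so the hierarchy \(\phi_{\rm f}\ll\phi_{\star}\ll 1\) indeed holds.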
The normalization of the amplitude, \(A_{\rm s}\), of the power spectrum of the curvature perturbations generated by \(\phi\) at the pivot scale \(k_{\star}\) allows us to determine \(\lambda\) as follows
\[\sqrt{A_{\rm s}}=\frac{1}{2\sqrt{3}\,\pi}\,\frac{\widehat{V}_{\rm IHI}(\widehat{\phi}_{\star})^{3/2}}{|\widehat{V}_{\rm IHI,\widehat{\phi}}(\widehat{\phi}_{\star})|}=4.58\cdot 10^{-5}\;\Rightarrow\;\lambda=32\pi\sqrt{6NA_{\rm s}}\,c_{R}\frac{\widehat{N}_{\star}}{(4\widehat{N}_{\star}-N)^{2}}\simeq\begin{cases}0.29,&K=K_{1},\\ 0.24,&K=K_{2}.\end{cases} \tag{4.15}\]
The resulting relation reveals that \(\lambda\) is proportional to \(c_{R}\). For these \(\lambda\) values we display \(\widehat{V}_{\rm IHI}\) as a function of \(\phi\) in Fig. 1. We observe that \(\widehat{V}_{\rm IHI}\) is a monotonically increasing function of \(\phi\). The inflationary scale, \(\widehat{V}_{\rm IHI}^{1/4}\), approaches the SUSY GUT scale in Eq. (4.9) and lies well below \(\Lambda_{\rm UV}=1\), consistently with the classical approximation to the inflationary dynamics.
At the pivot scale, we can also calculate the scalar spectral index, \(n_{\rm s}\), its running, \(a_{\rm s}\), and the tensor-to-scalar ratio, \(r\), via the relations
\[n_{\rm s} = 1-6\widehat{\epsilon}_{\star}+2\widehat{\eta}_{\star}\simeq 1-\frac{2}{\widehat{N}_{\star}}=0.963,\ \ \ r=16\widehat{\epsilon}_{\star}\simeq\frac{4N}{\widehat{N}_{\star}^{2}}=0.0032\ [0.0022], \tag{4.16a}\]
\[a_{\rm s} = \frac{2}{3}\left(4\widehat{\eta}_{\star}^{2}-(n_{\rm s}-1)^{2}\right)-2\widehat{\xi}_{\star}\simeq-\frac{2}{\widehat{N}_{\star}^{2}}-\frac{7N}{2\widehat{N}_{\star}^{3}}=-0.0006\ \ \ \mbox{for}\ \ \ K=K_{1}\ \ [K_{2}] \tag{4.16b}\]
with \(\widehat{\xi}=\widehat{V}_{\rm IHI,\widehat{\phi}}\widehat{V}_{\rm IHI,\widehat{\phi}\widehat{\phi}\widehat{\phi}}/\widehat{V}_{\rm IHI}^{2}\), where the variables with subscript \(\star\) are evaluated at \(\phi=\phi_{\star}\). The numerical values are obtained employing \(\widehat{N}_{\star}\simeq(57.5-60)\), which corresponds to a quartic potential; the latter is expected to approximate \(\widehat{V}_{\rm IHI}\) rather well for \(\phi\ll 1\) [12].
The results above turn out to be in nice agreement with the fitting of the _Planck_ (release 4) [16], baryon acoustic oscillations, cosmic microwave background lensing and Bicep2/_Keck Array_ data [18] with the \(\Lambda\)CDM\(+r\) model, i.e.,
\[\mbox{(a)}\ n_{\rm s}=0.965\pm 0.009\ \ \mbox{and}\ \ \mbox{(b)}\ r\leq 0.032, \tag{4.17}\]
at 95% _confidence level_ (c.l.) with \(|a_{\rm s}|\ll 0.01\).
## 5 IHI and \(\mu\) Term of MSSM
A byproduct of our setting is that it assists us to understand the origin of the \(\mu\) term of MSSM, as we show in Sec. 5.1, consistently with the low-energy phenomenology of MSSM - see Sec. 5.2. Hereafter we restore units, i.e., we take \(m_{\rm P}=2.433\cdot 10^{18}\) GeV.
### Generation of the \(\mu\) Term of MSSM
The contributions from the soft SUSY breaking terms, although negligible during IHI - since they are much smaller than \(\phi\sim m_{\rm P}\) - may slightly shift \(\langle S\rangle\) away from zero in Eq. (3.9). Indeed, the relevant potential terms are
\[V_{\rm soft}=\left(\lambda A_{\lambda}S\bar{\Phi}\Phi+\lambda_{\mu}A_{\mu}SH_{ \mu}H_{d}+\lambda_{iN^{c}}A_{iN^{c}}\Phi\widetilde{N_{i}}^{c2}-{\rm a_{S}}S \lambda M^{2}/4+{\rm h.c.}\right)+m_{\gamma}^{2}|X^{\gamma}|^{2}\,, \tag{5.1}\]
where \(m_{\gamma},A_{\lambda},A_{\mu},A_{iN^{c}}\) and \({\rm a_{S}}\) are soft SUSY breaking mass parameters. Rotating \(S\) to the real axis by an appropriate \(R\)-transformation, choosing conveniently the phases of \(A_{\lambda}\) and \({\rm a_{S}}\) so that the total low energy potential \(V_{\rm tot}=V_{\rm SUSY}+V_{\rm soft}\) is minimized - see Eq. (3.8) - and substituting in \(V_{\rm soft}\) the \(\Phi\) and \(\bar{\Phi}\) values from Eq. (3.9), we get
\[\langle V_{\rm tot}(S)\rangle=\lambda^{2}\frac{(Nc_{R}-1)M^{4}S^{2}}{4N^{2}m_{\rm P}^{2}c_{R}}-\lambda M^{2}S\,{\rm a_{3/2}}m_{3/2}\ \ {\rm with}\ \ \ {\rm a_{3/2}}=\left(|A_{\lambda}|+|{\rm a_{S}}|\right)/2m_{3/2}, \tag{5.2a}\]
where \(m_{3/2}\) is the gravitino (\(\widetilde{G}\)) mass and \({\rm a_{3/2}}>0\) is a parameter of order unity which parameterizes our ignorance of the dependence of \(|A_{\lambda}|\) and \(|{\rm a_{S}}|\) on \(m_{3/2}\). We also take into account that \(m_{S}\ll M\). The extremization condition for \(\langle V_{\rm tot}(S)\rangle\) w.r.t \(S\) leads to a non-vanishing \(\langle S\rangle\) as follows
\[d\langle V_{\rm tot}(S)\rangle/dS=0\ \ \ \Rightarrow\ \ \langle S\rangle\simeq Nc_{R}{\rm a_{3/2}}m_{3/2}/\lambda\,, \tag{5.2b}\]
where we employed Eq. (3.10). The extremum above is a global minimum since \(d^{2}\langle V_{\rm tot}(S)\rangle/dS^{2}=2\lambda^{2}m_{\rm P}^{2}/[c_{R}(Nc_{R}-1)]>0\). The \(\mu\) term generated from \(W_{\mu}\) is
\[\mu=\lambda_{\mu}\langle S\rangle\simeq\frac{\lambda_{\mu}}{32\pi}\sqrt{\frac{N}{6A_{\rm s}}}\,\frac{(4\widehat{N}_{\star}-N)^{2}}{\widehat{N}_{\star}}\,{\rm a_{3/2}}m_{3/2}, \tag{5.3}\]
where we made use of Eq. (4.15), which reveals that the resulting \(\mu\) does not depend on \(\lambda\) and \(c_{R}\). Thanks to the presence of \(\sqrt{A_{\rm s}}\sim 10^{-5}\) in the denominator, any \(\mu/m_{3/2}<1\) value is accessible for \(\lambda_{\mu}<10^{-5}\), which is allowed by Eq. (4.8), without causing any ugly hierarchy between \(m_{3/2}\) and \(\mu\). On the other hand, given that \(m_{3/2}\) is currently constrained beyond the TeV region, a mild hierarchy between \(\mu\) and \(m_{3/2}\) assists us to alleviate the little hierarchy problem, ameliorating the naturalness of SUSY models after the LHC Higgs discovery [19].
### Connection with the MSSM Phenomenology
The SUSY breaking effects, considered in Eq. (5.1), explicitly break \(U(1)_{R}\) to a subgroup, \(\mathbb{Z}_{2}^{R}\) which remains unbroken by \(\langle S\rangle\) in Eq. (5.2b) and so no disastrous domain walls are formed. Combining \(\mathbb{Z}_{2}^{R}\) with the \(\mathbb{Z}_{2}^{\rm f}\) fermion parity, under which all fermions change sign, yields the well-known \(R\)-parity. This residual symmetry prevents rapid proton decay and guarantees the stability of the _lightest SUSY particle_ (LSP), providing thereby a well-motivated _cold dark matter_ (CDM) candidate.
The candidacy of the LSP may be successful, if its abundance is consistent with the expectations for it from the \(\Lambda\)CDM model [17] within a concrete low energy framework. We here adopt the _Constrained MSSM_ (CMSSM), which relies on the following free parameters
\[{\rm sign}\mu,\ \ \tan\beta=\langle H_{u}\rangle/\langle H_{d}\rangle,\ \ M_{1/2},\ \ m_{0}\ \ {\rm and}\ \ A_{0}, \tag{5.4}\]
where \({\rm sign}\mu\) is the sign of \(\mu\), and the last three mass parameters denote the common gaugino mass, scalar mass and trilinear coupling constant, respectively, defined (normally) at \(M_{\rm GUT}\). Imposing a number of cosmo-phenomenological constraints - among which the consistency of the LSP relic density with observations plays a central role - the best-fit values of \(|A_{0}|\), \(m_{0}\) and \(|\mu|\) can be determined as in Ref. [20]. Their results are listed in the first four lines of Table 3. We see that there are four allowed regions characterized by the specific mechanism for suppressing the relic density of the LSP, which is the lightest neutralino \(\chi\); here \(\tilde{\tau}_{1},\tilde{t}_{1}\) and \(\tilde{\chi}_{1}^{\pm}\) stand for the lightest stau, stop and chargino eigenstates, whereas \(A\) and \(H\) are the CP-odd and the heavier CP-even Higgs bosons of MSSM, respectively. The proposed regions pass all the currently available LHC bounds [21] on the masses of the various sparticles.
Enforcing the conditions for electroweak symmetry breaking, a value for the parameter \(|\mu|\) can be obtained in each of the regions in Table 3. Taking this \(|\mu|\) value as input we can extract the \(\lambda_{\mu}\) values, if we first derive \({\rm a}_{3/2}\) by setting, e.g.,
\[m_{0}=m_{3/2}\ \ \ {\rm and}\ \ \ |A_{0}|=|A_{\lambda}|=|{\rm a}_{S}|. \tag{5.5}\]
Here we ignore possible renormalization group effects. The outputs of our computation are listed in the two rightmost columns of Table 3 for \(K=K_{1}\) and \(K_{2}\). From these we infer that the required \(\lambda_{\mu}\) values, in all cases except the one written in italics, are comfortably compatible with Eq. (4.8) for \(N_{X}=2\), which implies \(\lambda_{\mu}\lesssim 2\cdot 10^{-5}\). Concluding, the whole inflationary scenario can be successfully combined with all the allowed regions of the CMSSM besides region (II) for \(K=K_{1}\). On the other hand, regions (I) & (IV) are more favored from the point of view of the \(\widetilde{G}\) constraint. Indeed, only for \(m_{3/2}\gtrsim 9\) TeV does the unstable \(\widetilde{G}\) become cosmologically safe for the \(T_{\rm rh}\) values necessitated for satisfactory nTL - see Eqs. (6.11) and (6.12b) in Sec. 6.3 below.
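As an illustrative cross-check of Eq. (5.3) - a rough estimate only, since it assumes \(N=2\) and \(\widehat{N}_{\star}\simeq 60\) for \(K=K_{2}\), values not explicitly fixed above - the entries of region (I) give
\[\mu\simeq\frac{1.184\cdot 10^{-6}}{32\pi}\sqrt{\frac{2}{6A_{\rm s}}}\,\frac{(4\cdot 60-2)^{2}}{60}\times 1.086\times 9.136\;{\rm TeV}\simeq 1.4\;{\rm TeV},\]
with \(A_{\rm s}=(4.58\cdot 10^{-5})^{2}\), in good agreement with the \(|\mu|\) value listed in the first line of Table 3.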
## 6 Non-Thermal Leptogenesis and Neutrino Masses
Below we specify how our inflationary scenario makes a transition to the radiation dominated era (Sec. 6.1) and offers an explanation of the observed BAU (Sec. 6.2), consistently with the \(\widetilde{G}\) constraint and the low energy neutrino data. Our results are summarized in Sec. 6.3.
### Inflaton Mass & Decay
Soon after the end of IHI, the (canonically normalized) inflaton
\[\widehat{\delta\phi}=\left\langle J\right\rangle\delta\phi\;\;\mbox{with}\; \;\delta\phi=\phi-M\;\;\mbox{and}\;\;\left\langle J\right\rangle=\sqrt{Nc_{R}} \tag{6.1}\]
acquires mass given by
\[\widehat{m}_{\delta\phi}=\left\langle\widehat{V}_{\rm IHI,\widehat{\delta\phi}\widehat{\delta\phi}}\right\rangle^{1/2}=\left\langle\widehat{V}_{\rm IHI,\phi\phi}/J^{2}\right\rangle^{1/2}\simeq\frac{\lambda m_{\rm P}}{\sqrt{c_{R}\left(Nc_{R}-1\right)}}\simeq 2.8\cdot 10^{4}\;\mbox{EeV}, \tag{6.2}\]
where \(1\;\mbox{EeV}=10^{9}\;\mbox{GeV}\). This value is equal to that encountered in other models of induced-gravity inflation [1, 3] and larger than those obtained in several versions of non-minimal [10] or pole-induced [11] Higgs inflation. Also, \(\widehat{\delta\phi}\) settles into a phase of damped oscillations around the minimum in Eq. (3.9), reheating the universe at a temperature [12]
\[T_{\rm rh}=\left(72/5\pi^{2}g_{*}\right)^{1/4}\left(\widehat{\Gamma}_{\delta\phi}m_{\rm P}\right)^{1/2}\;\;\mbox{with}\;\;\widehat{\Gamma}_{\delta\phi}=\widehat{\Gamma}_{\delta\phi\to N^{c}_{1}N^{c}_{1}}+\widehat{\Gamma}_{\delta\phi\to H_{u}H_{d}}+\widehat{\Gamma}_{\delta\phi\to XYZ}\,. \tag{6.3}\]
Also \(g_{*}=228.75\) counts the MSSM effective number of relativistic degrees of freedom and we take into account the following decay widths
\[\widehat{\Gamma}_{\delta\phi\to N^{c}_{i}N^{c}_{i}} = \frac{g_{iN^{c}}^{2}}{16\pi}\widehat{m}_{\delta\phi}\left(1-\frac{4M_{iN^{c}}^{2}}{\widehat{m}_{\delta\phi}^{2}}\right)^{3/2}\;\;\mbox{with}\;\;g_{iN^{c}}=(N-1)\frac{\lambda_{iN^{c}}}{\left\langle J\right\rangle}, \tag{6.4a}\]
\[\widehat{\Gamma}_{\delta\phi\to H_{u}H_{d}} = \frac{2}{8\pi}g_{H}^{2}\widehat{m}_{\delta\phi}\;\;\mbox{with}\;\;g_{H}=\frac{\lambda_{\mu}}{\sqrt{2}}, \tag{6.4b}\]
\[\widehat{\Gamma}_{\delta\phi\to XYZ} = \frac{14g_{y}^{2}}{512\pi^{3}}\frac{\widehat{m}_{\delta\phi}^{3}}{m_{\rm P}^{2}}\;\;\mbox{with}\;\;g_{y}=y_{3}\left(\frac{Nc_{R}-1}{2c_{R}}\right)^{1/2} \tag{6.4c}\]
\begin{table}
\begin{tabular}{|l|c|c|c|c||c|c|} \hline
CMSSM Region & \(|A_{0}|\) (TeV) & \(m_{0}\) (TeV) & \(|\mu|\) (TeV) & \({\rm a}_{3/2}\) & \multicolumn{2}{c|}{\(\lambda_{\mu}\;(10^{-6})\)} \\
 & & & & & \(K=K_{1}\) & \(K=K_{2}\) \\ \hline \hline
**(I)** \(A/H\) Funnel & 9.9244 & 9.136 & 1.409 & 1.086 & 0.963 & 1.184 \\
**(II)** \(\tilde{\tau}_{1}-\chi\) Coannihilation & 1.2271 & 1.476 & 2.62 & 0.831 & _14.48_ & 17.81 \\
**(III)** \(\tilde{t}_{1}-\chi\) Coannihilation & 9.965 & 4.269 & 4.073 & 2.33 & 2.91 & 3.41 \\
**(IV)** \(\tilde{\chi}_{1}^{\pm}-\chi\) Coannihilation & 9.2061 & 9.000 & 0.983 & 1.023 & 0.723 & 0.89 \\ \hline
\end{tabular}
\end{table}
Table 3: The \(\lambda_{\mu}\) values required to render our models compatible with the best-fit points in the CMSSM, as found in Ref. [20], for the assumptions of Eq. (5.5), \(N_{X}=2\), and \(K=K_{1}\) or \(K=K_{2}\).
and \(y_{3}=h_{t,b,\tau}(\widehat{m}_{\delta\phi})\simeq 0.5\). Here \(h_{t},h_{b}\) and \(h_{\tau}\) are the Yukawa coupling constants \(h_{3U}\), \(h_{3D}\) and \(h_{3E}\) in Eq. (2.2a), respectively - we assume that diagonalization has been performed in the generation space. They arise from the Lagrangian terms
\[\mathcal{L}_{\widehat{\delta\phi}\to N_{i}^{c}N_{i}^{c}} = -\frac{1}{2}e^{K/2m_{\rm P}^{2}}W_{{\rm RHN},N_{i}^{c}N_{i}^{c}}N_{i}^{c}N_{i}^{c}\,+\,{\rm h.c.}=g_{iN^{c}}\widehat{\delta\phi}\,\left(N_{i}^{c}N_{i}^{c}\,+\,{\rm h.c.}\right)+\cdots, \tag{6.5a}\]
\[\mathcal{L}_{\widehat{\delta\phi}\to H_{u}H_{d}} = -e^{K/m_{\rm P}^{2}}K^{SS^{*}}\left|W_{\mu,S}\right|^{2}=-g_{H}\widehat{m}_{\delta\phi}\widehat{\delta\phi}\,\left(H_{u}^{*}H_{d}^{*}\,+\,{\rm h.c.}\right)+\cdots, \tag{6.5b}\]
\[\mathcal{L}_{\widehat{\delta\phi}\to XYZ} = -g_{y}(\widehat{\delta\phi}/m_{\rm P})\,\left(X\psi_{Y}\,\psi_{Z}+Y\,\psi_{X}\,\psi_{Z}+Z\psi_{X}\psi_{Y}\right)+{\rm h.c.}, \tag{6.5c}\]
describing the \(\widehat{\delta\phi}\) decay into a pair of \(N_{i}^{c}\)'s with masses \(M_{iN^{c}}=\lambda_{iN^{c}}M\), into \(H_{u}\) and \(H_{d}\), and into three MSSM (s)particles \(X,Y,Z\), respectively.
### Lepton-Number and Gravitino Abundances
For \(T_{\rm rh}<M_{iN^{c}}\), the out-of-equilibrium decay of \(N_{i}^{c}\) generates a lepton-number asymmetry (per \(N_{i}^{c}\) decay), \(\epsilon_{i}\). The resulting lepton-number asymmetry is partially converted through sphaleron effects into a yield of the observed BAU
\[Y_{B}=-0.35\cdot\frac{5}{2}\frac{T_{\rm rh}}{\widehat{m}_{\delta\phi}}\sum_{i }\frac{\widehat{\Gamma}_{\delta\phi\to N_{i}^{c}N_{i}^{c}}}{\widehat{\Gamma}_{ \delta\phi}}\epsilon_{i}\,\,\,\,\,\mbox{with}\,\,\,\,\,\epsilon_{i}=\sum_{j \neq i}\frac{{\rm Im}\left[(m_{\rm D}^{\dagger}m_{\rm D})_{ij}^{2}\right]}{8 \pi\langle H_{u}\rangle^{2}(m_{\rm D}^{\dagger}m_{\rm D})_{ii}}\bigg{(}F_{ \rm S}\,(x_{ij},y_{i},y_{j})+F_{\rm V}(x_{ij})\bigg{)}. \tag{6.6}\]
Here \(\langle H_{u}\rangle\simeq 174\) GeV for large \(\tan\beta\), \(F_{\rm S}\) [\(F_{\rm V}\)] are the functions entering the vertex [self-energy] contributions computed as indicated in Ref. [22] and \(m_{\rm D}\) is the Dirac mass matrix of the neutrinos, \(\nu_{i}\), arising from the fourth term in Eq. (2.2a). Employing the seesaw formula we can then obtain the light-neutrino masses \(m_{i\nu}\) in terms of \(m_{i{\rm D}}\) and \(M_{iN^{c}}\) given by Eq. (2.2d). As a consequence, nTL can be nicely linked to low energy neutrino data. We take into account the recently updated best-fit values [23] of these data, listed in Table 4. Furthermore, the sum of the \(m_{i\nu}\)'s is bounded from above at 95% c.l. by the data [17, 23]
\[\sum_{i}m_{i\nu}\leq 0.23\,\,\mbox{eV}\,\,\,\,\,\mbox{for NO}\,\,m_{i\nu}\mbox{'s}\,\,\,\, \mbox{or}\,\,\,\,\,\sum_{i}m_{i\nu}\leq 0.15\,\,\,\mbox{eV}\,\,\,\,\,\mbox{for IO}\,\,m_{i\nu} \mbox{'s}, \tag{6.7}\]
where NO [IO] stands for _normally [invertedly] ordered_ neutrino masses \(m_{i\nu}\)'s.
The validity of Eq. (6.6) requires that the \(\widehat{\delta\phi}\) decay into a pair of \(N_{i}^{c}\)'s is kinematically allowed for at least one species of the \(N_{i}^{c}\)'s and also that there is no erasure of the produced \(Y_{L}\) due to \(N_{1}^{c}\) mediated inverse decays and \(\Delta L=1\) scatterings. These prerequisites are ensured if we impose
\[({\rm a})\,\,\widehat{m}_{\delta\phi}\geq 2M_{1N^{c}}\,\,\,\mbox{and}\,\,\, \,({\rm b})\,\,M_{1N^{c}}\gtrsim 10T_{\rm rh}. \tag{6.8}\]
Finally, Eq. (6.6) has to reproduce the observational result [17]
\[Y_{B}=(8.697\pm 0.054)\cdot 10^{-11}. \tag{6.9}\]
The required \(T_{\rm rh}\) in Eq. (6.6) must be compatible with constraints on the \(\widetilde{G}\) abundance, \(Y_{3/2}\), at the onset of _Big Bang nucleosynthesis_ (BBN), which is estimated to be
\[Y_{3/2}\simeq 1.9\cdot 10^{-22}\,\,T_{\rm rh}/\mbox{GeV}, \tag{6.10}\]
where we take into account only thermal production of \(\widetilde{G}\), and assume that \(\widetilde{G}\) is much heavier than the MSSM gauginos. On the other hand, \(Y_{3/2}\) is bounded from above in order to avoid spoiling the success of the BBN. For the typical case where \(\widetilde{G}\) decays with a tiny hadronic branching ratio, we have
\[Y_{3/2}\lesssim\begin{cases}10^{-13}\\ 10^{-12}\end{cases}\;\;\text{for}\;\;m_{3/2}\simeq\begin{cases}10.6\text{ TeV}\\ 13.5\text{ TeV}\end{cases}\;\;\text{implying}\;\;T_{\text{rh}}\lesssim 0.53 \cdot\begin{cases}1\text{ EeV}\,,\\ 10\text{ EeV}\,.\end{cases} \tag{6.11}\]
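As a quick consistency check, combining Eq. (6.10) with the first bound above indeed reproduces the quoted \(T_{\rm rh}\) limit:
\[T_{\rm rh}\lesssim\frac{10^{-13}}{1.9\cdot 10^{-22}}\;{\rm GeV}\simeq 5.3\cdot 10^{8}\;{\rm GeV}=0.53\;{\rm EeV}.\]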
The bounds above can be somewhat relaxed in the case of a stable \(\widetilde{G}\).
### Results
Confronting \(Y_{B}\) and \(Y_{3/2}\) - see Eqs. (6.6) and (6.10) - with observations, we can constrain the parameters of the neutrino sector. This is because \(Y_{B}\) and \(Y_{3/2}\) depend on \(\widehat{m}_{\delta\phi}\), \(T_{\rm rh}\), \(M_{iN^{c}}\) and \(m_{i{\rm D}}\), and can thus interconnect IHI with neutrino physics. We follow the bottom-up approach detailed in Ref. [12], according to which we find the \(M_{iN^{c}}\)'s by using as inputs the \(m_{i{\rm D}}\)'s, a reference mass of the \(\nu_{i}\)'s - \(m_{1\nu}\) for NO \(m_{i\nu}\)'s, or \(m_{3\nu}\) for IO \(m_{i\nu}\)'s -, the two Majorana phases \(\varphi_{1}\) and \(\varphi_{2}\) of the PMNS matrix, and the best-fit values for the low energy parameters of neutrino physics shown in Table 4.
The outcome of our computation is presented in Fig. 2, where we depict the allowed values of \(m_{\rm 2D}\) versus \(m_{\rm 1D}\) for \(K=K_{2}\) with \(N_{X}=2\), \(\lambda_{\mu}=10^{-6}\) and the remaining parameters shown in the Table of Fig. 2. The conventions adopted for the various lines are depicted in the plot label. In particular, we use solid, dashed and dot-dashed lines when the remaining inputs - i.e. \(m_{i\nu}\), \(m_{\rm 3D}\), \(\varphi_{1}\), and \(\varphi_{2}\) - correspond to the cases A, B and C of the Table of Fig. 2, respectively. We consider NO (cases A and B) and IO (case C) \(m_{i\nu}\)'s. In all cases, the current limit in Eq. (6.7) is safely met. The gauge symmetry considered here does not predict any particular Yukawa unification pattern and so the \(m_{i{\rm D}}\)'s are free parameters. This fact offers us a convenient flexibility for the fulfillment of all the imposed requirements. Care is also taken so that the perturbativity of \(\lambda_{iN^{c}}\) holds, i.e., \(\lambda_{iN^{c}}^{2}/4\pi\leq 1\). The inflaton \(\widehat{\delta\phi}\) decays mostly into \(N_{1}^{c}\)'s. In all cases \(\widehat{\Gamma}_{\delta\phi\to N_{1}^{c}N_{1}^{c}}<\widehat{\Gamma}_{\delta\phi\to H_{u}H_{d}}<\widehat{\Gamma}_{\delta\phi\to XYZ}\) and so the ratios \(\widehat{\Gamma}_{\delta\phi\to N_{1}^{c}N_{1}^{c}}/\widehat{\Gamma}_{\delta\phi}\) in Eq. (6.6) introduce a considerable reduction in the derivation of \(Y_{B}\). For the cases considered in Fig. 2 we obtain:
\[0.01\lesssim M_{1N^{c}}/10^{3}\;{\rm EeV}\lesssim 6.4,\;\;2\lesssim M_{2N^{c}}/10^{3}\;{\rm EeV}\lesssim 447\;\;{\rm and}\;\;0.1\lesssim M_{3N^{c}}/10^{6}\;{\rm EeV}\lesssim 9.5. \tag{6.12a}\]
\begin{table}
\begin{tabular}{|c|c|c|} \hline
Parameter & \multicolumn{2}{c|}{Best Fit \(\pm 1\sigma\)} \\ \cline{2-3}
 & Normal Hierarchy & Inverted Hierarchy \\ \hline \hline
\(\Delta m_{21}^{2}/10^{-5}\,{\rm eV}^{2}\) & \multicolumn{2}{c|}{\(7.5^{+0.22}_{-0.20}\)} \\
\(\Delta m_{31}^{2}/10^{-3}\,{\rm eV}^{2}\) & \(2.55^{+0.02}_{-0.03}\) & \(2.45^{+0.02}_{-0.03}\) \\ \hline
\(\sin^{2}\theta_{12}/0.1\) & \multicolumn{2}{c|}{\(3.18\pm 0.16\)} \\
\(\sin^{2}\theta_{13}/0.01\) & \(2.2^{+0.069}_{-0.062}\) & \(2.225^{+0.064}_{-0.070}\) \\
\(\sin^{2}\theta_{23}/0.1\) & \(5.74\pm 0.14\) & \(5.78^{+0.10}_{-0.17}\) \\ \hline
\(\delta/\pi\) & \(1.08^{+0.13}_{-0.12}\) & \(1.58^{+0.15}_{-0.16}\) \\ \hline
\end{tabular}
\end{table}
Table 4: Low energy experimental neutrino data for normal or inverted hierarchical neutrino masses.
As regards the other quantities, in all cases we obtain
\[1.4\lesssim Y_{3/2}/10^{-13}\lesssim 1.7\;\;{\rm with}\;\;0.75\lesssim T_{\rm rh}/{\rm EeV}\lesssim 0.9\,. \tag{6.12b}\]
The bottom line is that nTL is a realistic possibility within our setting, provided that \(m_{3/2}\sim 10\) TeV, as deduced from Eqs. (6.11) and (6.12b). As advertised in Sec. 5.2, these values are in nice agreement with the ones needed for the solution of the \(\mu\) problem within the CMSSM in regions (I) and (IV) of Table 3.
## 7 Conclusions
We investigated the realization of IHI in the framework of a \(B-L\) extension of MSSM endowed with the condition that the GUT scale is determined by the renormalization-group running of the three gauge coupling constants. Our setup is tied to the super- and Kähler potentials given in Eqs. (2.1) and (2.3a) - (2.3b). Our models exhibit the following features:
* they predict the correct \(n_{\text{s}}\) and low \(r\) thanks to the induced-gravity and the GUT requirements;
* they ensure the validity of the effective theory up to \(m_{\text{P}}\);
* they inflate away cosmological defects;
* they offer a nice solution to the \(\mu\) problem of MSSM, provided that \(\lambda_{\mu}\) is rather small;
* we obtain \(M_{iN^{c}}\) in the range \((10^{10}-10^{15})\) GeV.
It remains to introduce a consistent soft SUSY breaking sector - see, e.g., Ref. [24] - to obtain a self-contained theory - cf. Refs. [25, 26]. Moreover, since our main aim here is the demonstration of the mechanism of IHI in SUGRA, we opted to utilize the simplest GUT embedding. Extensions to more structured GUTs are also possible - see, e.g., Refs. [9, 13] - with similar inflationary observables.
Figure 2: Contours yielding the central \(Y_{B}\) value in Eq. (6.9), consistently with the inflationary requirements, in the \(m_{1{\rm D}}-m_{2{\rm D}}\) plane. We take \(K=K_{2}\) with \(N_{X}=2\), \(\lambda_{\mu}=10^{-6}\) and the values of \(m_{i\nu}\), \(m_{3{\rm D}}\), \(\varphi_{1}\) and \(\varphi_{2}\) which correspond to the cases A (solid line), B (dashed line) and C (dot-dashed line).
Acknowledgments. I would like to thank H. Baer and S. Ketov for interesting discussions. This research work was supported by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the "First Call for H.F.R.I. Research Projects to support Faculty members and Researchers and the procurement of high-cost research equipment grant" (Project Number: 2251).
|
2309.07199 | Kinematics reconstruction in solenoidal spectrometers operated in active
target mode | We discuss the reconstruction of low-energy nuclear reaction kinematics from
charged-particle tracks in solenoidal spectrometers working in Active Target
Time Projection Chamber mode. In this operation mode, reaction products are
tracked within the active gas medium of the Active Target with a three
dimensional space point cloud. We have inferred the reaction kinematics from
the point cloud using an algorithm based on a linear quadratic estimator
(Kalman filter). The performance of this algorithm has been evaluated using
experimental data from nuclear reactions measured with the Active Target Time
Projection Chamber (AT-TPC) detector. | Yassid Ayyad, Adam K. Anthony, Daniel Bazin, Jie Chen, Wolfgang Mittig, Ben P. Kay, David K. Sharp, Juan Carlos Zamora | 2023-09-13T15:17:38Z | http://arxiv.org/abs/2309.07199v1 | # Kinematics reconstruction in solenoidal spectrometers operated in active target mode
###### Abstract
We discuss the reconstruction of low-energy nuclear reaction kinematics from charged-particle tracks in solenoidal spectrometers working in Active Target Time Projection Chamber mode. In this operation mode, reaction products are tracked within the active gas medium of the Active Target with a three dimensional space point cloud. We have inferred the reaction kinematics from the point cloud using an algorithm based on a linear quadratic estimator (Kalman filter). The performance of this algorithm has been evaluated using experimental data from nuclear reactions measured with the Active Target Time Projection Chamber (AT-TPC) detector.
## 1 Introduction
The recent advances in radioactive (or exotic) isotope production have laid the foundations to redefine the goals of modern low-energy nuclear physics by providing access to astonishing properties of isospin-imbalanced nuclear matter [1]. Arguably, some of the most prominent features strongly affected by the dramatic reorganization of nuclear matter at the limits of stability include: the evolution of the shell structure, collective phenomena involving oscillations, rotations and vibrations, the coexistence of nuclear shapes at close energies, and molecule-like clustering, among many others. From
an experimental standpoint, such phenomena can be accessed via controlled nuclear reactions in inverse kinematics, where the heavy radioactive nucleus is accelerated and impinges on a light target [2]. The selection of a suitable reaction mechanism as a probe depends very much on the phenomena to be investigated. One of the most powerful tools, for the cases explored here, are direct reactions, such as inelastic scattering or nucleon transfer reactions [3; 4]. These peripheral reactions can be used to obtain single-particle or collective properties of the nucleus using relatively simple observables. Conventional direct-reaction experiments usually require minimum intensities of the order of \(10^{4-5}\) particles per second (pps) with bombarding energies spanning from 10 up to \(100A\) MeV. Such requirements highly constrain the number of available isotopes that can be studied via direct reactions in present facilities. Nonetheless, we are entering an era of next-generation radioactive beam facilities capable of producing sufficiently intense beams of exotic nuclei close to and beyond the drip lines. Some examples of these laboratories are the Facility for Rare Isotope Beams (FRIB) at Michigan State University (USA) [5], the Facility for Antiproton and Ion Research (FAIR) at GSI (Germany) [6], HIE-ISOLDE at CERN (Switzerland) [7], the Advanced Rare Isotope Laboratory (ARIEL) of TRIUMF (Canada) [8], and the future Rare isotope Accelerator complex for ON-line experiments (RAON, Korea) [9].
The development of state-of-the-art instrumentation has progressed in parallel with the advancement of radioactive beam production. Pioneering direct-reaction experiments in inverse kinematics (transfer in particular) were performed using a telescope of solid state (namely silicon) detectors [10; 11] and a thin composite target (e.g. CH\({}_{2}\)). From the energy and angle of the light ejectile measured in the silicon detectors, the kinematics of the reaction can be inferred with modest resolution (a few hundred keV). This technique has been demonstrated in a plethora of successful experiments [3], but it has an important limitation: the energy is compressed due to the center-of-mass motion. As a result, the resolving power of these measurements is limited. This effect can be removed if the detection setup is placed inside a solenoid magnet. Such experimental devices are known as solenoidal spectrometers. The kinematic lines corresponding to the excited states of interest can be inferred by using the linear relation between the returning position of the spiraling particles along the solenoid axis and their energy measured in an array of silicon detectors deployed along the axis. The deleterious effects
of kinematic compression are avoided in this approach, demonstrated for the first time using the HELIOS solenoidal spectrometer of the Argonne National Lab (ANL) [12; 13]. The remarkable performance of this device has resulted in many outstanding transfer experiments with high-impact results [14; 15]. Its success has also spurred the development of similar devices in different facilities, such as the Isolde Solenoidal Spectrometer (ISS) at CERN [16] and SOLARIS at FRIB [17].
The outstanding resolution achieved by solenoidal spectrometers such as HELIOS depends strongly on the thickness of the target, which poses a limitation in terms of beam intensity. A typical thickness of a few hundred \(\mu\)g/cm\({}^{2}\) requires intensities of the order of \(10^{4-5}\) pps for single-nucleon transfer reactions. Higher intensities are needed for two-nucleon transfer reactions, as the cross sections are lower. Moreover, targets have contaminants (usually carbon) and are easily damaged under high-power irradiation. The acceptance of solenoidal spectrometers is also limited by the target thickness since particles with low energy are not able to escape from it. Naturally, the acceptance is also limited by the number of detectors and their geometry. In order to overcome these limitations, a different detection scheme can be adopted. For instance, the silicon array and target can be replaced by an Active Target Time Projection Chamber (TPC). In this case, a gas such as hydrogen, deuterium or helium is used as a target and detector simultaneously. The gas is enclosed in a large volume with an intense electric field applied across. As charged particles ionize the gas, the ionization electrons are drifted to a highly segmented pad plane where they are detected. This is used to reconstruct the charged particles' trajectories inside the volume [18]. This scheme brings many advantages over conventional setups: luminosity is increased by orders of magnitude, detection thresholds are lowered down to about 100 keV and the detection efficiency is increased close to \(4\pi\)[19]. The magnetic field adds a robust observable, the magnetic rigidity, inferred from the track curvature and the magnetic field. The measurement of the energy loss together with the magnetic rigidity enables the identification of the particles. Moreover, the use of the magnetic field vastly extends the dynamic range of the detector. Because of such compelling capabilities, TPCs are gaining much interest for the study of radioactive nuclei far from stability [20]. At present, only two TPCs operate inside a solenoidal field: the Active Target Time Projection Chamber (AT-TPC) of the Facility for Rare Isotope Beams (FRIB) at Michigan State University (MSU) [21] and the
SpecMAT detector at ISOLDE [22]. The first experiments with low-intensity radioactive beams using the AT-TPC have already demonstrated the advantages of this technique for direct reactions (scope of this work), resonance proton scattering [23] and reactions of astrophysical interest [24].
Arguably, the most challenging aspect regarding the extraction of physical observables from TPC data is the reconstruction of kinematics from particle tracks. In general, this is a well-established procedure in High Energy Physics (HEP) experiments, where very efficient techniques have been developed over the course of many years [25]. However, the detector-plus-target scheme of Active Target TPCs leaves open challenges to address. Low-energy (in the range of 100 keV to several tens of MeV) reaction products slow down inside the TPC gas, tracing non-helical trajectories without an analytical description, which complicates the extraction of relevant features from the three dimensional point cloud that represents the event recorded by the TPC [18]. In particular, the challenge lies in deducing the kinematic properties of the lowest energy particles that stop inside the volume and in inferring the reaction vertex and the track multiplicity. The latter has been successfully addressed using a non-parametric approach based on point triplets [26]. However, an efficient and reliable method for determining the kinematics of particle tracks in solenoidal spectrometers working in Active Target mode is still required.
In this work, we describe the application of a linear quadratic estimator known as the Kalman filter [27] to the fitting of particle tracks [28] in Active Target TPCs. In particular, we benchmark the reconstruction of kinematics using data from two experiments performed with the AT-TPC: the \({}^{10}\mathrm{Be}+d\) reaction at around \(9A\) MeV at ReA6 (FRIB) and the \({}^{14}\mathrm{C}+p\) reaction at \(12A\) MeV measured at ATLAS at Argonne National Laboratory using the AT-TPC inside the HELIOS magnet. Both experiments were performed with a low beam intensity of about 2000 pps for a total running time of 2 and 1 days for \({}^{10}\)Be and \({}^{14}\)C, respectively. Performing experiments with such a low intensity in a very short amount of time demonstrates how powerful Active Targets are for direct reactions in inverse kinematics with exotic beams. Comprehensive descriptions of the detector and the filtering method are presented in the first part of this manuscript. The performance of the track filtering is then evaluated using different reaction channels measured in these experiments.
## 2 Active target solenoidal spectrometers: the Active Target Time Projection Chamber (AT-TPC)
The AT-TPC is a time projection chamber that simultaneously works as the reaction target. The observables inferred in the AT-TPC are the three dimensional tracks of the charged particles from a nuclear reaction that takes place between a radioactive beam and the gas used as the tracking medium. These tracks are a collection of points (hit pattern) deduced from the drift time of the ionization electrons and their two dimensional projection on the highly segmented pad plane of the detector. In particular, the AT-TPC features a cylindrical gas volume of 1 m length and 50 cm in diameter. The pad plane is segmented into 10,240 triangular pads. A more detailed description of the detector and its mode of operation can be found in Refs. [21; 18; 29; 30].
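As a rough illustration of how a single pad trace becomes a three dimensional space point (a minimal sketch, not the actual analysis code; the drift velocity, sampling rate and geometry constants below are placeholder assumptions):

```python
import numpy as np

# Assumed constants (placeholders, not the experiment's calibrated values)
DRIFT_VELOCITY_CM_US = 1.0   # electron drift velocity in cm/us
SAMPLING_RATE_MHZ = 6.25     # electronics sampling frequency
DETECTOR_LENGTH_CM = 100.0   # AT-TPC active length

def hit_from_trace(pad_x_cm, pad_y_cm, trace):
    """Build a 3D hit (x, y, z, charge) from one pad trace.

    The z coordinate follows from the time bin of the trace maximum
    and the drift velocity; the charge from the trace amplitude.
    """
    peak_bin = int(np.argmax(trace))
    drift_time_us = peak_bin / SAMPLING_RATE_MHZ
    z_cm = DETECTOR_LENGTH_CM - DRIFT_VELOCITY_CM_US * drift_time_us
    charge = float(trace[peak_bin])
    return np.array([pad_x_cm, pad_y_cm, z_cm, charge])

# Example: a synthetic 512-sample trace peaking at bin 200
trace = np.exp(-0.5 * ((np.arange(512) - 200) / 5.0) ** 2)
print(hit_from_trace(1.2, -3.4, trace))
```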
The AT-TPC is placed inside the 4-T solenoid of SOLARIS for the measurement of the magnetic rigidity of the particles. Typical magnetic fields of 2-4 T are used. Fig. 1 shows an example of the hit pattern of an event recorded with the AT-TPC: A \({}^{10}\)Be beam of 10\(A\) MeV (around 1000 pps of intensity) is injected along the AT-TPC beam axis, which is coincident with the solenoidal field axis. The detector is filled with pure D\({}_{2}\) gas at 600 torr. In the figure, the beam reacts with a deuteron which is scattered forward following a helical trajectory with ever decreasing radius. The AT-TPC offers excellent luminosity and angular coverage due to the large geometrical acceptance and the large dynamic energy range provided by the Multi-layer THick Gas Electron Multiplier (MTHGEM) [31]. The MTHGEM also enables the use of pure elemental gases as the target medium, making the AT-TPC the thickest pure target for direct reactions with around 10 mg/cm\({}^{2}\).
The AT-TPC and its smaller version, the prototype AT-TPC (pAT-TPC) [32], have been utilized in many successful experiments with radioactive beams to investigate the structure of exotic nuclei via resonance elastic scattering [21], reactions of astrophysical interest [24] and particle emission following \(\beta\)-decay [33]. Its scientific program encompasses diverse topics such as nucleus clustering, shell evolution, shape coexistence, giant resonances, and exotic decay modes. The extraction of relevant physics from these experiments requires a robust and efficient analysis method for curved trajectories.
Figure 1: Hit pattern of a \({}^{10}\mathrm{Be}+d\) scattering event at 9\(A\) MeV in a 3 T magnetic field in SOLARIS. Points are colored by the amplitude of the signal observed in the corresponding pad.
## 3 Low-energy particle track reconstruction with a Kalman Filter
The Kalman filter is a recursive algorithm that provides the best estimate for a set of discrete experimental measurements affected by noise [27]. The set of measurements represents the time-dependent state of a system or process, with the best estimate provided by minimizing the mean squared measurement error. While the filter itself has many versions and applications, in this work we will just provide a brief outline of the mathematical formalism of the extended or nonlinear Kalman filter. More complete information on the application of Kalman filters in particle physics can be found in Refs. [28; 34; 35; 36].
The idea behind Kalman filtering for particle tracks is rather powerful and simple. It requires a robust physics model, including experimental noise. Following the notation of Refs. [28; 37], a set of measurements \(m_{k}\) is used to find an optimum estimate \(x_{k}\) of the true state vector \(\hat{x}_{k}\). The evolution of the system can be described by the propagation of the track state from measurement \(k-1\) to state \(k\), using the information from previous measurements up to \(k-1\):
\[x_{k}=f_{k-1}(x_{k-1})+\omega_{k-1} \tag{1}\]
where \(\omega_{k-1}\) is the random (Gaussian) process noise due to the track propagation from \(k\) to \(k-1\), and \(f_{k-1}\) is the track propagator described by the motion of the particle. In the traditional formulation, the energy loss of the particles is included as part of this process noise. This is not ideal in our case since the energy loss of heavy ions is much larger than that of minimum ionizing particles (MIPs) which are typically measured in high-energy experiments. The proper way to account for the energy loss of the ions is by including it in the equation of motion, as we will discuss later. Since the state vector is not accessible directly, it is projected onto the \(m_{k}\) space using a linear transformation function \(h_{k}\) taking into account the Gaussian measurement noise \(\epsilon_{k}\):
\[m_{k}=h_{k}(x_{k})+\epsilon_{k} \tag{2}\]
In spite of \(f\) and \(h\) being non-linear functions, the evolution of the system's state vector is described by the associated linear transformations:
\[x_{k}=F_{k-1}(x_{k-1})+\omega_{k-1} \tag{3}\]
\[m_{k}=H_{k}(x_{k})+\epsilon_{k} \tag{4}\]
where \(F\) and \(H\) are first order Taylor expansions of their respective functions following the Extended Kalman Filter formalism [38]. For our particular case, \(F\) follows the Runge-Kutta method to describe the movement of a charged particle inside the AT-TPC tracking medium. The use of a linear propagator required by the algorithm is ensured by using the first-order expansion of the Runge-Kutta propagator around the particle trajectory. Conveniently, the description of the state vector \(x_{k}\) is chosen according to the topology of the TPC, where each hit in the space is defined in a local plane coordinate system, with two orthonormal vectors \(u\) and \(v\) with respect to the momentum of the particle \(u^{\prime}\) and \(v^{\prime}\)[37, 39]:
\[x_{k}=(q/p,u^{\prime},v^{\prime},u,v)^{\rm T} \tag{5}\]
Based on this parameterization, \(H\) transforms \(u\) and \(v\) into the \(m_{k}\) system.
The fitting process is performed in three steps: Prediction, filtering and smoothing. During the prediction state, the last known state and its covariance \(C\) are extrapolated to the present state based on all previous measurements up to \(k-1\) (indicated by the second index on the left side of the equations):
\[x_{k|k-1}=F_{k-1}(x_{k-1})+\omega_{k-1} \tag{6}\]
\[C_{k|k-1}=F_{k-1}C_{k-1}F_{k-1}^{T}+N_{k-1} \tag{7}\]
where \(N_{k-1}\) is the covariance matrix of the propagation noise. The predicted state at \(k\) is updated (filtered) based on the measurement \(m_{k}\) and the weight of the residual \(K\), also called Kalman gain:
\[x_{k}=x_{k|k-1}+K_{k}(m_{k}-H_{k}x_{k|k-1}) \tag{8}\]
\[C_{k}=(I-K_{k}H_{k})C_{k|k-1} \tag{9}\]
The Kalman gain gives a quantitative estimate of how much the estimate has to change based on the measurement. Lastly, the filtered track is also fitted in backward direction using the information from previous measurements (\(n\) index) to provide a smoothed track:
\[x_{k|n}=x_{k}+A_{k}(x_{k+1|n}-x_{k+1|k}) \tag{10}\]
\[C_{k|n}=C_{k}+A_{k}(C_{k+1|n}-C_{k+1|k})A_{k}^{T} \tag{11}\]
where \(A_{k}\) stands for the smoother gain matrix:
\[A_{k}=C_{k}F_{k}^{T}C_{k+1|k}^{-1} \tag{12}\]
To summarize the procedure for our case of interest: for the Kalman fitting process described above, the hit pattern is used as measurement points, together with a representation of the particle track characterized by the motion of a low-energy charged particle under the effect of the solenoid field in a gaseous medium. Points are clusterized (cluster hits) along the particle trajectory to define virtual detection planes following Eq. 5. Multiple scattering and straggling account for the process noise in the track propagation. The definition of the covariance matrix is based on the position uncertainty determined by the diffusion coefficients, pad plane granularity and the sampling rate [40]. In order to account for these effects, more typical of low-energy nuclear physics experiments, we have modified the package genfit [37; 39] to fit AT-TPC data and investigated its performance. The code was integrated into the ATTPCROOTv2 analysis framework [41; 42; 43].
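The recursion above can be condensed into a few lines. The following minimal sketch illustrates the predict-filter-smooth structure of Eqs. (6)-(12) with a generic linearized propagator; it is not the genfit implementation, and all function and parameter names are illustrative:

```python
import numpy as np

def kalman_track_fit(measurements, x0, C0, propagate, H, V, N):
    """Extended-Kalman-filter track fit (illustrative sketch).

    measurements : list of measured vectors m_k
    x0, C0       : initial state and covariance
    propagate(x) : returns (f(x), F), the propagated state and its Jacobian
    H            : projection matrix from state to measurement space
    V            : measurement noise covariance (epsilon_k)
    N            : process noise covariance (omega_k)
    """
    x, C = x0, C0
    filtered, predicted = [], []
    for m in measurements:
        # Prediction: Eqs. (6)-(7)
        x_pred, F = propagate(x)
        C_pred = F @ C @ F.T + N
        predicted.append((x_pred, C_pred, F))
        # Filtering: Kalman gain and update, Eqs. (8)-(9)
        K = C_pred @ H.T @ np.linalg.inv(H @ C_pred @ H.T + V)
        x = x_pred + K @ (m - H @ x_pred)
        C = (np.eye(len(x)) - K @ H) @ C_pred
        filtered.append((x, C))
    # Smoothing (backward pass): Eqs. (10)-(12)
    smoothed = [filtered[-1]]
    for k in range(len(measurements) - 2, -1, -1):
        x_f, C_f = filtered[k]
        x_pred, C_pred, F = predicted[k + 1]
        A = C_f @ F.T @ np.linalg.inv(C_pred)  # smoother gain, Eq. (12)
        x_s = x_f + A @ (smoothed[0][0] - x_pred)
        C_s = C_f + A @ (smoothed[0][1] - C_pred) @ A.T
        smoothed.insert(0, (x_s, C_s))
    return smoothed
```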
## 4 Event and kinematics reconstruction
The reconstruction of the hit pattern shown in Fig. 1 is done using the 512-sample trace recorded by each pixel of the Micromegas sensor [44] upon the arrival of the electrons. The \(z\) position and energy loss are inferred from the peak time and amplitude of each trace, respectively. This simple approach is reasonable for scattered target nuclei because the shape of the trace is dominated by the shaping time of the electronics. Once the hit pattern is constructed, single tracks corresponding to different reaction products are identified using a dedicated clustering algorithm designed for the characteristic tracks of low-energy particles, as explained in Ref. [26]. At this point, each track can be pre-analyzed independently to extract the initial conditions for the fitting. Hits are clusterized in charge clusters of a certain radius and separation along the trajectory. The position of these new charge clusters is calculated using the center of gravity of the collection of hits that belong to each of them. The purpose of this process is twofold: smoothing the trajectory and preparing the track for the filtering process. Particle identification can be performed for each track from the respective
magnetic rigidity and energy loss. The rigidity (B\(\rho\)) is calculated from the radius of curvature of the track inferred from its geometry [29]. The energy loss is inferred from the charge collected over the first 30% of the track, averaged over the number of charge clusters. The identification plot for the \({}^{10}\mathrm{Be}+d\) reaction, shown in Fig. 2, clearly displays three regions corresponding to protons, deuterons and overlapping \(\alpha\) particles and beryllium isotopes.
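Schematically, this identification step can be expressed as follows (a sketch under simplified assumptions: the helix radius and pitch angle are supplied by an external geometric fit, and the energy-loss proxy averages the charge of the first 30% of the clusters; the names and inputs are illustrative):

```python
import numpy as np

def magnetic_rigidity(radius_m, pitch_angle_rad, b_field_t):
    """B*rho in T*m from the helix radius projected on the pad plane.

    The transverse radius measures p_T/q, so the total rigidity
    carries a 1/sin(theta) correction for the pitch of the helix.
    """
    return b_field_t * radius_m / np.sin(pitch_angle_rad)

def mean_energy_loss(cluster_charges):
    """Average charge per cluster over the first 30% of the track."""
    n = max(1, int(0.3 * len(cluster_charges)))
    return float(np.mean(cluster_charges[:n]))

# Example with made-up numbers: a 20 cm radius track at 60 degrees in 3 T
brho = magnetic_rigidity(0.20, np.radians(60.0), 3.0)
dedx = mean_energy_loss(np.random.exponential(1.0, size=50))
print(f"B*rho = {brho:.3f} T m, <dE/dx> proxy = {dedx:.3f} (arb. units)")
```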
Once scattered deuterons are selected in the particle identification plot, their energy and scattering angle are inferred from the B\(\rho\) and the angle that the track forms with the \(Z\) (beam/solenoid) axis, as explained before. The result of this geometrical estimate serves as the starting point for the Kalman filtering process. Cluster hits are used as measurement points with an associated covariance matrix defined by uncertainties in the hit-position determination. Each point acts as a virtual plane to perform the propagation of the track based on the chosen model. The state vector \(x_{k}\) is parameterized according to the initial vertex and momentum deduced from the rigidity. One
Figure 2: B\(\rho\) as a function of the energy loss inferred from the track geometry. The upper, middle and lower regions correspond to beryllium and \(\alpha\) particles, deuterons and protons, respectively.
of the main limitations of genfit regarding our scope is the treatment of the energy loss. Due to the complex charge-exchange interplay of low-energy heavy ions when traversing a gas, the default Bethe-Bloch energy-loss formalism used in the code is no longer valid. The evaluation of the energy loss between hit clusters was modified to include a parameterization based on the SRIM code [45]. Even though genfit was developed as a generic tool for tracking, it treats the energy loss as part of the process noise because it is generally used, to the best of our knowledge, to reconstruct minimum ionizing particles in TPCs. The effect of the energy loss in the AT-TPC is large enough to be included as part of the equation of motion, as particles are continuously slowed down inside the detector describing complicated trajectories. This feature may pose a limitation in terms of resolution and performance. At the prediction stage, the hit clusters are accepted or rejected for filtering based on the initial trajectory. Once the initial conditions are defined, the fitting is realized on a recursive basis via filtering and smoothing. The result from the fit is a representation of the track from which the best estimate of the momentum for a given particle is inferred. Moreover, this track representation can be used to find the reaction vertex by extrapolating the first hit of the track to the pad plane origin.
Figure 3: Left panel: \({}^{10}\)Be+d kinematics reconstructed from the track geometry for reactions along the entire AT-TPC length. Right panel: Excitation energy distribution of \({}^{10}\)Be from inelastic scattering.
Results from the filter are shown in Fig. 3. The kinematics plot, shown in the left panel of Fig. 3, features the characteristic kinematic lines of several \({}^{10}\)Be states: ground state (\(0^{+}_{1}\)), first excited state 3.368 MeV (\(2^{+}_{1}\)), a multiplet of three peaks at around 6 MeV and another state at around 7.2 MeV. The angular distribution of the latter does not fit with the spin-parity assignment of the two known states around that region (7.371 MeV with \(3^{-}_{1}\) and 7.542 MeV \(2^{+}_{3}\)). The properties of that state will be discussed in a separate publication. The effect of the fitting is two-fold: it improves the resolution inferred from the track geometry to 350 keV (standard deviation) and the accuracy (about 5 keV) for the entire length of the detector. It is worth noting that the tracking efficiency decreases with the scattering angle, increasing the energy detection threshold (our limit is about 0.5 MeV). The main reason for this efficiency loss is the limited performance of genfit when reconstructing short tracks with scattering angles around \(90^{\circ}\), as the code was not designed for these cases. Below \(40^{\circ}\), there is a region of events that corresponds to non-resonant reactions above particle emission threshold but also to misidentified deuterons.
The energy resolution depends on several fit parameters, as well as on many others related to the detector configuration. The length of the fitted track is one of the most critical parameters. Designing an experiment with the AT-TPC necessitates several trade-offs depending on the goal of the experiment. For example, lowering the pressure will decrease the statistics but increase the length of the tracks, thereby improving the energy resolution. The selection of such parameters is critical for experiments aiming to measure reactions at very forward center-of-mass angles. The measurement of the \({}^{10}\mathrm{Be}+d\) reaction was not optimized for detecting and measuring low-energy tracks, but the resolution can be improved by studying the dependence of the excitation energy on the length of the track and the energy of the deuterons (right panel). Figure 4 shows the excitation energy as a function of the track length (left panel) and its projection after selecting tracks of more than 20 cm length and deuterons below 10 MeV kinetic energy. It is important to point out that the track length represents the useful part of the track that was used for the fitting procedure, as genfit rejects clusters where the propagation between virtual planes failed. This usually happens when the cluster contains many noisy points that were assigned as inliers by the clustering algorithm due to their proximity to the track. In any case, one can see a clear correlation between the track length (proportional to the number of
fitted clusters) and the resolution. As expected, more information results in a better determination of the excitation energy, reducing the resolution down to 240 keV (standard deviation). Therefore, depending on the experiment, the gas parameters can be adjusted to either provide the best possible resolution (i.e. decreasing pressure for very-forward angle measurements) or the best luminosity (e.g. experiments with broad resonances).
Another critical aspect for the reconstruction of tracks is the secondary effects that impact the propagation of the scattered particles in the gas. Although genfit includes multiple scattering and straggling effects as Gaussian noise in the filtering process, the models are more adequate for MIPs with much larger kinetic energy than our ions. As a consequence, the update of the state vector and the covariance matrices may be biased due to the underestimation of such effects. Ideally, one would expect to obtain a much better resolution if realistic effects are included in the estimation of the track parameters. However, the complexity of the genfit source code renders its modification complicated. Instead, we have studied the impact of secondary effects by applying the same reconstruction procedure to the \({}^{14}\mathrm{C}+p\) reaction, where we used 300 Torr of pure H\({}_{2}\) gas and a magnetic field of 2.85 T using the HELIOS magnet.
The results shown in Fig. 5 for scattering (left upper panel) clearly
Figure 4: Left panel: Track length as a function of the excitation energy. Right panel: Excitation energy of \({}^{10}\mathrm{Be}\) after selecting tracks of more than 20 cm length.
Figure 5: Left upper panel: Kinematics for the \({}^{14}\)C(p,p) and \({}^{14}\)C(p,p’) reaction. Right upper: Same as left but for the \({}^{14}\)C(\(p,d\))\({}^{13}\)C, \({}^{14}\)C(\(p,\alpha\))\({}^{11}\)B and the \({}^{14}\)C(p,t)\({}^{12}\)C reactions, measured simultaneously. Lower panel: Excitation energy of \({}^{14}\)C (see text for details).
indicate, at first glance, that the performance achieved for proton tracking is superior in terms of resolution. Moreover, the absence of the characteristic background due to deuteron breakup observed in the \({}^{10}{\rm Be}+d\) case (Fig. 3) makes the reconstruction cleaner. Overall, we have reconstructed the \({}^{14}\)C ground state and several excited states: 6.093 (\(1^{-}\)) and 8.317 MeV (\(2^{+}\)) (see lower panel of Fig. 5). The group of unresolved states is assigned, within our uncertainty, to 6.589 (\(0^{+}\)), 6.728 (\(3^{-}\)), 7.012 (\(2^{+}\)) and 7.341 MeV (\(2^{-}\)), based on the National Nuclear Data Center database. This assignment is consistent with the only \({}^{14}\)C+p reaction performed to date that covers up to 9 MeV in excitation energy [46]. The best resolution achieved for this spectrum is about 145 keV (standard deviation), a very competitive value for reactions in inverse kinematics taking into account that these data were acquired at a bombarding intensity of 2000 pps. Simultaneously with the inelastic scattering of \({}^{14}\)C on protons, we also measured the neutron removal \({}^{14}\)C\((p,d)^{13}\)C, two-neutron removal \({}^{14}\)C\((p,t)^{12}\)C and the \({}^{14}\)C\((p,\alpha)^{11}\)B transfer reactions, as can be seen in the kinematics plot in the right upper panel of Fig. 5. These reaction channels were identified through the B\(\rho\)-energy loss correlation, as shown in Fig. 2, but for the \({}^{14}\)C + \(p\) reaction. Since deuterons and \(\alpha\) particles have the same rigidity (or mass-to-charge ratio), their apparent kinematic lines overlap within the same energy range, as the \(B\rho\) calculation was done assuming protons. The interesting point about the comparison between inelastic scattering and transfer is the excitation energy resolution. For the \((p,d)\) reaction, the excitation energy resolution is worse than in the \((p,p^{\prime})\) case (about 220 keV). This resolution is comparable to the one obtained for the \({}^{10}{\rm Be}+d\) reaction, but at higher gas pressure. Therefore, this result suggests that the degradation of the excitation energy has a strong dependence on the straggling caused by a higher-mass particle, within the range of gas pressures studied in this work.
## 5 Simulations and angular distributions
Due to the complexity of the Kalman filter package we are using, it is difficult to evaluate the absolute tracking efficiency. The high density of points in the hit pattern poses a problem for fitting convergence due to the presence of outliers in some of the tracks. In addition, genfit was not designed for tracking particles that stop in the detection medium and, therefore, the loss of efficiency at lower energy is more pronounced in the case of deuteron scattering. Following a rather pragmatic procedure, we corrected the
angular distributions for the angle- and energy-dependent tracking efficiency provided by genfit. The correction was inferred from Monte Carlo simulations performed with ATTPCROOTv2. We simulated the \({}^{10}\mathrm{Be}+d\) reaction populating the ground state and the first excited state of \({}^{10}\mathrm{Be}\) using a flat angular distribution covering the same angular domain as the experimental data. The first part of the simulation involves the transport of charged particles in the detector, generating a collection of spatial points with an associated energy loss. At each point, a cloud of ionization electrons is generated based on the gas ionization potential. Each individual electron is transported to the pad plane, where it is multiplied by avalanche and assigned to a certain pad based on its drift velocity and its lateral and longitudinal diffusion coefficients (ignoring the charge spread of the avalanche). The number of electrons collected in each pad, along with their time of arrival, is used to generate a pulse based on the electronics settings, namely the shaping time and the gain [47; 48]. The final pulse is the convolution of the current collected on each pad with the response function of the electronics. At this point, the simulated track reconstruction procedure follows the same steps as the experimental one and therefore the same reconstruction algorithms can be used.
Figure 6 shows the simulated kinematics and the excitation energy of the
Figure 6: Left panel: Kinematics of the simulated \({}^{10}\mathrm{Be}+d\) reaction showing the ground and the first excited state. Right panel: Excitation energy of \({}^{10}\mathrm{Be}\) inferred from the simulation. Only two well-known states were used on this simulation.
\({}^{10}\mathrm{Be}+d\) reaction. The excitation energy resolution amounts to 265 keV and 190 keV (standard deviation) for the ground and first excited state, respectively. In general, the simulation reproduces the experimental data rather well, in particular, the energy resolution, although some physical effects may be underestimated. The broadening of the kinematics at low kinetic energy is consistent with the experimental observation in Fig. 3. The degradation of the resolution at around 90\({}^{\circ}\) seems to be underestimated in the simulation compared to the experimental data. The efficiency correction function was inferred from the ratio of reconstructed and simulated events in the center of mass (CM) frame.
The correction function was used to correct the experimental angular distribution of the \({}^{10}\mathrm{Be}+d\) elastic scattering, shown as solid circles in the left panel of Fig. 7. The experimental angular distribution was normalized by the target thickness and the beam intensity, and corrected bin by bin (\(1^{\circ}\) size) with the results from the simulation. Below 20\({}^{\circ}\) CM (not shown), the efficiency falls rapidly because of the combined effect of the tracking and detection acceptance. The rest of the distribution is in rather good agreement with a simple optical model fit performed with p
Figure 7: Left panel: Angular distribution of the \({}^{10}\)Be+d elastic scattering corrected by efficiency. The solid circles and the solid line are the experimental data and the DWBA theoretical distribution. Right panel: Same as left panel but for the \({}^{14}\)C+p elastic scattering.
On the right panel of Fig. 7, we show the angular distribution for the \({}^{14}\mathrm{C}+p\) elastic scattering, also compared to a DWBA calculation. In this case, the normalization of the data was based on the calculation. The efficiency correction is more critical at forward angles, where the efficiency drops because the performance of the reconstruction package is very sensitive to the length of low-energy particle tracks. As expected, this effect is more pronounced for deuteron scattering than for proton elastic scattering, where the correction is basically a scaling factor over the entire angular domain covered by the detector. The most remarkable aspect regarding the angular distributions is the large range covered, which enables a precise adjustment of the associated optical potentials and the determination of physical quantities, such as matter deformation lengths, with much better precision [50].
## 6 Conclusions and outlook
In this work we have presented a tracking algorithm for the kinematics reconstruction of reactions in inverse kinematics using a solenoidal spectrometer in active target time projection chamber mode. The algorithm is based on the well-known extended Kalman filter implemented in the genfit package but adapted for low energy particles within a quite broad phase space. In particular, we have modified the code to use a more realistic description of the energy loss for ions traversing a gaseous material, which is rather critical at low energy. We have implemented genfit as an additional sequential task inside our ATTPCROOTv2 analysis framework. The performance of the entire reconstruction procedure, and particularly the track reconstruction algorithm, was evaluated using data acquired with the AT-TPC in two experiments in inverse kinematics with radioactive beams on proton and deuteron targets.
We have successfully identified several reaction channels (scattering and transfer) through the identification of charged particles produced in these reactions. By fitting the particle tracks, it was possible to extract the reaction kinematics and the excitation energy spectrum. Overall, we have obtained an excitation energy resolution ranging from 150 to 350 keV (standard deviation), depending on the target. This method, together with the outstanding performance of the detector, allowed for the determination of a broad angular distribution covering almost \(100^{\circ}\) in CM. We also simulated the scattering reaction to extract the correction factors for the angular distributions to ac
count for the efficiency loss.
The results obtained in this study clearly demonstrate that the Kalman filter is a rather powerful tracking algorithm for kinematics reconstruction in solenoidal spectrometers. Developing a dedicated Kalman filter for low-energy particles with complicated trajectories requires several additional ingredients, including a more robust treatment of the motion of low-energy particles in the medium and a realistic description of secondary scattering and straggling effects. We expect to improve the energy resolution once these effects are taken into account. Such a dedicated algorithm is currently under development by our collaboration.
## 7 Acknowledgements
This material is based upon work supported by NSF's National Superconducting Cyclotron Laboratory, which is a major facility fully funded by the National Science Foundation under award PHY-1565546. This material is also based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics and used resources of the Facility for Rare Isotope Beams (FRIB), which is a U.S. DOE Office of Science User Facility under Award No. DE-SC0000661. SOLARIS is funded by the DOE Office of Science under the FRIB Cooperative Agreement DE-SC0000661. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under Contract No. DE-AC02-06CH11357 (Argonne). This research used resources of ANL's ATLAS facility, which is a DOE Office of Science User Facility. This work has received financial support from Xunta de Galicia (CIGUS Network of Research Centers). Y. A. acknowledges the support by the Spanish Ministerio de Economia y Competitividad through the Programme "Ramon y Cajal" with Grant No. RYC2019-028438-I.
|
2309.11923 | TextCLIP: Text-Guided Face Image Generation And Manipulation Without
Adversarial Training | Text-guided image generation aims to generate desired images conditioned on
given texts, while text-guided image manipulation refers to semantically editing
parts of a given image based on specified texts. For these two similar tasks,
the key point is to ensure image fidelity as well as semantic consistency. Many
previous approaches require complex multi-stage generation and adversarial
training, while struggling to provide a unified framework for both tasks. In
this work, we propose TextCLIP, a unified framework for text-guided image
generation and manipulation without adversarial training. The proposed method
accepts input from images or random noise corresponding to these two different
tasks, and under the condition of the specific texts, a carefully designed
mapping network that exploits the powerful generative capabilities of StyleGAN
and the text image representation capabilities of Contrastive Language-Image
Pre-training (CLIP) generates images at up to $1024\times1024$ resolution, the
highest resolution currently achievable. Extensive experiments on the Multi-modal CelebA-HQ
dataset have demonstrated that our proposed method outperforms existing
state-of-the-art methods, both on text-guided generation tasks and manipulation
tasks. | Xiaozhou You, Jian Zhang | 2023-09-21T09:34:20Z | http://arxiv.org/abs/2309.11923v1 | # TextCLIP: Text-Guided Face Image Generation And Manipulation Without Adversarial Training
###### Abstract.
Text-guided image generation aims to generate desired images conditioned on given texts, while text-guided image manipulation refers to semantically editing parts of a given image based on specified texts. For these two similar tasks, the key point is to ensure image fidelity as well as semantic consistency. Many previous approaches require complex multi-stage generation and adversarial training, while struggling to provide a unified framework for both tasks. In this work, we propose TextCLIP, a unified framework for text-guided image generation and manipulation without adversarial training. The proposed method accepts input from images or random noise corresponding to these two different tasks, and under the condition of the specific texts, a carefully designed mapping network that exploits the powerful generative capabilities of StyleGAN and the text-image representation capabilities of Contrastive Language-Image Pre-training (CLIP) generates images at up to \(1024\times 1024\) resolution, the highest resolution currently achievable. Extensive experiments on the Multi-modal CelebA-HQ dataset have demonstrated that our proposed method outperforms existing state-of-the-art methods, both on text-guided generation tasks and manipulation tasks.
Text-guided image generation, Text-guided image manipulation, StyleGAN
others require a large number of training parameters or training data, making training too expensive.
For the text-guided image manipulation task (Sang et al., 2015; Wang et al., 2016; Wang et al., 2017; Wang et al., 2018; Wang et al., 2019), the corresponding image needs to be modified according to the specified text. It is important to note that the areas of the image that are semantically irrelevant to the specified text should be kept as close as possible to the original image, and only those areas that are semantically relevant should be modified. TediGAN (Wang et al., 2019) is the first work to provide text-guided image generation and manipulation by exploiting the semantic properties of the latent space of GANs. However, the performance of TediGAN has much room for improvement.
StyleGAN (Liu et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019) is the current state-of-the-art generative adversarial network, with powerful image generation capabilities that provide realistic images at resolutions up to 1024\(\times\)1024; more importantly, its latent space exhibits good semantic and disentanglement properties. The latent space of StyleGAN has been the subject of much recent research progress (Wang et al., 2016; Wang et al., 2018; Wang et al., 2019), which has significantly advanced several fields. Contrastive Language-Image Pre-training (CLIP) (Wang et al., 2019) is a powerful multimodal pretrained model that provides strong text-image representation capabilities and can be used as a supervisor for cross-modal tasks to achieve semantic-visual alignment. Several notable works based on pretrained StyleGAN and CLIP have appeared recently (Wang et al., 2016; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019).
In this work, we propose TextCLIP, a unified framework for text-guided image generation and manipulation that does not require complex multi-stage generation or tedious adversarial training, and that outperforms existing state-of-the-art methods on both tasks. First, either random noise or an image is used as input, with random noise corresponding to the text-guided image generation task and images corresponding to the text-guided image manipulation task. Using a pretrained encoder, the input is transformed into \(w_{0}\), which is used as the initial latent code. \(w_{0}\) is then passed through a level-channel mapper with two parts: (a) the level mapper: from coarse to fine, divided into three separate networks (coarse, medium, fine), each mapping a part of the initial latent code \(w_{0}\); (b) the channel mapper: consists of 18 style modulation networks. The final mapping latent code \(w_{t}\) is obtained by the level-channel mapper and is then processed differently with the initial latent code \(w_{0}\) for the different tasks to obtain the final latent code \(w_{s}\). \(w_{s}\) is used as input to the generator of StyleGAN to obtain the final image. Table 1 shows how our method compares with other methods. Compared with other text-guided image generation methods, our proposed method is able to produce high-resolution images, supports manipulation of images and accepts open-world text as input, without the need for adversarial training and multi-stage generation. In contrast to TediGAN (Wang et al., 2019), we do not need to train different models for different texts.
In summary, this work consists of the following main contributions:
* For the two distinct tasks of text-guided image generation and text-guided image manipulation, we propose TextCLIP, a unified framework that enables text-guided image generation and manipulation without the need for complex adversarial training.
* We propose level-channel mapper that uses text as a condition to semantically map the initial latent code to the latent space \(\mathcal{W}+[1]\) of StyleGAN. Compared to previous work TediGAN (Wang et al., 2019), level-channel mapper does not require training different networks for different text conditions.
* Extensive qualitative and quantitative studies have shown that our proposed TextCLIP outperforms existing state-of-the-art methods on these two different tasks.
## 2. Related Work
### Text-Guided Image Generation
We divide previous work on text-guided image generation into two categories. The first category is multi-stage generative models, where multiple generators and discriminators are needed to complete the text-guided image generation. StackGAN (Wang et al., 2019) was the first multi-stage generative model; it used multiple generators and discriminators to first generate low-quality images and then generate high-quality images. Later, StackGAN++ (Wang et al., 2019) implemented end-to-end training based on StackGAN to generate higher quality images. AttnGAN (Wang et al., 2019) introduced an attention mechanism to achieve word-level image generation, generating more realistic high-quality images; in addition, it proposed the Deep Attentional Multimodal Similarity Model (DAMSM) to compute the similarity of image-text pairs. DM-GAN (Wang et al., 2019) augments a low-resolution initial image with a smaller model size, and then uses dynamic memory networks to refine the initial image to produce a more realistic result. Much subsequent work, optimized on the basis of AttnGAN, has achieved higher quality image generation with more accurate semantic alignment (Wang et al., 2016; Wang et al., 2019; Wang et al., 2019). ControlGAN (Wang et al., 2019) proposes an innovative multi-stage generation architecture and introduces a perceptual loss to solve the problem that, if some words in a sentence are changed during text-guided image generation, the synthesized image will be very different from the original image. MirrorGAN (Wang et al., 2019) is inspired by CycleGAN (Wang et al., 2019) and maps the generated images back to text, further improving the quality of the generated images. DAE-GAN (Wang et al., 2019) takes into account the 'aspect'
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline Method & AttnGAN (Wang et al., 2019) & ControlGAN (Wang et al., 2019) & DAE-GAN (Wang et al., 2019) & XMC-GAN (Wang et al., 2019) & TediGAN (Wang et al., 2019) & **TextCLIP** \\ \hline One Generator & - & - & - & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ Single Model & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & - & \(\checkmark\) \\ High Resolution & - & - & - & - & \(\checkmark\) & \(\checkmark\) \\ Manipulation & - & \(\checkmark\) & - & - & \(\checkmark\) & \(\checkmark\) \\ Open World & - & - & - & - & \(\checkmark\) & \(\checkmark\) \\ w/o Adversarial Training & - & - & - & - & \(\checkmark\) & \(\checkmark\) \\ \hline \hline \end{tabular}
\end{table}
Table 1. Comparison of Different Text-Guided Image Generation Models.
information of the input text and incorporates it into the multi-stage generation process.
The second category is represented by XMC-GAN (Wang et al., 2017) and DALL-E (Dall et al., 2018). XMC-GAN (Wang et al., 2017) uses contrastive learning as supervision, taking into account the image-text contrastive loss, the real-fake image contrastive loss, and the image region-word contrastive loss, and uses modulation layers to build a single-stage generative network that achieves state-of-the-art performance on several public datasets. DALL-E (Dall et al., 2018) trains a Transformer with 12 billion network parameters on a large number of text-image pairs, achieving zero-shot generation. CogView (Coguew et al., 2018) is similar to DALL-E in that it trains a Transformer (Wang et al., 2018) with 4 billion network parameters to autoregressively model images and text, achieving stronger zero-shot generation. TediGAN-A (Wang et al., 2017) uses a pretrained StyleGAN with a GAN inversion module, a visual-semantic similarity module, and an instance-level optimization module to perform an optimized search in the latent space, resulting in text-guided image generation. TediGAN-B (Wang et al., 2017) improves the performance of TediGAN-A by using a pretrained image inversion model and CLIP (Dall et al., 2018).
### Text-Guided Image Manipulation
ManiGAN (ManiGAN et al., 2017) is a multi-stage text-guided image manipulation work using multiple generators and discriminators, and has demonstrated good performance on the CUB and COCO datasets. StyleCLIP (Dall et al., 2018) provides three different methods for text-guided image manipulation: an optimizer, a mapper and a global direction. The mapper requires training a model with different parameters for different text conditions and is an inflexible approach for practical applications. The optimizer and global direction approaches require inference on each new instance and take longer to infer. Our proposed TextCLIP differs from previous work in that we propose a more flexible way to perform text-guided image manipulation and achieve higher quality results. Instead of training a different model for each text, TextCLIP can use the trained model to generate results directly based on the image and text conditions, without excessive inference time. For example, we can train one model for the same class of text conditions; e.g., a model trained on skin color can perform inference on dark skin, white skin, red skin, etc.
### StyleGAN And CLIP
StyleGAN (Dall et al., 2018; Dall et al., 2018; Dall et al., 2018; Dall et al., 2018) is an excellent tool for image generation and is the state-of-the-art work in the field of generative adversarial networks. StyleGAN's input is mapped to the latent space by eight fully connected layers, and the result is then fed into the StyleGAN generator. The StyleGAN generator has 18 layers, with every two layers corresponding to one resolution level from \(4\times 4\) to \(1024\times 1024\). Each layer of the generator accepts a 512-dimensional latent code as input. Due to the good semantic properties of the latent space of StyleGAN, many extensions of this latent space, such as the \(\mathcal{W}\)+ and \(\mathcal{S}\) spaces, have emerged recently, and these studies have greatly broadened the applications of StyleGAN. The \(\mathcal{W}\)+ space of StyleGAN consists of 18 512-dimensional latent codes, each corresponding to one of the layers of the StyleGAN generator and serving as its input. The excellent properties of StyleGAN's \(\mathcal{W}\)+ space have driven advances in the field of GAN inversion. GAN inversion works (Chen et al., 2018; Chen et al., 2018; Chen et al., 2018; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017) can accurately invert images into the \(\mathcal{W}\)+ space of StyleGAN, thus facilitating semantic editing of images. Contrastive Language-Image Pre-training (CLIP) (Dall et al., 2018) is trained on a large number of image-text pairs, providing a powerful image-text representation. By encoding an image and a text into the CLIP space, their similarity can be quantified.
## 3. The TextCLIP Framework
Based on the powerful image generation capability of StyleGAN (Dall et al., 2018) and the cross-modal text-image representation capability of CLIP (Dall et al., 2018), we propose TextCLIP, a unified approach for text-guided image generation and manipulation. We divide TextCLIP into three stages:
* **Stage 1.** Using a pretrained encoder, the image or random noise is mapped to the \(\mathcal{W}\)+ (Chen et al., 2018) space of a StyleGAN model pretrained on the FFHQ dataset (Dall et al., 2018) to obtain an initial latent code \(w_{0}\).
* **Stage 2.** The initial latent code \(w_{0}\) is passed through the level-channel mapper to obtain the mapping latent code \(w_{t}\).
* **Stage 3.** The mapping latent code \(w_{t}\) is then processed differently with the initial latent code \(w_{0}\) depending on the task to obtain the style latent code \(w_{s}\), which is the input of the generator of a pretrained StyleGAN to obtain the final image.
### Overview
The global framework is shown in Figure 3. TextCLIP supports either random noise or an image as input (random noise for the text-guided image generation task and an image for the text-guided image manipulation task), and we use a pretrained encoder to map the input to the latent space \(\mathcal{W}\)+ of StyleGAN1. For images, we use e4e (Wang et al., 2017) as the pretrained encoder; for random noise, we use the pretrained mapping network of StyleGAN (Dall et al., 2018) as the encoder. The
Figure 2. Diverse text-guided image generation results. On the same text conditions, TextCLIP can generate multiple images at \(1024\times 1024\) resolution.
process can be formulated as:
\[w_{0}=E(g_{0}), \tag{1}\]
where \(g_{0}\) represents the initial input,\(w_{0}\) represents the initial latent code mapped to the \(\mathcal{W}\)+ space of StyleGAN and \(E\) represents the pretrained encoder. The obtained initial latent code is then passed through the level-channel mapper to obtain the mapping latent code \(w_{t}\), the mathematical equation of which is shown below:
\[w_{t}=F_{LCM}(w_{0}), \tag{2}\]
where \(F_{LCM}\) denotes the level-channel mapper. Next, we proceed differently depending on the task. For the text-guided image generation task:
\[w_{s}=w_{t}, \tag{3}\]
For the text-guided image manipulation task:
\[w_{s}=0.1w_{t}+w_{0}, \tag{4}\]
where the style latent code \(w_{s}\) is used as the input of StyleGAN generator to obtain the final image \(g_{s}\). The mathematical equation is shown below:
\[g_{s}=G(w_{s}), \tag{5}\]
where \(G\) denotes the generator of a pretrained StyleGAN, \(w_{0}\), \(w_{t}\) and \(w_{s}\in\mathcal{W}\)+.
### Level-Channel Mapper
The level-channel mapper consists of two parts: the level mapper and the channel mapper.
#### 3.2.1. **Level Mapper**
Many previous studies have shown that different layers of the StyleGAN generator control different attributes, so, from coarse to fine, we divided the layers of the StyleGAN generator into three parts (coarse, medium, fine). In the same way, we divided the input latent code \(w_{0}\) into three parts, as follows:
\[w_{0}=(w_{0}^{c},w_{0}^{m},w_{0}^{f}), \tag{6}\]
For each part, we design a network consisting of several fully connected layers, each of which is followed by LayerNorm and LeakyReLU operations. This is shown below:
\[M(w_{0})=(M^{c}(w_{0}^{c}),M^{m}(w_{0}^{m}),M^{f}(w_{0}^{f})). \tag{7}\]
In practice, we can train only one sub-network of \(M\). Doing so allows us to change only the relevant image attributes and not some irrelevant ones.
As shown in Table 2, experimental results show that each layer of StyleGAN (Srivastava et al., 2017) controls different attributes, such as the eyes, hair color, age and face color. After our division, the coarse level controls attributes such as the nose, head shape, lips and hair length; the middle level controls attributes such as hair and face color; and the fine level controls age, gender and some micro attributes.
#### 3.2.2. **Channel Mapper**
We design a channel mapper for each layer of the StyleGAN generator. There are 18 channel mappers in total: \(M^{c}\) corresponds to 4 channel mappers, \(M^{m}\) to 4 channel mappers and \(M^{f}\) to 10 channel mappers. Each channel mapper takes the output of the corresponding level mapper and the text code \(t\) encoded by the CLIP (Srivastava et al., 2017) text encoder as input. As shown in Figure 4, the text is first encoded by the CLIP text encoder to obtain
Figure 3. The framework of TextCLIP. The level mappers \(M_{c},M_{m},M_{f}\) consist of several fully connected layers that take a part of \(w_{0}\) as input. There are a total of 18 channel mappers, each taking \(t\) encoded by the CLIP text encoder and the output of the corresponding level mapper as input. The outputs of the 18 channel mappers are concatenated to form \(w_{t}.w_{t}\) is then processed differently for different tasks to obtain \(w_{s}\). \(w_{s}\) is used as input of the pretrained StyleGAN generator to obtain the final image.
the text condition code \(t\). After passing through two fully connected layers, \(t\) modulates the input coming from the corresponding level mapper. The mathematical form is shown below:
\[c^{\prime}_{i}=c_{i}+F_{1}(t)c_{i}+F_{2}(t),i=0,1,...,17, \tag{8}\]
where \(F_{1}\) and \(F_{2}\) are two networks designed by fully connected layers,\(c_{i}\) is the input of layer \(i\). Finally, the resulting 18 channel styles are concatenated to obtain the final style latent code \(w_{s}\). The mathematical form shown below:
\[w_{s}=Concat(c^{\prime}_{0},c^{\prime}_{1},c^{\prime}_{2},...,c^{\prime}_{17}), \tag{9}\]
where \(Concat\) means that the outputs of the 18 channel mappers are sequentially concatenated together.
### Loss Function
#### 3.3.1. **Semantic Loss**
An important requirement of text-guided image generation and manipulation is that the generated images be semantically consistent with the corresponding text. For this reason, we propose the semantic loss. The text and the image are first encoded separately using the pretrained CLIP (Zhou et al., 2017) encoders, and the cosine similarity between the two embeddings is then computed to obtain the semantic loss.
\[\mathcal{L}_{semantic}=1-cos(t,F_{img}(G(w_{s}))), \tag{10}\]
where \(F_{img}\) represents the pretrained image encoder of CLIP, \(t\) is the text vector obtained by processing the CLIP text encoder, and \(cos\) represents the cosine similarity calculation, \(\mathcal{L}_{semantic}\) represents semantic loss.
#### 3.3.2. **Identity Loss**
We need to ensure that the generated image preserves the facial identity of the original image, so we introduce the identity loss as follows:
\[\mathcal{L}_{ID}=1-cos(R(g_{0}),R(G(w_{s}))), \tag{11}\]
where \(g_{0}\) represents the original image and \(R\) represents a pretrained Arcface (Cheng et al., 2017) network for extracting the identity features of the image. The identity loss \(\mathcal{L}_{ID}\) is obtained by calculating the cosine similarity of the face identity features of the two images.
#### 3.3.3. **Image Loss**
The image loss consists of pixel loss \(\mathcal{L}_{pixel}\) and image feature loss \(\mathcal{L}_{lipips}\). Pixel loss refers to the fine-grained supervision of the generated image by comparing each pixel of the generated image with the original image \(g_{0}\). Feature loss refers to the comparison of the images at the feature level, typically using a pretrained network for feature extraction (Zhou et al., 2017). Image loss is defined as follows:
\[\mathcal{L}_{pixel}=\|g_{0}-G(w_{s})\|_{2}^{2}, \tag{12}\]
\[\mathcal{L}_{lipips}=\|F_{VGG}(g_{0})-F_{VGG}(G(w_{s}))\|_{2}^{2}, \tag{13}\]
where \(F_{VGG}\) represents a pretrained VGG network for extracting image features (Zhou et al., 2017). The total image loss is shown below:
\[\mathcal{L}_{img}=\lambda_{pixel}\mathcal{L}_{pixel}+\lambda_{lipips}\mathcal{ L}_{lipips}, \tag{14}\]
where \(\lambda_{pixel},\lambda_{lipips}\) are the corresponding hyperparameters.
#### 3.3.4. **Fidelity Loss**
Through experimentation, we found that previous research on text-guided image generation and manipulation tends to produce low-quality and blurred images. To address this issue, we introduce the fidelity loss:
\[\mathcal{L}_{d}=\sigma(D(g_{s})), \tag{15}\]
where \(\sigma\) represents sigmoid function, \(g_{s}\) represents generated image, \(D\) represents StyleGAN discriminator. We use a pretrained discriminator \(D\) of StyleGAN (Zhou et al., 2017), which performs image fidelity determination to prevent the model from generating blurred photos.
#### 3.3.5. **Overall Loss**
In summary, in order to make the images generated by the model realistic and semantically similar to the corresponding text, we define the following loss function:
\[\mathcal{L}=\lambda_{semantic}\mathcal{L}_{semantic}+\lambda_{ID}\mathcal{L}_{ ID}+\lambda_{img}\mathcal{L}_{img}+\lambda_{d}\mathcal{L}_{d}, \tag{16}\]
where \(\lambda_{semantic},\lambda_{ID},\lambda_{img},\lambda_{d}\) are the corresponding hyperparameters.
## 4. Experiments
### Experiments Setup
#### 4.1.1. **Datasets**
To evaluate the performance of text-guided face image generation and manipulation, we conducted experiments to verify the effectiveness and efficiency of the TextCLIP method. We selected the following face dataset for our experiments.
* **Multi-modal CelebA-HQ Dataset**(Zhou et al., 2017): a multimodal dataset consisting of images, descriptive texts, semantic masks and sketches. It contains 30,000 images, with 24,000 images in the training set and 6,000 images in the test set. Each image of the Multi-modal CelebA-HQ Dataset corresponds to 10 text descriptions.
\begin{table}
\begin{tabular}{c|c|l} \hline \hline Level & Layers & Attributes \\ \hline coarse & 0-3 & face shape, hair length, nose, lip, _etc._ \\ medium & 4-7 & hair color, face color, _etc._ \\ fine & 8-17 & age, gender, micro features, _etc._ \\ \hline \hline \end{tabular}
\end{table}
Table 2. Layer-wise Analysis of a 18-layer StyleGAN Generator.
Figure 4. The structure of channel mapper.\(t\) is the text vector encoded by the CLIP text encoder. \(c_{i}\) is the input of the ith channel mapper and comes from the corresponding level mapper.
#### 4.1.2. **Evaluation Metric**
Text-guided image generation and manipulation require that the generated images are not only realistic enough but also semantically similar to the corresponding text. For this purpose, we have chosen the following evaluation metrics.
* **Frechet Inception Distance (FID)**[(13)]: FID measures the distance between the feature vectors of the generated images and the feature vectors of the real images. The closer the distance, the better the result of the model. FID gives us a good indication of whether the model is generating the data we desire.
* **R-Precision**[(45)]: another important property of text-guided image generation and manipulation is semantic similarity. We use R-Precision, which evaluates the top-1 retrieval accuracy of an image, as the major evaluation metric for semantic similarity. The higher the value of R-Precision, the higher the semantic similarity.
* **Learned Perceptual Image Patch Similarity (LPIPS)**[(49)]: to further evaluate the similarity of the generated image and the original image, we use LPIPS, a learned perceptual similarity metric between the generated image and the real image. A lower LPIPS value indicates that the two images are more similar.
* **Identity similarity (IDS)**[(15)]: for text-guided image manipulation, we want the modified face image to be identity-consistent with the original image, so we use IDS to evaluate this property. IDS denotes the identity similarity before and after editing, calculated by CurricularFace. The higher the IDS, the better the identity similarity.
**User study**: we also conducted a user study. Ten users from different backgrounds were selected, and the study was conducted on 50 randomly generated images under the same textual conditions. Users were asked to rank the images generated by the different models under the same conditions. The user study covered the following aspects:
* **Image realism**: to evaluate whether the generated images are realistic.
* **Semantic similarity**: for the image generation task, semantic similarity refers to whether the generated image is semantically consistent with the corresponding text; for the image manipulation task, semantic similarity refers to whether the model modifies the input image according to the specified text.
### Results on Text-Guided Image Generation
#### 4.2.1. **Quantitative Results**
As shown in Table 3, on the Multi-Modal CelebA-HQ Dataset [(42)], we compared with previous works on three metrics: FID, LPIPS and R-Precision. Based on the powerful image generation capability of StyleGAN [(22)] and the powerful image-text representation capability of CLIP [(28)], our proposed TextCLIP surpasses the previous state-of-the-art approaches. Our
Figure 5. Qualitative comparison of text-guided image generation compared with the state-of-the-art methods. TextCLIP generates more realistic and semantically similar images than previous methods.
\begin{table}
\begin{tabular}{c|c c c} \hline Method & FID\(\downarrow\) & R-Precision\(\uparrow\) & LPIPS\(\downarrow\) \\ \hline AttnGAN [(45)] & 125.98 & 0.232 & 0.512 \\ ControlGAN [(23)] & 116.32 & 0.286 & 0.522 \\ DFGAN [(36)] & 137.60 & 0.343 & 0.581 \\ DM-GAN [(54)] & 131.05 & 0.313 & 0.544 \\ TediGAN [(42)] & 106.37 & 0.188 & 0.456 \\
**TextCLIP (ours)** & **88.27** & **0.384** & **0.396** \\ \hline \hline \end{tabular}
\end{table}
Table 3. Quantitative Comparison of Text-Guided Image Generation on the Multi-modal CelebA-HQ dataset.
proposed level-channel mapper can map textual information to the latent space \(\mathcal{W}\)+ of StyleGAN well and achieve high-quality image generation. At the same time, the loss functions we designed ensure the generation of images that are as clear as possible while maintaining semantic alignment. As shown in Table 4, the user study shows that our approach outperforms the previous state-of-the-art approaches in terms of both image realism and semantic similarity.
#### 4.2.2. **Qualitative Results**
As shown in Figure 5, we compare qualitatively with the previous state-of-the-art methods. The comparison shows that our generated images have higher semantic similarity and image fidelity. In terms of semantic similarity, we use the semantic loss for supervision and exploit the powerful cross-modal text-image representation capability of the CLIP model to achieve better cross-modal semantic alignment than other methods. In terms of image fidelity, we generate more realistic, higher-resolution images. Unlike previous studies, we introduced an image fidelity loss to ensure that the generated images are realistic enough, counteracting the model's tendency to overfit the semantic loss. Based on the powerful generative capability of StyleGAN, images with a resolution of \(1024\times 1024\) were generated, while AttnGAN (Wang et al., 2018) and ControlGAN (Wang et al., 2018) can only generate lower-resolution images and TediGAN (Wang et al., 2018) sometimes generates blurred images. Take the sentence "She has a pointy nose with her mouth closed" as an example: the focus is on "she", "pointy nose" and "mouth closed". Our generated images are highly semantically aligned with these three features, whereas TediGAN generated
\begin{table}
\begin{tabular}{c|c c} \hline \hline Method & Acc. (\%)\(\uparrow\) & Real(\%)\(\uparrow\) \\ \hline AttnGAN (Wang et al., 2018) & 18.6 & 12.8 \\ ControlGAN (Wang et al., 2018) & 19.7 & 13.9 \\ DM-GAN (Wang et al., 2018) & 21.1 & 16.3 \\ TediGAN (Wang et al., 2018) & 17.8 & 22.3 \\
**TextCLIP (ours)** & **27.8** & **39.7** \\ \hline \hline \end{tabular}
\end{table}
Table 4. User Study on Multi-modal CelebA-HQ dataset. Acc. denotes semantic similarity and Real. denotes image realism.
Figure 6. Qualitative comparison of text-guided image manipulation compared with the state-the-of-art methods. TextCLIP accomplishes more accurate semantic editing against the original image than previous methods.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Method & IDS\(\uparrow\) & LPIPS \(\downarrow\) & Acc.(\%)\(\uparrow\) & Real.(\%)\(\uparrow\) \\ \hline TediGAN (Wang et al., 2018) & 0.18 & 0.45 & 10.8 & 12.4 \\ StyleCLIP (Wang et al., 2018) & 0.76 & 0.42 & 38.9 & 40.1 \\
**TextCLIP (ours)** & **0.84** & **0.39** & **50.3** & **47.5** \\ \hline \hline \end{tabular}
\end{table}
Table 5. Quantitative Comparison and User Study of Text-Guided Image Manipulation on the Multi-modal CelebA-HQ Dataset. Acc. denotes semantic similarity and Real. denotes image realism.
images with mouths not closed, and AttnGAN and ControlGAN generated somewhat blurred, low-resolution images. As shown in Figure 2, for the same text, our method generates several different images, which demonstrates the diversity of our text-guided image generation method.
### Results on Text-Guided Image Manipulation
#### 4.3.1. **Quantitative Results**
As shown in Table 5, we compared with the previous TediGAN (Wang et al., 2018) and StyleCLIP (Wang et al., 2019). Instead of using FID to evaluate text-guided image manipulation as in previous methods, we use IDS to evaluate whether the identity information is well preserved before and after the image is semantically modified, and use LPIPS to determine whether the semantically irrelevant image regions are preserved. We also conducted a user study to assess the quality of the models. The experiments show that, in contrast to previous methods, our proposed TextCLIP does a good job of semantically editing the relevant image regions while preserving the irrelevant ones.
#### 4.3.2. **Qualitative Results**
As shown in Figure 6, we compare with the previous TediGAN (Wang et al., 2018) and StyleCLIP (Wang et al., 2019). Our method does a good job of modifying the semantically relevant parts according to the specified text, while leaving the semantically irrelevant parts unmodified. In all six examples, TediGAN does not generate semantically relevant images well, while StyleCLIP produces results similar to ours; however, the images produced by our method are more relevant to the given text while retaining the semantically irrelevant image regions well. This is not only because our level-channel mapper accurately maps the initial latent code according to the corresponding text condition, but also because our designed loss functions, including the identity loss and the semantic loss, modify the images accurately while preserving semantically irrelevant properties, such as the face identity, well.
## 5. Ablation Study
### Ablation Study On Loss Functions
As shown in Table 6, the loss functions we designed help to improve the performance of the text-guided image generation and manipulation tasks. The semantic loss makes the generated images semantically consistent with the given text, taking advantage of CLIP's (Wang et al., 2019) strong image-text representation capability. The identity loss, especially on text-guided image manipulation tasks, allows for good preservation of the identity information of face images. The image loss and the fidelity loss allow the generated image to stay close to the original image while being more realistic.
### Ablation Study On Network Structures
As shown in Table 7, the level-channel mapper demonstrates powerful performance when combined with StyleGAN (Wang et al., 2019) and CLIP (Wang et al., 2019). The level mapper helps to extract features in a hierarchical manner, and the channel mapper enables control of the text conditions at a finer granularity. The experimental results show that the level-channel mapper formed by the combination of the level mapper and the channel mapper has excellent performance.
## 6. Limitations
After analysis we believe there are several limitations:
* TextCLIP currently only addresses the face domain; in the future, we hope to extend this method to other domains such as flowers and birds. To verify the performance of our method on these domains, we need to pretrain StyleGAN on the relevant flower and bird datasets so that it can generate high-resolution flower and bird pictures; this is our next step.
* Since TextCLIP is based on StyleGAN (Wang et al., 2019) and CLIP (Wang et al., 2019), the problems that arise in CLIP and StyleGAN themselves will also arise in TextCLIP. For example, some attributes, such as hats and earrings, are not well represented in the latent space of StyleGAN, so we do not obtain the desired results for them. In addition, CLIP is vulnerable to adversarial attacks.
## 7. Conclusion
Based on the powerful image generation capabilities of StyleGAN and the image-text alignment capabilities of Contrastive Language-Image Pre-training (CLIP), we propose a new approach that provides a unified framework for text-guided image generation and manipulation, does not require adversarial training, and can accept open-world texts. Extensive experiments on the Multi-modal CelebA-HQ dataset demonstrate that our approach outperforms previous state-of-the-art methods on both the text-guided image generation task and the text-guided image manipulation task. In the future, we hope that TextCLIP will not be limited to the face domain but will be extended to other domains such as flowers and birds. In addition, for text-guided image manipulation tasks, we would like to explore a unified approach that does not need to train different models for different classes of textual conditions, using only one model to complete the task.
| Method | FID\(\downarrow\) (Gen.) | R-precision\(\uparrow\) (Gen.) | IDS\(\uparrow\) (Man.) | LPIPS\(\downarrow\) (Man.) |
| :-- | :--: | :--: | :--: | :--: |
| w/o \(\mathcal{L}_{semantic}\) | 99.93 | 0.143 | 0.11 | 0.46 |
| w/o \(\mathcal{L}_{ID}\) | 90.34 | 0.433 | 0.34 | 0.44 |
| w/o \(\mathcal{L}_{img}\) | 94.54 | 0.428 | 0.78 | 0.45 |
| w/o \(\mathcal{L}_{d}\) | 93.28 | 0.483 | 0.83 | 0.40 |
| **TextCLIP (ours)** | **88.27** | **0.384** | **0.84** | **0.39** |

Table 6. Ablation study on loss functions. Gen. denotes image generation, Man. denotes image manipulation.
| Method | FID\(\downarrow\) (Gen.) | R-precision\(\uparrow\) (Gen.) | IDS\(\uparrow\) (Man.) | LPIPS\(\downarrow\) (Man.) |
| :-- | :--: | :--: | :--: | :--: |
| w/o Level Mapper | 92.46 | 0.448 | 0.81 | 0.48 |
| w/o Channel Mapper | 100.22 | 0.396 | 0.78 | 0.42 |
| **TextCLIP (ours)** | **88.27** | **0.384** | **0.84** | **0.39** |

Table 7. Ablation study on network structure. Gen. denotes image generation, Man. denotes image manipulation. |
2309.14683 | A Simple Text to Video Model via Transformer | We present a general and simple text to video model based on Transformer.
Since both text and video are sequential data, we encode both texts and images
into the same hidden space, which are further fed into Transformer to capture
the temporal consistency and then decoder to generate either text or images.
Considering that the image signal may become weak in a long sequence, we introduce
a U-Net to reconstruct the image from its noised version. Specifically, we
increase the noise level of the original image along the long sequence, then use
the $down$ module from the U-Net to encode the noised images, which are further input
to the transformer to predict the next clear images. We also add a constraint to
promote motion between any generated image pair in the video. We use GPT2 and
test our approach on the UCF101 dataset, showing that it can generate promising videos. | Gang Chen | 2023-09-26T05:26:30Z | http://arxiv.org/abs/2309.14683v1 | # A Simple Text to Video Model via Transformer
###### Abstract
We present a general and simple text to video model based on Transformer. Since both text and video are sequential data, we encode both texts and images into the same hidden space, which is further fed into the Transformer to capture temporal consistency and then into a decoder to generate either text or images. Considering that the image signal may become weak in a long sequence, we introduce a U-Net to reconstruct the image from its noised version. Specifically, we increase the noise level of the original image along the long sequence, then use the \(down\) module from the U-Net to encode the noised images, which are further input to the transformer to predict the next clear images. We also add a constraint to promote motion between any generated image pair in the video. We use GPT2 and test our approach on the UCF101 dataset, showing that it can generate promising videos.
## 1 Introduction
Text to video has gained popularity in the computer vision and machine learning community. Initially, most work focused on how to generate images, with models such as GAN [1] and VAE [2], both of which have shown impressive image and speech synthesis results. Diffusion probabilistic models [3; 4; 5] have recently shown high quality image generation. Ho et al. proposed video diffusion models [9], a natural extension of the standard 2D image architecture to 3D. Imagen [6] is a text-to-image diffusion model conditioned on text embeddings from language models (e.g. T5). Imagen generates images with an unprecedented degree of photorealism given a text prompt, and scaling the text encoder boosts both sample fidelity and image-text alignment much more than increasing the size of the image diffusion model. Ho et al. extended Imagen and presented Imagen Video [7], a text-conditional video generation system based on a cascade of video diffusion models. Given a text prompt, Imagen Video generates high definition videos using a base video generation model and a sequence of interleaved spatial and temporal video super-resolution models. However, Imagen Video requires videos of fixed length to construct the 3D convolution process [9].
VideoGPT [8] is a conceptually simple architecture which uses a VQ-VAE to learn downsampled discrete latent representations of a raw video by employing 3D convolutions and axial self-attention, and then leverages a transformer to autoregressively model the discrete latents using spatio-temporal position encodings. Make-A-Video [10] is an approach for directly translating the tremendous recent progress in Text-to-Image (T2I) generation to Text-to-Video (T2V). Its intuition is simple: learn what the world looks like and how it is described from paired text-image data, and learn how the world moves from unsupervised video footage.
Unfortunately, the methods mentioned above either require a fixed video length for training or only generate constrained videos with the same background scene. In this work, we present an approach, based on the transformer framework, to train on (text, video) pairs with varied lengths and different scenes. In addition, to handle the possibly weak signal in a long sequence, we introduce a U-Net to reconstruct the video data.
Specifically, we encode both text and video into the same hidden space, then we leverage the transformer to capture the temporal and spatial consistency between video frames. In addition, we reconstruct the video data with noise to handle the long sequence scenario. For text to video, we can either use a simple decoder or a conditional diffusion model to generate images, then capture the motion with temporal and spatial constraints. One possible issue is that the generated videos may concentrate on a certain scene, so we add a constraint to promote motion. Considering the limited computation power, we use a simple decoder and train our model in an end-to-end fashion. Because limited text and video training data are available at this moment, we focus on the UCF 101 action dataset. We select 60 categories of action from the UCF 101 dataset, and label about 1 to 5 videos for each type of action with text descriptions. We tested our approach on this dataset, and show it can generate meaningful videos given a text prompt1.
Footnote 1: Our implementation is available at [https://github.com/vividitytech/text2videoGPT](https://github.com/vividitytech/text2videoGPT)
## 2 Model
We present a simple text to video model via transformer. In the following parts, we will introduce the language models and then discuss how to combine transformer and U-Net to generate videos from texts.
### Background
Given a vocabulary \(\mathcal{V}\) and an ordered sequence of symbols (or tokens) \((x_{1},x_{2},...,x_{n})\) with \(x_{i}\in\mathcal{V}\), the language model [11] is defined as the joint probability over sequences of tokens \(\mathcal{V}^{n}\), which is factorized into a product of conditional probabilities
\[p(x_{1},x_{2},...,x_{n};\theta)=\prod_{i=1}^{n}p(x_{i}|x_{1},x_{2},...,x_{i-1};\theta) \tag{1}\]
where the vocabulary \(\mathcal{V}\) is a large but finite set, and \(\theta\) is the model parameter. \(p(x_{i}|x_{1},x_{2},...,x_{i-1})\) is the conditional probability of the next word given the previous tokens.
Many NLP problems can be formulated as \(p(Y|X;\theta)\), where \(X\in\mathcal{V}^{n}\) is the input sequence and \(Y\in\mathcal{V}^{m}\) is the output. There have been many models that can compute these conditional probabilities, such as the recurrent neural network LSTM [12] and the self-attention Transformer [13]. In particular, the transformer architecture has brought significant improvements in the expressiveness and accuracy of models [14; 15]. To learn the model parameters \(\theta\), we can use the cross entropy loss:
\[L_{i}(\theta)=-\log p(y_{i})=-\log p(y_{i}|y_{<i},X;\theta) \tag{2}\]
where the target is one-hot: only the entry for \(y_{i}\) is one, and the entries for all other tokens in \(\mathcal{V}\backslash y_{i}\) are zero. Then, the cross entropy error over the sequence of size \(m\) is:
\[L(\theta)=\sum_{i=1}^{m}L_{i}(\theta)=-\sum_{i=1}^{m}\log p(y_{i}) \tag{3}\]
While predicting the next symbol \(\hat{y}_{i}\sim p(y_{i}|y_{<i},X;\theta)\), we can either sample it or take a greedy strategy and select the \(\hat{y}_{i}\) with maximum probability. Note that the conditional probability \(p(y_{i})=p(y_{i}|y_{<i},X;\theta)\) to predict the next token is defined over the discrete space \(\mathcal{V}\), which is constrained by the vocabulary size. Compared to text, the image space is significantly larger, and generating video from text is a much more challenging problem.
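The two decoding strategies just mentioned, sampling versus greedy selection, can be written in a few lines (a generic sketch, not tied to any particular model):

```python
import torch

def next_token(logits, greedy=True, temperature=1.0):
    """Pick the next symbol from p(x_i | x_<i) given a (batch, |V|) logit tensor."""
    if greedy:
        return logits.argmax(dim=-1)             # maximum-probability token
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).squeeze(-1)  # sampled token
```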
### The text to video model
What if \(Y=\{y_{1},y_{2},...,y_{m}\}\) is a video, not text? In this part, we will introduce how to extend Transformer to handle both texts and videos.
To generate the video \(Y=\{y_{1},y_{2},...,y_{m}\}\) as a sequence of frames, we take an approach similar to the way tokens are encoded in the transformer framework in Fig. 1. Since texts and images are from different domains, we need to map them into the same hidden space, which is further input to the transformer. In Fig. 1, we have a "Module" marked by the red dashed line to generate images. For example, we can use either the decoder or the conditional diffusion model from Fig. 2 to do this job. If we use a conditional diffusion model, we can use a pretrained model, but we then need another deep net to capture the motion information from the video. Although conditional diffusion models [16; 6] can generate high resolution images, they are time consuming and computationally intensive. In this paper, we use the decoder in Fig. 2(a) and train the whole model in an end-to-end fashion.
In other words, we have an encoder function \(e:y\to h\) and a decoder \(d:h\to y\). We also require that the generated \(\hat{Y}\) match the ground truth \(Y\), so we minimize the following squared error:
\[loss=J(\theta)=\sum_{i=1}^{m}|y_{i}-\hat{y}_{i}|^{2} \tag{4}\]
where \(\hat{y}_{i}=d(h_{<i})\) and \(h_{<i}\) is the last hidden output at location \(i\) from the Transformer. The squared error loss above on images plays the same role as the cross entropy in language models.
Figure 1: The figure shows the architecture to encode both text and video using transformer, where the module inside red dash line can be any deep nets to generate image.
Figure 2: The figure shows how to replace the Module in Fig 1. (a) Decoder; (b) Conditional diffusion model
Another assumption that we make is that the image signal may become weak in a long sequential video. To enhance the signal, we take an approach similar to diffusion models: we use a U-Net [17] to reconstruct images from their noised versions. The process is as follows:
1. create the noised data \(\tilde{y}_{i}=(1-\beta_{i})y_{i}+\beta_{i}\epsilon\), where \(\beta_{i}\) is the noise level coefficient
2. encode \(h_{i}=e(\tilde{y}_{i})\) using the \(down\) module from U-Net(\(down,up\)), where we use the down module as our encoder
3. predict \(h_{i+1}\) using the transformer
4. decode the output \(\hat{y}_{i+1}=d(h_{i+1})\) and reconstruct \(\hat{\hat{y}}_{i}=up(down(\tilde{y}_{i}))\) with the \(up\) module from U-Net(\(down,up\))
5. update the model parameters by minimizing the loss in equation 5 below
\[loss=L(\theta)+J(\theta)=-\sum_{i=1}^{n}\log p(x_{i})+\lambda_{1}\sum_{j=1}^{m}|y_{j}-\hat{y}_{j}|^{2}+\lambda_{2}\sum_{j=1}^{m}|y_{j}-\hat{\hat{y}}_{j}|^{2}-\alpha|\hat{y}_{s}-\hat{y}_{t}| \tag{5}\]
where the first term is the cross entropy loss from the text part, the second and third terms are the reconstruction losses from the video part, and the last term penalizes concentration in \(\hat{y}\). Here \(\lambda=\{\lambda_{1},\lambda_{2}\}\) and \(\alpha\) are the weights that balance these terms. The reconstruction loss considers both the loss from the reconstruction decoder (\(\hat{y}_{j}\)) and the one from the U-Net (\(\hat{\hat{y}}_{j}\)). As for the last term \(|\hat{y}_{s}-\hat{y}_{t}|\), we want to promote motion between any two frames \((s,t)\): in the implementation, we randomly sample two frames \(s,t\in[0,m)\) and encourage the difference between these two frames to be as large as possible.
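The five steps above can be summarised in one training iteration. The sketch below assumes the various modules (the \(down\)/\(up\) halves of the U-Net, the transformer, the frame decoder) are given as black boxes; the names and the interface of `transformer` are our own placeholders and do not match the repository API.

```python
import torch
import torch.nn.functional as F

def train_step(text_tokens, frames, down, up, transformer, frame_decoder,
               betas, lam1=1.0, lam2=5.0, alpha=10.0):
    """One illustrative pass over Eq. (5); interfaces are assumed, not the repo's."""
    m = frames.shape[0]
    beta = betas[:m].view(-1, *([1] * (frames.dim() - 1)))
    noisy = (1.0 - beta) * frames + beta * torch.randn_like(frames)  # step 1
    h = down(noisy)                                    # step 2: U-Net encoder
    text_logits, h_next = transformer(text_tokens, h)  # step 3: predict next states
    y_hat = frame_decoder(h_next)                      # step 4a: decoded frames
    y_unet = up(h)                                     # step 4b: U-Net reconstruction
    ce = F.cross_entropy(text_logits, text_tokens)     # text term of Eq. (5)
    rec = lam1 * F.mse_loss(y_hat, frames) + lam2 * F.mse_loss(y_unet, frames)
    s, t = torch.randint(0, m, (2,))                   # random frame pair
    motion = (y_hat[s] - y_hat[t]).abs().mean()        # promote motion
    return ce + rec - alpha * motion                   # step 5: total loss, Eq. (5)
```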
## 3 Experimental results
We used the smallest version of GPT-2, with 124M parameters, together with a U-Net, and tested our approach on the UCF101 dataset 2. There are 101 types of actions in total in the UCF101 dataset; we select 60 actions and sample about 1 to 5 videos for each action. We label each video with a text description and then resize the videos to \(72\times 72\) pixels to construct the final training dataset. In Fig. 3, we show sampled (text, video) pairs, where \(X\) is the text and \(Y\) is the video that we want to generate.
Footnote 2: [https://www.crcv.ucf.edu/data/UCF101.php](https://www.crcv.ucf.edu/data/UCF101.php)
Because of limited computation resources, we resize the image to \(32\times 32\) and use a simple U-Net to encode it before the transformer. The decoder is a simple linear layer with a \(tanh\) activation function, whose output is resized to \(72\times 72\) to reconstruct the output image. We also tried a four-layer decoder, with each layer consisting of conv2d, reshape and relu modules, but it did not gain much in the reconstructed images. In the experiments, we found that layer normalization does not help to generate images, so we do not use it in the decoder. U-Net parameters: the base dim is 16, with dimensional multiples (1, 2, 4, 8) and 2 resnet blocks. We set \(\lambda_{1}=1\) for the first reconstruction term and \(\lambda_{2}=5\) for the second (U-Net) reconstruction term. And we set \(\alpha=10\) to promote motion between any reconstructed image pair.
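For reference, the hyperparameters just listed can be gathered in one place (the dictionary layout is ours, purely for readability; the values are those stated in the text):

```python
# Hyperparameters reported in the text; the layout of this dict is illustrative.
config = dict(
    backbone="GPT-2 (124M, smallest)",
    input_image_size=32,         # frames resized before the transformer
    output_image_size=72,        # decoder output, 72x72 frames
    unet_base_dim=16,
    unet_dim_mults=(1, 2, 4, 8),
    unet_resnet_blocks=2,
    lambda1=1.0,                 # first reconstruction weight in Eq. (5)
    lambda2=5.0,                 # second (U-Net) reconstruction weight
    alpha=10.0,                  # motion-promoting weight
)
```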
We test our model with text prompts and show sampled images from the generated videos in Figs. 4-7. The reconstructed image resolution is not good enough. Possible reasons are: (1) the training set is small, and the image resolution is low; (2) the model structure is simple; (3) the decoder module is weak, since the generated images become blurry as we increase the sequence length.
## 4 Conclusion
We present a simple text to video model via transformer. Specifically, we combine both a Transformer and a U-Net to handle sequential text and long video data, and train the model in an end-to-end manner in order to generate videos from a text prompt. The limited amount of (text, video) training data available at this moment is still a challenge for training a good model. In addition, we need to capture motion in a way that is object insensitive. In the next stage, we will try to improve quality with more datasets and more complex models, such as a conditional diffusion model to generate images from text, and a transformer to capture motion in an object-insensitive manner.
Figure 3: The sampled frames from 3 videos arranged in rows. The corresponding descriptions for each video are listed as: (1) A man with black clothes is throwing javelin with face forward from right to left; (2) A female gymnast performing front flip on balance beam from right to left; (3) A man and a woman with black clothes are doing ice dancing.
Figure 4: A girl with white clothes is performing the floor gymnastics from right to left.
Figure 5: Two men wearing fencing suit practicing with sword against each other.
Figure 6: Stir the flour and water combination into source on the pan.
Figure 7: A boy in white t-shirt is biking with helmet. |
2310.20445 | Harnessing collective radiative phenomena on a photonic kagome lattice | Photonic lattices enable experimental exploration of transport and
localization phenomena, two of the major goals in physics and technology. In
particular, the optical excitation of some lattice sites which evanescently
couple to a lattice array emulates radiation processes into structured
reservoirs, a fundamental subject in quantum optics. Moreover, the simultaneous
excitation of two sites simulates collective phenomena, leading to
phase-controlled enhanced or suppressed radiation, namely super and
subradiance. This work presents an experimental study of collective radiative
processes on a photonic kagome lattice. A single or simultaneous -- in or
out-of-phase -- excitation of the outlying sites controls the radiation
dynamics. Specifically, we demonstrate a controllable transition between a fully
localized profile at the two outlying sites and a completely dispersed state
into the quasi-continuum. Our result presents photonic lattices as a platform
to emulate and experimentally explore quantum optical phenomena in
two-dimensional structured reservoirs, while harnessing such phenomena for
controlling transport dynamics and implementing all-optical switching devices. | Ignacio Salinas, Javier Cubillos Cornejo, Alexander Szameit, Pablo Solano, Rodrigo A. Vicencio | 2023-10-31T13:28:11Z | http://arxiv.org/abs/2310.20445v1 | # Harnessing collective radiative phenomena on a photonic kagome lattice
###### Abstract
Photonic lattices enable experimental exploration of transport and localization phenomena, two of the major goals in physics and technology. In particular, the optical excitation of some lattice sites which evanescently couple to a lattice array emulates radiation processes into structured reservoirs, a fundamental subject in quantum optics. Moreover, the simultaneous excitation of two sites simulates collective phenomena, leading to phase-controlled enhanced or suppressed radiation, namely super and subradiance. This work presents an experimental study of collective radiative processes on a photonic kagome lattice. A single or simultaneous - in or out-of-phase - excitation of the outlying sites controls the radiation dynamics. Specifically, we demonstrate a controllable transition between a fully localized profile at the two outlying sites and a completely dispersed state into the quasi-continuum. Our result presents photonic lattices as a platform to emulate and experimentally explore quantum optical phenomena in two-dimensional structured reservoirs, while harnessing such phenomena for controlling transport dynamics and implementing all-optical switching devices.
## I Introduction
Injecting light into an impurity site excites a non-bounded mode, which radiates energy into a given lattice [1; 2]. This phenomenon is analogous to an atom radiating into a structured reservoir, a fundamental problem in quantum optics [3]. For a weakly coupled lattice impurity, the system mimics the radiation of an atom into a continuum [4; 5; 6; 7; 8; 9], where the decaying dynamics is primarily exponential with a slow power-law decay at longer times [8]. In contrast, strongly coupled impurities lead to hybrid atom-photon bound states [3; 10; 11]. The coupling of two or more impurities to the same lattice reproduces the collective dynamics of many atoms interacting with a common reservoir in the single-photon regime, leading to super and subradiance behavior [12; 13; 14; 15; 16; 17]. Consequently, photonic lattices offer the potential to study novel quantum optical effects in otherwise typically inaccessible regimes, such as delay-induced non-Markovianity [18; 19], topological reservoirs [20], or radiation phenomena in two-dimensional (2D) structured reservoirs.
The kagome lattice is historically known as the most frustrated 2D system in magnetism due to the impossibility of forming an antiferromagnetic state [21]. This lattice also allows studying the interplay between topology and correlations [22] due to the coexistence of Dirac cones and a Flat Band (FB). The first theoretical study of its photonic implementation searched for localized nonlinear cubic solutions outside of the bands [23], without a special focus on the linear properties of this lattice. Then, a study on 2D discrete nonlinear dynamics showed the possibility of mobility of highly compact nonlinear solutions [24], something that was indeed forbidden for standard nonlinear Kerr 2D systems [25]. A photonic kagome lattice was also suggested for non-diffracting image transmission based on the coherent linear combination of FB states [26]. Photonic kagome lattices have been fabricated by diverse means [27; 28; 29], using photorefractive SBN crystals [31; 32] or femtosecond (fs) laser written structures [33]. However, previous experiments were limited to lattices with only a few lattice sites [34; 35]. Moreover, the intrinsic ellipticity of the fs technique produces non-symmetric coupling constants; for example, the FB properties of a geometrically symmetric kagome lattice [36; 37] are simply lost, transforming the system into a graphene-like structure [38], already studied in diverse contexts of physics [39].
In this work, we study radiation phenomena on a photonic kagome lattice evanescently coupled to two outlying sites that emulate two radiating atoms. We numerically and experimentally demonstrate that the optical excitation of the outlying sites produces a radiation pattern into the lattice that initially decays exponentially and then plateaus at around one-half of the initial energy. Simultaneous in-phase excitation of both sites (atoms) evidences superradiance, accelerating the radiation dynamics and significantly increasing the energy radiated into the lattice, reducing the energy remaining within the initially excited outlying sites. We also study the effect of applying an arbitrary phase difference to the optically excited sites, where we evidence subradiant dynamics for an out-of-phase input condition. In this case, the input excitation coincides with the profile of a bound state in the continuum [40; 41; 42; 43; 44; 45], and the energy remains almost perfectly trapped between the outlying sites. We effectively switch the dynamics into well-defined spatial states by varying the input amplitude or phase between the optical excitations. Our results draw inspiration from collective effects in quantum optics, studied here experimentally, and harness these phenomena for transport control and all-optical switching in photonic lattices.

Figure 1: (a) A kagome lattice. (b) Linear spectrum for \(V_{d}/V_{h}=1.2\).
## II Theory and simulations
### Lattice model
Our photonic kagome lattice under study consists of an array of single-mode optical waveguides which evanescently couple to their nearest neighbors. The dynamics is well described by a Discrete Linear Schrödinger Equation (DLSE) [25] that reads, in a general and compact form, as
\[-i\frac{\partial u_{\vec{n}}}{\partial z}=\sum_{\vec{m}}V_{\vec{n},\vec{m}}u_{\vec{m}}\;. \tag{1}\]
Here, \(u_{\vec{n}}\) describes the mode amplitude at the \(\vec{n}\)-site and \(z\) is the propagation coordinate along the waveguides (which corresponds to time in quantum mechanics). \(V_{\vec{n},\vec{m}}\) are the matrix coefficients defining the coupling interaction between the nearest-neighbor sites \(\vec{n}\) and \(\vec{m}\), under the lattice geometry sketched in Fig. 1(a), with horizontal \(V_{h}\) and diagonal \(V_{d}\) coupling constants. A kagome lattice [26; 24] has three sites per unit cell, as shown by sites \(A\), \(B\) and \(C\) in the same figure. The total power \(P_{Total}\equiv\sum_{\vec{n}}P_{\vec{n}}\) is a conserved quantity of model (1), with \(P_{\vec{n}}\equiv|u_{\vec{n}}|^{2}\) the \(\vec{n}\)-th lattice site power.
We obtain the bands of the system by inserting into the model (1) a standard plane-wave (Bloch) ansatz of the form \(u_{\vec{n}}(z)=\{A_{0},B_{0},C_{0}\}\exp(i\vec{k}_{\perp}\cdot\vec{r})\exp(ik_{z}z)\). Here, \(A_{0},B_{0},C_{0}\) correspond to the amplitudes at the respective unit cell sites, \(\vec{k}_{\perp}\) to the transversal wavevector, and \(\vec{r}\) to a generic lattice position. \(k_{z}\) corresponds to the longitudinal propagation constant or spatial frequency along the propagation direction \(z\) (in a solid-state context, \(k_{z}\) corresponds to the energy [25]). The linear spectrum is composed of three bands, as shown in Fig. 1(b). Two of them are dispersive and connected by Dirac cones at the vertices of a hexagonal Brillouin zone. Dispersive bands are composed of extended propagating modes responsible for the transport on a given lattice system [25; 46]. In our case, the third (lower) band at the bottom is quasi flat [37]. This third band becomes perfectly flat (i.e., \(k_{z}=\) constant) only if all coupling constants are equal on a kagome geometry [26; 24]; i.e., for a completely isotropic lattice. However, when the band is nearly (quasi) flat, its modes are very slow (massive) and do not contribute efficiently to energy transport.
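As an illustration, the three bands can be obtained by diagonalising the 3x3 Bloch Hamiltonian numerically. The sketch below uses one common choice of lattice vectors and of which bonds are labelled horizontal; these conventions (though not the qualitative band structure) may differ from those behind Fig. 1(b).

```python
import numpy as np

def kagome_bands(kx, ky, Vh=1.0, Vd=1.2):
    """Eigenvalues k_z of the 3x3 Bloch Hamiltonian of model (1) on the kagome lattice."""
    a1 = np.array([1.0, 0.0])                 # Bravais vectors (unit spacing)
    a2 = np.array([0.5, np.sqrt(3.0) / 2.0])
    k = np.array([kx, ky])
    cAB = 2.0 * np.cos(k @ a1 / 2.0)          # A-B bonds (taken as horizontal here)
    cAC = 2.0 * np.cos(k @ a2 / 2.0)          # A-C bonds (diagonal)
    cBC = 2.0 * np.cos(k @ (a2 - a1) / 2.0)   # B-C bonds (diagonal)
    H = np.array([[0.0,      Vh * cAB, Vd * cAC],
                  [Vh * cAB, 0.0,      Vd * cBC],
                  [Vd * cAC, Vd * cBC, 0.0]])
    return np.linalg.eigvalsh(H)  # three bands; the lowest one is flat when Vh == Vd
```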
### Analogy to radiation
A single outlying lattice site, which is evanescently coupled to a lattice array, can be considered as a quantum emitter coupled to a quasi-continuum structured reservoir [3]. As long as the quantum system remains in the single-excitation regime, the same equations of motion describe the evolution of both systems. When the site/atom is initially excited, its excitation will decay into the array/reservoir in a process resembling radiation. The radiation behavior depends on the ratio between the coupling \(g\) of the single site to the array and the coupling \(V\) between sites within the array. In the limit of weak coupling, \(g/V\ll 1\), the excited site decays exponentially. In the strong coupling regime, \(g/V\gg 1\), the excitation is localized and oscillates between the outlying site and the nearest sites in the array (with a dimer-like effective dynamics). The behavior in the intermediate regime, \(g/V\sim 1\), is more complicated and depends strongly on the structure of the reservoir. Generally, the radiation begins as an exponential decay until reaching an approximately constant value that decays polynomially slowly at longer times [3]. This general behavior even holds for atoms radiating into free space [47], where a pure exponential decay gives a good approximation.
In the weak coupling approximation, the mostly exponential decay depends on the coupling to the array \(g\) and a finite density of states (DOS) [15]. Fermi's golden rule tells us that the exponential decay rate is \(\gamma=2\pi g^{2}\rho(\Delta)\), where \(\rho(\Delta)\) is the DOS at a frequency \(\Delta\). In the case of a waveguide coupled to a kagome lattice, the non-zero DOS at zero energy guarantees the excitation transport through the array. For two outlying sites radiating into the lattice array, their relative amplitudes and phases can lead to destructive or constructive interference. The case of constructive (destructive) interference enhances (suppresses) the radiation into the quasi-continuum, in analogy to the collective effects of superradiance (subradiance). A decay rate \(\gamma\) could be collectively enhanced (suppressed) to reach a decay rate \(\gamma_{\mathrm{tot}}=2\gamma\) (\(\gamma_{\mathrm{tot}}=0\)). Collective effects of radiation into 2D structured reservoirs have been theoretically studied [15; 16; 17], but to our knowledge, they lack experimental implementations.
### Dynamical analysis
We numerically integrate the model (1) to study the radiation phenomena in the waveguide array, establishing an analogy where a single outlying site acts as an atom and the lattice acts as a continuum reservoir [19]. Exciting the system on a single outlying site allows studying standard radiation processes, while exciting two sites simulates collective behaviors. Fig. 1(a) shows the \(A\) and the \(C\) outlying sites acting as radiating atoms, as emphasized by a green ellipse. Both sites connect to the rest of the lattice through a \(B\) site. In this scheme, we can use the analogy of atoms radiating into a 2D kagome lattice and study its dependence on different input conditions. To gain insight into the dynamical properties, as well as to approach the experimental regime, we numerically study the isotropic (\(V_{d}=V_{h}\)) and weakly anisotropic (\(V_{d}/V_{h}=1.2\)) cases. We characterize the dynamics by computing the remaining power at the isolated \(A\) and \(C\) atomic sites (\(P_{\rm atoms}\)), both located at the right-upper lattice corner, and dividing it by the total power in the system (\(P_{\rm total}\)), including these atoms. We define \(P_{\rm atoms}/P_{\rm total}\) in analogy to the atomic excitation probability to quantify the radiation process and the dynamics of the system.
Figure 2(a) presents a compilation of our numerical results for isotropic (black) and anisotropic (red) lattices. We first excite a single waveguide and study the power evolution at the atomic site. We observe (normal lines) a similar behavior for both lattice cases, with approximately one-half of the energy being radiated to the lattice and the other half oscillating in the region of the atomic sites. As all the coupling constants are of the same order in our lattice, we attribute this observation to an intermediate radiation regime [8], where the energy is shared between the two atoms and the lattice. Fig. 2(b1) shows the output profile after a propagation length \(L=10\) cm for the anisotropic case. We observe that both atoms are strongly excited, with almost equal intensity. These two atoms create a dimer-like system, generating oscillation between them, while the light is simultaneously being radiated efficiently to the rest of the lattice. However, this is not so evident in Fig. 2(b1) due to the large intensity differences between the atomic and lattice sites (the energy is homogeneously distributed into the lattice, with a low intensity density per site of \(\sim 0.001\) compared to \(\sim 0.25\) contained at each atom).
A collective superradiant effect occurs when the two atoms are simultaneously excited in phase. Thicker lines in Fig. 2(a) show similar dynamics for both lattices, where we observe markedly enhanced radiative dynamics. We observe a faster energy transport into the lattice, where for \(z\approx 0.5\) cm around 50% of the energy has already been disseminated (for a single atomic site excitation, this occurs at \(z\approx 2\) cm). Even more importantly, almost all the energy has been disseminated to the lattice for \(z\approx 2.5\) cm. This figure shows a noticeable and robust difference between the regimes of radiation and superradiation for this 2D kagome lattice. Fig. 2(b2) shows the output profile for this case, at the propagation length \(z=L\), where we observe a strong contrast with the single atomic site excitation shown in Fig. 2(b1). This numerical observation clearly shows that the chosen kagome configuration constitutes an excellent scenario for radiative-like studies.
Now, we study the effect of a simultaneous excitation of both atoms with a nontrivial input phase structure. This idea comes from a recent work where the authors use a Lieb ribbon photonic lattice [48] to study the excitation of 0- and \(\pi\)-phase qubits. Taking advantage of the FB properties of a Lieb geometry, those authors could cancel the transport to the lattice for an out-of-phase excitation. On the other hand, the energy radiates through the system for an in-phase condition. In our case, the lattice anisotropy requires a balanced amplitude condition to fully cancel the transport through the lattice while exciting both atoms in an out-of-phase configuration [49]. If we define the amplitudes of the isolated atoms as \(a\) and \(c\), top and right, respectively, we should satisfy the condition \(V_{d}a+V_{h}c=0\) to achieve the required destructive interference at the connector site \(B\). In this case, the transport through the lattice would be minimal, with most of the energy remaining localized at both atomic sites and \(P_{atoms}/P_{total}\approx 1\). Thinner lines in Fig. 2(a) show this regime for both lattice cases. We observe how the energy remains trapped only at the atomic sites for a perfectly isotropic lattice, while for a weakly anisotropic configuration, the energy slowly leaks into the lattice. The out-of-phase excitation
relates to a compact stationary state, which may correspond to a bound edge state in the continuum [40; 41; 42; 43; 44; 45]. This state has only two sites different from zero precisely in the isotropic case \(V_{d}=V_{h}\). However, for anisotropic lattices, the energy slowly radiates into the bulk, but the state appears effectively localized for short propagation distances.

Figure 2: (a) \(P_{\rm atoms}/P_{\rm total}\) versus propagation coordinate \(z\) for isotropic (black) and anisotropic (red) kagome lattices. The normal lines show the dynamics after optically exciting a single atomic site. In contrast, thicker and thinner lines show the dynamics of two atomic sites optically excited in and out-of-phase, respectively. (b1) and (b2) Output intensity profiles at \(z=10\) cm for single-site and in-phase double-site excitations as indicated in (a). (c) Kagome spectrum for the finite lattice having 343 sites and \(V_{d}/V_{h}=1.2\). The inset shows the respective edge state in the continuum. (d) \(P_{\rm atoms}/P_{\rm total}\) versus \(\Delta\phi\) and \(V_{d}/V_{h}\), for the excitation of two atoms after a propagation of \(z=L=10\) cm. Insets show the indicated cases.
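To make the interference mechanism at the connector site concrete, the following deliberately reduced model couples the two atoms to a single connector feeding a one-dimensional chain, which stands in for the 2D kagome reservoir. All names and the chain geometry are our own illustrative choices; note that in this toy the balanced out-of-phase state is exactly dark, whereas in the real anisotropic lattice the energy leaks slowly, presumably because of additional couplings in the 2D geometry.

```python
import numpy as np
from scipy.integrate import solve_ivp

def remaining_atom_power(phase=np.pi, Vh=1.0, Vd=1.2, N=300, L=10.0):
    """P_atoms/P_total at z = L for a 1D stand-in reservoir (toy model, not the 2D lattice)."""
    n = N + 3                                # site order: [A, C, B, chain ...]
    H = np.zeros((n, n))
    H[0, 2] = H[2, 0] = Vd                   # atom A -- connector B
    H[1, 2] = H[2, 1] = Vh                   # atom C -- connector B
    for j in range(2, n - 1):                # connector B feeding a uniform chain
        H[j, j + 1] = H[j + 1, j] = 1.0
    u0 = np.zeros(n, dtype=complex)
    u0[0] = 1.0                              # amplitude a at atom A
    u0[1] = (Vd / Vh) * np.exp(1j * phase)   # amplitude c; phase = pi gives Vd*a + Vh*c = 0
    u0 /= np.linalg.norm(u0)
    # Eq. (1) rearranged: du/dz = i H u.
    sol = solve_ivp(lambda z, u: 1j * (H @ u), (0.0, L), u0,
                    t_eval=[L], rtol=1e-6, atol=1e-9)
    uL = sol.y[:, -1]
    return abs(uL[0])**2 + abs(uL[1])**2     # total norm is conserved

# Out-of-phase input stays trapped; in-phase input radiates into the chain:
# remaining_atom_power(np.pi) ~ 1, while remaining_atom_power(0.0) << 1.
```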
Figure 2(c) shows the eigenvalue spectrum for the finite lattice structure under study [see Fig. 3(b)] considering \(V_{d}/V_{h}=1.2\). We observe that there is a state (red dot) inside the second band, at a frequency \(k_{z}\approx-1.2\), which is highly trapped at the atomic sites region, as the intensity profile in the inset shows. In fact, this is the most localized state for this lattice geometry, with a participation number [24; 26] of \(5.3\) for \(V_{d}/V_{h}=1.2\). The intensity ratio between both atomic sites is \(1.23\), and the rest of the lattice amplitude is minimal but not zero. By using an out-of-phase input condition, we numerically find that \(\sim 96\%\) of the energy is trapped at the atomic sites after a propagation distance of \(z=10\) cm. On the other hand, for an in-phase excitation of the atoms, this value strongly decreases to \(\sim 2\%\).
To characterize this better, we run several simulations varying the lattice anisotropy \(V_{d}/V_{h}\) and the input phase \(\Delta\phi\) between two equal-amplitude atomic sites. After running each simulation, up to a propagation distance of \(z=10\) cm, we compute the energy remaining at the atomic sites (\(P_{\text{atoms}}/P_{\text{total}}\)) and show our compiled results in Fig. 2(d). There, we observe an evident optical switch effect, which could be fully controlled by external optical means [48]. We notice that, for a perfectly isotropic regime (\(V_{d}=V_{h}\)) and an out-of-phase (\(\Delta\phi=\pi\)) input condition, the energy remains perfectly trapped at the atomic sites, as clearly shown in the right-panel inset. Around this parameter region, \(P_{atoms}/P_{total}\approx 1\) due to the effective excitation of a bound edge state in the continuum [40; 41; 42; 43; 44; 45], which has much larger amplitudes at the \(A\) and \(C\) atomic sites. Therefore, this input condition effectively excites a localized state at the atomic sites, which naturally does not radiate, or does so only weakly. This regime is a perfect analogy to the subradiant regime in quantum optics [50]. On the other hand, for an in-phase input excitation (\(\Delta\phi\approx 0,2\pi\)), the energy is fully superradiated into the lattice, independently of the lattice anisotropy, as Figs. 2(a) and (b) show. We also notice [see the left-panel inset in Fig. 2(d)] that for an out-of-phase input condition on a highly anisotropic lattice, the energy is also well radiated into the bulk. This effect is due to the absence of compact edge states to excite at the atomic sites, with only propagating modes available in the lattice after excitation. A larger anisotropy effectively decouples the atomic site \(C\) from the connector site \(B\), and no localized edge state is possible any longer at the atomic sites region.
## III Experimental excitation
The kagome lattice under study was fabricated using the direct femtosecond laser-writing technique [33] [see the sketch in Fig. 3(a)] on a \(L=10\) cm-long fused silica glass wafer. Fig. 3(b) shows a microscope image of our fabricated kagome lattice with 343 single-mode waveguides (at 633 nm), having a lattice - center to center - spacing of \(20\)\(\mu\)m; i.e., a geometrically isotropic configuration. However, the waveguide ellipticity becomes quite evident after white light illumination, with an effective profile of \(\sim 4\times 12\)\(\mu\)m [33]. This ellipticity affects the propagation dynamics on this lattice due to the different evanescent coupling among different waveguides, depending on the waveguide orientation. Specifically, in this case, the horizontal coupling constant \(V_{h}\) becomes smaller than the diagonal one (\(V_{d}\)) at an equal geometrical distance. This asymmetry implies that our perfectly symmetric lattice configuration becomes effectively anisotropic in terms of dynamical properties.
First of all, we study this lattice experimentally using a standard characterization setup, which consists of focusing and linearly polarizing a HeNe laser beam to excite individual bulk waveguides. Figs. 3(c) and (d) show discrete diffraction patterns at the output facet for \(C\) and \(B\) bulk excitations, respectively. Both cases show excellent transport properties, with the light fully exploring the lattice. The \(C\)-site excitation shows a more vertically oriented pattern due to the first hopping to the \(A\) and \(B\) sites above and below. On the other hand, a \(B\)-site excitation shows a more horizontal distribution of the energy through the lattice, with some weak localization tendency in the surroundings of the input excitation. This could be due to a better excitation of the quasi-flat band formed by slow propagating modes. Nevertheless, in this case, the light explores the lattice quite well, as can be noticed by observing some localized patterns at the lattice surface [see Fig. 3(d)].
Now, we implement an image setup based on a sequence of two spatial light modulators (SLMs) [51]. In the first stage, we use a transmission SLM to modulate the amplitude of a 640 nm wide laser beam and generate one or two light disks to excite one or two atoms, respectively. In the second stage, we use a reflective SLM to add a phase pattern to the generated amplitude-modulated profile. In this way, we can simultaneously excite one or more waveguides with a well-defined amplitude and phase structure. We first excite every atom independently and observe the differences in the fabricated kagome lattice. Figures 4(a) and (b) show the excitation of the upper \(C\) and bottom \(A\) isolated atomic sites. The experiments show that the upper atomic site excitation radiates energy through the lattice more efficiently than the bottom atomic site. Nevertheless, both cases show a slow radiation process with an amount of radiated energy around 50%, as expected from the numerical simulations shown in Fig. 2(a). [As the experimental figures are normalized to the maximum intensity, the lattice background looks very weak, similar to the simulation shown previously in Fig. 2.]

Figure 3: (a) fs writing technique. (b) A fs written kagome photonic lattice, including the effective site atoms emphasized by a red ellipse. (c) and (d) Output intensity experimental images after a \(C\) and \(B\) bulk site excitation, respectively.
Figure 4(c) shows the collective effect of superradiance when both atoms are excited in phase (\(\Delta\phi=0\)), with both constructively radiating to the lattice bulk. We observe a well-disseminated output pattern, with the light exploring the lattice freely and with less than 5% of the total power remaining at the atomic sites. Although the intensity looks higher at those sites, the additive contribution of the lattice sites is indeed much higher. The contrast between independent atomic radiation and superradiation phenomena on our kagome structure is quite evident from a simple eye inspection of these experimental images.
On the other hand, by adding a \(\pi\) phase difference between both excited atoms, we induce destructive interference dynamics at the connector \(B\) site. This interference reduces the energy radiated to the lattice, at the experimental dynamical scale of \(L=10\) cm, to around 15%. Therefore, this input condition excites an almost perfect compact localized edge state, which remains trapped at the excitation region with a slow leakage into the lattice. This result is in very good agreement with the numerical results presented in Fig. 2.
Now, we run a more extensive set of experiments, taking advantage of the possibilities of our image setup configuration. Specifically, we first set the excitation phase difference between both atomic sites to zero and vary only the amplitude at the upper (\(C\)) atomic site, while keeping the amplitude at the bottom (\(A\)) one constant. This way, we can experimentally study the dynamic transition between pure radiative and superradiant processes. We show our collected results in Fig. 4(e), where we observe a well-defined transition between these two regimes, with the letters indicating the panels in the same figure. These two clear regimes, with two well-defined plateaus, can be used as an optical switch. By controlling the radiance and superradiance properties of our kagome lattice, we can transit from a weakly radiated pattern into a strongly radiated one and decide, in a very controllable way, the radiation state we need; i.e., a photonic amplitude valve/switch.
Finally, using the same image setup, we implement an experiment where we excite both atoms simultaneously with the same amplitude, but now applying a controlled phase difference \(\Delta\phi\) between the two atoms. Fig. 4(f) shows our compiled results, where we observe an almost perfect phase-controlled all-optical switch. There are well-defined states, with the energy transiting from a superradiative pattern (with almost no energy at the atomic sites) at \(\Delta\phi=0,2\pi\) into a subradiative one at \(\Delta\phi=\pi\) (with most of the energy remaining trapped at the atomic sites). In this case, we can select two very different dynamical states with high experimental precision by just controlling the phase difference between the atoms [48]. Both experiments show a clear opportunity to use the radiative processes of a given lattice structure to externally control different output spatial patterns on demand and to use them as, for example, logical state bits to transmit optical information.
## IV Conclusions
In this work, we use a photonic kagome lattice to numerically simulate and experimentally demonstrate collective radiative phenomena in structured two-dimensional systems, presenting a precise all-optical control over these processes. The experiments demonstrate the transition between radiative, superradiative, and subradiative processes, showcasing the potential for optical switching and transport control in lattice arrays. An in-phase excitation of two outlying sites/atoms yields superradiance through a kagome lattice array, which
accelerates radiation dynamics and significantly enhances the energy radiated to the lattice. In contrast, an out-of-phase excitation leads to subradiant dynamics, wherein energy remains highly confined between the excited atomic sites.

Figure 4: (a) and (b) Output intensity profiles for up (\(C\)) and down (\(A\)) atomic site excitations, respectively. (c) and (d) Output intensity profiles for a simultaneous \(A\) and \(C\) in-phase and out-of-phase atoms excitation, respectively. Yellow circles show the corresponding input positions. (e) and (f) \(P_{atoms}/P_{total}\) vs \(P_{up}/P_{down}\) and \(\Delta\phi\), respectively. Letters indicate the relation with panels (a), (b), (c) and (d). The experimental data was measured for 640 nm on a \(L=10\) cm-long kagome photonic lattice.
The study advances our knowledge of simulating quantum optical phenomena within photonic lattices and highlights the practical utility of these effects. These findings lay the foundation for future exploration of simulating quantum optical effects in two-dimensional structured reservoirs, setting the stage for harnessing these phenomena in photonic systems. The research contributes to the burgeoning field of quantum optics and photonic lattices, where manipulating light and its quantum properties could impact various technologies and applications.
###### Acknowledgements.
This work was supported by FONDECYT grants 1231313 and 11200192, CONICYT-PAI grant 77190033. P.S. is a CI-FAR Azrieli Global Scholar in the Quantum Information Science Program. A.S. acknowledges funding from the Deutsche Forschungsgemeinschaft (grants SZ 276/9-2, SZ 276/19-1, SZ 276/20-1, SZ 276/21-1, SZ 276/27-1, GRK 2676/1-2023 'Imaging of Quantum Systems', project no. 437567992, and SFB 1477 'Light-Matter Interactions at Interfaces', project no. 441234705).
## Author Declarations
### Conflict of Interest
The authors have no conflicts to disclose.
### Author Contributions
**Ignacio Salinas: Investigation, Formal Analysis. Javier Cubillos: Data curation, Formal Analysis, Investigation. Alexander Szameit: Investigation, Funding acquisition. Pablo Solano: Formal Analysis, Funding acquisition, Writing. Rodrigo A. Vicencio: Formal Analysis, Funding acquisition, Investigation, Methodology, Resources, Supervision, Visualization, Writing.**
## Data Availability Statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2309.10397 | Locally trivial monodromy of moduli spaces of sheaves on K3 surfaces | In this paper we study monodromy operators on moduli spaces $M_v(S,H)$ of
sheaves on K3 surfaces with non-primitive Mukai vectors $v$. If we write
$v=mw$, with $m>1$ and $w$ primitive, then our main result is that the
inclusion $M_w(S,H)\to M_v(S,H)$ as the most singular locus induces an
isomorphism between the monodromy groups of these symplectic varieties,
allowing us to extend to the non-primitive case a result of Markman. | Claudio Onorati, Arvid Perego, Antonio Rapagnetta | 2023-09-19T07:58:54Z | http://arxiv.org/abs/2309.10397v1 | # Locally trivial monodromy of moduli spaces of sheaves on K3 surfaces
###### Abstract.
In this paper we study monodromy operators on moduli spaces \(M_{v}(S,H)\) of sheaves on K3 surfaces with non-primitive Mukai vectors \(v\). If we write \(v=mw\), with \(m>1\) and \(w\) primitive, then our main result is that the inclusion \(M_{w}(S,H)\to M_{v}(S,H)\) as the most singular locus induces an isomorphism between the monodromy groups of these symplectic varieties, allowing us to extend to the non-primitive case a result of Markman.
###### Contents
* 1 Preliminaries
* 2 A groupoid representation
* 3 Polarised monodromy of K3 surfaces and its lift to moduli spaces
* 4 The locally trivial monodromy group
## Introduction
Singular symplectic varieties (Section 1.1) have gained much interest lately, especially after the proof of a global Torelli Theorem in [1]. The outcome of these results can be summarised by saying that their geometry behaves very much like the geometry of irreducible holomorphic symplectic manifolds. Roughly, if \(X\) is either a smooth or singular irreducible symplectic variety, most of the geometry of \(X\) is controlled by the second integral cohomology group \(\mathrm{H}^{2}(X,\mathbb{Z})\) together with its pure weight two Hodge structure and the Beauville-Bogomolov-Fujiki lattice structure. We recall that the Beauville-Bogomolov-Fujiki lattice \(\mathrm{H}^{2}(X,\mathbb{Z})\) has always signature \((3,b_{2}(X)-3)\), where the positive three-space is generated by a Kahler class and the real and imaginary part of the symplectic form. The bimeromorphic classification of irreducible symplectic varieties in the same locally trivial deformation class is then encoded in the (locally trivial) _monodromy group_, which is a finite index subgroup \(\mathrm{Mon}^{2}_{\mathrm{lt}}(X)\) of the group \(\mathrm{O}(\mathrm{H}^{2}(X,\mathbb{Z}))\) of isometries of the lattice \(\mathrm{H}^{2}(X,\mathbb{Z})\).
In this paper we consider a special class of irreducible symplectic varieties, namely those that are locally trivially deformation equivalent (Definition 1.4) to a moduli space \(M_{v}(S,H)\) of sheaves on a projective K3 surface \(S\). Here \(v\) is a non-primitive Mukai vector and \(H\) is \(v\)-generic. The notion of \(v\)-genericity is technical and will be recalled in Definition 1.19. By [18], it is known
that, under these assumptions, \(M_{v}(S,H)\) is indeed an irreducible symplectic variety.
Let us remark that when the Mukai vector \(v\) is primitive, the moduli space \(M_{v}(S,H)\) is smooth and it is an irreducible holomorphic symplectic manifold, i.e. a simply connected Kahler manifold with a unique (up to scalar) holomorphic symplectic form. If \(X\) is any manifold deformation equivalent to a smooth moduli space \(M_{v}(S,H)\) as above, then the group \(\operatorname{Mon}^{2}(X)\) of the monodromy operators in \(\operatorname{H}^{2}(X,\mathbb{Z})\) is known by a result of Markman ([10]): it is the group of orientation preserving isometries that act as \(\pm\operatorname{id}\) on the discriminant group (see Section 1.3 for the notion of orientation). Recall that if \(\Lambda\) is a lattice, then the discriminant group is the finite group \(\Lambda^{*}/\Lambda\), where \(\Lambda^{*}=\operatorname{Hom}(\Lambda,\mathbb{Z})\) is the dual \(\mathbb{Z}\)-module.
Following Markman's notation, if \(\Lambda\) is any even lattice of signature \((3,n)\), then we denote by \(\mathsf{W}(\Lambda)\subset\operatorname{O}(\Lambda)\) the subgroup of orientation preserving isometries acting as \(\pm\operatorname{id}\) on the discriminant group.
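In symbols, and merely rephrasing the definition just given, one may write
\[\mathsf{W}(\Lambda)=\bigl\{\,g\in\operatorname{O}^{+}(\Lambda)\ \big|\ \bar{g}=\pm\operatorname{id}_{\Lambda^{*}/\Lambda}\,\bigr\},\]
where \(\operatorname{O}^{+}(\Lambda)\) denotes the group of orientation preserving isometries of \(\Lambda\) and \(\bar{g}\) is the isometry induced by \(g\) on the discriminant group \(\Lambda^{*}/\Lambda\).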
Our first result is the following, which extends [10, Theorem 1.1] to the singular setting.
**Theorem A.1** (Corollary 4.16).: _Let \(X\) be an irreducible symplectic variety that is locally trivially deformation equivalent to a moduli space \(M_{v}(S,H)\), where \(S\) is a projective K3 surface, \(v=mw\) with \(m>1\) and \(w\) primitive, and \(H\) a \(v\)-generic polarisation. Then_
\[\operatorname{Mon}^{2}_{\operatorname{lt}}(X)=\mathsf{W}(\operatorname{H}^{2} (X,\mathbb{Z}))\subset\operatorname{O}(\operatorname{H}^{2}(X,\mathbb{Z})).\]
We recall the definition of the group \(\mathsf{W}(\operatorname{H}^{2}(X,\mathbb{Z}))\) at the beginning of Section 4.1. We point out that our result does not supersede Markman's one, inasmuch as we heavily use [10, Theorem 1.1] in our proof.
To the best of our knowledge, this is the first time an explicit description of the monodromy group of a class of singular symplectic varieties is exhibited. We recall that monodromy groups have been computed in all the known deformation classes of smooth irreducible holomorphic symplectic manifolds, see [10, 10, 11, 12, 13].
As a corollary one can explicitly compute the index of \(\operatorname{Mon}^{2}_{\operatorname{lt}}(X)\) in the isometry group \(\operatorname{O}(\operatorname{H}^{2}(X,\mathbb{Z}))\) (see Corollary 4.12). As an example, if \(X\) is deformation equivalent to a moduli space \(M_{v}(S,H)\), where \(v=mw\) and \(w^{2}=2\), then the group \(\operatorname{Mon}^{2}_{\operatorname{lt}}(M_{v}(S,H))\) is the whole group \(\operatorname{O}^{+}(\operatorname{H}^{2}(M_{v}(S,H),\mathbb{Z}))\) of orientation preserving isometries (notice that this does not depend on \(m\); this feature is shared by an infinite class of examples in any dimension \(2m^{2}+2\)).
Our second result is an equivalent reformulation of Theorem A.1, in which the relation between the group \(\operatorname{Mon}^{2}_{\operatorname{lt}}(X)\) and the monodromy group of an irreducible holomorphic symplectic manifold deformation equivalent to a smooth moduli space of sheaves is explained. First of all, if \(X\) is locally trivially deformation equivalent to a moduli space \(M_{v}(S,H)\) as before, then the most singular locus \(Y\) of \(X\) (cf. Proposition 1.3) is an irreducible holomorphic symplectic manifold deformation equivalent to the moduli space \(M_{w}(S,H)\) (here \(w\) is the primitive Mukai vector such that \(v=mw\)). Let us denote by \(i_{Y,X}\colon Y\to X\) the closed embedding.
The embedding \(i_{Y,X}\) induces a homomorphism
\[i_{Y,X}^{\sharp}\colon\operatorname{Mon}^{2}_{\operatorname{lt}}(X)\longrightarrow \operatorname{Mon}^{2}(Y).\]
We will define this morphism carefully in Section 4.3 (see also Section 1.4 for the case \(X=M_{v}(S,H)\)), but we can intuitively describe it as follows. It sends a monodromy operator along a loop \(\gamma\) in a family \(p\colon\mathcal{X}\to T\) of deformations of \(X\), to the monodromy operator along the same loop \(\gamma\) on the family of deformations \(q\colon\mathcal{Y}\to T\) of \(Y\) obtained by restriction from \(p\), i.e. \(\mathcal{Y}\subset\mathcal{X}\) is the relative closed embedding of the most singular locus.
**Theorem B.1** (Corollary 4.17).: _Let \(X\) be an irreducible symplectic variety that is locally trivially deformation equivalent to a moduli space \(M_{v}(S,H)\) as above. Let \(Y\subset X\) be the most singular locus and \(i_{Y,X}\colon Y\to X\) the closed embedding. Then_
\[i_{Y,X}^{\sharp}\colon\operatorname{Mon}^{2}_{\operatorname{lt}}(X) \stackrel{{\sim}}{{\longrightarrow}}\operatorname{Mon}^{2}(Y)\]
_is an isomorphism._
### Outline of the proof
Since the locally trivial monodromy group is invariant along locally trivial families of primitive symplectic varieties, we can reduce the proof of the two main results to statements about moduli spaces of sheaves. Therefore from now on we will work with \(X=M_{v}(S,H)\), where \(S\) is a projective K3 surface, \(v\) a Mukai vector of the form \(v=mw\), with \(m>1\) and \(w\) primitive, and \(H\) a \(v\)-generic polarisation.
Let us recall that \(v\) belongs to the so-called _Mukai lattice_ and so we can consider the orthogonal complement \(v^{\perp}\), which is an even lattice of signature \((3,20)\). We will recall in Section 1.4 the definitions and constructions. By [10], there is an isometry
\[\lambda_{(S,v,H)}\colon v^{\perp}\to\operatorname{H}^{2}(M_{v}(S,H),\mathbb{ Z}).\]
When the Mukai vector \(v\) is primitive, the same result is due to O'Grady (see [11, 21]).
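For the reader's convenience, and anticipating the conventions of Section 1.4, recall that a Mukai vector lives in the Mukai lattice \(\widetilde{\operatorname{H}}(S,\mathbb{Z})=\operatorname{H}^{0}(S,\mathbb{Z})\oplus\operatorname{H}^{2}(S,\mathbb{Z})\oplus\operatorname{H}^{4}(S,\mathbb{Z})\), endowed with the Mukai pairing
\[\bigl((r,c,s),(r^{\prime},c^{\prime},s^{\prime})\bigr)=c\cdot c^{\prime}-rs^{\prime}-r^{\prime}s,\]
and it is with respect to this pairing that the orthogonal complement \(v^{\perp}\) is taken.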
Theorem A.1 is equivalent to the following.
**Theorem A.2** (Theorem 4.10).: _Let \(S\) be a projective K3 surface, \(v\) a Mukai vector and \(H\) a \(v\)-generic polarisation. Then_
\[\operatorname{Mon}^{2}_{\operatorname{lt}}(M_{v}(S,H))=\mathsf{W}( \operatorname{H}^{2}(M_{v}(S,H),\mathbb{Z}))\cong\mathsf{W}(v^{\perp}),\]
_where the last isomorphism is induced by the isometry \(\lambda_{(S,v,H)}\)._
We point out that the special case when \(m=2=w^{2}\) already appeared in [11, Theorem 6.1] with a different proof.
The proof follows two steps. In the first one we construct monodromy operators: this is performed in Section 4.1 and the main result is Theorem 4.2, where it is proved that \(\mathsf{W}(v^{\perp})\subset\operatorname{Mon}^{2}_{\operatorname{lt}}(M_{v }(S,H))\). This section parallels Markman's construction of monodromy operators in [13]. In fact we point out that it also works for primitive Mukai vectors: our only improvement with respect to Markman's work is that we only work with polarised families of K3 surfaces (see Proposition 3.9 and the remark soon after).
In the second step, performed in detail in Section 4.2, we put a constraint on the monodromy group by using the inclusion in \(M_{v}(S,H)\) of its most singular locus. This step is where we crucially use that \(v=mw\) is not
primitive. In fact in this case the most singular locus of \(M_{v}(S,H)\) can be naturally identified with the smooth moduli space \(M_{w}(S,H)\); let us denote by \(i_{w,m}\colon M_{w}(S,H)\to M_{v}(S,H)\) the closed embedding. Then by Corollary 4.9 the map \(i_{w,m}\) induces an injective homomorphism
\[i_{w,m}^{\sharp}\colon\operatorname{Mon}_{\operatorname{lt}}^{2}(M_{v}(S,H))\longrightarrow\operatorname{Mon}^{2}(M_{w}(S,H))\]
giving the desired constraint.
The conclusion of the proof is now a straightforward combination of the two steps before and [10, Theorem 1.1].
Working again in the case when \(X=M_{v}(S,H)\), the isomorphism in Theorem B.1 becomes very natural in terms of the isomorphisms
\[\lambda_{(S,v,H)}^{\sharp}\colon\operatorname{\mathsf{W}}(v^{\perp}) \longrightarrow\operatorname{Mon}_{\operatorname{lt}}^{2}(M_{v}(S,H))\]
and
\[\lambda_{(S,w,H)}^{\sharp}\colon\operatorname{\mathsf{W}}(w^{\perp}) \longrightarrow\operatorname{Mon}^{2}(M_{w}(S,H))\]
induced by conjugation from the isometries \(\lambda_{(S,v,H)}\) and \(\lambda_{(S,w,H)}\). Let us remark that since \(v=mw\), the lattices \(v^{\perp}\) and \(w^{\perp}\) are the same sub-lattice of the Mukai lattice of \(S\); in particular there is an equality \(\operatorname{\mathsf{W}}(v^{\perp})=\operatorname{\mathsf{W}}(w^{\perp})\).
**Theorem B.2** (Theorem 4.14).: _Let \(M_{v}(S,H)\) be the moduli space of sheaves on a K3 surface \(S\) with Mukai vector \(v\). Assume that \(v=mw\), with \(m>1\) and \(w\) primitive, and that \(H\) is \(v\)-generic. Then the closed embedding \(i_{w,m}\colon M_{w}(S,H)\to M_{v}(S,H)\) induces an isomorphism_
\[i_{w,m}^{\sharp}\colon\operatorname{Mon}_{\operatorname{lt}}^{2}(M_{v}(S,H)) \stackrel{{\sim}}{{\longrightarrow}}\operatorname{Mon}^{2}(M_{w }(S,H)),\]
_and the composition_
\[(\lambda_{(S,w,H)}^{\sharp})^{-1}\circ i_{w,m}^{\sharp}\circ\lambda_{(S,v,H)} ^{\sharp}\colon\operatorname{\mathsf{W}}(v^{\perp})\longrightarrow \operatorname{\mathsf{W}}(w^{\perp})\]
_is the identity._
### Plan of the paper
In Section 1 we recall the basic facts about singular symplectic varieties and moduli spaces of sheaves. The results in this section are all known to experts but we reproduce here some proofs whenever a precise reference was missing in the literature.
Section 2 recalls and extends Markman's construction of a groupoid representation that will be useful to construct monodromy operators and to prove surjectivity of the morphism \(i_{w,m}^{\sharp}\).
In Section 3 we provide a proof that the monodromy group of a K3 surface is generated by polarised families. This result is surely well known to experts, but we could not find any reference in the literature; its purpose is to allow us to lift monodromy operators from the K3 surface to the moduli space without deforming to non-projective K3 surfaces, i.e. without the need to consider families of moduli spaces of sheaves on non-projective K3 surfaces.
Section 4 is the main and last section of the paper, where we prove Theorem 4.10 and Theorem 4.14.
### Acknowledgements
Claudio Onorati was supported by the grant ERC-2017-CoG771507-StabCondEn with principal investigator P. Stellari. Arvid Perego and Antonio Rapagnetta were partially supported by the Research Project PRIN 2020 - CuRVI, CUP J37G21000000001. Claudio Onorati and Antonio Rapagnetta wish to thank the MIUR Excellence Department Project awarded to the Department of Mathematics, University of Rome Tor Vergata, CUP E83C1800010000. Arvid Perego wishes to thank the MIUR Excellence Department Project awarded to the Department of Mathematics, University of Genoa, CUP D33C23001110001. Claudio Onorati and Arvid Perego gratefully acknowledge support from the Simons Center for Geometry and Physics, Stony Brook University, and the Japanese-European Symposium on Symplectic Varieties and Moduli Spaces at which some of the research for this paper was performed. The authors are members of the INDAM-GNSAGA.
## 1. Preliminaries
This section is dedicated to reviewing some basic material we will need. More precisely, in the first three subsections we collect some fundamental results and definitions about primitive (resp. irreducible) symplectic varieties and their monodromy groups. The fourth subsection will be devoted to reviewing the main results about moduli spaces of sheaves on K3 surfaces.
### Singular symplectic varieties
We start by recalling the notion of a symplectic variety. Let \(X\) be a normal complex analytic variety, and denote by \(X_{\operatorname{reg}}\) its smooth locus; notice that the complement of \(X_{\operatorname{reg}}\) has codimension at least \(2\). If \(j\colon X_{\operatorname{reg}}\to X\) is the corresponding open embedding, then for every integer \(0\leq p\leq\dim(X)\) we let
\[\Omega^{[p]}_{X}:=j_{*}\Omega^{p}_{X_{\operatorname{reg}}}=\big{(}\wedge^{p} \Omega_{X}\big{)}^{**},\]
whose global sections are called _reflexive \(p\)-forms_ on \(X\). A reflexive \(p\)-form on \(X\) is then a holomorphic \(p\)-form on \(X_{\operatorname{reg}}\).
There is a notion of _singular Kahler form_ due to Grauert and recalled in [1, Section 2.3]. We will not recall the definition of singular Kahler forms, but we point out some of their properties. Every singular Kahler form \(\omega\) gives a class \([\omega]\in\operatorname{H}^{2}(X,\mathbb{R})\); the set of classes in \(\operatorname{H}^{2}(X,\mathbb{R})\) obtained in this way forms an open cone that lies in \(\operatorname{H}^{1,1}(X,\mathbb{R})\), where the latter can be interpreted classically as
\[\operatorname{H}^{1,1}(X,\mathbb{R})=F^{1}\operatorname{H}^{2}(X,\mathbb{C}) \cap\operatorname{H}^{2}(X,\mathbb{R}),\]
where we are using the mixed Hodge structure on \(\operatorname{H}^{2}(X,\mathbb{C})\) (see [1, Definition 2.5, Remark 2.6.(1), Proposition 2.8]). In particular, in the cases of interest to us, the group \(\operatorname{H}^{2}(X,\mathbb{C})\) will have a pure Hodge structure and \(\operatorname{H}^{1,1}(X,\mathbb{R})=\operatorname{H}^{1}(X,\Omega^{[1]}_{X} )\cap\operatorname{H}^{2}(X,\mathbb{R})\).
A normal complex analytic variety admitting a Kahler form will be called a _Kahler space_. By [11, II, 1.2.1 Proposition] a smooth Kahler space is a Kahler manifold in the classical sense. Moreover, if \(X\) is reduced, then there exists a resolution of singularities \(\widetilde{X}\to X\) such that \(\widetilde{X}\) is a Kahler manifold. A subspace of a Kahler space is again a Kahler space (see for example [11, II, 1.3.1(i) Proposition]).
We now recall the definitions of a symplectic form and a symplectic variety (cf. [1]).
**Definition 1.1**.: Let \(X\) be a normal compact Kahler space.
1. A _symplectic form_ on \(X\) is a closed reflexive \(2\)-form \(\sigma\) on \(X\) which is non-degenerate at each point of \(X_{\mathrm{reg}}\).
2. If \(\sigma\) is a symplectic form on \(X\), the pair \((X,\sigma)\) is a _symplectic variety_ if for every (Kahler) resolution \(f\colon\widetilde{X}\to X\) of the singularities of \(X\), the holomorphic symplectic form \(\sigma_{\mathrm{reg}}:=\sigma_{|X_{\mathrm{reg}}}\) extends to a holomorphic \(2\)-form on \(\widetilde{X}\). By a slight abuse of notation, in this case we will say that \(X\) is a symplectic variety.
In what follows, we will be interested in two types of symplectic varieties, namely primitive symplectic varieties and irreducible symplectic varieties. We recall their definitions following [1] and [1]. Before doing this, we recall that if \(X\) and \(Y\) are two irreducible normal compact complex analytic varieties, a _finite quasi-etale morphism_ \(f\colon Y\to X\) is a finite morphism which is etale in codimension one.
**Definition 1.2**.: Let \(X\) be a symplectic variety and \(\sigma\) a symplectic form on \(X\).
1. The variety \(X\) is a _primitive symplectic variety_ if \(\mathrm{H}^{1}(X,\mathcal{O}_{X})=0\) and \(\mathrm{H}^{0}(X,\Omega^{[2]}_{X})=\mathbb{C}\sigma\).
2. The variety \(X\) is an _irreducible symplectic variety_ if for every finite quasi-etale morphism \(f\colon Y\to X\) the exterior algebra of reflexive forms on \(Y\) is spanned by \(f^{[*]}\sigma\).
An irreducible symplectic variety is primitive symplectic (so in particular it is a symplectic variety), but there are examples of primitive symplectic varieties that are not irreducible symplectic.
By [1, Corollary 13.3], an irreducible symplectic variety \(X\) is simply connected. In particular, the \(\mathbb{Z}\)-module \(\mathrm{H}^{2}(X,\mathbb{Z})\) is free. Moreover, the fact that irreducible symplectic varieties are simply connected together with the Bogomolov Decomposition Theorem imply that smooth irreducible symplectic varieties are irreducible holomorphic symplectic manifolds.
We conclude this section by recalling a result originally due to Kaledin about the stratification of the singularities of a symplectic variety, that together with Remark 1.10 will play an important role in what follows (see [1, Theorem 2.3] or [1, Theorem 3.4.(2)]).
**Proposition 1.3**.: _Let \(X\) be a symplectic variety, and consider the finite stratification by closed subvarieties_
\[X=X_{0}\supset X_{1}\supset\cdots\supset X_{m},\]
_where \(X_{i+1}\) is the singular locus with reduced structure \((X_{i}^{\mathrm{sing}})_{\mathrm{red}}\) of \(X_{i}\)._
_Then for every \(i=0,\cdots,m\), the normalisation of each irreducible component of \(X_{i}\) is a symplectic variety._
We notice in particular that each stratum of the stratification of the singularities is even dimensional.
### Locally trivial deformations of symplectic varieties
**Definition 1.4**.:
1. A _locally trivial family_ is a proper morphism \(f\colon\mathcal{X}\to T\) of complex analytic spaces such that \(T\) is connected and, for every point \(x\in\mathcal{X}\), there exist open neighborhoods \(V_{x}\subset\mathcal{X}\) and \(V_{f(x)}\subset T\), and an open subset \(U_{x}\subset f^{-1}(f(x))\) such that \[V_{x}\cong U_{x}\times V_{f(x)},\] where the isomorphism is an isomorphism of analytic spaces commuting with the projections over \(T\). A _locally trivial deformation_ of a complex analytic variety \(X\) is a locally trivial family \(f\colon\mathcal{X}\to T\) for which there is \(t\in T\) such that \(f^{-1}(t)\simeq X\).
2. A _locally trivial family of primitive (resp. irreducible) symplectic varieties_ is a locally trivial family whose fibres are all primitive (resp. irreducible) symplectic.
3. Two primitive symplectic varieties are said to be _locally trivially deformation equivalent_ if they are members of a locally trivial family of primitive symplectic varieties.
**Remark 1.5**.: The fibres of a locally trivial family of primitive or irreducible symplectic varieties are Kahler spaces by definition.
As usual, when we say that any small deformation of \(X\) enjoys a property, we mean that there exists an analytic open neighborhood \(U\) of the base of a versal deformation of \(X\) such that every fiber over \(U\) enjoys the property.
The behaviour of primitive symplectic varieties under small locally trivial deformations is known thanks to the following result.
**Proposition 1.6** ([1, Corollary 4.11]).: _Let \(X\) be a primitive symplectic variety. Then any small locally trivial deformation of \(X\) is again a primitive symplectic variety._
The same result for irreducible symplectic varieties seems to be unknown. We provide here two arguments that apply to two subclasses of irreducible symplectic varieties, which are the most relevant for our purposes.
**Proposition 1.7**.: _Let \(X\) be an irreducible symplectic variety with simply connected regular locus. Then any small locally trivial deformation \(X^{\prime}\) of \(X\) is an irreducible symplectic variety with simply connected regular locus._
Proof.: Consider a locally trivial deformation \(\alpha\colon\mathcal{X}\to T\) of \(X\), and let \(0\in T\) be such that the fiber \(X_{0}\) of \(\alpha\) over \(0\) is isomorphic to \(X\). By Proposition 1.6 we know that, up to shrinking \(T\), for every \(t\in T\) we have that \(X_{t}\) is a (Kahler) primitive symplectic variety. Since \(\alpha\colon\mathcal{X}\to T\) is locally trivial, the regular loci of the fibres \(X_{t}\) fit together to form a flat fibration \(\alpha_{\mathrm{reg}}\colon\mathcal{X}_{\mathrm{reg}}\to T\) that locally on \(T\) is topologically trivial, by Thom's First Isotopy Lemma ([12, Theorem 6.5]). In particular, for any \(t,t^{\prime}\in T\), the fundamental groups \(\pi_{1}(X_{t,\mathrm{reg}})\) and \(\pi_{1}(X_{t^{\prime},\mathrm{reg}})\) are isomorphic, hence trivial: therefore there are no non-trivial finite quasi-etale covers of \(X_{t}\), and the proof is complete.
We can drop the strong hypothesis on the fundamental group of the smooth locus, provided that the singularities are mild enough.
**Proposition 1.8**.: _Let \(X\) be an irreducible symplectic variety with at most terminal singularities. Then any small locally trivial deformation of \(X\) is an irreducible symplectic variety with at most terminal singularities._
Proof.: Consider again a locally trivial deformation \(\alpha\colon\mathcal{X}\to T\) of \(X\), and let \(0\in T\) be such that the fiber \(X_{0}\) of \(\alpha\) over \(0\) is isomorphic to \(X\). Up to shrinking \(T\), again by Proposition 1.6 we can suppose that any fibre \(X_{t}\) is a primitive symplectic variety. We need to prove that if \(f_{t}\colon Y_{t}\to X_{t}\) is a finite quasi-etale covering, then the dimension \(h^{[p],0}(Y_{t})\) of \(\operatorname{H}^{0}(Y_{t},\Omega^{[p]}_{Y_{t}})\) is \(0\) if \(p\) is odd, and \(1\) if \(p\) is even.
First of all, as we saw in the proof of Proposition 1.7, the fundamental groups of the smooth loci of the fibres of \(\alpha\) are all isomorphic. Moreover, as the isomorphism classes of finite quasi-etale coverings of \(X_{t}\) are prescribed by the subgroups of \(\pi_{1}(X_{t}^{\operatorname{reg}})\), it is enough to prove the result when \(f_{t}\) arises from a deformation of a finite quasi-etale covering (we recall that by definition the total space of a quasi-etale covering is normal).
We make the following claim.
**Claim**.: _Let \(f_{0}\colon Y_{0}\to X_{0}\) be a finite quasi-etale covering of \(X_{0}\); then there exist a locally trivial family \(\beta\colon\mathcal{Y}\to T\) and a proper morphism \(f\colon\mathcal{Y}\to\mathcal{X}\) such that \(Y_{t}=\beta^{-1}(t)\) is normal and \(f_{t}\colon Y_{t}\to X_{t}\) is a finite quasi-etale covering, for every \(t\in T\)._
Let us first see how the claim would finish the proof. First of all, since \(\beta\) is a locally trivial family, up to shrinking \(T\) we can take a relative resolution of singularities \(\pi\colon\widetilde{\mathcal{Y}}\to\mathcal{Y}\) such that \(\widetilde{Y}_{t}\) is Kahler for all \(t\in T\) (by [13, Proposition 5]), and hence the function mapping \(t\in T\) to \(h^{p,0}(\widetilde{Y}_{t})\) is constant. By [1, Theorem 2.4] there is an equality \(h^{[p],0}(Y_{t})=h^{p,0}(\widetilde{Y}_{t})\), so that \(h^{[p],0}(Y_{t})\) is also constant along \(T\). It follows that
\[\dim\operatorname{H}^{0}(Y_{t},\Omega^{[p]}_{Y_{t}})=\dim\operatorname{H}^{0 }(Y,\Omega^{[p]}_{Y})=\left\{\begin{array}{ll}1,&p\in 2\mathbb{N}\\ 0,&p\notin 2\mathbb{N}\end{array}\right.\]
where the last equality comes from the fact that \(X_{0}=X\) is an irreducible symplectic variety and \(Y_{0}=Y\) is a finite quasi-etale covering of \(X_{0}\).
Proof of the Claim.: First of all, since the family \(\alpha\colon\mathcal{X}\to T\) is locally trivial and the central fibre \(X_{0}\) is terminal by hypothesis, all the other fibres \(X_{t}\) are terminal. In particular, if \(\alpha_{\operatorname{reg}}\colon\mathcal{X}_{\operatorname{reg}}\to T\) is the smooth family of the smooth loci, then \(\mathcal{X}\setminus\mathcal{X}_{\operatorname{reg}}\) has codimension at least \(3\) (in fact Namikawa proves in [13, Corollary 1] that the codimension of the singular locus is always at least \(4\), but we will not need this fact). Now, since the family \(\alpha_{\operatorname{reg}}\colon\mathcal{X}_{\operatorname{reg}}\to T\) is smooth (and the fundamental group of the fibres is constant), there exists another smooth family \(\beta^{\prime}\colon\mathcal{Y}^{\prime}\to T\) and a proper etale cover \(f^{\prime}\colon\mathcal{Y}^{\prime}\to\mathcal{X}_{\operatorname{reg}}\).
The sheaf \(f^{\prime}_{*}\mathcal{O}_{\mathcal{Y}^{\prime}}\) is a locally free sheaf of algebras on \(\mathcal{X}_{\operatorname{reg}}\) and, if we denote by \(j\colon\mathcal{X}_{\operatorname{reg}}\to\mathcal{X}\) the open immersion, by [10, Theorem 2] the sheaf \(j_{*}f^{\prime}_{*}\mathcal{O}_{\mathcal{Y}^{\prime}}\) is a coherent sheaf of \(\mathcal{O}_{\mathcal{X}}\)-algebras. Let us define
\[f\colon\mathcal{Y}=\underline{\operatorname{Spec}}_{\mathcal{X}}(j_{*}f^{ \prime}_{*}\mathcal{O}_{\mathcal{Y}^{\prime}})\to\mathcal{X}\]
and set \(\beta:=\alpha\circ f\). For every \(t\in T\) the morphism \(f_{t}\colon Y_{t}\to X_{t}\) is a finite morphism etale in codimension \(1\): the morphism \(f_{t}\) is not necessarily
quasi-etale because \(Y_{t}\) can be non-normal in general. We need to show that \(\beta\colon\mathcal{Y}\to T\) is locally trivial.
Let \(y\in\mathcal{Y}\) be a point and consider \(x=f(y)\in\mathcal{X}\). Since \(\alpha\colon\mathcal{X}\to T\) is locally trivial, there exist three open subsets \(V_{x}\subset\mathcal{X}\), \(V_{\alpha(x)}\subset T\) and \(U_{x}\subset F=\alpha^{-1}(\alpha(x))\) such that \(V_{x}\cong V_{\alpha(x)}\times U_{x}\). If we put \(V_{x}^{0}=(V_{x}\cap\mathcal{X}_{\mathrm{reg}})\), then \(V_{x}^{0}=V_{\alpha(x)}\times U_{x}^{0}\), where \(U_{x}^{0}=(U_{x}\cap\mathcal{X}_{\mathrm{reg}})\). Moreover putting \(\widetilde{V}_{x}^{0}:=(f^{\prime})^{-1}(V_{x}^{0})\) and \(\widetilde{U}_{x}^{0}:=(f^{\prime})^{-1}(U_{x}^{0})\), we have the following commutative diagram:
(1) [commutative diagram omitted in this copy: it relates \(\widetilde{V}_{x}^{0}\), \(V_{x}^{0}\), \(V_{x}\), \(\widetilde{U}_{x}^{0}\), \(U_{x}^{0}\) and \(U_{x}\) via the morphisms \(f^{\prime}_{x}\), \(f^{\prime}_{U_{x}^{0}}\), \(p_{2}^{0}\), \(p_{2}\), \(j_{x}\) and \(j_{x}^{0}\)]
where the horizontal morphisms are the natural inclusions, and each square is cartesian.
Since \(\widetilde{V}_{x}^{0}\cong V_{\alpha(x)}\times\widetilde{U}_{x}^{0}\), we have that
\[f^{\prime}_{x,*}\mathcal{O}_{\widetilde{V}_{x}^{0}}\cong(p_{2}^{0})^{*}(f^{ \prime}_{U_{x}^{0},*}\mathcal{O}_{\widetilde{U}_{x}^{0}}), \tag{2}\]
where \(f^{\prime}_{U_{x}^{0}}\colon\widetilde{U}_{x}^{0}\to U_{x}^{0}\) is the restriction of \(f^{\prime}_{x}\) to the second factor.
We will now prove that there exists a sheaf of algebras \(G\) on \(U_{x}\) such that \(j_{x,*}f^{\prime}_{x,*}\mathcal{O}_{\widetilde{V}_{x}^{0}}\cong p_{2}^{*}G\). Since \(R:=f^{-1}(V_{x})=\underline{\mathrm{Spec}}_{V_{x}}(j_{x,*}f^{\prime}_{x,*}\mathcal{O}_{\widetilde{V}_{x}^{0}})\), this implies that \(R=V_{\alpha(x)}\times\widetilde{U}_{x}\), where \(\widetilde{U}_{x}:=\underline{\mathrm{Spec}}_{U_{x}}(G)\), \(V_{\alpha(x)}=V_{\beta(y)}\) and \(\widetilde{U}_{x}\subset\beta^{-1}(\beta(y))\) is open. By using the equality (2), since the bottom square of the diagram (1) is cartesian we have the following chain of equalities,
\[j_{x,*}f^{\prime}_{x,*}\mathcal{O}_{\widetilde{V}_{x}^{0}} \cong j_{x,*}(p_{2}^{0})^{*}(f^{\prime}_{U_{x}^{0},*}\mathcal{O}_ {\widetilde{U}_{x}^{0}})\] \[\cong p_{2}^{*}(j_{x,*}^{0}f^{\prime}_{U_{x}^{0},*}\mathcal{O}_{ \widetilde{U}_{x}^{0}}).\]
Therefore, putting \(G=j_{x,*}^{0}f^{\prime}_{U_{x}^{0},*}\mathcal{O}_{\widetilde{U}_{x}^{0}}\) concludes the proof that \(\beta\colon\mathcal{Y}\to T\) is locally trivial.
Finally, to conclude the proof of the claim, let us notice that by the local triviality of the family \(\beta\), if we take the relative normalisation \(\nu\colon\mathcal{Y}^{\mathrm{nor}}\to\mathcal{Y}\), then the family \(\beta^{\mathrm{nor}}=\beta\circ\nu\colon\mathcal{Y}^{\mathrm{nor}}\to T\) remains locally trivial and the composition \(\mathcal{Y}^{\mathrm{nor}}\to\mathcal{X}\) is still a finite quasi-etale covering; the same holds also for the fibers.
We wish to remark that singularities are supposed to be terminal in Proposition 1.8 only in order to deduce that the singular locus has codimension at least \(3\): this allows us to apply Siu's theorem about the coherency of the pushforward in the analytic category and get that \(j_{*}f^{\prime}_{*}\mathcal{O}_{\mathcal{Y}^{\prime}}\) is coherent. In the algebraic category the analog of Siu's theorem holds assuming that the singular locus has codimension at least \(2\): it follows that in the projective
setting the statement of Proposition 1.8 holds in the general case of canonical singularities.
**Proposition 1.9**.: _Let \(X\) be a projective irreducible symplectic variety. Let \(f\colon\mathcal{X}\to T\) be a locally trivial family of primitive symplectic varieties as in Definition 1.4 such that \(f\) is projective and \(T\) is quasi-projective, and let \(\bar{t}\in T\) be a point such that \(\mathcal{X}_{\bar{t}}=X\). Then there exists an analytic open neighborhood \(U\subset T\) of \(\bar{t}\) such that, for every \(t\in U\), the fibre \(\mathcal{X}_{t}\) of \(f\) is a projective irreducible symplectic variety._
**Remark 1.10**.: We remark that if \(p\colon\mathcal{X}\to T\) is a locally trivial family of symplectic varieties, there is a natural relative stratification
\[\mathcal{X}=\mathcal{X}_{0}\supset\mathcal{X}_{1}\supset\cdots\supset \mathcal{X}_{m},\]
where for every \(t\in T\)
\[\mathcal{X}_{t}\supset\mathcal{X}_{1,t}\supset\cdots\supset\mathcal{X}_{m,t}\]
is the stratification in Proposition 1.3 for the variety \(\mathcal{X}_{t}\), and for every \(i=0,\cdots,m\), the restriction \(p_{i}\colon\mathcal{X}_{i}\to T\) is a locally trivial family. By construction, each stratum \(\mathcal{X}_{m,t}\) is smooth and, if \(\mathcal{X}_{m,t}\) is irreducible and of strictly positive dimension, then \(p_{m}\colon\mathcal{X}_{m}\to T\) is a smooth family of irreducible holomorphic symplectic manifolds.
### Locally trivial monodromy operators
We will now define the monodromy group of a primitive symplectic variety, following [1].
First of all we recall that if \(X\) is a primitive symplectic variety, the torsion free part \(\operatorname{H}^{2}(X,\mathbb{Z})_{\operatorname{tf}}\) of the second integral cohomology group of \(X\) is endowed with a nondegenerate bilinear form \(q_{X}\) of signature \((3,b_{2}(X)-3)\) (see [1, Section 5.1, Lemma 5.7]), that will be called _Beauville-Bogomolov-Fujiki form_ of \(X\). The pair \((\operatorname{H}^{2}(X,\mathbb{Z})_{\operatorname{tf}},q_{X})\) will be called the _Beauville-Bogomolov-Fujiki lattice_ of \(X\). In particular, if \(X\) is irreducible symplectic, then \(\operatorname{H}^{2}(X,\mathbb{Z})\) is torsion free and therefore endowed with the Beauville-Bogomolov-Fujiki form.
The Beauville-Bogomolov-Fujiki form is a locally trivial deformation invariant of a primitive symplectic variety (see [1, Lemma 5.7]). In particular, this implies that if \(X_{1}\) and \(X_{2}\) are two locally trivial deformation equivalent primitive symplectic varieties, there is an isometry between \(\operatorname{H}^{2}(X_{1},\mathbb{Z})_{\operatorname{tf}}\) and \(\operatorname{H}^{2}(X_{2},\mathbb{Z})_{\operatorname{tf}}\). We will use the notation \(\operatorname{O}(\operatorname{H}^{2}(X_{1},\mathbb{Z})_{\operatorname{tf}},\operatorname{H}^{2}(X_{2},\mathbb{Z})_{\operatorname{tf}})\) for the set of isometries from the Beauville-Bogomolov-Fujiki lattice of \(X_{1}\) to the one of \(X_{2}\); similarly we use the notation \(\operatorname{O}(\operatorname{H}^{2}(X,\mathbb{Z})_{\operatorname{tf}})\) for the group of isometries from the Beauville-Bogomolov-Fujiki lattice of \(X\) to itself.
The isometries arising from locally trivial families will be called locally trivial parallel transport operators. More precisely, we recall the following definition.
**Definition 1.11**.: Let \(X\), \(X_{1}\) and \(X_{2}\) be primitive symplectic varieties.
1. An isometry \(g\in\operatorname{O}(\operatorname{H}^{2}(X_{1},\mathbb{Z})_{\operatorname{tf }},\operatorname{H}^{2}(X_{2},\mathbb{Z})_{\operatorname{tf}})\) is a _locally trivial parallel transport operator from \(X_{1}\) to \(X_{2}\)_ if there exists a locally trivial family \(p\colon\mathcal{X}\to T\) of primitive symplectic varieties and two points \(t_{1},t_{2}\in T\), with \(\mathcal{X}_{t_{i}}:=p^{-1}(t_{i})=X_{i}\), such that \(g\) is the parallel transport along a path from \(t_{1}\) to \(t_{2}\) in the local system \(R^{2}p_{*}\mathbb{Z}\).
2. An isometry \(g\in\operatorname{O}(\operatorname{H}^{2}(X,\mathbb{Z})_{\operatorname{tf}})\) is a _locally trivial monodromy operator_ if it is a locally trivial parallel transport operator from \(X\) to itself.
If \(X_{1}\) and \(X_{2}\) are two primitive symplectic varieties, we will let
\[\operatorname{\mathsf{PT}}^{2}_{\operatorname{lt}}(X_{1},X_{2})\subset \operatorname{O}(\operatorname{H}^{2}(X_{1},\mathbb{Z})_{\operatorname{tf}}, \operatorname{H}^{2}(X_{2},\mathbb{Z})_{\operatorname{tf}})\]
be the set of locally trivial parallel transport operators from \(X_{1}\) to \(X_{2}\).
If \(X_{1}=X_{2}=X\), we put
\[\operatorname{Mon}^{2}_{\operatorname{lt}}(X):=\operatorname{\mathsf{PT}}^{2}_{\operatorname{lt}}(X,X),\]
which is then the subset of \(\operatorname{O}(\operatorname{H}^{2}(X,\mathbb{Z})_{\operatorname{tf}})\) given by all locally trivial monodromy operators.
**Lemma 1.12**.: _Let \(X\) be a primitive symplectic variety. The set \(\operatorname{Mon}^{2}_{\operatorname{lt}}(X)\) of locally trivial monodromy operators is a subgroup of \(\operatorname{O}(\operatorname{H}^{2}(X,\mathbb{Z})_{\operatorname{tf}})\)._
Proof.: This follows as in the smooth case, see [11, footnote 3].
The group \(\operatorname{Mon}^{2}_{\operatorname{lt}}(X)\) is called the _locally trivial monodromy group_ of \(X\) and, by construction, it is a locally trivial deformation invariant. If \(X\) is smooth, we will simply write \(\operatorname{Mon}^{2}(X)\) for the monodromy group, as all smooth deformations of \(X\) are locally trivial in this case.
The group \(\operatorname{O}(\operatorname{H}^{2}(X,\mathbb{Z})_{\operatorname{tf}})\) contains the subgroup \(\operatorname{O}^{+}(\operatorname{H}^{2}(X,\mathbb{Z})_{\operatorname{tf}})\) of _orientation preserving isometries_. We refer to [11, Section 4] and [10] for a general account on orientations, but let us quickly recall the definition we use.
If \(\widetilde{\mathcal{C}}_{X}\subset\operatorname{H}^{2}(X,\mathbb{R})\) is the set of classes \(\alpha\) such that \(q_{X}(\alpha)>0\), then by [11, Lemma 4.1] we have \(\operatorname{H}^{2}(\widetilde{\mathcal{C}}_{X},\mathbb{Z})=\mathbb{Z}\), and \(\operatorname{O}^{+}(\operatorname{H}^{2}(X,\mathbb{Z})_{\operatorname{tf}})\) is by definition the subgroup of \(\operatorname{O}(\operatorname{H}^{2}(X,\mathbb{Z})_{\operatorname{tf}})\) given by those isometries acting as the identity on \(\operatorname{H}^{2}(\widetilde{\mathcal{C}}_{X},\mathbb{Z})\). In general we will refer to a generator of \(\operatorname{H}^{2}(\widetilde{\mathcal{C}}_{X},\mathbb{Z})\) as an _orientation_: since for any positive three-dimensional subspace \(W\) of \(\operatorname{H}^{2}(X,\mathbb{R})\) the space \(W\setminus\{0\}\) is a deformation retract of \(\widetilde{\mathcal{C}}_{X}\), it follows that an orientation is nothing else than an orientation on \(W\).
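In other words (a standard observation, recorded here for convenience): since \(W\) is positive definite of dimension three, \(W\setminus\{0\}\) retracts onto its unit sphere, so that

\[\widetilde{\mathcal{C}}_{X}\simeq W\setminus\{0\}\simeq S^{2},\qquad\text{whence}\qquad\operatorname{H}^{2}(\widetilde{\mathcal{C}}_{X},\mathbb{Z})\cong\operatorname{H}^{2}(S^{2},\mathbb{Z})=\mathbb{Z}.\]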
Among the orientation preserving isometries, we find all the locally trivial monodromy operators, as the following shows.
**Lemma 1.13**.: _Let \(X\) be a primitive symplectic variety. Then we have an inclusion \(\operatorname{Mon}^{2}_{\operatorname{lt}}(X)\subset\operatorname{O}^{+}( \operatorname{H}^{2}(X,\mathbb{Z})_{\operatorname{tf}})\)._
Proof.: The proof follows as in the classical case, and we sketch it here for the reader's convenience.
As in the smooth case (see [11, Corollary 8]) if \(\omega\in\operatorname{H}^{1}(\Omega^{[1]}_{X})\) is a Kahler class and \(\sigma\in\operatorname{H}^{0}(\Omega^{[2]}_{X})\) is the symplectic form, then the three-dimensional space \(W\) spanned by \(\omega\), \(\operatorname{Re}(\sigma)\) and \(\operatorname{Im}(\sigma)\) is positive definite. Notice that the choice of this basis (in this order) determines an orientation, i.e. a generator \(\mathsf{a}\) of \(\operatorname{H}^{2}(W\setminus\{0\},\mathbb{Z})=\operatorname{H}^{2}(\widetilde{\mathcal{C}}_{X},\mathbb{Z})\), which does not change if we replace \(\omega\) by another Kahler class \(\omega^{\prime}\) and \(\sigma\) by another symplectic form \(\sigma^{\prime}\).
Now if \(p\colon\mathcal{X}\to T\) is any locally trivial family of primitive symplectic varieties, then \(\omega\) and \(\sigma\) extend to sections of the respective local systems. This gives rise to a section \(t\mapsto\mathsf{a}_{t}\in\operatorname{H}^{2}(\widetilde{\mathcal{C}}_{X_{t}},\mathbb{Z})\), where \(\mathsf{a}_{t}\) is the generator determined by \(\omega_{t}\), \(\operatorname{Re}(\sigma_{t})\) and \(\operatorname{Im}(\sigma_{t})\).
In particular the induced action on \(\operatorname{H}^{2}(\widetilde{\mathcal{C}}_{X},\mathbb{Z})\) is the identity and we are done.
Among locally trivial parallel transport operators one finds the pull-back along a birational map between projective primitive symplectic varieties. This result is well-known for irreducible holomorphic symplectic manifolds, and follows from Huybrechts' results (see [10] and [11, Section 3.1]).
**Proposition 1.14** ([1, Theorem 6.16]).: _Let \(X\) and \(Y\) be two projective primitive symplectic varieties and \(f\colon X\dasharrow Y\) a birational map. Suppose that \(f\) is defined in codimension \(1\) and that \(f^{*}\colon\operatorname{Pic}(Y)\otimes\mathbb{Q}\to\operatorname{Pic}(X)\otimes\mathbb{Q}\) is an isomorphism. Then the pullback_
\[f^{*}\colon\operatorname{H}^{2}(Y,\mathbb{Z})_{\operatorname{tf}}\longrightarrow \operatorname{H}^{2}(X,\mathbb{Z})_{\operatorname{tf}}\]
_is well defined and a locally trivial parallel transport operator._
Proof.: This statement is implicit in [1, Theorem 6.16], see [11, Section 3.1] for a rigorous proof.
We conclude this section by recalling the notion of polarised parallel transport operators. Recall that a polarised variety is a pair \((X,H)\) where \(X\) is a projective variety and \(H\) is a polarisation, i.e. an ample line bundle on \(X\). If \(X\) is a projective symplectic variety (resp. primitive or irreducible), then the pair \((X,H)\) is a polarised symplectic variety (resp. primitive or irreducible).
**Definition 1.15**.: Let \((X,H)\), \((X_{1},H_{1})\) and \((X_{2},H_{2})\) be polarised primitive symplectic varieties.
1. An isometry \(g\in\operatorname{O}(\operatorname{H}^{2}(X_{1},\mathbb{Z})_{\operatorname{ tf}},\operatorname{H}^{2}(X_{2},\mathbb{Z})_{\operatorname{tf}})\) is a _polarised locally trivial parallel transport operator_ from \((X_{1},H_{1})\) to \((X_{2},H_{2})\) if it is a parallel transport operator arising from a family \(p\colon\mathcal{X}\to T\) as in Definition 1.11 such that there exists a relatively ample line bundle \(\mathcal{H}\) on \(\mathcal{X}\) such that \(\mathcal{H}_{t_{i}}=H_{i}\) on \(\mathcal{X}_{t_{i}}=X_{i}\), for \(t_{1},t_{2}\in T\).
2. An isometry \(g\in\operatorname{O}(\operatorname{H}^{2}(X,\mathbb{Z})_{\operatorname{tf}})\) is a _polarised locally trivial monodromy operator_ if it is a polarised parallel transport operator from \((X,H)\) to itself.
As before we denote by
\[\operatorname{\mathsf{PT}}^{2}_{\operatorname{lt}}((X_{1},H_{1}),(X_{2},H_{2 }))\]
the set of polarised locally trivial parallel transport operators from \((X_{1},H_{1})\) to \((X_{2},H_{2})\) and by
\[\operatorname{Mon}^{2}_{\operatorname{lt}}(X,H)=\operatorname{\mathsf{PT}}^{2 }_{\operatorname{lt}}((X,H),(X,H))\]
the group of polarised locally trivial monodromy operators of \((X,H)\).
Notice that by definition we have
\[\operatorname{\mathsf{PT}}^{2}_{\operatorname{lt}}((X_{1},H_{1}),(X_{2},H_{2 }))\subset\operatorname{\mathsf{PT}}^{2}_{\operatorname{lt}}(X_{1},X_{2})\]
and
\[\operatorname{Mon}^{2}_{\operatorname{lt}}(X,H)\subset\operatorname{Mon}^{2}_ {\operatorname{lt}}(X).\]
**Remark 1.16**.: Definition 1.15 also appears in the literature in a different form. Let \(p\colon\mathcal{X}\to T\) be a locally trivial family as in Definition 1.11: instead of asking for the existence of a relatively ample line bundle \(\mathcal{H}\) on \(\mathcal{X}\), one can ask for the existence of a flat section \(h\) of \(R^{2}p_{*}\mathbb{Z}\) such that \(h_{t}\) is the class of an ample line bundle on \(\mathcal{X}_{t}\) for every \(t\in T\) (see for example [11, Definition 1.1.(4)]).
Of course, if \(\mathcal{H}\) is a relatively ample line bundle on \(\mathcal{X}\), then the section \(c_{1}(\mathcal{H})\) satisfies the last condition. The converse is not true in general.
Nevertheless, we wish to point out that the parallel transport operators arising from these two definitions are in fact the same, provided that the base is at least normal and quasi-projective.
Let us start with a locally trivial family \(f\colon\mathcal{X}\to T\), where \(T\) is normal, irreducible and quasi-projective; suppose that there is a flat section \(h\) of \(R^{2}f_{*}\mathbb{Z}\) such that \(h_{t}\) is the class of an ample line bundle for every \(t\in T\). By the Lefschetz hyperplane section Theorem (see for example [11, Theorem 1.1.(B)]), for a generic complete intersection curve \(C\subset T\) the morphism
\[\pi_{1}(C)\longrightarrow\pi_{1}(T)\]
is surjective: hence every parallel transport operator induced by \(f\) between fibers over \(C\) is also induced by the restriction of \(f\) over \(C\). Without loss of generality, we can suppose that \(C\) is irreducible and smooth. From now on we work with the restricted family \(f\colon\mathcal{X}\to C\) and show the existence of a holomorphic line bundle on its total space inducing the section \(h\) over \(C\).
By applying the functor \(f_{*}\) to the exponential sequence
\[0\rightarrow\mathbb{Z}\rightarrow\mathcal{O}_{\mathcal{X}}\rightarrow \mathcal{O}_{\mathcal{X}}^{*}\to 0\]
we obtain
\[0=R^{1}f_{*}\mathcal{O}_{\mathcal{X}}\to R^{1}f_{*}\mathcal{O}_{ \mathcal{X}}^{*}\to R^{2}f_{*}\mathbb{Z}\to R^{2}f_{*}\mathcal{O}_{ \mathcal{X}},\]
where in the first equality we used that all the fibres are primitive symplectic varieties. Taking global sections we get
\[0\rightarrow\mathrm{H}^{0}(R^{1}f_{*}\mathcal{O}_{\mathcal{X}}^{*}) \rightarrow\mathrm{H}^{0}(R^{2}f_{*}\mathbb{Z})\rightarrow\mathrm{H}^{0}(R^ {2}f_{*}\mathcal{O}_{\mathcal{X}}).\]
Since the Hodge structure on \(\mathrm{H}^{2}(\mathcal{X}_{t},\mathbb{Z})\) is pure for every \(t\), the map on the right hand side is the relative projection on the \((2,0)\)-part. In particular the section \(h\in\mathrm{H}^{0}(R^{2}f_{*}\mathbb{Z})\) is mapped to \(0\) and it therefore comes from a section \(\tilde{h}\in\mathrm{H}^{0}(R^{1}f_{*}\mathcal{O}_{\mathcal{X}}^{*})\).
From the Leray spectral sequence we get
\[\mathrm{H}^{1}(\mathcal{O}_{\mathcal{X}}^{*})\rightarrow\mathrm{H}^{0}(R^{1}f _{*}\mathcal{O}_{\mathcal{X}}^{*})\rightarrow\mathrm{H}^{2}(f_{*}\mathcal{O}_ {\mathcal{X}}^{*})=\mathrm{H}^{2}(\mathcal{O}_{C}^{*})=0,\]
where the last vanishing comes from the fact that \(C\) is a curve. In particular this means that there exists a holomorphic line bundle \(\mathcal{H}\) on \(\mathcal{X}\) lifting the section \(\tilde{h}\). Finally, since the composition
\[\mathrm{H}^{1}(\mathcal{X},\mathcal{O}_{\mathcal{X}}^{*})\rightarrow\mathrm{H }^{0}(R^{1}f_{*}\mathcal{O}_{\mathcal{X}}^{*})\rightarrow\mathrm{H}^{0}(R^{2} f_{*}\mathbb{Z})\]
is the morphism mapping a line bundle \(\mathcal{L}\) on \(\mathcal{X}\) to the section \(\{c_{1}(\mathcal{L}_{t})\}_{t}\in\mathrm{H}^{0}(R^{2}f_{*}\mathbb{Z})\) by construction, it follows that \(c_{1}(\mathcal{H}_{t})=h_{t}\) for every \(t\in T\).
**Remark 1.17**.: In Remark 1.16 we constructed a relatively ample holomorphic line bundle \(\mathcal{H}\) on the total space \(\mathcal{X}\) of the family \(f\colon\mathcal{X}\to C\) of irreducible symplectic varieties over the smooth irreducible curve \(C\) lifting the section
\(h\) of \(R^{2}f_{*}\mathbb{Z}\). We wish to point out, though, that if the family is smooth and algebraic, then \(\mathcal{H}\) can be chosen to be algebraic as well.
If \(\mathcal{X}\) and \(C\) are projective this follows from the GAGA principle ([10]); otherwise, as \(C\) is quasi-projective but not projective, it is affine, and there exist smooth and projective compactifications \(\overline{\mathcal{X}}\) of \(\mathcal{X}\) and \(\overline{C}\) of \(C\) and a projective morphism \(\overline{f}\colon\overline{\mathcal{X}}\to\overline{C}\).
Since for \(t_{0}\in C\) the class \(h_{t_{0}}\in\mathrm{H}^{2}(X_{t_{0}},\mathbb{Z})\) is a monodromy invariant class, by the global Invariant Cycle Theorem (see for example [13, Theorem 4.24]) there exists a class \(\hat{h}\in\mathrm{H}^{2}(\overline{\mathcal{X}},\mathbb{Q})\) extending \(h_{t_{0}}\). As \(h_{t_{0}}\in\mathrm{H}^{1,1}(X_{t_{0}},\mathbb{Z})\) and \(\hat{h}_{t}=\ell h_{t}\) (for some \(\ell\in\mathbb{Q}\)) by construction, we may suppose that \(\hat{h}\) is of Hodge type \((1,1)\) too. Moreover, by clearing the denominator, we find a class \(\hat{h}^{\prime}\in\mathrm{H}^{1,1}(\overline{\mathcal{X}},\mathbb{Z})\) such that \(\hat{h}^{\prime}_{t_{0}}=nh_{t_{0}}\) for some \(n\in\mathbb{N}\). This implies that there exists an algebraic line bundle \(\mathcal{L}\) on the projective variety \(\overline{\mathcal{X}}\) such that \(c_{1}(\mathcal{L})=\hat{h}^{\prime}\), and hence the restriction \(c_{1}(\mathcal{L})_{t_{0}}\) of \(c_{1}(\mathcal{L})\) to the fiber \(\mathcal{X}_{t_{0}}\) is an integral multiple of \(h_{t_{0}}\).
Now let \(\mathcal{L}_{C}\) be the restriction of \(\mathcal{L}\) to \(\mathcal{X}\). Since the fibers of \(f\) are simply connected, for every \(t\in C\) there is an isomorphism \(\mathcal{L}_{t}\simeq\mathcal{H}_{t}^{\otimes n}\) between the fibres of \(\mathcal{L}_{C}\) and \(\mathcal{H}^{\otimes n}\).
It follows that there exists a holomorphic line bundle \(\mathcal{N}\) on \(C\) and an isomorphism of holomorphic line bundles
\[\mathcal{H}^{\otimes n}\simeq\mathcal{L}_{C}\otimes f^{*}\mathcal{N}.\]
By [14, Theorem 30.4], every holomorphic line bundle on an affine curve is trivial. As a consequence the holomorphic line bundle \(\mathcal{H}^{\otimes n}\) is algebraic, and this implies that \(\mathcal{H}\) is algebraic as well.
**Remark 1.18**.: In the notation of Definition 1.15, the family \(\{\mathcal{H}_{t}\}_{t\in T}\) is a continuous family of ample line bundles on the fibres of \(p\colon\mathcal{X}\to T\). In particular any parallel transport operator must be constant on it, i.e. if \(g\in\mathsf{PT}^{2}_{\mathrm{lt}}((X_{1},H_{1}),(X_{2},H_{2}))\) and \(h_{i}=c_{1}(H_{i})\), then \(g(h_{1})=h_{2}\). In particular it follows that
\[\mathrm{Mon}^{2}_{\mathrm{lt}}(X,H)\subset\mathrm{O}^{+}(\mathrm{H}^{2}(X, \mathbb{Z})_{\mathrm{tf}})_{h},\]
where \(h=c_{1}(H)\) and the latter is the group of orientation preserving isometries \(g\) such that \(g(h)=h\). More precisely \(\mathrm{Mon}^{2}_{\mathrm{lt}}(X,H)\subset\mathrm{Mon}^{2}_{\mathrm{lt}}(X)_{h}\), where again the last group is the subgroup of monodromy operators fixing the polarisation. Arguing as in the proof of [11, Corollary 7.4], one can further prove that
\[\mathrm{Mon}^{2}_{\mathrm{lt}}(X,H)=\mathrm{Mon}^{2}_{\mathrm{lt}}(X)_{h}.\]
### Moduli spaces of sheaves on K3 surfaces
We conclude this first section by recalling the basic facts we will need about moduli spaces of sheaves on K3 surfaces, and refer the reader to [13] and [13] for a more detailed exposition about this.
Let \(S\) be a projective K3 surface. We denote by \(\widetilde{\mathrm{H}}(S,\mathbb{Z})\) the _Mukai lattice_ of \(S\): as a \(\mathbb{Z}\)-module it is \(\mathrm{H}^{\mathrm{even}}(S,\mathbb{Z})\), and the (nondegenerate) integral quadratic form on it is given by
\[(r,\xi,a)^{2}=\xi^{2}-2ra,\]
where \(r\in\mathrm{H}^{0}(S,\mathbb{Z})\), \(\xi\in\mathrm{H}^{2}(S,\mathbb{Z})\) and \(a\in\mathrm{H}^{4}(S,\mathbb{Z})\). The Mukai lattice of \(S\) inherits a pure weight two Hodge structure from the one on \(S\) by declaring \(\widetilde{\mathrm{H}}^{2,0}(S):=\mathrm{H}^{2,0}(S)\).
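For later computations it may be useful to record explicitly the bilinear form (the _Mukai pairing_) obtained by polarising the quadratic form above; this is standard and we include it only for the reader's convenience:

\[\big{(}(r,\xi,a),(r^{\prime},\xi^{\prime},a^{\prime})\big{)}=\xi\cdot\xi^{\prime}-ra^{\prime}-r^{\prime}a.\]

For instance, the primitive vector \(w=(1,0,-1)\) satisfies \(w^{2}=-2\cdot 1\cdot(-1)=2\).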
An element \(v\in\widetilde{\mathrm{H}}(S,\mathbb{Z})\) is called a _Mukai vector_ if \(v=(r,\xi,a)\) is such that \(r\geq 0\) and \(\xi\in\mathrm{NS}(S)\), and in the case when \(r=0\) we have that either \(\xi\) is the first Chern class of a strictly effective divisor, or \(\xi=0\) and \(a>0\). If \(v\) is a Mukai vector on \(S\), then there is a coherent sheaf \(\mathcal{F}\) on \(S\) such that \(\mathrm{ch}(\mathcal{F})\cdot\sqrt{td(S)}=v\): we will then say that \(v\) is the _Mukai vector_ of \(\mathcal{F}\). We notice that a Mukai vector \(v\) is of type \((1,1)\) with respect to the Hodge decomposition of the Mukai lattice.
**Definition 1.19** ([18, Section 2.1.2]).: Given a Mukai vector \(v\in\widetilde{\mathrm{H}}(S,\mathbb{Z})\) of the form \(v=(r,\xi,a)\), an ample line bundle \(H\) on \(S\) is _v-generic_ if it verifies one of the following two conditions:
1. If \(r>0\), then for every \(\mu_{H}\)-semistable sheaf \(E\) such that \(v(E)=v\) and every \(0\neq F\subseteq E\), we have that if \(\mu_{H}(E)=\mu_{H}(F)\) then \(c_{1}(F)/\operatorname{rk}(F)=c_{1}(E)/\operatorname{rk}(E)\).
2. If \(r=0\), then for every \(H\)-semistable sheaf \(E\) such that \(v(E)=v\) and every \(0\neq F\subseteq E\), if \(\chi(E)/(c_{1}(E)\cdot H)=\chi(F)/(c_{1}(F)\cdot H)\) then \(v(F)\in\mathbb{Q}v\).
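In Definition 1.19, \(\mu_{H}\) denotes the usual \(H\)-slope; we recall the standard formula for the reader's convenience. For a coherent sheaf \(E\) of positive rank,

\[\mu_{H}(E)=\frac{c_{1}(E)\cdot H}{\operatorname{rk}(E)},\]

while in the rank zero case of item (2) the ratio \(\chi(E)/(c_{1}(E)\cdot H)\) plays the analogous role.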
Given a Mukai vector \(v\in\widetilde{\mathrm{H}}(S,\mathbb{Z})\) and a \(v\)-generic ample line bundle \(H\) on \(S\), we denote by \(M_{v}(S,H)\) (or simply \(M_{v}\) if the pair \((S,H)\) is clear from the context) the moduli space of Gieseker \(H\)-semistable sheaves \(F\) on \(S\) such that \(v(F)=v\).
**Remark 1.20**.: Let \(S\) be a projective K3 surface with Picard rank at least \(2\), and let \(v=(r,\xi,a)\) be a Mukai vector on \(S\). If \(r\neq 0\), or if \(r=0\) and \(a\neq 0\), the ample cone of \(S\) has a decomposition in \(v\)-walls and \(v\)-chambers (see [18, Section 2.1.1]). As the \(v\)-chambers are subcones of the ample cone, it follows that if \(H^{\prime}=tH\) with \(t\in\mathbb{Z}_{>0}\), then \(H^{\prime}\) belongs to the same \(v\)-chamber as \(H\).
By [18, Lemma 2.9] the primitive polarisations lying in a \(v\)-chamber are all \(v\)-generic according to Definition 1.19. Moreover, by [18, Lemma 2.9] if \(H_{1}\) and \(H_{2}\) are two \(v\)-generic polarisations (according to Definition 1.19) that belong to the closure of the same \(v\)-chamber, then there is an identification of the moduli spaces \(M_{v}(S,H_{1})=M_{v}(S,H_{2})\), meaning that a coherent sheaf \(F\) of Mukai vector \(v\) on \(S\) is \(H_{1}\)-(semi)stable if and only if it is \(H_{2}\)-(semi)stable.
From now on, we will always suppose that \(v^{2}>0\): under this hypothesis, the \(v\)-genericity of \(H\) implies that \(M_{v}(S,H)\neq\emptyset\), and that it is an irreducible normal projective variety of dimension \(v^{2}+2\) ([17, Theorem 4.1]), whose smooth locus admits a holomorphic symplectic form ([19]).
For simplicity, we make now the following definition.
**Definition 1.21**.: Given two strictly positive integers \(m,k\in\mathbb{N}\), a triple \((S,v,H)\) will be called an _\((m,k)\)-triple_ if \(S\) is a projective K3 surface, \(v=mw\) is a Mukai vector on \(S\) with \(w\) primitive and \(w^{2}=2k\), and \(H\) is a \(v\)-generic polarisation.
**Remark 1.22**.: If \((S,v,H)\) is an \((m,k)\)-triple and \(v=mw\), then \((S,w,H)\) is a \((1,k)\)-triple. The reason for this is that if \(H\) is \(v\)-generic, then it is \(w\)-generic. Indeed, if \(w=(r,\xi,a)\) with \(r>0\), \(E\) is a \(\mu_{H}\)-semistable sheaf with Mukai vector \(w\) and \(F\subseteq E\) is a proper coherent subsheaf with \(\mu_{H}(F)=\mu_{H}(E)\), then \(E^{\oplus m}\) is a \(\mu_{H}\)-semistable sheaf with Mukai vector \(v\) and \(F^{\oplus m}\) is a proper subsheaf of \(E^{\oplus m}\) such that \(\mu_{H}(F^{\oplus m})=\mu_{H}(E^{\oplus m})\). But since \(H\) is \(v\)-generic it follows that \(c_{1}(E^{\oplus m})/\operatorname{rk}(E^{\oplus m})=c_{1}(F^{\oplus m})/ \operatorname{rk}(F^{\oplus m})\), and hence \(c_{1}(E)/\operatorname{rk}(E)=c_{1}(F)/\operatorname{rk}(F)\). A similar proof works for \(r=0\).
For our purposes it is useful to introduce an equivalence relation that identifies \((m,k)\)-triples whose associated moduli spaces represent the same sheaves.
**Definition 1.23**.: Two \((m,k)\)-triples \((S_{1},v_{1},H_{1})\) and \((S_{2},v_{2},H_{2})\) are called _congruent_ if \(S_{1}=S_{2}=S\), \(v_{1}=v_{2}=v\) and a coherent sheaf \(F\) of Mukai vector \(v\) on \(S\) is \(H_{1}\)-(semi)stable if and only if it is \(H_{2}\)-(semi)stable. In particular there is an equality \(M_{v}(S,H_{1})=M_{v}(S,H_{2})\) and we denote by
\[\chi_{H_{1},H_{2}}\colon M_{v}(S,H_{1})\longrightarrow M_{v}(S,H_{2}),\qquad F \mapsto F\]
the identity morphism.
**Remark 1.24**.: If \(H\) and \(H^{\prime}\) are two \(v\)-generic polarisations that lie in the closure of the same \(v\)-chamber, then \((S,v,H)\) and \((S,v,H^{\prime})\) are congruent by Remark 1.20.
The following result is the starting point of our paper.
**Theorem 1.25** ([18, Theorem 1.10]).: _Let \((S,v,H)\) be an \((m,k)\)-triple. Then \(M_{v}(S,H)\) is an irreducible symplectic variety of dimension \(2km^{2}+2\)._
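Two well-known examples may help to fix the numerology; they play no logical role in what follows. For \(m=1\), \(k=n-1\) (with \(n\geq 2\)) and \(v=(1,0,1-n)\) one has \(v^{2}=2n-2\) and

\[M_{(1,0,1-n)}(S,H)\cong S^{[n]},\qquad\dim S^{[n]}=2n=v^{2}+2,\]

the Hilbert scheme of \(n\) points on \(S\). For \(m=2\) and \(w=(1,0,-1)\) (so \(k=1\)), the moduli space \(M_{(2,0,-2)}(S,H)\) is a singular irreducible symplectic variety of dimension \(2km^{2}+2=10\): this is the space studied by O'Grady, whose symplectic resolution is the \(10\)-dimensional exceptional example usually denoted OG10.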
**Remark 1.26**.: In the setting of Theorem 1.25, one may prove that the smallest stratum of the stratification of the singularities of \(M_{v}(S,H)\) is isomorphic to \(M_{w}(S,H)\). In particular we get a natural closed embedding
\[i_{w,m}\colon M_{w}(S,H)\longrightarrow M_{v}(S,H).\]
This has been done in the proof of [17, Theorem 4.4], but let us recall the main idea.
The singular locus of \(M_{v}(S,H)\) coincides with the locus of strictly semi-stable sheaves; moreover, any strictly semistable sheaf \(F\) is S-equivalent to a sheaf of the form \(F_{1}\oplus F_{2}\), where \(F_{i}\in M_{m_{i}w}(S,H)\), with \(m_{1}+m_{2}=m\). In particular it belongs to the image of the morphism
\[M_{m_{1}w}(S,H)\times M_{m_{2}w}(S,H)\to M_{v}(S,H),\quad(F_{1},F_{2}) \mapsto[F_{1}\oplus F_{2}]\]
whose image is an irreducible component of the strictly semistable locus. The intersection of all these components is then the locus
\[Y=\{E^{\oplus m}\in M_{v}(S,H)\mid E\in M_{w}(S,H)\}\cong M_{w}(S,H).\]
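For instance, when \(m=2\) the only decomposition is \(m_{1}=m_{2}=1\): the strictly semistable locus is then irreducible, namely the image of

\[M_{w}(S,H)\times M_{w}(S,H)\longrightarrow M_{v}(S,H),\qquad(F_{1},F_{2})\mapsto[F_{1}\oplus F_{2}],\]

and \(Y\cong M_{w}(S,H)\) is the image of its diagonal.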
As we recalled in the previous section, since \(M_{v}(S,H)\) is an irreducible symplectic variety, the cohomology group \(\operatorname{H}^{2}(M_{v}(S,H),\mathbb{Z})\) is a free \(\mathbb{Z}\)-module with both a pure weight two Hodge structure and a lattice structure. These structures have been made explicit in [18].
**Theorem 1.27** ([14, Theorem 1.6]).: _Let \((S,v,H)\) be an \((m,k)\)-triple. Then there exists a Hodge isometry_
\[\lambda_{(S,v,H)}\colon v^{\perp}\longrightarrow\operatorname{H}^{2}(M_{v}(S,H),\mathbb{Z}),\]
_where \(v^{\perp}\) inherits the Hodge and lattice structures from those of \(\widetilde{\operatorname{H}}(S,\mathbb{Z})\), and \(\operatorname{H}^{2}(M_{v}(S,H),\mathbb{Z})\) is endowed with the Beauville-Bogomolov-Fujiki form._
It is implicit in [14] that the isometries \(\lambda_{(S,v,H)}\) behave well under deformations of moduli spaces induced by deformations of K3 surfaces. We will expand on this comment more precisely in Remark 2.4, after we have carefully defined deformations of \((m,k)\)-triples.
For future reference, let us notice that the isometry \(\lambda_{(S,v,H)}\) induces an isomorphism between the orthogonal groups by conjugation,
\[\lambda_{(S,v,H)}^{\sharp}\colon\operatorname{O}(v^{\perp}) \longrightarrow\operatorname{O}(\operatorname{H}^{2}(M_{v}(S,H), \mathbb{Z}))\] \[g \longmapsto\lambda_{(S,v,H)}\circ g\circ(\lambda_{(S,v,H)})^{-1}. \tag{3}\]
By Remark 1.26 there is a closed embedding
\[i_{w,m}\colon M_{w}(S,H)\longrightarrow M_{v}(S,H),\]
and we get a morphism
\[i_{w,m}^{*}\colon\operatorname{H}^{2}(M_{v}(S,H),\mathbb{Z})\longrightarrow \operatorname{H}^{2}(M_{w}(S,H),\mathbb{Z}).\]
Thanks to this morphism, we may now describe the relation between \(\lambda_{(S,w,H)}\) and \(\lambda_{(S,v,H)}\).
**Proposition 1.28**.: _Let \((S,v,H)\) be an \((m,k)\)-triple and write \(v=mw\). Then \(i_{w,m}^{*}\circ\lambda_{(S,v,H)}=m\lambda_{(S,w,H)}\)._
Proof.: In this proof, for every \(p>0\) we use the shortened notation \(M_{pw}\) for the moduli space \(M_{pw}(S,H)\) and \(\lambda_{pw}\) for the morphism \(\lambda_{(S,pw,H)}\).
By [14, Section 4.2], for every \(p>0\), we have a morphism
\[g_{p}\colon M_{pw}\times M_{w}\longrightarrow M_{(p+1)w},\ \ \ \ g_{p}(F,G):=F\oplus G.\]
We let \(f_{2}:=g_{1}\colon M_{w}\times M_{w}\to M_{2w}\), and for every \(p\geq 3\) we let
\[f_{p}:=g_{p-1}\circ(f_{p-1}\times\operatorname{id}_{M_{w}})\colon M_{w}^{p} \longrightarrow M_{pw},\]
so that \(f_{p}(F_{1},\cdots,F_{p}):=F_{1}\oplus\cdots\oplus F_{p}\). In particular, for \(p=m\) we get a morphism \(f_{m}\colon M_{w}^{m}\longrightarrow M_{v}\) such that \(f_{m}(F_{1},\cdots,F_{m}):=F_{1}\oplus\cdots\oplus F_{m}\).
We claim that for every \(p\geq 2\) the following diagram
\[\begin{CD}(pw)^{\perp}=w^{\perp}@>{\lambda_{pw}}>>\operatorname{H}^{2}(M_{pw},\mathbb{Z})\\ @V{(\lambda_{w},\cdots,\lambda_{w})}VV@VV{f_{p}^{*}}V\\ \bigoplus_{i=1}^{p}\operatorname{H}^{2}(M_{w},\mathbb{Z})@>{\sum_{i=1}^{p}\pi_{i,p}^{*}}>>\operatorname{H}^{2}(M_{w}^{p},\mathbb{Z})\end{CD} \tag{4}\]
is commutative, where \(\pi_{i,p}\colon M_{w}^{p}\to M_{w}\) is the projection onto the \(i\)-th factor. We prove it by induction on \(p\).
First of all we remark that by [11, Proposition 4.11(2)] for every \(d>0\) we have a commutative diagram
\[\begin{CD}((d+1)w)^{\perp}=w^{\perp}=(dw)^{\perp}@>{\lambda_{(d+1)w}}>>\mathrm{H}^{2}(M_{(d+1)w},\mathbb{Z})\\ @V{(\lambda_{dw},\lambda_{w})}VV@VV{g_{d}^{*}}V\\ \mathrm{H}^{2}(M_{dw},\mathbb{Z})\oplus\mathrm{H}^{2}(M_{w},\mathbb{Z})@>{q_{1,d}^{*}+q_{2,d}^{*}}>>\mathrm{H}^{2}(M_{dw}\times M_{w},\mathbb{Z})\end{CD} \tag{5}\]
where \(q_{1,d}\colon M_{dw}\times M_{w}\to M_{dw}\) and \(q_{2,d}\colon M_{dw}\times M_{w}\to M_{w}\) are the two projections. For \(d=1\) we then get a commutative diagram
\[\begin{CD}w^{\perp}@>{\lambda_{2w}}>>\mathrm{H}^{2}(M_{2w},\mathbb{Z})\\ @V{(\lambda_{w},\lambda_{w})}VV@VV{f_{2}^{*}}V\\ \mathrm{H}^{2}(M_{w},\mathbb{Z})\oplus\mathrm{H}^{2}(M_{w},\mathbb{Z})@>{\pi_{1,2}^{*}+\pi_{2,2}^{*}}>>\mathrm{H}^{2}(M_{w}\times M_{w},\mathbb{Z})\end{CD}\]
that proves the commutativity of diagram (4) for \(p=2\), i.e. for the initial step of the induction.
Let us now consider any \(p\geq 2\), and notice that we have a commutative diagram
\[\begin{CD}\mathrm{H}^{2}(M_{(p-1)w},\mathbb{Z})\oplus\mathrm{H}^{2}(M_{w},\mathbb{Z})@>{q_{1,p-1}^{*}+q_{2,p-1}^{*}}>>\mathrm{H}^{2}(M_{(p-1)w}\times M_{w},\mathbb{Z})\\ @V{f_{p-1}^{*}\times\mathrm{id}_{\mathrm{H}^{2}(M_{w},\mathbb{Z})}}VV@VV{(f_{p-1}\times\mathrm{id}_{M_{w}})^{*}}V\\ \mathrm{H}^{2}(M_{w}^{p-1},\mathbb{Z})\oplus\mathrm{H}^{2}(M_{w},\mathbb{Z})@>{\mathrm{pr}_{1,p}^{*}+\mathrm{pr}_{2,p}^{*}}>>\mathrm{H}^{2}(M_{w}^{p},\mathbb{Z})\end{CD} \tag{6}\]
where \(\mathrm{pr}_{1,p}\colon M_{w}^{p}=M_{w}^{p-1}\times M_{w}\to M_{w}^{p-1}\) and \(\mathrm{pr}_{2,p}\colon M_{w}^{p}=M_{w}^{p-1}\times M_{w}\to M_{w}\) are the two projections.
Putting diagram (5) (for \(d=p-1\)) and diagram (6) together, we get a commutative diagram
\[\begin{CD}(pw)^{\perp}=w^{\perp}@>{\lambda_{pw}}>>\mathrm{H}^{2}(M_{pw},\mathbb{Z})\\ @V{(f_{p-1}^{*}\circ\lambda_{(p-1)w},\lambda_{w})}VV@VV{f_{p}^{*}}V\\ \mathrm{H}^{2}(M_{w}^{p-1},\mathbb{Z})\oplus\mathrm{H}^{2}(M_{w},\mathbb{Z})@>{\mathrm{pr}_{1,p}^{*}+\mathrm{pr}_{2,p}^{*}}>>\mathrm{H}^{2}(M_{w}^{p},\mathbb{Z})\end{CD} \tag{7}\]
By induction, the commutativity of diagram (4) for \(p-1\) reads as
\[f_{p-1}^{*}\circ\lambda_{(p-1)w}=\bigg{(}\sum_{i=1}^{p-1}\pi_{i,p-1}^{*}\bigg{)} \circ(\lambda_{w},\cdots,\lambda_{w}),\]
so we get a commutative diagram
\[\begin{CD}(pw)^{\perp}=w^{\perp}@>{\lambda_{pw}}>>\mathrm{H}^{2}(M_{pw},\mathbb{Z})\\ @V{(\lambda_{w},\cdots,\lambda_{w})}VV@VV{f_{p}^{*}}V\\ \bigoplus_{i=1}^{p}\mathrm{H}^{2}(M_{w},\mathbb{Z})@>{(\mathrm{pr}_{1,p}^{*}+\mathrm{pr}_{2,p}^{*})\circ((\sum_{i=1}^{p-1}\pi_{i,p-1}^{*})\times\mathrm{id})}>>\mathrm{H}^{2}(M_{w}^{p},\mathbb{Z})\end{CD} \tag{8}\]
Notice that
\[(\mathrm{pr}^{*}_{1,p}+\mathrm{pr}^{*}_{2,p})\circ\bigg{(}\bigg{(}\sum_{i=1}^{p-1} \pi^{*}_{i,p-1}\bigg{)}\times\mathrm{id}_{\mathrm{H}^{2}(M_{w})}\,\bigg{)}=\sum_{ i=1}^{p}\pi^{*}_{i,p},\]
hence diagram (4) is commutative for \(p\), proving the claim.
In particular, diagram (4) is commutative for \(p=m\). As a consequence, for every \(a\in v^{\perp}=w^{\perp}\) we have
\[f^{*}_{m}(\lambda_{v}(a))=\sum_{i=1}^{m}\pi^{*}_{i,m}(\lambda_{w}(a)).\]
Let us now consider the diagonal morphism \(\delta\colon M_{w}\to M_{w}^{m}\), mapping \([F]\) to \(([F],\cdots,[F])\). We then have \(i_{w,m}=f_{m}\circ\delta\colon M_{w}\to M_{v}\).
Now, let \(\alpha\in\mathrm{H}^{2}(M_{v},\mathbb{Z})\), so that there is a unique \(a\in v^{\perp}\) such that \(\alpha=\lambda_{v}(a)\). Then we have
\[i^{*}_{w,m}(\alpha)=\delta^{*}f^{*}_{m}(\lambda_{v}(a))=\delta^{*}\bigg{(} \sum_{i=1}^{m}\pi^{*}_{i,m}(\lambda_{w}(a))\bigg{)}=\sum_{i=1}^{m}\lambda_{w}( a)=m\lambda_{w}(a).\]
This concludes the proof.
In the following corollary we use the notation \(q_{v}\) (resp. \(q_{w}\)) for the Beauville-Bogomolov-Fujiki form on \(\mathrm{H}^{2}(M_{v}(S,H),\mathbb{Z})\) (resp. \(\mathrm{H}^{2}(M_{w}(S,H),\mathbb{Z})\)).
**Corollary 1.29**.: _Let \((S,v,H)\) be an \((m,k)\)-triple and write \(v=mw\). Then the restriction morphism_
\[i^{*}_{w,m}\colon\,\mathrm{H}^{2}(M_{v}(S,H),\mathbb{Z})\longrightarrow \mathrm{H}^{2}(M_{w}(S,H),\mathbb{Z})\]
_is a similitude of lattices, i.e., more explicitly, for every \(\alpha,\beta\in\mathrm{H}^{2}(M_{v}(S,H),\mathbb{Z})\) we have_
\[q_{w}(i^{*}_{w,m}(\alpha),i^{*}_{w,m}(\beta))=m^{2}\,q_{v}(\alpha,\beta).\]
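The corollary follows at once from Proposition 1.28: writing \(\alpha=\lambda_{(S,v,H)}(a)\) and \(\beta=\lambda_{(S,v,H)}(b)\) with \(a,b\in v^{\perp}=w^{\perp}\), and using that \(\lambda_{(S,v,H)}\) and \(\lambda_{(S,w,H)}\) are isometries, one computes

\[q_{w}(i^{*}_{w,m}(\alpha),i^{*}_{w,m}(\beta))=q_{w}(m\lambda_{(S,w,H)}(a),m\lambda_{(S,w,H)}(b))=m^{2}(a,b)=m^{2}q_{v}(\alpha,\beta),\]

where \((-,-)\) denotes the Mukai pairing restricted to \(w^{\perp}\).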
Proposition 1.28 and Corollary 1.29 allow us to get an isomorphism between the orthogonal groups \(\mathrm{O}(\mathrm{H}^{2}(M_{v}(S,H),\mathbb{Z}))\) and \(\mathrm{O}(\mathrm{H}^{2}(M_{w}(S,H),\mathbb{Z}))\). To this purpose let us denote by
\[i^{*}_{w,m,\mathbb{Q}}\colon\,\mathrm{H}^{2}(M_{v}(S,H),\mathbb{Q}) \longrightarrow\mathrm{H}^{2}(M_{w}(S,H),\mathbb{Q})\]
the \(\mathbb{Q}\)-linear extension of \(i^{*}_{w,m}\) and by
\[i^{\sharp}_{w,m,\mathbb{Q}}\colon\,\mathrm{O}(\mathrm{H}^{2}(M_{ v}(S,H),\mathbb{Q})) \longrightarrow\mathrm{O}(\mathrm{H}^{2}(M_{w}(S,H),\mathbb{Q}))\] \[g \longmapsto i^{*}_{w,m,\mathbb{Q}}\circ g\circ(i^{*}_{w,m,\mathbb{Q}})^{-1}\]
the induced isomorphism.
**Lemma 1.30**.: _Let \((S,v,H)\) be an \((m,k)\)-triple and write \(v=mw\). Then:_
1. \(i^{\sharp}_{w,m,\mathbb{Q}}\) _sends integral isometries to integral isometries bijectively, i.e. it restricts to an isomorphism_ \[i^{\sharp}_{w,m}\colon\,\mathrm{O}(\mathrm{H}^{2}(M_{v}(S,H),\mathbb{Z})) \longrightarrow\mathrm{O}(\mathrm{H}^{2}(M_{w}(S,H),\mathbb{Z}));\]
2. _using the equality_ \(v^{\perp}=w^{\perp}\)_, we have that_ \[i^{\sharp}_{w,m}=\lambda^{\sharp}_{(S,w,H)}\circ(\lambda^{\sharp}_{(S,v,H)})^{-1} \colon\operatorname{O}(\operatorname{H}^{2}(M_{v}(S,H),\mathbb{Z}))\longrightarrow \operatorname{O}(\operatorname{H}^{2}(M_{w}(S,H),\mathbb{Z})),\] _where_ \(\lambda^{\sharp}_{(S,v,H)}\) _and_ \(\lambda^{\sharp}_{(S,w,H)}\) _are the isomorphisms defined in the formula (_3_)._
Proof.: For any \((l,k)\)-triple \((S,u,H)\), let
\[\lambda_{(S,u,H),\mathbb{Q}}\colon u^{\perp}\otimes\mathbb{Q}\to\operatorname {H}^{2}(M_{u}(S,H),\mathbb{Q})\]
be the \(\mathbb{Q}\)-linear extension of \(\lambda_{(S,u,H)}\) and let
\[\lambda^{\sharp}_{(S,u,H),\mathbb{Q}}\colon\operatorname{O}(u^{\perp}\otimes\mathbb{Q})\longrightarrow\operatorname{O}(\operatorname{H}^{2}(M_{u}(S,H),\mathbb{Q}))\]
be the group isomorphism sending a \(\mathbb{Q}\)-linear isometry \(g\in\operatorname{O}(u^{\perp}\otimes\mathbb{Q})\) to \(\lambda_{(S,u,H),\mathbb{Q}}\circ g\circ(\lambda_{(S,u,H),\mathbb{Q}})^{-1}\), so that \(\lambda^{\sharp}_{(S,u,H)}\) is the restriction of \(\lambda^{\sharp}_{(S,u,H),\mathbb{Q}}\) to \(\operatorname{O}(u^{\perp})\).
We claim that
\[i^{\sharp}_{w,m,\mathbb{Q}}=\lambda^{\sharp}_{(S,w,H),\mathbb{Q}}\circ(\lambda^{\sharp}_{(S,v,H),\mathbb{Q}})^{-1}.\]
Since \(v^{\perp}=w^{\perp}\) and the isomorphisms \(\lambda^{\sharp}_{(S,v,H)}\) and \(\lambda^{\sharp}_{(S,w,H)}\) are the restrictions of \(\lambda^{\sharp}_{(S,v,H),\mathbb{Q}}\) and \(\lambda^{\sharp}_{(S,w,H),\mathbb{Q}}\), our claim implies that \(i^{\sharp}_{w,m,\mathbb{Q}}\) sends \(\operatorname{O}(\operatorname{H}^{2}(M_{v}(S,H),\mathbb{Z}))\) onto \(\operatorname{O}(\operatorname{H}^{2}(M_{w}(S,H),\mathbb{Z}))\) and its restriction
\[i^{\sharp}_{w,m}\colon\operatorname{O}(\operatorname{H}^{2}(M_{v}(S,H), \mathbb{Z}))\longrightarrow\operatorname{O}(\operatorname{H}^{2}(M_{w}(S,H),\mathbb{Z}))\]
is an isomorphism and satisfies
\[i^{\sharp}_{w,m}=\lambda^{\sharp}_{(S,w,H)}\circ(\lambda^{\sharp}_{(S,v,H)})^ {-1}.\]
Let us then prove the claim. By Proposition 1.28 we know that \(i^{*}_{w,m,\mathbb{Q}}=m\lambda_{(S,w,H),\mathbb{Q}}\circ\lambda^{-1}_{(S,v,H ),\mathbb{Q}}\); it follows that \((i^{*}_{w,m,\mathbb{Q}})^{-1}=\frac{1}{m}\lambda_{(S,v,H),\mathbb{Q}}\circ \lambda^{-1}_{(S,w,H),\mathbb{Q}}\). Therefore, by definition, we have that for every \(g\in\operatorname{O}(\operatorname{H}^{2}(M_{v}(S,H),\mathbb{Q}))\)
\[i^{\sharp}_{w,m,\mathbb{Q}}(g) =i^{*}_{w,m,\mathbb{Q}}\circ g\circ(i^{*}_{w,m,\mathbb{Q}})^{-1}\] \[=\big{(}m\lambda_{(S,w,H),\mathbb{Q}}\circ\lambda^{-1}_{(S,v,H),\mathbb{Q}}\big{)}\circ g\circ\big{(}\tfrac{1}{m}\lambda_{(S,v,H),\mathbb{Q}}\circ\lambda^{-1}_{(S,w,H),\mathbb{Q}}\big{)}\] \[=\lambda_{(S,w,H),\mathbb{Q}}\circ(\lambda^{-1}_{(S,v,H),\mathbb{Q}}\circ g\circ\lambda_{(S,v,H),\mathbb{Q}})\circ\lambda^{-1}_{(S,w,H),\mathbb{Q}}\] \[=\lambda^{\sharp}_{(S,w,H),\mathbb{Q}}\circ(\lambda^{\sharp}_{(S,v,H),\mathbb{Q}})^{-1}(g),\]
where in the third equality we used that \(g\) is \(\mathbb{Q}\)-linear. The claim follows and the proof is concluded.
For later use, let us extend the previous results to the case in which we have two \((m,k)\)-triples \((S_{1},v_{1},H_{1})\) and \((S_{2},v_{2},H_{2})\). As usual, let us write \(v_{i}=mw_{i}\), so that \((S_{1},w_{1},H_{1})\) and \((S_{2},w_{2},H_{2})\) are \((1,k)\)-triples. For the sake of simplicity, in the following we use the notation \(M_{v_{1}}\) for \(M_{v_{1}}(S_{1},H_{1})\) and \(M_{v_{2}}\) for \(M_{v_{2}}(S_{2},H_{2})\); similarly we write \(M_{w_{1}}\) for \(M_{w_{1}}(S_{1},H_{1})\) and \(M_{w_{2}}\) for \(M_{w_{2}}(S_{2},H_{2})\).
Let us consider the following bijective map of sets
\[i^{\sharp}_{w_{1},w_{2},m,\mathbb{Q}}\colon\operatorname{O}(\operatorname{H}^{2}( M_{v_{1}},\mathbb{Q}),\operatorname{H}^{2}(M_{v_{2}},\mathbb{Q})) \longrightarrow\operatorname{O}(\operatorname{H}^{2}(M_{w_{1}}, \mathbb{Q}),\operatorname{H}^{2}(M_{w_{2}},\mathbb{Q}))\] \[g\longmapsto i^{*}_{w_{2},m,\mathbb{Q}}\circ g\circ(i^{*}_{w_{1},m,\mathbb{Q}})^{-1}.\]
**Lemma 1.31**.: _The bijection \(i^{\sharp}_{w_{1},w_{2},m,\mathbb{Q}}\) sends integral isometries to integral isometries bijectively, i.e. it restricts to a bijection_
\[i^{\sharp}_{w_{1},w_{2},m}\colon\operatorname{O}(\operatorname{H}^{2}(M_{v_{1} },\mathbb{Z}),\operatorname{H}^{2}(M_{v_{2}},\mathbb{Z}))\longrightarrow \operatorname{O}(\operatorname{H}^{2}(M_{w_{1}},\mathbb{Z}),\operatorname{H}^ {2}(M_{w_{2}},\mathbb{Z}))\]
_More explicitly, we have_
\[i^{\sharp}_{w_{1},w_{2},m}(g)=(\lambda_{(S_{2},w_{2},H_{2})}\circ\lambda^{-1}_{(S_{2},v_{2},H_{2})})\circ g\circ(\lambda_{(S_{1},w_{1},H_{1})}\circ\lambda^{-1}_{(S_{1},v_{1},H_{1})})^{-1}\]
_for every \(g\in\operatorname{O}(\operatorname{H}^{2}(M_{v_{1}},\mathbb{Z}),\operatorname{H}^{2}(M_{v_{2}},\mathbb{Z}))\)._
Proof.: Let \(g\in\operatorname{O}(\operatorname{H}^{2}(M_{v_{1}},\mathbb{Z}),\operatorname{H}^{2}(M_{v_{2}},\mathbb{Z}))\) be an isometry and \(g_{\mathbb{Q}}\) its rational extension. As in the proof of Lemma 1.30, by using Proposition 1.28 and the \(\mathbb{Q}\)-linearity of \(g_{\mathbb{Q}}\), we can see that
\[i^{\sharp}_{w_{1},w_{2},m,\mathbb{Q}}(g_{\mathbb{Q}})=(\lambda_{(S_{2},w_{2},H_{2}),\mathbb{Q}}\circ\lambda^{-1}_{(S_{2},v_{2},H_{2}),\mathbb{Q}})\circ g_{\mathbb{Q}}\circ(\lambda_{(S_{1},w_{1},H_{1}),\mathbb{Q}}\circ\lambda^{-1}_{(S_{1},v_{1},H_{1}),\mathbb{Q}})^{-1}.\]
Since all the isometries in the right hand side of the equality are rational extensions of integral isometries, we get that \(i^{\sharp}_{w_{1},w_{2},m,\mathbb{Q}}(g_{\mathbb{Q}})\) is also a rational extension of an integral isometry. By construction this isometry is
\[i^{\sharp}_{w_{1},w_{2},m}(g):=(\lambda_{(S_{2},w_{2},H_{2})}\circ\lambda^{-1}_{(S_{2},v_{2},H_{2})})\circ g\circ(\lambda_{(S_{1},w_{1},H_{1})}\circ\lambda^{-1}_{(S_{1},v_{1},H_{1})})^{-1},\]
from which the claim follows.
## 2. A groupoid representation
The aim of this section is to define a groupoid \(\mathcal{G}^{m,k}\) of \((m,k)\)-triples and a \(\mathcal{G}^{m,k}\)-representation with values in a groupoid of free \(\mathbb{Z}\)-modules.
The groupoid \(\mathcal{G}^{m,k}\) will be defined starting from two groupoids, \(\mathcal{G}^{m,k}_{\operatorname{def}}\) and \(\mathcal{G}^{m,k}_{\operatorname{FM}}\), whose constructions will be explained in Sections 2.2 and 2.3, respectively. These two groupoids will have the same objects; moreover, the morphisms in \(\mathcal{G}^{m,k}_{\operatorname{def}}\) come from deformations of \((m,k)\)-triples, while the morphisms in \(\mathcal{G}^{m,k}_{\operatorname{FM}}\) come from Fourier-Mukai transforms.
We start by quickly recalling the main definitions and notation about groupoids we will use.
### Groupoids
We refer to the lecture notes [10] for the definitions and constructions about groupoids. First of all, a groupoid \(\mathcal{G}\) is a (small) category whose morphisms are all isomorphisms, i.e. for any two objects \(x,y\in\mathcal{G}\), any \(f\in\operatorname{Hom}_{\mathcal{G}}(x,y)\) is an isomorphism. If \(\mathcal{G}\) is a groupoid and \(x\in\mathcal{G}\) is an object, we let
\[\operatorname{Aut}_{\mathcal{G}}(x):=\operatorname{Hom}_{\mathcal{G}}(x,x),\]
often called the isotropy group of the object \(x\) in \(\mathcal{G}\). If the groupoid \(\mathcal{G}\) is clear from the context, then we will simply write \(\operatorname{Aut}(x)\).
Moreover, if \(\mathcal{G}\) and \(\mathcal{H}\) are two groupoids, and \(F\colon\mathcal{G}\to\mathcal{H}\) is a functor, for every object \(x\) of \(\mathcal{G}\) we let
\[F_{x}\colon\operatorname{Aut}_{\mathcal{G}}(x)\longrightarrow\operatorname{ Aut}_{\mathcal{H}}(F(x))\]
be the group morphism mapping an automorphism of \(x\) in \(\mathcal{G}\) to its image under the functor \(F\).
If \(\mathcal{G}\) is a groupoid, by a _representation_ of \(\mathcal{G}\) we mean a functor \(F\colon\mathcal{G}\to\mathcal{A}\) where \(\mathcal{A}\) is a suitable groupoid of \(\mathbb{Z}\)-modules (or vector spaces).
Finally, let us recall the notion of free product of groupoids. Let \(\mathcal{G}\) and \(\mathcal{H}\) be two groupoids, and \(Y\) a set of common objects of \(\mathcal{G}\) and \(\mathcal{H}\). In all the situations of interest for us \(\mathcal{G}\) and \(\mathcal{H}\) will have the same set of objects.
**Definition 2.1**.: The _free product_ of \(\mathcal{G}\) and \(\mathcal{H}\) along \(Y\) is the groupoid \(\mathcal{G}\ast_{Y}\mathcal{H}\) such that:
* the objects of \(\mathcal{G}\ast_{Y}\mathcal{H}\) are the elements of \(Y\);
* if \(x,y\in Y\), then a morphism \(f\in\operatorname{Hom}_{\mathcal{G}\ast_{Y}\mathcal{H}}(x,y)\) is a formal combination (with the usual cancellation properties) \[f=g_{k}\circ\cdots\circ g_{1}\] where each \(g_{i}\) is a morphism from \(x_{i}\in Y\) to \(x_{i+1}\in Y\) in either \(\mathcal{G}\) or \(\mathcal{H}\) and such that \(x_{1}=x\) and \(x_{k+1}=y\).
If \(\mathcal{G}\) and \(\mathcal{H}\) have the same sets of objects, then we simply refer to \(\mathcal{G}\ast\mathcal{H}\) as the free product of \(\mathcal{G}\) and \(\mathcal{H}\), assuming that \(Y\) is the whole set of objects.
The fact that free products exist is the content of [11, Proposition 21]. In the literature the free product is also defined as the pushout, in the category of small categories, of \(\mathcal{G}\) and \(\mathcal{H}\) along a common set \(Y\) of objects of \(\mathcal{G}\) and \(\mathcal{H}\).
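For instance, if \(\mathcal{G}\) and \(\mathcal{H}\) both have a single object \(x\), i.e. they are just the groups \(G=\operatorname{Aut}_{\mathcal{G}}(x)\) and \(H=\operatorname{Aut}_{\mathcal{H}}(x)\), then \(\mathcal{G}\ast\mathcal{H}\) again has the single object \(x\) and \(\operatorname{Aut}_{\mathcal{G}\ast\mathcal{H}}(x)=G\ast H\) is the usual free product of groups.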
### Deformations of \((m,k)\)-triples and their groupoid
We start by recalling the construction of a deformation of a moduli space of sheaves on a K3 surface induced by the deformation of the surface itself, following [10, 11].
Let \((S,v,H)\) be an \((m,k)\)-triple, and write \(v=(r,\xi,a)\). We let \(L\) be a line bundle on \(S\) such that \(c_{1}(L)=\xi\). We will moreover use the following notation: if \(T\) is a smooth, connected algebraic variety, \(f\colon Y\to T\) is a morphism and \(\mathcal{L}\in\operatorname{Pic}(Y)\), for every \(t\in T\) we let \(Y_{t}:=f^{-1}(t)\) and \(\mathcal{L}_{t}:=\mathcal{L}_{|Y_{t}}\).
**Definition 2.2**.: Let \((S,v,H)\) be an \((m,k)\)-triple, and \(T\) a smooth, connected algebraic variety. A _deformation of \((S,v,H)\) along \(T\)_ is a triple \((f\colon\mathcal{S}\to T,\mathcal{L},\mathcal{H})\), where:
1. \(f\colon\mathcal{S}\to T\) is a smooth, projective deformation of \(S\), and we let \(0\in T\) be such that \(\mathcal{S}_{0}\simeq S\);
2. \(\mathcal{L}\) is a line bundle on \(\mathcal{S}\) such that \(\mathcal{L}_{0}\simeq L\).
3. \(\mathcal{H}\) is a line bundle on \(\mathcal{S}\) such that \(\mathcal{H}_{t}\) is a \(v_{t}\)-generic polarisation on \(\mathcal{S}_{t}\) for every \(t\in T\) and such that \(\mathcal{H}_{0}\simeq H\)
where for every \(t\in T\) we let \(v_{t}:=(r,c_{1}(\mathcal{L}_{t}),a)\).
**Remark 2.3**.: Notice that if \((f\colon\mathcal{S}\to T,\mathcal{L}^{\otimes m},\mathcal{H})\) is a deformation of an \((m,k)\)-triple \((S,v,H)\) along \(T\), and if we let \(v=mw\), then \((f\colon\mathcal{S}\to T,\mathcal{L},\mathcal{H})\) is a deformation of \((S,w,H)\) along \(T\).
Conversely, if \((f\colon\mathcal{S}\to T,\mathcal{L},\mathcal{H})\) is a deformation of a \((1,k)\)-triple \((S,w,H)\) along \(T\), and if we let \(v=mw\), then by [10] there is a Zariski-closed subset \(Z\) of \(T\) for which \((f\colon\mathcal{S}_{|T^{\prime}}\to T^{\prime},\mathcal{L}_{|T^{\prime}}^{ \otimes m},\mathcal{H}_{|T^{\prime}})\) is a deformation of \((S,v,H)\) along
\(T^{\prime}:=T\setminus Z\) (here \(Z\) is the subset of \(T\) of those \(t\in T\) for which \(H_{t}\) is not \(v_{t}\)-generic). Notice that it may happen that \(Z=T\).
If \((S,v,H)\) is an \((m,k)\)-triple and \((f\colon\mathcal{S}\to T,\mathcal{L},\mathcal{H})\) is a deformation of \((S,v,H)\) along a smooth, connected algebraic variety \(T\), we let \(p_{v}\colon\mathcal{M}_{v}\to T\) be the relative moduli space of semistable sheaves so that for every \(t\in T\) we have \(\mathcal{M}_{v,t}=M_{v_{t}}(\mathcal{S}_{t},\mathcal{H}_{t})\). As shown in [11, Lemma 2.21], the family \(p_{v}\colon\mathcal{M}_{v}\to T\) is a locally trivial deformation of \(M_{v}(S,H)\) along \(T\).
**Remark 2.4**.: The isometry \(\lambda_{(S,v,H)}\) in Theorem 1.27 behaves well in families of \((m,k)\)-triples, i.e. it extends to an isometry of local systems as we will now explain.
Let \((f\colon\mathcal{S}\to T,\mathcal{L},\mathcal{H})\) be a deformation of \((m,k)\)-triples and \(p_{v}\colon\mathcal{M}_{v}\to T\) the associated locally trivial family of moduli spaces.
Let \(\mathcal{M}_{v}^{s}\) be the smooth locus of \(\mathcal{M}_{v}\) and let \(p_{v}^{s}\colon\mathcal{M}_{v}^{s}\to T\) be the restriction of \(p_{v}\). By [11, Proposition 3.5(2)], the inclusion \(\mathcal{M}_{v}^{s}\subset\mathcal{M}_{v}\) induces an isomorphism \(\iota\colon R^{2}p_{v*}^{s}\mathbb{Z}\to R^{2}p_{v*}\mathbb{Z}\) of local systems. By [11, Proposition 4.4(2)], a relative quasi-universal family for \(\mathcal{M}_{v}^{s}\) induces an isomorphism \(\lambda^{s}\colon\mathsf{v}^{\perp}\to R^{2}p_{v*}^{s}\mathbb{Z}\) such that, over every point \(t\in T\), the composition
\[\lambda:=\iota\circ\lambda^{s}\colon\mathsf{v}^{\perp}\longrightarrow R^{2}p_ {v*}\mathbb{Z}\]
restricts to \(\lambda_{(\mathcal{S}_{t},v_{t},\mathcal{H}_{t})}\).
**Definition 2.5**.: Let \((S_{1},v_{1},H_{1})\) and \((S_{2},v_{2},H_{2})\) be two \((m,k)\)-triples. A _deformation path_ from \((S_{1},v_{1},H_{1})\) to \((S_{2},v_{2},H_{2})\) is a \(6\)-tuple
\[\alpha:=(f\colon\mathcal{S}\to T,\mathcal{L},\mathcal{H},t_{1},t_{2},\gamma)\]
where:
* the triple \((f\colon\mathcal{S}\to T,\mathcal{L},\mathcal{H})\) is a deformation of both the \((m,k)\)-triples \((S_{1},v_{1},H_{1})\) and \((S_{2},v_{2},H_{2})\);
* for \(i=1,2\) the point \(t_{i}\in T\) is such that \((\mathcal{S}_{t_{i}},v_{t_{i}},\mathcal{H}_{t_{i}})=(S_{i},v_{i},H_{i})\);
* we have that \(\gamma\) is a continuous path in \(T\) such that \(\gamma(0)=t_{1}\) and \(\gamma(1)=t_{2}\).
Given two \((m,k)\)-triples \((S_{1},v_{1},H_{1})\) and \((S_{2},v_{2},H_{2})\) and \(\alpha=(f\colon\mathcal{S}\to T,\mathcal{L},\mathcal{H},t_{1},t_{2},\gamma)\) a deformation path from \((S_{1},v_{1},H_{1})\) to \((S_{2},v_{2},H_{2})\), let \(p_{v}\colon\mathcal{M}_{v}\to T\) be the relative moduli space associated to the deformation \((f\colon\mathcal{S}\to T,\mathcal{L},\mathcal{H})\). There are two natural locally trivial parallel transport operators that can be defined starting from \(\alpha\).
1. The first one is the parallel transport operator \[p_{\alpha}\colon\widetilde{\mathrm{H}}(S_{1},\mathbb{Z})\longrightarrow \widetilde{\mathrm{H}}(S_{2},\mathbb{Z})\] along \(\gamma\) inside the local system \(R^{\bullet}f_{*}\mathbb{Z}\).
2. The second one is the locally trivial parallel transport operator \[g_{\alpha}\colon\operatorname{H}^{2}(M_{v_{1}}(S_{1},H_{1}),\mathbb{Z}) \longrightarrow\operatorname{H}^{2}(M_{v_{2}}(S_{2},H_{2}),\mathbb{Z})\] along \(\gamma\) inside the local system \(R^{2}p_{v*}\mathbb{Z}\).
**Definition 2.6**.: Let \((S_{1},v_{1},H_{1})\) and \((S_{2},v_{2},H_{2})\) be two \((m,k)\)-triples. Two deformations paths \(\alpha\) and \(\alpha^{\prime}\) from \((S_{1},v_{1},H_{1})\) to \((S_{2},v_{2},H_{2})\) are _equivalent_ if \(p_{\alpha}=p_{\alpha^{\prime}}\) and \(g_{\alpha}=g_{\alpha^{\prime}}\). The equivalence class of \(\alpha\) will be denoted \(\overline{\alpha}\).
**Remark 2.7**.: In Definition 2.6 we used both the parallel transport operators \(p_{\alpha}\) in the local system \(R^{\bullet}f_{*}\mathbb{Z}\) and the parallel transport operators \(g_{\alpha}\) in the local system \(R^{2}p_{v*}\mathbb{Z}\). In fact, only \(p_{\alpha}\) is needed. This is because the local system \(R^{\bullet}f_{*}\mathbb{Z}\) comes with a flat section \(\mathsf{v}\) corresponding to the Mukai vectors on the fibres. We can then consider the sub-local system
\[\mathsf{v}^{\perp}\subset R^{\bullet}f_{*}\mathbb{Z}.\]
The parallel transport operator \(p_{\alpha}\) is constant along \(\mathsf{v}\) by definition. The restriction \(p_{\alpha}|_{\mathsf{v}^{\perp}}\) can then be seen as the parallel transport operator inside the local system \(\mathsf{v}^{\perp}\). Since by Remark 2.4 there is an isomorphism of local systems
\[\lambda\colon\mathsf{v}^{\perp}\longrightarrow R^{2}p_{v*}\mathbb{Z},\]
and \(g_{\alpha}\) and \(p_{\alpha}|_{\mathsf{v}^{\perp}}\) are parallel transport operators over the same path \(\gamma\), the morphism \(g_{\alpha}\) is uniquely determined by \(p_{\alpha}\) and vice versa.
Consider three \((m,k)\)-triples \((S_{1},v_{1},H_{1})\), \((S_{2},v_{2},H_{2})\) and \((S_{3},v_{3},H_{3})\), and let \(\alpha=(f\colon\mathcal{S}\to T,\mathcal{L},\mathcal{H},t_{1},t_{2},\gamma)\) be a deformation path from \((S_{1},v_{1},H_{1})\) to \((S_{2},v_{2},H_{2})\) and \(\alpha^{\prime}=(f^{\prime}\colon\mathcal{S}^{\prime}\to T^{\prime}, \mathcal{L}^{\prime},\mathcal{H}^{\prime},t^{\prime}_{1},t^{\prime}_{2},\gamma ^{\prime})\) a deformation path from \((S_{2},v_{2},H_{2})\) to \((S_{3},v_{3},H_{3})\).
**Definition 2.8**.: The _concatenation_ of \(\alpha\) with \(\alpha^{\prime}\) is the \(6\)-tuple
\[\alpha\star\alpha^{\prime}:=(f^{\prime\prime}\colon\mathcal{S}^{\prime\prime}\to T^{\prime\prime},\mathcal{L}^{\prime\prime},\mathcal{H}^{\prime\prime},t^{\prime\prime}_{1},t^{\prime\prime}_{2},\gamma^{\prime\prime})\]
where
* \(T^{\prime\prime}\) is obtained by glueing \(T\) and \(T^{\prime}\) along \(t_{2}\) and \(t^{\prime}_{1}\)
* \(\mathcal{S}^{\prime\prime}\) is obtained by glueing \(\mathcal{S}\) and \(\mathcal{S}^{\prime}\) along \(\mathcal{S}_{t_{2}}\) and \(\mathcal{S}^{\prime}_{t^{\prime}_{1}}\)
* \(f^{\prime\prime}\) is obtained by glueing \(f\) and \(f^{\prime}\) along \(\mathcal{S}_{t_{2}}\) and \(\mathcal{S}^{\prime}_{t^{\prime}_{1}}\)
* \(\mathcal{L}^{\prime\prime}\) is obtained by glueing \(\mathcal{L}\) and \(\mathcal{L}^{\prime}\) along \(\mathcal{S}_{t_{2}}\) and \(\mathcal{S}^{\prime}_{t^{\prime}_{1}}\)
* \(\mathcal{H}^{\prime\prime}\) is obtained by glueing \(\mathcal{H}\) and \(\mathcal{H}^{\prime}\) along \(\mathcal{S}_{t_{2}}\) and \(\mathcal{S}^{\prime}_{t^{\prime}_{1}}\)
* \(t^{\prime\prime}_{1}\) is the image of \(t_{1}\) in \(T^{\prime\prime}\)
* \(t^{\prime\prime}_{2}\) is the image of \(t^{\prime}_{2}\) in \(T^{\prime\prime}\)
* \(\gamma^{\prime\prime}\) is the concatenation of the image of the path \(\gamma\) in \(T^{\prime\prime}\) with the image of the path \(\gamma^{\prime}\) in \(T^{\prime\prime}\).
It is immediate that if \(\alpha\) is a deformation path from \((S_{1},v_{1},H_{1})\) to \((S_{2},v_{2},H_{2})\) and \(\alpha^{\prime}\) is a deformation path from \((S_{2},v_{2},H_{2})\) to \((S_{3},v_{3},H_{3})\), then \(\alpha\star\alpha^{\prime}\) is a deformation path from \((S_{1},v_{1},H_{1})\) to \((S_{3},v_{3},H_{3})\).
**Remark 2.9**.: Notice that
\[p_{\alpha\star\alpha^{\prime}}=p_{\alpha^{\prime}}\circ p_{\alpha},\ \ \ \ g_{\alpha\star\alpha^{\prime}}=g_{\alpha^{\prime}}\circ g_{\alpha},\]
so if \(\alpha\) is equivalent to \(\beta\) and \(\alpha^{\prime}\) is equivalent to \(\beta^{\prime}\), then \(\alpha\star\alpha^{\prime}\) is equivalent to \(\beta\star\beta^{\prime}\).
The previous notions allow us to define the groupoid \(\mathcal{G}^{m,k}_{\mathrm{def}}\) of deformations of \((m,k)\)-triples. Before doing this, we need to introduce two groupoids \(\widetilde{\mathcal{G}}^{m,k}_{\mathrm{def}}\) and \(\mathcal{P}^{m,k}\) that we will use to define \(\mathcal{G}^{m,k}_{\mathrm{def}}\).
We start with the groupoid \(\widetilde{\mathcal{G}}^{m,k}_{\mathrm{def}}\).
**Definition 2.10**.: Given two strictly positive integers \(m\) and \(k\), the groupoid \(\widetilde{\mathcal{G}}^{m,k}_{\mathrm{def}}\) is defined as follows:
* the objects of \(\widetilde{\mathcal{G}}^{m,k}_{\mathrm{def}}\) are the \((m,k)\)-triples;
* for every two \((m,k)\)-triples \((S_{1},v_{1},H_{1}),(S_{2},v_{2},H_{2})\), a morphism from \((S_{1},v_{1},H_{1})\) to \((S_{2},v_{2},H_{2})\) in \(\widetilde{\mathcal{G}}^{m,k}_{\mathrm{def}}\) is an equivalence class of deformation paths from \((S_{1},v_{1},H_{1})\) to \((S_{2},v_{2},H_{2})\);
* for every \((m,k)\)-triple \((S,v,H)\), where \(v=(r,c_{1}(L),a)\), the identity of \((S,v,H)\) is the equivalence class of the deformation path \((S\to\{p\},L,H,p,p,\kappa_{p})\) where \(\kappa_{p}\) is the constant path;
* for every three \((m,k)\)-triples \((S_{1},v_{1},H_{1})\), \((S_{2},v_{2},H_{2})\) and \((S_{3},v_{3},H_{3})\) and every pair of morphisms \(\overline{\alpha}\colon(S_{1},v_{1},H_{1})\to(S_{2},v_{2},H_{2})\) and \(\overline{\alpha^{\prime}}\colon(S_{2},v_{2},H_{2})\to(S_{3},v_{3},H_{3})\), the composition is defined by \(\overline{\alpha^{\prime}}\circ\overline{\alpha}:=\overline{\alpha\star\alpha^{\prime}}\).
**Remark 2.11**.: The fact that the composition is well defined follows from Remark 2.9, and the fact that the identity is the prescribed one is immediate. Moreover, the fact that \(\widetilde{\mathcal{G}}^{m,k}_{\mathrm{def}}\) is a groupoid may be proved as follows: given two \((m,k)\)-triples \((S_{1},v_{1},H_{1})\) and \((S_{2},v_{2},H_{2})\) and a morphism \(\overline{\alpha}\) between them, where \(\alpha=(f\colon\mathcal{S}\to T,\mathcal{L},\mathcal{H},t_{1},t_{2},\gamma)\), then \(\overline{\alpha}^{-1}\) is \(\overline{\alpha^{-1}}\), where
\[\alpha^{-1}:=(f\colon\mathcal{S}\to T,\mathcal{L},\mathcal{H},t_{2},t_{1}, \gamma^{-1}).\]
Indeed \(p_{\alpha^{-1}}=p_{\alpha}^{-1}\) and \(g_{\alpha^{-1}}=g_{\alpha}^{-1}\), so \(\overline{\alpha^{-1}}=\overline{\alpha}^{-1}\) by Remark 2.9. Finally, the associativity property also holds since the operation of concatenating paths is associative.
We now define the groupoid \(\mathcal{P}^{m,k}\). Recall that two \((m,k)\)-triples \((S_{1},v_{1},H_{1})\) and \((S_{2},v_{2},H_{2})\) are congruent (see Definition 1.23) when \(S_{1}=S_{2}=S\), \(v_{1}=v_{2}=v\) and a coherent sheaf \(F\) on \(S\) with Mukai vector \(v\) is \(H_{1}\)-(semi)stable if and only if it is \(H_{2}\)-(semi)stable. In this case we denote by
\[\chi_{H_{1},H_{2}}\colon M_{v}(S,H_{1})\longrightarrow M_{v}(S,H_{2}),\qquad F\mapsto F\]
the identity map.
**Definition 2.12**.: Given two strictly positive integers \(m\) and \(k\), the groupoid \(\mathcal{P}^{m,k}\) is defined as follows:
* the objects of \(\mathcal{P}^{m,k}\) are the \((m,k)\)-triples;
* for every two \((m,k)\)-triples \((S_{1},v_{1},H_{1})\), \((S_{2},v_{2},H_{2})\) we set \[\operatorname{Hom}_{\mathcal{P}^{m,k}}((S_{1},v_{1},H_{1}),(S_{2},v_{2},H_{2})):=\{\chi_{H_{1},H_{2}}\}\] if \((S_{1},v_{1},H_{1})\) and \((S_{2},v_{2},H_{2})\) are congruent, and otherwise \[\operatorname{Hom}_{\mathcal{P}^{m,k}}((S_{1},v_{1},H_{1}),(S_{2},v_{2},H_{2})):=\emptyset.\]
We are now finally in the position to define the groupoid \(\mathcal{G}^{m,k}_{\mathrm{def}}\).
**Definition 2.13**.: Given two strictly positive integers \(m\) and \(k\), the groupoid
\[\mathcal{G}^{m,k}_{\mathrm{def}}:=\widetilde{\mathcal{G}}^{m,k}_{\mathrm{def}} \ast\mathcal{P}^{m,k}\]
is the free product of \(\widetilde{\mathcal{G}}^{m,k}_{\mathrm{def}}\) and \(\mathcal{P}^{m,k}\) (see Definition 2.1).
### Fourier-Mukai equivalences and their groupoid
The purpose of this section is to define the groupoid \(\mathcal{G}_{\mathrm{FM}}^{m,k}\). To do so, we need to recall some isomorphisms of moduli spaces of sheaves induced by Fourier-Mukai transforms.
Let us start with the tensorization with a line bundle. Let \(S\) be a projective K3 surface and \(L\in\mathrm{Pic}(S)\) a line bundle. Consider the derived equivalence
\[\mathsf{L}\colon\operatorname{D}^{b}(S) \longrightarrow\operatorname{D}^{b}(S)\] \[F^{\bullet} \longmapsto F^{\bullet}\otimes L. \tag{9}\]
If a sheaf \(F\) has Mukai vector \(v(F)=v\), then \(v(\mathsf{L}(F))=v\cdot\operatorname{ch}(L)=:v_{L}\). With an abuse of notation we keep denoting by
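Explicitly, writing \(\lambda=c_{1}(L)\), the multiplicativity of the Chern character gives
\[v_{L}=(r,\xi,a)\cdot\left(1,\lambda,\tfrac{\lambda^{2}}{2}\right)=\left(r,\ \xi+r\lambda,\ a+\xi\cdot\lambda+\tfrac{r}{2}\lambda^{2}\right),\]
which is again an integral Mukai vector since \(\lambda^{2}\) is even.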
\[\mathsf{L}\colon M_{v}(S,H)\longrightarrow M_{v_{L}}(S,H)\]
the induced morphism between moduli spaces of sheaves. This morphism is known to be an isomorphism in some cases, as the following lemma states.
**Lemma 2.14** ([11, Lemma 2.24]).: _Let \(S\) be a projective K3 surface, \(v=(r,\xi,a)\) a Mukai vector and \(H\) an ample line bundle._
1. _For any_ \(d\in\mathbb{Z}\)_, the morphism_ \(\mathsf{dH}\colon M_{v}(S,H)\to M_{v_{\mathsf{dH}}}(S,H)\) _is an isomorphism._
2. _If_ \(r>0\) _and_ \(H\) _is_ \(v\)_-generic, the morphism_ \(\mathsf{L}\colon M_{v}(S,H)\to M_{v_{L}}(S,H)\) _is an isomorphism for any_ \(L\in\mathrm{Pic}(S)\)_._
We now consider the Fourier-Mukai transform whose kernel is the ideal of the diagonal. Let \(S\) be a projective K3 surface and \(\Delta\subset S\times S\) be the diagonal. If \(I_{\Delta}\in\operatorname{Coh}(S\times S)\) denotes the ideal sheaf of \(\Delta\), then we consider the Fourier-Mukai equivalence
\[\operatorname{FM}_{\Delta}\colon\operatorname{D}^{b}(S)\longrightarrow \operatorname{D}^{b}(S)\quad\quad\operatorname{FM}_{\Delta}(F^{\bullet}):=R \pi_{2*}(I_{\Delta}\stackrel{{ L}}{{\otimes}}\pi_{1}^{*}F^{ \bullet}),\]
where \(\pi_{1}\) and \(\pi_{2}\) are the two projections from \(S\times S\) to \(S\), and the composite equivalence
\[\operatorname{FM}_{\Delta}^{\vee}\colon\operatorname{D}^{b}(S)\longrightarrow\operatorname{D}^{b}(S)\quad\quad\operatorname{FM}_{\Delta}^{\vee}(F^{\bullet}):=\left(R\pi_{2*}(I_{\Delta}\stackrel{{ L}}{{\otimes}}\pi_{1}^{*}F^{\bullet})\right)^{\vee}.\]
If \(F\) is a sheaf on \(S\) with Mukai vector \(v(F)=(r,\xi,a)=v\), then by a direct check one can see that
\[v(\operatorname{FM}_{\Delta}(F))=(a,-\xi,r)=:\tilde{v}\quad\text{ and }\quad v(\operatorname{FM}_{\Delta}^{\vee}(F))=(a,\xi,r)=:\hat{v}.\]
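Note that, with respect to the Mukai pairing, \(\tilde{v}^{2}=\xi^{2}-2ar=v^{2}\) and likewise \(\hat{v}^{2}=v^{2}\); moreover \(\tilde{v}\) and \(\hat{v}\) are divisible by \(m\) whenever \(v\) is, so both transforms send the Mukai vector of an \((m,k)\)-triple to a vector of the same form \(mw^{\prime}\) with \(w^{\prime}\) primitive and \((w^{\prime})^{2}=2k\).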
**Lemma 2.15** ([13, 11]).: _Let \(S\) be a projective K3 surface, \(H\) an ample line bundle and \(n,a\in\mathbb{Z}\)._
1. _Suppose that_ \(\mathrm{Pic}(S)=\mathbb{Z}\,H\)_. Put_ \(v=(r,nH,a)\)_, with_ \(r>0\)_. Then there exists an integer_ \(n_{0}\gg 0\) _such that for every_ \(n>n_{0}\) _the Fourier-Mukai transform_ \(\operatorname{FM}_{\Delta}\) _induces an isomorphism_ \[\operatorname{FM}_{\Delta,v}\colon M_{v}(S,H)\longrightarrow M_{\tilde{v}}(S,H).\]
2. _Put_ \(v=(0,\xi,a)\) _and suppose that_ \(H\) _is both_ \(v\) _and_ \(\tilde{v}\)_-generic. Then there exists an integer_ \(a_{0}\gg 0\) _such that for every_ \(a>a_{0}\) _the Fourier-Mukai transform_ \(\operatorname{FM}_{\Delta}\) _induces an isomorphism_ \[\operatorname{FM}_{\Delta,v}\colon M_{v}(S,H)\longrightarrow M_{\tilde{v}}(S,H).\]
3. _In any of the two cases above, the dual Fourier-Mukai transform_ \(\operatorname{FM}_{\Delta}^{\vee}\) _induces an isomorphism_ \[\operatorname{FM}_{\Delta,v}^{\vee}\colon M_{v}(S,H)\longrightarrow M_{\hat{v}} (S,H).\]
Proof.: The proof of the first two items is [11, Proposition 2.29, Proposition 2.33] (but see also [20, Theorem 3.18] for a more general statement). Moreover in [11] it is further shown that the sheaf \(\operatorname{FM}_{\Delta}(F)\) is locally free (see [11, Lemma 2.28]). The proof of the third item now follows from the first two just by noticing that a locally free sheaf \(F\) is (semi)stable if and only if its dual \(F^{\vee}\) is (semi)stable.
The last Fourier-Mukai transform we consider is associated to elliptic K3 surfaces and it has been studied in [10]; the result we need is contained in [20].
Let \(p\colon S\to\mathbb{P}^{1}\) be an elliptic K3 surface. We assume that there is a section \(s\colon\mathbb{P}^{1}\to S\) and we denote by \(\ell\in\operatorname{Pic}(S)\) its cohomology class. If we put \(f=p^{*}\mathcal{O}(1)\), then the lattice spanned by \(\ell\) and \(f\) is the unimodular hyperbolic plane \(U\); more precisely, \(\ell^{2}=-2\), \((\ell,f)=1\) and \(f^{2}=0\). Let us assume that \(\operatorname{Pic}(S)\) coincides with this hyperbolic plane (i.e. that \(S\) is generic with this property). It is known that the class \(H=\ell+tf\) is ample for \(t\gg 0\) and we fix once and for all such an ample class.
Consider the moduli space \(M_{(0,f,0)}(S,H)\). For \(t\gg 0\), the polarisation \(H\) is generic with respect to \((0,f,0)\) and \(M_{(0,f,0)}(S,H)\cong S\) (see [10, Section 4]). Let \(\mathcal{P}\) be the relative Poincaré bundle that we see as a coherent sheaf over the product \(S\times S\). The Fourier-Mukai transform
\[\operatorname{FM}_{\mathcal{P}}\colon\operatorname{D}^{b}(S)\longrightarrow \operatorname{D}^{b}(S),\quad\operatorname{FM}_{\mathcal{P}}(F^{\bullet}):=( R\pi_{2*}(\mathcal{P}\stackrel{{ L}}{{\otimes}}\pi_{1}^{*}F^{\bullet}))[1]\]
is proved to be an equivalence in [10, Theorem 1.2].
**Lemma 2.16** ([20, Theorem 3.15]).: _Let \(H=\ell+tf\) with \(t\gg 0\). Then \(\operatorname{FM}_{\mathcal{P}}\) induces an isomorphism_
\[\operatorname{FM}_{\mathcal{P}}\colon M_{(m,0,-mk)}(S,H)\longrightarrow M_{( 0,m(\ell+(k+1)f),m)}(S,H)\]
_for every \(m,k>0\)._
We conclude this section by defining the groupoid \(\mathcal{G}_{\operatorname{FM}}^{m,k}\).
**Definition 2.17**.: Given two strictly positive integers \(m\) and \(k\), the groupoid \(\mathcal{G}_{\operatorname{FM}}^{m,k}\) is defined as follows:
* the objects of \(\mathcal{G}_{\operatorname{FM}}^{m,k}\) are the \((m,k)\)-triples;
* given two \((m,k)\)-triples \((S,v_{1},H)\) and \((S,v_{2},H)\), an _elementary morphism_ between them is one of the following:
* the equivalence \(\mathsf{L}\), if \((S,v_{1},H)=(S,v,H)\) and \((S,v_{2},H)=(S,v_{L},H)\) are as in Lemma 2.14;
* the equivalence \(\operatorname{FM}_{\Delta}\), if \((S,v_{1},H)=(S,v,H)\) and \((S,v_{2},H)=(S,\tilde{v},H)\) verify the hypotheses of Lemma 2.15 (1) or (2);
* the equivalence \(\operatorname{FM}_{\Delta}^{\vee}\), if \((S,v_{1},H)=(S,v,H)\) and \((S,v_{2},H)=(S,\hat{v},H)\) verify the hypotheses of Lemma 2.15 (3);
* the equivalence \(\operatorname{FM}_{\mathcal{P}}\), if \((S,v_{1},H)=(S,(m,0,-mk),H)\) and \((S,v_{2},H)=(S,(0,m(\ell+(k+1)f),m),H)\) verify the hypotheses of Lemma 2.16;
* for every two \((m,k)\)-triples \((S_{1},v_{1},H_{1}),(S_{2},v_{2},H_{2})\), a morphism from \((S_{1},v_{1},H_{1})\) to \((S_{2},v_{2},H_{2})\) is a formal concatenation of elementary morphisms and their formal inverses, subject to the usual cancellation rules.
### The groupoid \(\mathcal{G}^{m,k}\) and its representations
For every two strictly positive integers \(m,k\) we now define the groupoid \(\mathcal{G}^{m,k}\) as the groupoid generated by \(\mathcal{G}^{m,k}_{\mathrm{def}}\) and \(\mathcal{G}^{m,k}_{\mathrm{FM}}\).
**Definition 2.18**.: Given two strictly positive integers \(m\) and \(k\), the groupoid
\[\mathcal{G}^{m,k}:=\mathcal{G}^{m,k}_{\mathrm{def}}*\mathcal{G}^{m,k}_{\mathrm{ FM}}\]
is the free product of \(\mathcal{G}^{m,k}_{\mathrm{def}}\) and \(\mathcal{G}^{m,k}_{\mathrm{FM}}\) (see Definition 2.1).
**Remark 2.19**.: Let \((S_{1},v_{1},H_{1}),(S_{2},v_{2},H_{2})\in\mathcal{G}^{m,k}\) be two objects; then
\[\mathrm{Hom}_{\mathcal{G}^{m,k}}((S_{1},v_{1},H_{1}),(S_{2},v_{2},H_{2}))\neq\emptyset.\]
More precisely, in the proof of [18, Theorem 1.7] the authors show that one can go from \((S_{1},v_{1},H_{1})\) to \((S_{2},v_{2},H_{2})\) only using the following morphisms of \(\mathcal{G}^{m,k}\):
* equivalence classes of deformation paths of \((m,k)\)-triples;
* the morphisms \(\chi_{H,H^{\prime}}\) of \(\mathcal{P}^{m,k}\);
* derived equivalences of the form \(\mathsf{L}\), for some line bundle \(L\);
* the Fourier-Mukai transform \(\mathrm{FM}_{\Delta}\).
#### 2.4.1. The \(\widetilde{\mathcal{H}}^{m,k}\)-representation \(\widetilde{\Phi}^{m,k}\) of \(\mathcal{G}^{m,k}\)
Let \(\widetilde{\Lambda}\) be an even unimodular lattice of signature \((4,20)\). Notice that the isometry class of \(\widetilde{\Lambda}\) is uniquely determined, so that \(\widetilde{\Lambda}\) is isometric to the lattice
\[U^{\oplus 4}\oplus E_{8}(-1)^{\oplus 2},\]
where \(U\) is the unimodular hyperbolic plane of rank \(2\) and \(E_{8}(-1)\) is the negative definite Dynkin lattice of type \(E_{8}\).
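Indeed, \(U\) has signature \((1,1)\) and \(E_{8}(-1)\) has signature \((0,8)\), so that \(U^{\oplus 4}\oplus E_{8}(-1)^{\oplus 2}\) has signature \(4\cdot(1,1)+2\cdot(0,8)=(4,20)\), as required.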
If \(\widetilde{\mathcal{C}}\subset\widetilde{\Lambda}\otimes_{\mathbb{Z}}\mathbb{R}\) is the cone of (strictly) positive classes, then by [11, Lemma 4.1] we have that \(\mathrm{H}^{3}(\widetilde{\mathcal{C}},\mathbb{Z})=\mathbb{Z}\). The choice of a generator of \(\mathrm{H}^{3}(\widetilde{\mathcal{C}},\mathbb{Z})\) is an _orientation_ of \(\widetilde{\Lambda}\). Notice that there are only two orientations, corresponding to the generators \(\pm 1\) of \(\mathbb{Z}\). Again by [11, Lemma 4.1], if \(W\subset\widetilde{\Lambda}\otimes_{\mathbb{Z}}\mathbb{R}\) is a positive real subspace of dimension \(4\), then the space \(W\setminus\{0\}\) is a deformation retract of \(\widetilde{\mathcal{C}}\), so that an orientation on \(\widetilde{\Lambda}\) corresponds to an orientation on a positive real subspace of maximal dimension.
If \(g\in\mathrm{O}(\widetilde{\Lambda})\) is an isometry, then \(g\) induces an action on \(\mathrm{H}^{3}(\widetilde{\mathcal{C}},\mathbb{Z})\) that either preserves a generator or it maps it to its opposite. Therefore we get an orientation character
\[\text{or}\colon\ \mathrm{O}(\widetilde{\Lambda})\longrightarrow\mathbb{Z}/2 \mathbb{Z}. \tag{10}\]
The subgroup \(\mathrm{O}^{+}(\widetilde{\Lambda})=\ker(\text{or})\) is the group of orientation preserving isometries.
Similarly, if \((\widetilde{\Lambda}_{1},\epsilon_{1})\) and \((\widetilde{\Lambda}_{2},\epsilon_{2})\) are two pairs composed by a unimodular even lattice \(\widetilde{\Lambda}_{i}\) of signature \((4,20)\) and an orientation \(\epsilon_{i}\in\mathrm{H}^{3}(\widetilde{\mathcal{C}}_{i},\mathbb{Z})\) on \(\widetilde{\Lambda}_{i}\), then we get an orientation map
\[\text{or}\colon\ \mathrm{O}((\widetilde{\Lambda}_{1},\epsilon_{1}),( \widetilde{\Lambda}_{2},\epsilon_{2}))\longrightarrow\mathbb{Z}/2\mathbb{Z}. \tag{11}\]
Again, \(\mathrm{O}^{+}((\widetilde{\Lambda}_{1},\epsilon_{1}),(\widetilde{\Lambda}_{2}, \epsilon_{2}))\coloneqq\mathrm{or}^{-1}(0)\) is the set of orientation preserving isometries.
**Definition 2.20**.: Given two strictly positive integers \(m\) and \(k\), the groupoid \(\widetilde{\mathcal{H}}^{m,k}\) is defined as follows:
* the objects are triples \((\widetilde{\Lambda},v,\epsilon)\), where \(\widetilde{\Lambda}\) is a unimodular even lattice of signature \((4,20)\), \(v\in\widetilde{\Lambda}\) is of the form \(v=mw\), where \(w\) is primitive and \(w^{2}=2k\), and where \(\epsilon\) is an orientation on \(\widetilde{\Lambda}\);
* for any two objects \((\widetilde{\Lambda}_{1},v_{1},\epsilon_{1})\) and \((\widetilde{\Lambda}_{2},v_{2},\epsilon_{2})\), the set \[\mathrm{Hom}_{\widetilde{\mathcal{H}}^{m,k}}((\widetilde{\Lambda}_{1},v_{1},\epsilon_{1}),(\widetilde{\Lambda}_{2},v_{2},\epsilon_{2}))\] is the set of isometries \(g\colon\widetilde{\Lambda}_{1}\to\widetilde{\Lambda}_{2}\) such that \(g(v_{1})=v_{2}\).
**Remark 2.21**.: Notice that morphisms in \(\widetilde{\mathcal{H}}^{m,k}\) are not necessarily orientation preserving.
**Example 2.22**.: If \(S\) is a K3 surface, then the Mukai lattice \(\widetilde{\mathrm{H}}(S,\mathbb{Z})\) is a unimodular even lattice of signature \((4,20)\). As explained in [10, Section 4.1], \(\widetilde{\mathrm{H}}(S,\mathbb{Z})\) comes with a distinguished orientation, which we denote by \(\epsilon_{S}\). Such an orientation is associated to the distinguished positive real 4-space \(W\) with a chosen basis given by a Kähler class, the real and imaginary parts of a symplectic form and the vector \((1,0,-1)\).
We also point out that, as we have seen in the proof of Lemma 1.13, the positive real 3-space with basis given by a Kähler class and the real and imaginary parts of a symplectic form determines an orientation on the lattice \(\mathrm{H}^{2}(S,\mathbb{Z})\).
Before continuing, let us recall the following definition.
**Definition 2.23**.: Let \(f\colon\mathcal{S}\to T\) be a smooth family of K3 surfaces. Take \(t_{1},t_{2}\in T\) and a path \(\gamma\) from \(t_{1}\) to \(t_{2}\). A _parallel transport operator_
\[g\colon\widetilde{\mathrm{H}}(\mathcal{S}_{t_{1}},\mathbb{Z})\to\widetilde{ \mathrm{H}}(\mathcal{S}_{t_{2}},\mathbb{Z})\]
is an isometry induced by parallel transport inside the local system \(R^{\bullet}f_{*}\mathbb{Z}\).
This allows us to define the following representation of the groupoid \(\mathcal{G}^{m,k}_{\mathrm{def}}\).
**Definition 2.24**.: Given two strictly positive integers \(m\) and \(k\), the representation
\[\widetilde{\Phi}^{m,k}_{\mathrm{def}}\colon\mathcal{G}^{m,k}_{\mathrm{def}} \longrightarrow\widetilde{\mathcal{H}}^{m,k}\]
is defined as follows:
* if \((S,v,H)\in\mathcal{G}^{m,k}_{\mathrm{def}}\) is an object, then \(\widetilde{\Phi}^{m,k}_{\mathrm{def}}(S,v,H)=(\widetilde{\mathrm{H}}(S, \mathbb{Z}),v,\epsilon_{S})\) (see Example 2.22);
* if \((S_{1},v_{1},H_{1})\) and \((S_{2},v_{2},H_{2})\) are two objects and \(\alpha=(f\colon\mathcal{S}\to T,\mathcal{L},\mathcal{H},t_{1},t_{2},\gamma)\) is a deformation path from \((S_{1},v_{1},H_{1})\) to \((S_{2},v_{2},H_{2})\), then \[\widetilde{\Phi}^{m,k}_{\mathrm{def}}(\overline{\alpha}):=p_{\alpha},\] the parallel transport operator in the local system \(R^{\bullet}f_{*}\mathbb{Z}\) along the path \(\gamma\);
* if \((S_{1},v_{1},H_{1})\) and \((S_{2},v_{2},H_{2})\) are congruent, then \[\widetilde{\Phi}^{m,k}_{\mathrm{def}}(\chi_{H_{1},H_{2}}):=\mathrm{id}_{ \widetilde{\mathrm{H}}(S,\mathbb{Z})},\] where \(\chi_{H_{1},H_{2}}\) is the identification \(M_{v_{1}}(S_{1},H_{1})=M_{v_{2}}(S_{2},H_{2})\) (see Definitions 1.23 and 2.12).
**Remark 2.25**.: Notice that if \(\alpha=(f\colon\mathcal{S}\to T,\mathcal{L},\mathcal{H},t_{1},t_{2},\gamma)\) is a deformation path from \((S_{1},v_{1},H_{1})\) to \((S_{2},v_{2},H_{2})\), then by definition the Mukai vectors \(v_{1}\) and \(v_{2}\) belong to the same flat section of the local system \(R^{\bullet}f_{*}\mathbb{Z}\), so the parallel transport operator \(p_{\alpha}\) maps \(v_{1}\) to \(v_{2}\) and the representation \(\widetilde{\Phi}^{m,k}_{\mathrm{def}}\) is well defined.
**Remark 2.26**.: Because of Lemma 1.13 and the fact that the vector \((1,0,-1)\) is preserved, we see that a parallel transport operator \(g\colon\widetilde{\mathrm{H}}(S_{1},\mathbb{Z})\to\widetilde{\mathrm{H}}(S_{2 },\mathbb{Z})\) is orientation preserving. In particular the morphisms in the image of \(\widetilde{\Phi}^{m,k}_{\mathrm{def}}\) are always orientation preserving.
Let us now define the representation
\[\widetilde{\Phi}^{m,k}_{\mathrm{FM}}\colon\mathcal{G}^{m,k}_{\mathrm{FM}} \longrightarrow\widetilde{\mathcal{H}}^{m,k}.\]
**Definition 2.27**.: Given two strictly positive integers \(m\) and \(k\), the representation \(\widetilde{\Phi}^{m,k}_{\mathrm{FM}}\) is defined as follows:
* if \((S,v,H)\in\mathcal{G}^{m,k}_{\mathrm{FM}}\) is an object, then \(\widetilde{\Phi}^{m,k}_{\mathrm{FM}}(S,v,H)=(\widetilde{\mathrm{H}}(S, \mathbb{Z}),v,\epsilon_{S})\);
* if \(\phi\in\mathrm{Aut}(\mathrm{D}^{b}(S))\) corresponds to a morphism in \(\mathcal{G}^{m,k}_{\mathrm{FM}}\), then \(\widetilde{\Phi}^{m,k}_{\mathrm{FM}}(\phi)=\phi^{\mathrm{H}}\) is the isometry induced by \(\phi\) on the Mukai lattice.
**Remark 2.28**.: The equivalence \(\phi\) is a composition of the equivalences introduced in Section 2.3 and, by definition, it induces an isomorphism between the corresponding moduli spaces. It follows in particular that the Mukai vector must be preserved, so that again \(\widetilde{\Phi}^{m,k}_{\mathrm{FM}}\) is well defined.
**Remark 2.29**.: Notice that morphisms in the image of \(\widetilde{\Phi}^{m,k}_{\mathrm{FM}}\) are not necessarily orientation preserving.
**Definition 2.30**.: Define the representation
\[\widetilde{\Phi}^{m,k}\colon\mathcal{G}^{m,k}\longrightarrow\widetilde{ \mathcal{H}}^{m,k}\]
as the unique representation restricting to \(\widetilde{\Phi}^{m,k}_{\mathrm{def}}\) on \(\mathcal{G}^{m,k}_{\mathrm{def}}\) and to \(\widetilde{\Phi}^{m,k}_{\mathrm{FM}}\) on \(\mathcal{G}^{m,k}_{\mathrm{FM}}\).
**Remark 2.31**.: The existence and uniqueness of \(\widetilde{\Phi}^{m,k}\) are a consequence of the fact that the objects of \(\mathcal{G}^{m,k}\) are the same as those of \(\mathcal{G}^{m,k}_{\mathrm{def}}\) and \(\mathcal{G}^{m,k}_{\mathrm{FM}}\), and the representations \(\widetilde{\Phi}^{m,k}_{\mathrm{def}}\) and \(\widetilde{\Phi}^{m,k}_{\mathrm{FM}}\) coincide on the objects. Moreover, as morphisms in \(\mathcal{G}^{m,k}\) are formal concatenations of morphisms in \(\mathcal{G}^{m,k}_{\mathrm{def}}\) and \(\mathcal{G}^{m,k}_{\mathrm{FM}}\), there is a unique way to define \(\widetilde{\Phi}^{m,k}\) on morphisms.
#### 2.4.2. The \(\mathcal{A}_{k}\)-representation \(\mathsf{pt}^{m,k}\) of \(\mathcal{G}^{m,k}\)
**Definition 2.32**.: For every \(k>0\) we define the groupoid \(\mathcal{A}_{k}\) as follows:
* the objects of \(\mathcal{A}_{k}\) are even lattices \(\Lambda\) of signature \((3,20)\) isometric to the lattice \(U^{\oplus 3}\oplus E_{8}(-1)^{\oplus 2}\oplus\langle-2k\rangle\);
* if \(\Lambda_{1}\) and \(\Lambda_{2}\) are two objects, then \[\operatorname{Hom}_{\mathcal{A}_{k}}(\Lambda_{1},\Lambda_{2}):=\operatorname{O} (\Lambda_{1},\Lambda_{2}).\]
As before we first define \(\mathcal{A}_{k}\)-representations for both \(\mathcal{G}_{\operatorname{def}}^{m,k}\) and \(\mathcal{G}_{\operatorname{FM}}^{m,k}\). We start by defining the representation \(\mathsf{pt}_{\operatorname{def}}^{m,k}\colon\mathcal{G}_{\operatorname{def}}^{m,k}\to\mathcal{A}_{k}\).
**Definition 2.33**.: Given two strictly positive integers \(m\) and \(k\) we define the functor \(\mathsf{pt}_{\operatorname{def}}^{m,k}\) as follows:
* if \((S,v,H)\in\mathcal{G}_{\operatorname{def}}^{m,k}\) is an object, then \[\mathsf{pt}_{\operatorname{def}}^{m,k}((S,v,H))=\operatorname{H}^{2}(M_{v}(S,H),\mathbb{Z});\]
* if \((S_{1},v_{1},H_{1})\) and \((S_{2},v_{2},H_{2})\) are two objects and \(\alpha=(f\colon\mathcal{S}\to T,\mathcal{L},\mathcal{H},t_{1},t_{2},\gamma)\) is a deformation path from \((S_{1},v_{1},H_{1})\) to \((S_{2},v_{2},H_{2})\), then \[\mathsf{pt}_{\operatorname{def}}^{m,k}(\overline{\alpha}):=g_{\alpha},\] the parallel transport operator in the local system \(R^{2}p_{v*}\mathbb{Z}\) along the path \(\gamma\);
* if \((S_{1},v_{1},H_{1})\) and \((S_{2},v_{2},H_{2})\) are congruent, then \[\mathsf{pt}_{\operatorname{def}}^{m,k}(\chi_{H_{1},H_{2}}):=\operatorname{id}_{\operatorname{H}^{2}(M_{v}(S,H_{1}),\mathbb{Z})},\] where \(\chi_{H_{1},H_{2}}\) is the identification \(M_{v_{1}}(S_{1},H_{1})=M_{v_{2}}(S_{2},H_{2})\) (see Definitions 1.23 and 2.12).
**Remark 2.34**.: By Theorem 1.27 there is an isometry \(\operatorname{H}^{2}(M_{v}(S,H),\mathbb{Z})\cong v^{\perp}\); since \(v=mw\), with \(w^{2}=2k\), it follows that \(\operatorname{H}^{2}(M_{v}(S,H),\mathbb{Z})\) is isometric to the lattice \(U^{\oplus 3}\oplus E_{8}(-1)^{\oplus 2}\oplus\langle-2k\rangle\).
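This can also be checked directly. Any primitive vector of a unimodular lattice has divisibility \(1\), so by the Eichler criterion we may assume, up to an isometry of \(\widetilde{\Lambda}\simeq U^{\oplus 4}\oplus E_{8}(-1)^{\oplus 2}\), that \(w=e+kf\) for a standard basis \(\{e,f\}\) of one copy of \(U\); then
\[v^{\perp}=w^{\perp}=\mathbb{Z}(e-kf)\oplus U^{\oplus 3}\oplus E_{8}(-1)^{\oplus 2}\simeq U^{\oplus 3}\oplus E_{8}(-1)^{\oplus 2}\oplus\langle-2k\rangle,\]
since \((e-kf)^{2}=-2k\).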
Next, let us define the functor
\[\mathsf{pt}_{\operatorname{FM}}^{m,k}\colon\mathcal{G}_{\operatorname{FM}}^{m,k}\to\mathcal{A}_{k}.\]
**Definition 2.35**.: Given two strictly positive integers \(m\) and \(k\), we define the functor \(\mathsf{pt}_{\operatorname{FM}}^{m,k}\) as follows:
* if \((S,v,H)\in\mathcal{G}_{\operatorname{FM}}^{m,k}\) is an object, then \[\mathsf{pt}_{\operatorname{FM}}^{m,k}(S,v,H)=\operatorname{H}^{2}(M_{v}(S,H),\mathbb{Z});\]
* if \(\phi\in\operatorname{Hom}_{\mathcal{G}_{\operatorname{FM}}^{m,k}}((S,v_{1},H),(S,v_{2},H))\) is an elementary morphism, then by definition \(\phi\) induces an isomorphism \[\phi_{v_{1}}\colon M_{v_{1}}(S,H)\to M_{v_{2}}(S,H)\] of the moduli spaces. We then define \[\mathsf{pt}_{\operatorname{FM}}^{m,k}(\phi)=\phi_{v_{1},*},\] where the latter is the pushforward action on the second integral cohomology groups of the moduli spaces;
* if \(\phi\in\operatorname{Hom}_{\mathcal{G}_{\operatorname{FM}}^{m,k}}((S,v_{1},H),(S,v_{2},H))\) is a morphism, i.e. a concatenation of elementary morphisms and their inverses, then we define \(\mathsf{pt}_{\operatorname{FM}}^{m,k}(\phi)\) as the composition of the corresponding isometries.
Similarly to the case of \(\widetilde{\Phi}^{m,k}\), we now give the following definition.
**Definition 2.36**.: Define
\[\mathsf{pt}^{m,k}\colon\mathcal{G}^{m,k}\to\mathcal{A}_{k} \tag{12}\]
as the unique representation restricting to \(\mathsf{pt}^{m,k}_{\mathrm{def}}\) on \(\mathcal{G}^{m,k}_{\mathrm{def}}\) and to \(\mathsf{pt}^{m,k}_{\mathrm{FM}}\) on \(\mathcal{G}^{m,k}_{\mathrm{FM}}\).
**Proposition 2.37**.: _Let \(A_{1}=(S_{1},v_{1},H_{1}),A_{2}=(S_{2},v_{2},H_{2})\in\mathcal{G}^{m,k}\) be two objects. Then_
\[\mathsf{pt}^{m,k}(\mathrm{Hom}_{\mathcal{G}^{m,k}}(A_{1},A_{2}))\subset \mathsf{PT}_{\mathrm{lt}}(M_{v_{1}}(S_{1},H_{1}),M_{v_{2}}(S_{2},H_{2})).\]
Proof.: First of all, we notice that
\[\mathsf{pt}^{m,k}_{\mathrm{def}}(\mathrm{Hom}_{\mathcal{G}^{m,k}_{\mathrm{ def}}}(A_{1},A_{2}))\subset\mathsf{PT}_{\mathrm{lt}}(M_{v_{1}}(S_{1},H_{1}),M_{v_{2} }(S_{2},H_{2}))\]
by definition. On the other hand, by Proposition 1.14 we also have that
\[\mathsf{pt}^{m,k}_{\mathrm{FM}}(\mathrm{Hom}_{\mathcal{G}^{m,k}_{\mathrm{FM }}}(A_{1},A_{2}))\subset\mathsf{PT}_{\mathrm{lt}}(M_{v_{1}}(S_{1},H_{1}),M_{v_ {2}}(S_{2},H_{2}))\]
so that the claim follows.
Using the notation set in Section 2.1, we state the following corollary of Proposition 2.37.
**Corollary 2.38**.: _Let \((S,v,H)\in\mathcal{G}^{m,k}\) be an object. Then_
\[\mathrm{Im}\left(\mathsf{pt}^{m,k}_{(S,v,H)}\colon\,\mathrm{Aut}(S,v,H)\to\mathrm{Aut}(\mathrm{H}^{2}(M_{v}(S,H),\mathbb{Z}))\right)\subset\mathrm{Mon}^{2}_{\mathrm{lt}}(M_{v}(S,H)).\]
#### 2.4.3. Relation between the two representations \(\widetilde{\Phi}^{m,k}\) and \(\mathsf{pt}^{m,k}\)
We now connect the representation \(\mathsf{pt}^{m,k}\) of \(\mathcal{G}^{m,k}\) in \(\mathcal{A}_{k}\) with the representation \(\widetilde{\Phi}^{m,k}\) of \(\mathcal{G}^{m,k}\) in \(\widetilde{\mathcal{H}}^{m,k}\) defined above.
**Definition 2.39**.: Given two strictly positive integers \(m\) and \(k\), define the functor
\[\Psi\colon\widetilde{\mathcal{H}}^{m,k}\longrightarrow\mathcal{A}_{k}\]
as follows:
* if \((\widetilde{\Lambda},v,\epsilon)\) is an object in \(\widetilde{\mathcal{H}}^{m,k}\), then \[\Psi(\widetilde{\Lambda},v,\epsilon)=v^{\perp};\]
* if \((\widetilde{\Lambda}_{i},v_{i},\epsilon_{i})\) are two objects and \(g\colon\widetilde{\Lambda}_{1}\to\widetilde{\Lambda}_{2}\) an isometry such that \(g(v_{1})=v_{2}\), then \[\Psi(g)=(-1)^{\mathrm{or}(g)}g|_{v_{1}^{\perp}}\colon v_{1}^{\perp} \longrightarrow v_{2}^{\perp},\] where or is the orientation character (11).
Put
\[\Phi^{m,k}=\Psi\circ\widetilde{\Phi}^{m,k}\colon\mathcal{G}^{m,k}\longrightarrow \mathcal{A}_{k}.\]
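Note that the sign in the definition of \(\Psi\) is compatible with compositions: the orientation character (11) is additive, so for composable morphisms \(h\colon(\widetilde{\Lambda}_{1},v_{1},\epsilon_{1})\to(\widetilde{\Lambda}_{2},v_{2},\epsilon_{2})\) and \(g\colon(\widetilde{\Lambda}_{2},v_{2},\epsilon_{2})\to(\widetilde{\Lambda}_{3},v_{3},\epsilon_{3})\) we have
\[\Psi(g\circ h)=(-1)^{\mathrm{or}(g)+\mathrm{or}(h)}\,g|_{v_{2}^{\perp}}\circ h|_{v_{1}^{\perp}}=\Psi(g)\circ\Psi(h),\]
so \(\Psi\) is indeed a functor.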
**Proposition 2.40**.: _There exists an isomorphism of functors_
\[\lambda\colon\Phi^{m,k}\longrightarrow\mathsf{pt}^{m,k}.\]
We recall that if \(\mathcal{A}\) and \(\mathcal{B}\) are two categories and \(F,G\colon\mathcal{A}\to\mathcal{B}\) are two functors, an isomorphism of functors \(\lambda\colon F\to G\) is a natural transformation such that for each \(A\in\mathcal{A}\) the morphism \(\lambda(A)\colon F(A)\to G(A)\) is an isomorphism in \(\mathcal{B}\).
Proof.: First of all, let us define the representations
\[\Phi^{m,k}_{\mathrm{def}}=\Psi\circ\widetilde{\Phi}^{m,k}_{\mathrm{def}}\colon \mathcal{G}^{m,k}_{\mathrm{def}}\longrightarrow\mathcal{A}_{k}\]
and
\[\Phi^{m,k}_{\mathrm{FM}}=\Psi\circ\widetilde{\Phi}^{m,k}_{\mathrm{FM}}\colon \mathcal{G}^{m,k}_{\mathrm{FM}}\longrightarrow\mathcal{A}_{k}.\]
We will prove the existence of two isomorphisms of functors
\[\lambda_{\mathrm{def}}\colon\Phi^{m,k}_{\mathrm{def}}\longrightarrow\mathsf{ pt}^{m,k}_{\mathrm{def}}\qquad\text{ and }\qquad\lambda_{\mathrm{FM}}\colon\Phi^{m,k}_{\mathrm{FM}} \longrightarrow\mathsf{pt}^{m,k}_{\mathrm{FM}},\]
from which the statement will follow by definition.
Let us start with \(\lambda_{\mathrm{def}}\). For any object \((S,v,H)\in\mathcal{G}^{m,k}_{\mathrm{def}}\), Theorem 1.27 provides an isometry
\[\lambda_{(S,v,H)}\colon v^{\perp}\longrightarrow\mathrm{H}^{2}(M_{v}(S,H), \mathbb{Z}).\]
For \(i=1,2\), let \(A_{i}=(S_{i},v_{i},H_{i})\in\mathcal{G}^{m,k}_{\mathrm{def}}\) be two objects, and let us take a morphism \(h\in\operatorname{Hom}_{\mathcal{G}^{m,k}_{\mathrm{def}}}(A_{1},A_{2})\). To conclude the proof of this first step, we need to show that there is a commutative diagram
\[\begin{array}{ccc}v_{1}^{\perp}&\xrightarrow{\ \Phi^{m,k}_{\mathrm{def}}(h)\ }&v_{2}^{\perp}\\ \lambda_{A_{1}}\big\downarrow&&\big\downarrow\lambda_{A_{2}}\\ \operatorname{H}^{2}(M_{v_{1}}(S_{1},H_{1}),\mathbb{Z})&\xrightarrow{\ \mathsf{pt}^{m,k}_{\mathrm{def}}(h)\ }&\operatorname{H}^{2}(M_{v_{2}}(S_{2},H_{2}),\mathbb{Z})\end{array}\tag{13}\]
We first prove the claim when \(h=\overline{\alpha}\), where \(\alpha=(f\colon\mathcal{S}\to T,\mathcal{L},\mathcal{H},t_{1},t_{2},\gamma)\) is a deformation path. In this case, by Remark 2.26, we have that \(\widetilde{\Phi}^{m,k}_{\mathrm{def}}(\overline{\alpha})=p_{\alpha}\) is orientation preserving.
Let \(p\colon\mathcal{M}\to T\) be the family of moduli spaces induced by \((f\colon\mathcal{S}\to T,\mathcal{L},\mathcal{H})\). By definition there exists a flat section \(\mathsf{v}\) of the local system \(R^{\bullet}f_{*}\mathbb{Z}\) such that \(\mathsf{v}_{t_{1}}=v_{1}\) and \(\mathsf{v}_{t_{2}}=v_{2}\). We denote by \(\mathsf{v}^{\perp}\) the corresponding sub-local system of \(R^{\bullet}f_{*}\mathbb{Z}\). By Remark 2.4, the isometries \(\lambda_{A_{1}}\) and \(\lambda_{A_{2}}\) fit in an isomorphism of local systems
\[\lambda_{\mathsf{v}}\colon\mathsf{v}^{\perp}\longrightarrow R^{2}p_{*}\mathbb{Z}.\]
The commutativity of the diagram (13) then follows since both \(\Phi^{m,k}_{\mathrm{def}}(\overline{\alpha})=p_{\alpha}|_{\mathsf{v}^{\perp}}\) and \(\mathsf{pt}^{m,k}_{\mathrm{def}}(\overline{\alpha})=g_{\alpha}\) are parallel transport operators in the two families associated to \(\alpha\) (see also Remark 2.7).
Let us now turn to the case where \(h=\chi_{H_{1},H_{2}}\), i.e. when \((S_{1},v_{1},H_{1})\) and \((S_{2},v_{2},H_{2})\) are congruent: then \(S_{1}=S_{2}=S\), \(v_{1}=v_{2}=v\) and \(\chi_{H_{1},H_{2}}\) is the identification \(M_{v}(S,H_{1})=M_{v}(S,H_{2})\) (see Definitions 1.23 and 2.12). In this case we see that \(\Phi^{m,k}_{\mathrm{def}}(h)=\mathrm{id}_{v^{\perp}}\) and \(\mathsf{pt}^{m,k}_{\mathrm{def}}(h)=\mathrm{id}_{\mathrm{H}^{2}(M_{v}(S,H_{1}),\mathbb{Z})}\), and we have an identification \(\mathrm{H}^{2}(M_{v}(S,H_{1}),\mathbb{Z})=\mathrm{H}^{2}(M_{v}(S,H_{2}),\mathbb{Z})\) and an identification \(\lambda_{A_{1}}=\lambda_{A_{2}}\): diagram (13) is then commutative.
By Definition 2.13, we then deduce the existence of the isomorphism \(\lambda_{\mathrm{def}}\).
We are now left with \(\lambda_{\mathrm{FM}}\), whose construction follows in the same way. First of all, we define it on objects as in the previous case. Let \(A_{1}=(S,v_{1},H)\) and \(A_{2}=(S,v_{2},H)\) be two objects and let \(\phi\in\operatorname{Aut}(\mathrm{D}^{b}(S))\) be an equivalence
of categories that induces an isomorphism \(\phi_{v_{1}}\colon M_{v_{1}}(S,H)\to M_{v_{2}}(S,H)\). We need to prove that there exists a commutative diagram
\[\begin{array}{ccc}v_{1}^{\perp}&\xrightarrow{\ \Phi^{m,k}_{\mathrm{FM}}(\phi)\ }&v_{2}^{\perp}\\ \lambda_{A_{1}}\big\downarrow&&\big\downarrow\lambda_{A_{2}}\\ \operatorname{H}^{2}(M_{v_{1}}(S,H),\mathbb{Z})&\xrightarrow{\ \mathsf{pt}^{m,k}_{\mathrm{FM}}(\phi)\ }&\operatorname{H}^{2}(M_{v_{2}}(S,H),\mathbb{Z})\end{array}\tag{14}\]
Recall that, by definition, \(\mathsf{pt}^{m,k}_{\mathrm{FM}}(\phi)=\phi_{v_{1},*}\) and \(\Phi^{m,k}_{\mathrm{FM}}(\phi)=(-1)^{\mathrm{or}(\phi^{\mathrm{H}})}\phi^{\mathrm{H}}|_{v_{1}^{\perp}}\), where \(\phi^{\mathrm{H}}\) is the action of \(\phi\) on the Mukai lattice and or is the orientation character (11).
First of all, we notice that it is enough to prove the result when \(\phi\) is one of the elementary equivalences \(\mathsf{L}\), \(\mathrm{FM}_{\Delta}\), \(\mathrm{FM}_{\Delta}^{\vee}\), \(\mathrm{FM}_{\mathcal{P}}\) of Definition 2.17.
Now, if \(\phi=\mathsf{L},\mathrm{FM}_{\Delta}\), then \(\phi^{\mathrm{H}}\) is orientation preserving (see [1, Remark 5.4]) and by [15, Proposition 2.4]
\[\lambda_{A_{2}}^{-1}\circ\phi_{v_{1},*}\circ\lambda_{A_{1}}=\phi^{\mathrm{H}} |_{v_{1}^{\perp}}.\]
If \(\phi=\mathrm{FM}_{\Delta}^{\vee}\), then \(\phi^{\mathrm{H}}\) is orientation reversing. In fact the duality equivalence \((-)^{\vee}\in\mathrm{Aut}(\mathrm{D}^{b}(S))\) induces in cohomology the isometry
\[\delta\colon(r,c,s)\mapsto(r,-c,s)\]
which is orientation reversing: it changes the sign of a Kähler class and of a symplectic form, but it is the identity on the hyperbolic plane generated by \(\mathrm{H}^{0}(S,\mathbb{Z})\) and \(\mathrm{H}^{4}(S,\mathbb{Z})\). In this case, by [15, Proposition 2.5] we have that
\[\lambda_{A_{2}}^{-1}\circ\phi_{v_{1},*}\circ\lambda_{A_{1}}=-\phi^{\mathrm{H}} |_{v_{1}^{\perp}}.\]
Finally, if \(\phi=\mathrm{FM}_{\mathcal{P}}\), then by [1, Remark 5.4, Proposition 5.5] \(\phi^{\mathrm{H}}\) is orientation preserving and by [15, Proposition 2.4]
\[\lambda_{A_{2}}^{-1}\circ\phi_{v_{1},*}\circ\lambda_{A_{1}}=\phi^{\mathrm{H}} |_{v_{1}^{\perp}}.\]
These are exactly the commutativity conditions needed to define the isomorphism \(\lambda_{\mathrm{FM}}\), and we are done.
## 3. Polarised monodromy of K3 surfaces and its lift to moduli spaces
The first part of this section is dedicated to showing that the monodromy group of a K3 surface can be generated only by polarised monodromy operators. This result is well-known to experts but a rigorous proof is lacking in the literature.
In the second and last part of the section we will instead show how to lift polarised monodromy operators on a K3 surface to some moduli spaces of sheaves on the same K3 surface.
### The monodromy group of a K3 surface
The aim of this section is to prove the following result.
**Theorem 3.1**.: _Let \(S\) be a projective K3 surface. Then the monodromy group \(\mathrm{Mon}^{2}(S)\) is generated by polarised parallel transport operators._
In the statement above we mean that there exists a set of generators of \(\operatorname{Mon}^{2}(S)\), each of which is a composition of polarised parallel transport operators (see Definition 1.15 for the notion of polarised parallel transport operator).
By deforming the K3 surface via polarised families, it is enough to prove the statement for a special example. We will work with a projective elliptic K3 surface \(p\colon S\to\mathbb{P}^{1}\) with a section and with Picard rank 2. In particular, if we denote by \(f\) the class of the fibre and by \(\ell\) the class of the section, then
\[\operatorname{Pic}(S)=\mathbb{Z}\,\ell\oplus\mathbb{Z}\,f,\qquad\text{with intersection matrix}\quad\begin{pmatrix}-2&1\\ 1&0\end{pmatrix},\]
so that \(\operatorname{Pic}(S)\) is the unimodular hyperbolic plane. Let us put \(e=\ell+f\), so that \(e\) and \(f\) form the standard basis of the hyperbolic plane.
Let us recall that in this case a class \(h=\alpha e+\beta f\) is ample when \(\alpha>0\) and the ratio \(\beta/\alpha>1\). Let us now choose three positive integers \(r,s,p\gg 0\), and consider the following ample classes,
\[h_{1}=e{+}rf,\qquad h_{2}=e{+}(r{-}1)f,\qquad h_{3}=se{+}pf,\quad\text{and} \quad h_{4}=(s{-}1)e{+}pf.\]
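Let us verify this criterion directly on these classes: since the only \((-2)\)-classes in \(\operatorname{Pic}(S)\) are \(\pm\ell\) and \(f\) is the fibre class, a class \(h=\alpha e+\beta f\) is ample if and only if \(h^{2}>0\), \(h\cdot f>0\) and \(h\cdot\ell>0\); explicitly
\[h^{2}=2\alpha\beta,\qquad h\cdot f=\alpha,\qquad h\cdot\ell=h\cdot(e-f)=\beta-\alpha,\]
so that \(h\) is ample if and only if \(\beta>\alpha>0\). In particular \(h_{1}\) and \(h_{2}\) are ample for \(r>2\), and \(h_{3}\) and \(h_{4}\) are ample as soon as \(p>s\).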
We will prove the following result, which will imply Theorem 3.1.
**Proposition 3.2**.: _Let \(S\) be a projective elliptic K3 surface with a section and Picard rank \(2\). Let \(H_{1}\), \(H_{2}\), \(H_{3}\) and \(H_{4}\) be four polarisations whose classes are the classes \(h_{1}\), \(h_{2}\), \(h_{3}\) and \(h_{4}\) above. Then_
\[\operatorname{Mon}^{2}(S)=\langle\operatorname{Mon}^{2}(S,H_{1}), \operatorname{Mon}^{2}(S,H_{2}),\operatorname{Mon}^{2}(S,H_{3}),\operatorname {Mon}^{2}(S,H_{4})\rangle.\]
First of all, let us recall the following well-known result due to [11, Theorem 3.4.1 and Corollary 3.4.2]. The following statement can be found also in [10, Proposition 6.8].
**Lemma 3.3** ([11]).: _Let \(S\) be a projective K3 surface, \(H\in\operatorname{Pic}(S)\) an ample line bundle and \(h\in\operatorname{H}^{2}(S,\mathbb{Z})\) its class. Then_
\[\operatorname{Mon}^{2}(S,H)=\operatorname{O}^{+}(\operatorname{H}^{2}(S, \mathbb{Z}))_{h},\]
_where \(\operatorname{O}^{+}(\operatorname{H}^{2}(S,\mathbb{Z}))_{h}\) is the group of orientation preserving isometries \(g\) such that \(g(h)=h\)._
_Moreover, the whole group \(\operatorname{O}^{+}(\operatorname{H}^{2}(S,\mathbb{Z}))_{h}\) arises as the monodromy group of a projective polarised family \(f\colon\mathcal{S}\to T\) of K3 surfaces over a smooth and quasi-projective base \(T\)._
The smooth and quasi-projective base \(T\) of the family above is the image under the period map of the moduli space of polarised K3 surfaces with fixed degree (see [10, Section 6.3] for more details).
The last ingredient we need is the following lattice-theoretic result, which is a straightforward application of the Eichler criterion. We remark that a very similar result already appeared in the proofs of [10, Lemma 3.5] and [14, Theorem 5.4] for similar purposes.
Let \(L\) be an even lattice and let us assume that \(L\) contains at least three copies of the unimodular hyperbolic plane \(U\). We denote by \(U\) one distinguished such copy, with basis \(\{e,f\}\), and we write \(L=U\oplus L_{1}\). Given \(s,r,p>1\), we use the following notation,
\[h_{1}=e{+}rf,\qquad h_{2}=e{+}(r{-}1)f,\qquad h_{3}=se{+}pf,\quad\text{and} \quad h_{4}=(s{-}1)e{+}pf.\]
**Lemma 3.4**.: _With the notation above, we have_
\[\operatorname{O}^{+}(L)=\langle\operatorname{O}^{+}(L)_{h_{1}},\operatorname{O}^ {+}(L)_{h_{2}},\operatorname{O}^{+}(L)_{h_{3}},\operatorname{O}^{+}(L)_{h_{4}}\rangle.\]
Here \(\operatorname{O}^{+}(L)_{h_{i}}\) is the subgroup of \(\operatorname{O}^{+}(L)\) of the elements fixing \(h_{i}\).
Proof.: First of all, by [10, Proposition 3.3 (iii)], we have that
\[\operatorname{O}^{+}(L)=\langle\operatorname{O}^{+}(L_{1}),E_{U}(L_{1})\rangle,\]
where \(\operatorname{O}^{+}(L_{1})\) is embedded in \(\operatorname{O}^{+}(L)\) by extending any isometry as the identity on \(U\), and where \(E_{U}(L_{1})\) is the group generated by transvections of the form \(t(e,a)\) and \(t(f,a)\) for all \(a\in L_{1}\). We recall that a transvection is defined in the following way: for any \(z\in L\) with \(z^{2}=0\) and \(a\in z^{\perp}\), then
\[t(z,a)\colon x\mapsto x-(a,x)z+(z,x)a-\frac{1}{2}(a,a)(z,x)z.\]
We refer to [10, Section 3] for backgrounds and main properties, but we point out that if \(a^{2}\neq 0\), then the extension of \(t(z,a)\) over \(\mathbb{Q}\) is the composition of the reflection \(R_{a}\) with the reflection \(R_{a+\frac{1}{2}(a,a)z}\). Any transvection \(t(z,a)\) is an orientation preserving isometry with determinant \(1\) (acting trivially on the discriminant group of \(L\)).
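As a consistency check, let us verify that \(t(z,a)\) preserves the quadratic form: using \(z^{2}=0\) and \((z,a)=0\), for every \(x\in L\) one computes
\[t(z,a)(x)^{2}=x^{2}+2(z,x)(a,x)-2(a,x)(z,x)+(a,a)(z,x)^{2}-(a,a)(z,x)^{2}=x^{2}.\]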
As \(h_{i}\in U\), it follows that \(\operatorname{O}^{+}(L_{1})\subset\operatorname{O}^{+}(L)_{h_{i}}\) for \(i=1,2,3,4\). Therefore, if we denote by \(G\) the group generated by the \(\operatorname{O}^{+}(L)_{h_{i}}\)'s, we only need to show that \(t(e,a),t(f,a)\in G\) for all \(a\in L_{1}\).
By assumption \(L_{1}\) contains at least two copies of the hyperbolic plane, so that we can apply [10, Proposition 3.3 (ii)] and find an isometry \(g\in\operatorname{O}^{+}(L_{1})\) such that \(g(a)\in\bar{U}\), where \(\bar{U}\) is a distinguished copy of the hyperbolic plane in \(L_{1}\). By [10, Relation (6) in Section 3] we have that
\[g\circ t(z,a)\circ g^{-1}=t(z,g(a))\]
for any isotropic element \(z\). In particular, since we already remarked that \(\operatorname{O}^{+}(L_{1})\subset G\), it is enough to prove that \(t(e,a),t(f,a)\in G\) for all \(a\in\bar{U}\).
As already remarked, \(t(e,a)\) and \(t(f,a)\) are orientation preserving isometries with determinant \(1\); moreover, since \(a\in\bar{U}\), \(t(e,a)\) and \(t(f,a)\) both act as the identity on \((U\oplus\bar{U})^{\perp}\), and without loss of generality they can then be viewed as elements in \(\operatorname{SO}^{+}(U\oplus\bar{U})\).
If we denote by \(\{\bar{e},\bar{f}\}\) a standard basis for \(\bar{U}\), then by [10, Lemma 3.2] we know that \(\operatorname{SO}^{+}(U\oplus\bar{U})\) is generated by the four transvections
\[t(\bar{e},e),\qquad t(\bar{e},f),\qquad t(\bar{f},e)\qquad\text{and}\qquad t( \bar{f},f).\]
So it is enough to prove that each of them is in \(G\). Let us show that \(t(\bar{e},f)\in G\); the others will follow similarly.
We use the following remark, which follows directly from the definition of transvection: if \(a\in U^{\perp}\), then \(t(e,a)\) and \(t(f,a)\) are the identity on the sublattice \(U^{\perp}\cap a^{\perp}\). In particular, swapping the roles of \(U\) and \(\bar{U}\), the transvection \(t(\bar{e},e-cf)\) acts as the identity on \(e+cf\) for every integer \(c\).
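For instance, with \(h_{1}=e+rf\) we have \((e-rf,h_{1})=r-r=0\) and \((\bar{e},h_{1})=0\), so the formula defining transvections gives
\[t(\bar{e},e-rf)(h_{1})=h_{1}-(e-rf,h_{1})\,\bar{e}+(\bar{e},h_{1})(e-rf)-\tfrac{1}{2}(e-rf)^{2}(\bar{e},h_{1})\,\bar{e}=h_{1},\]
i.e. \(t(\bar{e},e-rf)\) fixes \(h_{1}\), as used in the next step.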
By [10, Relation (5) in Section 3], we have in general that \(t(z,a)\circ t(z,b)=t(z,a+b)\) and \(t(z,-a)=t(z,a)^{-1}\). In our situation this says that
\[t(\bar{e},e-rf)\circ t(\bar{e},-e+(r-1)f)=t(\bar{e},f)^{-1},\]
but as remarked \(t(\bar{e},e-rf)\in\operatorname{O}^{+}(L)_{h_{1}}\) and similarly \(t(\bar{e},-e+(r-1)f)=t(\bar{e},e-(r-1)f)^{-1}\in\operatorname{O}^{+}(L)_{h_{2}}\), concluding the proof.
Proof of Proposition 3.2.: Let us keep the notations as in the statement of Proposition 3.2. By Lemma 3.3 we have that \(\operatorname{Mon}^{2}(S,H_{i})=\operatorname{O}^{+}(\operatorname{H}^{2}(S, \mathbb{Z}))_{h_{i}}\), for \(i=1,2,3,4\). In particular, by Lemma 3.4, we have that
\[\operatorname{O}^{+}(\operatorname{H}^{2}(S,\mathbb{Z}))=\langle\operatorname {Mon}^{2}(S,H_{1}),\operatorname{Mon}^{2}(S,H_{2}),\operatorname{Mon}^{2}(S,H_ {3}),\operatorname{Mon}^{2}(S,H_{4})\rangle.\]
Since \(\operatorname{Mon}^{2}(S)\subset\operatorname{O}^{+}(\operatorname{H}^{2}(S, \mathbb{Z}))\) (see Lemma 1.13), the claim follows.
Let us conclude with the following corollary, which recovers a well-known result of Borel.
**Corollary 3.5** ([1]).: _If \(S\) is a K3 surface, then_
\[\operatorname{Mon}^{2}(S)=\operatorname{O}^{+}(\operatorname{H}^{2}(S, \mathbb{Z})).\]
### Lift of the polarised monodromy of a K3 surface to moduli spaces of sheaves
In this section we exhibit some special cases in which we can lift polarised monodromy operators of a K3 surface \(S\) to monodromy operators on a moduli space \(M_{v}(S,H)\), where \((S,v,H)\) is a suitable \((m,k)\)-triple. We will achieve this by using the groupoid representations defined in Section 2.4.
Since \(\operatorname{H}^{2}(S,\mathbb{Z})\subset\widetilde{\operatorname{H}}(S, \mathbb{Z})\), the group \(\operatorname{O}(\operatorname{H}^{2}(S,\mathbb{Z}))\) is identified with the subgroup of \(\operatorname{O}(\widetilde{\operatorname{H}}(S,\mathbb{Z}))\) of isometries acting as the identity on \(\operatorname{H}^{0}(S,\mathbb{Z})\oplus\operatorname{H}^{4}(S,\mathbb{Z})\).
On the other hand, if \((S,v,H)\) is an \((m,k)\)-triple, using the notation set in Section 2.1, we consider the homomorphism of groups
\[\widetilde{\Phi}^{m,k}_{\operatorname{def},(S,v,H)}\colon\operatorname{Aut}_{ \mathcal{G}^{m,k}_{\operatorname{def}}}(S,v,H)\longrightarrow\operatorname{ Aut}_{\widetilde{\mathcal{H}}^{m,k}}(\tilde{\operatorname{H}}(S,\mathbb{Z}),v, \epsilon_{S})\]
induced by the representation \(\widetilde{\Phi}^{m,k}_{\operatorname{def}}\colon\mathcal{G}^{m,k}_{ \operatorname{def}}\to\widetilde{\mathcal{H}}^{m,k}\) defined in Definition 2.24. Since by definition
\[\operatorname{Aut}_{\widetilde{\mathcal{H}}^{m,k}}(\tilde{\operatorname{H}}(S,\mathbb{Z}),v,\epsilon_{S})=\operatorname{O}(\widetilde{\operatorname{H}}(S,\mathbb{Z}))_{v},\]
it follows that also
\[\operatorname{Im}(\widetilde{\Phi}^{m,k}_{\operatorname{def},(S,v,H)})\subset \operatorname{O}(\widetilde{\operatorname{H}}(S,\mathbb{Z})).\]
In the following lemma, we relate the subgroup \(\operatorname{Mon}^{2}(S,H)\subset\operatorname{O}(\operatorname{H}^{2}(S, \mathbb{Z}))\) with \(\operatorname{Im}(\widetilde{\Phi}^{m,k}_{\operatorname{def},(S,v,H)})\), at least when the Mukai vector is \((m,0,-mk)\).
We will also relate \(\operatorname{Mon}^{2}(S,H)\) with the image of the homomorphism of groups
\[\mathfrak{pt}^{m,k}_{\operatorname{def},(S,v,H)}\colon\operatorname{Aut}_{\mathcal{G}^{m,k}_{\operatorname{def}}}(S,v,H)\longrightarrow\operatorname{Aut}_{\mathcal{A}_{k}}(\operatorname{H}^{2}(M_{v}(S,H),\mathbb{Z}))\]
induced by the representation \(\mathfrak{pt}^{m,k}_{\operatorname{def}}\colon\mathcal{G}^{m,k}_{\operatorname{def}}\to\mathcal{A}_{k}\) defined in Definition 2.33.
**Lemma 3.6**.: _Let \(m,k\geq 1\) be two integers and \(v=(m,0,-mk)\) a Mukai vector on a projective K3 surface \(S\). If \((S,v,H)\in\mathcal{G}^{m,k}_{\operatorname{def}}\) is an object, then there is an inclusion of groups_
\[\operatorname{Mon}^{2}(S,H)\subset\operatorname{Im}(\widetilde{\Phi}^{m,k}_{ \operatorname{def},(S,v,H)})\]
_and an injective morphism of groups_
\[\operatorname{Mon}^{2}(S,H)\hookrightarrow\operatorname{Im}(\mathfrak{pt}^{m,k}_{\operatorname{def},(S,v,H)}).\]
Proof.: Let us start with the inclusion of \(\operatorname{Mon}^{2}(S,H)\) in \(\operatorname{Im}(\widetilde{\Phi}^{m,k}_{\operatorname{def},(S,v,H)})\). Recall from Lemma 3.3 that there exists a projective polarised family \(f\colon\mathcal{S}\to T\) of projective K3 surfaces such that any isometry in \(\operatorname{Mon}^{2}(S,H)\) arises as a parallel transport operator in the local system \(R^{2}f_{*}\mathbb{Z}\), and such that the base \(T\) is smooth and quasi-projective. More precisely, the family \(f\) comes with the following properties:
* there exists \(\bar{t}\in T\) such that \(\mathcal{S}_{\bar{t}}=S\);
* there exists a relatively ample line bundle \(\mathcal{H}\) on \(\mathcal{S}\) such that \(\mathcal{H}_{\bar{t}}=H\).
Any \(g\in\operatorname{Mon}^{2}(S,H)\) is then associated to a loop \(\gamma\) in \(T\) centred at \(\bar{t}\).
As in the discussion before the lemma, we can view \(g\) as an element of \(\operatorname{Aut}_{\widetilde{\mathcal{H}}^{m,k}}(\widetilde{\operatorname{ H}}(S,\mathbb{Z}),v,\epsilon_{S})\), and we have to prove that there is a deformation path \(\alpha\) from \((S,v,H)\) to itself such that \(g=\widetilde{\Phi}^{m,k}_{\operatorname{def}}(\bar{\alpha})\).
To do so, we first notice that the subset \(Z\subset T\) of the points \(t\in T\) such that \(\mathcal{H}_{t}\) is not \(v_{t}\)-generic is a closed subset of \(T\) (see Remark 2.3). Moreover, since by assumption \(\mathcal{H}_{\bar{t}}\) is \(v\)-generic, \(Z\) is strictly contained in \(T\) and hence it must have real codimension at least \(2\). In particular, since the base \(T\) is smooth, there exists a loop \(\gamma^{\prime}\) in \(T^{\prime}:=T\setminus Z\) centred at \(\bar{t}\) that is homotopic to \(\gamma\) (see for example [1, Théorème 2.3 in Chapter X]).
Since parallel transport operators do not depend on the homotopy class of the path, we may suppose without loss of generality that \(\mathcal{H}_{t}\) is \(v\)-generic for every \(t\in T\).
By assumption the Mukai vector is of the form \((m,0,-mk)\), therefore it follows that
\[\alpha=(f\colon\mathcal{S}\to T,\mathcal{O}_{\mathcal{S}},\mathcal{H},\bar{t },\bar{t},\gamma)\]
defines a morphism \(\overline{\alpha}\) in \(\operatorname{Aut}_{\mathcal{G}^{m,k}_{\operatorname{def}}}(S,v,H)\). By definition, we get that
\[g=\widetilde{\Phi}^{m,k}_{\operatorname{def}}(\overline{\alpha}),\]
concluding the first part of the proof.
Now, since by Remark 2.26 the isometries of \(\operatorname{Mon}^{2}(S,H)\) preserve the orientation of \(\widetilde{\operatorname{H}}(S,\mathbb{Z})\), it follows that the inclusion just proved gives an injective map
\[\operatorname{Mon}^{2}(S,H)\hookrightarrow\operatorname{Im}(\Phi^{m,k}_{ \operatorname{def},(S,v,H)}).\]
Combining this with the isomorphism of functors \(\lambda_{\operatorname{def}}\colon\Phi^{m,k}_{\operatorname{def}}\to \mathfrak{pt}^{m,k}_{\operatorname{def}}\) constructed in the proof of Proposition 2.40 finishes the proof.
**Remark 3.7**.: The claim of Lemma 3.6 holds more generally when the Mukai vector has the form \((r,0,s)\), but we will only use it in the special case of the statement.
**Remark 3.8**.: If \(v=(m,0,-mk)\), then \(\operatorname{H}^{2}(S,\mathbb{Z})\subset v^{\perp}\) and again the group \(\operatorname{O}(\operatorname{H}^{2}(S,\mathbb{Z}))\) is naturally embedded in \(\operatorname{O}(v^{\perp})\) by extending with the identity. Using the isometry \(\lambda_{(S,v,H)}\colon v^{\perp}\longrightarrow\operatorname{H}^{2}(M_{v}(S,H),\mathbb{Z})\), we have a natural injective morphism
\[\operatorname{O}(\operatorname{H}^{2}(S,\mathbb{Z}))\hookrightarrow\operatorname{O}(\operatorname{H}^{2}(M_{v}(S,H),\mathbb{Z}))\]
of which
\[\operatorname{Mon}^{2}(S,H)\hookrightarrow\operatorname{Im}(\mathfrak{pt}^{m,k }_{\operatorname{def},(S,v,H)})\]
is the restriction.
The following is the main result of this section. In what follows by a _very general projective elliptic K3 surface with a section_ we mean a projective elliptic K3 surface having a section and having Picard rank 2. As in Section 3.1, we denote by \(f\) the class of a fibre and by \(\ell\) the class of the section. Then \(\operatorname{Pic}(S)\) is isometric to the unimodular hyperbolic plane with standard basis \(e=\ell+f\) and \(f\).
**Proposition 3.9**.: _Assume that \(v=(m,0,-mk)\) and \(S\) is a very general projective elliptic K3 surface with a section. There exists an integer \(t\gg 0\) and a \(v\)-generic polarisation \(H=e+tf\) such that_
\[\operatorname{Mon}^{2}(S)\subset\operatorname{Im}(\widetilde{\Phi}^{m,k}_{ \operatorname{def},(S,v,H)}).\]
Proof.: By Proposition 3.2 we know that there are four polarisations \(H_{1}\), \(H_{2}\), \(H_{3}\) and \(H_{4}\) on \(S\) such that \(\operatorname{Mon}^{2}(S)\) is generated by the \(\operatorname{Mon}^{2}(S,H_{i})\)'s.
Now, we claim that the four polarisations \(H_{1}\), \(H_{2}\), \(H_{3}\) and \(H_{4}\) can be chosen to belong to the same \(v\)-chamber (see Remark 1.20), so that they are \(v\)-generic.
In particular, by Lemma 3.6 we will have that
\[\operatorname{Mon}^{2}(S,H_{i})\subset\operatorname{Im}(\widetilde{\Phi}^{m,k }_{\operatorname{def},(S,v,H_{i})}),\]
and by Remark 1.24 the groups \(\operatorname{Im}(\widetilde{\Phi}^{m,k}_{\operatorname{def},(S,v,H_{i})})\) can all be identified: the proposition will then follow from Proposition 3.2.
Indeed, let us recall how the polarisations \(H_{i}\) are chosen. If \(f\) is the class of the fibration of the elliptic surface \(S\) and \(\ell\) is the class of the section, then we put \(e=\ell+f\) and we choose
\[H_{1}=e+rf,\quad H_{2}=e+(r-1)f,\quad H_{3}=se+pf,\quad\text{and}\quad H_{4}=( s-1)e+pf\]
for \(r,s,p\gg 0\). Now, by [1, Lemma I.0.3] (see also [1, Definition 2.37 and Lemma 2.38]), if \(r\) is big enough and \(s\) and \(p\) are chosen such that the quotient \(p/s\) is big enough, then all the \(H_{i}\)'s are contained in the unique \(v\)-chamber whose closure contains the class \(f\).
We may now conclude the proof: choose a polarisation \(H\) on \(S\) that lies in the same \(v\)-chamber as \(H_{1},\dots,H_{4}\), i.e. \(H\) belongs to the unique \(v\)-chamber whose closure contains \(f\). By definition of \(\mathcal{G}^{m,k}_{\operatorname{def}}\), as \(H\) and \(H_{i}\) lie in the closure of the same \(v\)-chamber for \(i=1,2,3,4\), we have an isomorphism \(\chi_{H,H_{i}}\colon(S,v,H)\to(S,v,H_{i})\) in \(\mathcal{G}^{m,k}_{\operatorname{def}}\) and hence an isomorphism of groups
\[\chi^{\sharp}_{H,H_{i}}\colon\,\operatorname{Aut}_{\mathcal{G}^{m,k}_{ \operatorname{def}}}(S,v,H)\longrightarrow\operatorname{Aut}_{\mathcal{G}^{m,k }_{\operatorname{def}}}(S,v,H_{i})\]
given by conjugation with \(\chi_{H,H_{i}}\). Since \(\widetilde{\Phi}^{m,k}_{\operatorname{def}}(\chi_{H,H_{i}})\) is the identity, we get that \(\operatorname{Im}(\widetilde{\Phi}^{m,k}_{\operatorname{def},(S,v,H_{i})})= \operatorname{Im}(\widetilde{\Phi}^{m,k}_{\operatorname{def},(S,v,H)})\) for every \(i=1,2,3,4\), and this concludes the proof.
**Remark 3.10**.: If \(v\) is primitive, then the same result was proved in [1, Corollary 6.7]. We wish to point out that even though the idea of our proof and Markman's one is the same, they differ inasmuch as Markman uses non-polarised deformations of K3 surfaces (compare [1, Definition 6.2] with our Definition 2.13). If \(v\) is primitive this is justified by the fact that \(M_{(1,0,-k)}(S,H)\cong\operatorname{Hilb}^{k+1}(S)\). On the other hand, if \(v\) is not primitive we are forced to use polarised families.
**Corollary 3.11**.: _Assume that \(v=(m,0,-mk)\) and \(S\) is a very general projective elliptic K3 surface with a section. If \(H\) is a \(v\)-generic polarisation contained in the unique \(v\)-chamber whose closure contains the nef class \(f\), then there is an injective morphism_
\[\operatorname{Mon}^{2}(S)\hookrightarrow\operatorname{Mon}^{2}_{\operatorname{lt }}(M_{v}(S,H)).\]
Proof.: As in the last part of the proof of Lemma 3.6, by Remark 2.26 and Proposition 2.40 the inclusion of Proposition 3.9 gives an injective morphism
\[\operatorname{Mon}^{2}(S)\hookrightarrow\operatorname{Im}(\mathfrak{pt}^{m,k}_ {\operatorname{def},(S,v,H)}).\]
Then the claim follows from Corollary 2.38.
Finally, we want to exhibit an analogous version of Lemma 3.6 when the K3 surface is general enough, which will be useful later in Section 4.1.
**Lemma 3.12**.: _Let \(S\) be a projective K3 surface with \(\operatorname{Pic}(S)=\mathbb{Z}.H\) and \(v=(r,mH,s)\) a Mukai vector. If \((S,v,H)\) is an \((m,k)\)-triple, then there exists an inclusion of groups_
\[\operatorname{Mon}^{2}(S,H)\subset\operatorname{Im}(\widetilde{\Phi}^{m,k}_{ \operatorname{def},(S,v,H)})\]
_and an injective morphism of groups_
\[\operatorname{Mon}^{2}(S,H)\hookrightarrow\operatorname{Im}(\mathfrak{pt}^{m,k }_{\operatorname{def},(S,v,H)}).\]
Proof.: The proof runs verbatim as that of Lemma 3.6; the only difference is that, with the same notation, the deformation path is now
\[\alpha=(f\colon\mathcal{S}\to T,\mathcal{H}^{\otimes m},\mathcal{H},\bar{t}, \bar{t},\gamma).\]
## 4. The locally trivial monodromy group
Let \((S,v,H)\) be an \((m,k)\)-triple. As usual we write \(v=mw\), with \(w\) primitive, and we consider the \((1,k)\)-triple \((S,w,H)\) (cf. Remark 1.22).
By Remark 1.26 we can identify the (smooth) moduli space \(M_{w}(S,H)\) with the most singular locus of the (singular) moduli space \(M_{v}(S,H)\). In particular we consider the closed embedding
\[i_{w,m}\colon M_{w}(S,H)\longrightarrow M_{v}(S,H).\]
The aim of this section is to relate the locally trivial monodromy group of \(M_{v}(S,H)\) to the monodromy group of \(M_{w}(S,H)\) by means of the morphism \(i^{\sharp}_{w,m}\) defined in Lemma 1.30.
In Section 4.1 we construct locally trivial monodromy operators and show that those operators form a distinguished group.
In Section 4.2 we prove that \(i^{\sharp}_{w,m}\) is injective.
In Section 4.3 we show, using the morphism \(i^{\sharp}_{w,m}\) as a constraint, that the monodromy operators constructed before exhaust the whole locally trivial monodromy group; as a corollary we deduce that \(i^{\sharp}_{w,m}\) is an isomorphism.
### The group \(\mathsf{W}(v^{\perp})\) as a subgroup of \(\operatorname{Mon}^{2}_{\operatorname{lt}}(M_{v}(S,H))\)
Let us recall the following notation. If \(L\) is an abstract lattice, the nondegenerate bilinear form embeds \(L\) into its dual \(L^{*}=\operatorname{Hom}(L,\mathbb{Z})\): the group \(L^{*}/L\) is denoted by \(A_{L}\) and called the _discriminant group_ of \(L\). Any isometry \(g\in\operatorname{O}(L)\) extends to an isometry of \(L^{*}\) and hence induces an automorphism of \(A_{L}\). We denote by \(\mathsf{W}(L)\subset\operatorname{O}^{+}(L)\) the subgroup of orientation preserving isometries acting as \(\pm\operatorname{id}\) on \(A_{L}\).
When \(L=\operatorname{H}^{2}(X,\mathbb{Z})\) for an irreducible symplectic variety \(X\), then we simply write \(\mathsf{W}(X)\) instead of \(\mathsf{W}(\operatorname{H}^{2}(X,\mathbb{Z}))\).
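As a basic example, any reflection \(R_{\delta}(x)=x+(x,\delta)\delta\) in a class \(\delta\in L\) with \(\delta^{2}=-2\) acts trivially on \(A_{L}\): for every \(x\in L^{*}\) the value \((x,\delta)\) is an integer, so
\[R_{\delta}(x)-x=(x,\delta)\delta\in L.\]
In particular such a reflection belongs to \(\mathsf{W}(L)\) as soon as it preserves the orientation; reflections of this kind will appear below as generators.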
**Remark 4.1**.: Let \(S\) be a projective K3 surface and \(v\in\widetilde{\operatorname{H}}(S,\mathbb{Z})\) an element. The group \(\mathsf{W}(v^{\perp})\) consists of orientation preserving isometries of \(v^{\perp}\) that extend to isometries of \(\widetilde{\operatorname{H}}(S,\mathbb{Z})\). More precisely, if \(g\in\mathsf{W}(v^{\perp})\), then there exists \(\tilde{g}\in\operatorname{O}^{+}(\widetilde{\operatorname{H}}(S,\mathbb{Z}))\) such that \(\tilde{g}(v)=\pm v\) and \(\tilde{g}|_{v^{\perp}}=g\) (see [11, Lemma 4.10]).
The main result of this section is the following.
**Theorem 4.2**.: _Let \((S,v,H)\) be an \((m,k)\)-triple. Then_
\[\mathsf{W}(M_{v}(S,H))\subset\operatorname{Mon}^{2}_{\operatorname{lt}}(M_{v} (S,H)).\]
**Remark 4.3**.: When \(v\) is primitive, this statement is the main result in [11]. In fact our proof parallels Markman's and differs from his only in some technical details arising from the non-primitivity of the Mukai vector.
Our tool to produce locally trivial monodromy operators is the representation \(\mathsf{pt}^{m,k}\colon\mathcal{G}^{m,k}\to\mathcal{A}_{k}\) defined in Definition 2.36 (cf. Corollary 2.38); on the other hand, because of the isomorphism \(\lambda\colon\Phi^{m,k}\to\mathsf{pt}^{m,k}\) (cf. Proposition 2.40), we will instead look at the representation \(\Phi^{m,k}\colon\mathcal{G}^{m,k}\to\mathcal{A}_{k}\) defined in Definition 2.30.
Since \(\Phi^{m,k}=\Psi\circ\widetilde{\Phi}^{m,k}\), our most technical result involves the representation \(\widetilde{\Phi}^{m,k}\colon\mathcal{G}^{m,k}\to\widetilde{\mathcal{H}}^{m,k}\) defined in Definition 2.30.
If \((S,v,H)\) is an \((m,k)\)-triple, then we have the induced homomorphism
\[\widetilde{\Phi}^{m,k}_{(S,v,H)}\colon\operatorname{Aut}_{\mathcal{G}^{m,k}}(S,v,H)\longrightarrow\operatorname{Aut}_{\widetilde{\mathcal{H}}^{m,k}}(\widetilde{\operatorname{H}}(S,\mathbb{Z}),v,\epsilon_{S})\]
(cf. Section 2.1 for the notation used).
**Proposition 4.4**.: _If \((S,v,H)\) is an \((m,k)\)-triple, then_
\[\operatorname{Im}(\widetilde{\Phi}^{m,k}_{(S,v,H)})=\operatorname{O}( \widetilde{\operatorname{H}}(S,\mathbb{Z}))_{v}.\]
We will now show how the theorem follows from the proposition and then we will spend the rest of the section proving the proposition.
Proof of Theorem 4.2 assuming Proposition 4.4.: By definition, if \((\Lambda,v,\epsilon)\) is an object of \(\widetilde{\mathcal{H}}^{m,k}\), then we have a homomorphism of groups
\[\Psi_{(\Lambda,v,\epsilon)}\colon\operatorname{O}(\Lambda)_{v}\longrightarrow \operatorname{O}^{+}(v^{\perp}),\qquad g\mapsto(-1)^{\operatorname{or}(g)}g|_ {v^{\perp}}\]
induced by the representation \(\Psi\colon\widetilde{\mathcal{H}}^{m,k}\to\mathcal{A}_{k}\) defined in Definition 2.39. Moreover, by [11, Lemma 4.10] it follows that
\[\operatorname{Im}(\Psi_{(\Lambda,v,\epsilon)})=\mathsf{W}(v^{\perp}).\]
On the other hand, if \((S,v,H)\) is an \((m,k)\)-triple, by Proposition 4.4 the homomorphism
\[\widetilde{\Phi}^{m,k}\colon\operatorname{Aut}_{\mathcal{G}^{m,k}}(S,v,H) \longrightarrow\operatorname{Aut}_{\widetilde{\mathcal{H}}^{m,k}}(\widetilde{ \operatorname{H}}(S,\mathbb{Z}),v,\epsilon_{S})\]
is surjective. Since by definition \(\Phi^{m,k}=\Psi\circ\widetilde{\Phi}^{m,k}\), it follows that
\[\operatorname{Im}(\Phi^{m,k}_{(S,v,H)})=\mathsf{W}(v^{\perp}). \tag{15}\]
Now, let us consider the isometry
\[\lambda_{(S,v,H)}\colon v^{\perp}\longrightarrow\operatorname{H}^{2}(M_{v}(S, H),\mathbb{Z})\]
in Theorem 1.27 and the induced isomorphism
\[\lambda^{\sharp}_{(S,v,H)}\colon\operatorname{O}(v^{\perp})\longrightarrow \operatorname{O}(\operatorname{H}^{2}(M_{v}(S,H),\mathbb{Z}))\]
defined in equation (3). Then we have a chain of equalities:
\[\begin{aligned}\operatorname{Im}(\mathfrak{pt}^{m,k}_{(S,v,H)})&=\lambda^{\sharp}_{(S,v,H)}\bigl(\operatorname{Im}(\Phi^{m,k}_{(S,v,H)})\bigr)&&\text{(by Proposition 2.40)}\\ &=\lambda^{\sharp}_{(S,v,H)}(\mathsf{W}(v^{\perp}))&&\text{(by equality (15))}\\ &=\mathsf{W}(M_{v}(S,H)).\end{aligned}\]
Therefore the claim follows at once by Corollary 2.38.
Let us now prove Proposition 4.4.
Proof of Proposition 4.4.: Notice that
\[\operatorname{Im}(\widetilde{\Phi}^{m,k}_{(S,v,H)})\subset\operatorname{Aut}_ {\widetilde{\mathcal{H}}^{m,k}}(\widetilde{\operatorname{H}}(S,\mathbb{Z}),v )=\operatorname{O}(\widetilde{\operatorname{H}}(S,\mathbb{Z}))_{v},\]
where the last equality follows by definition.
To prove the opposite inclusion, we first notice that it is sufficient to prove it for a special choice of the \((m,k)\)-triple \((S,v,H)\). Indeed, if \((S^{\prime},v^{\prime},H^{\prime})\) is any other \((m,k)\)-triple, then by Remark 2.19 there exists a non-zero element
\[\eta\in\operatorname{Hom}_{\mathcal{G}^{m,k}}((S,v,H),(S^{\prime},v^{\prime}, H^{\prime}))\]
and
\[\operatorname{Im}(\widetilde{\Phi}^{m,k}_{(S^{\prime},v^{\prime},H^{\prime})} )=\widetilde{\Phi}^{m,k}(\eta)\circ\operatorname{Im}(\widetilde{\Phi}^{m,k}_{ (S,v,H)})\circ\widetilde{\Phi}^{m,k}(\eta)^{-1}.\]
Therefore if \(\operatorname{Im}(\widetilde{\Phi}^{m,k}_{(S,v,H)})=\operatorname{O}( \widetilde{\operatorname{H}}(S,\mathbb{Z}))_{v}\), as \(\widetilde{\Phi}^{m,k}(\eta)\) maps \(v\) to \(v^{\prime}\), it follows that \(\operatorname{Im}(\widetilde{\Phi}^{m,k}_{(S^{\prime},v^{\prime},H^{\prime})} )=\operatorname{O}(\widetilde{\operatorname{H}}(S^{\prime},\mathbb{Z}))_{v^{ \prime}}\).
Because of this, we will prove the proposition in the case when \(v=(m,0,-mk)\) and \(S\) is a projective elliptic K3 surface with a section and having Picard rank 2. For short, in the following we will refer to such a K3 surface as a _very general projective elliptic K3 surface with a section_. If we denote by \(p\colon S\to\mathbb{P}^{1}\) the elliptic structure of \(S\) and by \(s\colon\mathbb{P}^{1}\to S\) the section, then we put \(f=p^{*}\mathcal{O}_{\mathbb{P}^{1}}(1)\) and \(\ell=[s(\mathbb{P}^{1})]\). In this case
\[\operatorname{Pic}(S)=\mathbb{Z}.\ell\oplus\mathbb{Z}.f=\begin{pmatrix}-2&1\\ 1&0\end{pmatrix},\]
and we will always denote by \(e=\ell+f\) the other isotropic generator.
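Explicitly, \(\{e,f\}\) is a standard hyperbolic basis:
\[e^{2}=(\ell+f)^{2}=\ell^{2}+2(\ell,f)+f^{2}=-2+2+0=0,\qquad(e,f)=(\ell,f)+f^{2}=1.\]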
Finally, the polarisation \(H\) will be chosen in the unique \(v\)-chamber (see Remark 1.20) whose closure contains the nef class \(f\).
As a first step, let us recall the following lattice-theoretic result due to Markman, which exhibits a set of generators for the group \(\mathrm{O}^{+}(\widetilde{\mathrm{H}}(S,\mathbb{Z}))_{v}\), when \(S\) is now any K3 surface.
**Proposition 4.5** ([12, Lemma 8.1, Proposition 8.6]).: _Let \(S\) be a K3 surface, \(v=(m,0,-mk)\in\widetilde{\mathrm{H}}(S,\mathbb{Z})\) and \(\xi\in\mathrm{H}^{2}(S,\mathbb{Z})\) any primitive class such that \(\xi^{2}=2k-2\). The group \(\mathrm{O}^{+}(\widetilde{\mathrm{H}}(S,\mathbb{Z}))_{v}\) is generated by \(\mathrm{O}^{+}(\mathrm{H}^{2}(S,\mathbb{Z}))\) and the reflection \(R_{u}\) around the class \(u=(1,\xi,k)\in v^{\perp}\)._
Here, as usual, we consider the group \(\mathrm{O}^{+}(\mathrm{H}^{2}(S,\mathbb{Z}))\) as a subgroup of \(\mathrm{O}^{+}(\widetilde{\mathrm{H}}(S,\mathbb{Z}))\) by extending by the identity on \(\mathrm{H}^{0}(S,\mathbb{Z})\oplus\mathrm{H}^{4}(S,\mathbb{Z})\), and since \(v\in\mathrm{H}^{0}(S,\mathbb{Z})\oplus\mathrm{H}^{4}(S,\mathbb{Z})\), we actually view the group \(\mathrm{O}^{+}(\mathrm{H}^{2}(S,\mathbb{Z}))\) as a subgroup of \(\mathrm{O}^{+}(\widetilde{\mathrm{H}}(S,\mathbb{Z}))_{v}\).
Proof.: By [12, Proposition 8.6], \(\mathrm{O}^{+}(\widetilde{\mathrm{H}}(S,\mathbb{Z}))_{v}\) is generated by the group \(\mathrm{O}^{+}(\mathrm{H}^{2}(S,\mathbb{Z}))\) and all the reflections \(R_{u}\) as in the statement. The reason why only one is in fact needed is a consequence of [13, Theorem 1.14.4, Theorem 1.17.1], as explained in the proof of [12, Lemma 8.1]: if \(u^{\prime}=(1,\xi^{\prime},k)\) is another \((-2)\)-class, then there exists an isometry \(g\in\mathrm{O}^{+}(\mathrm{H}^{2}(S,\mathbb{Z}))\) such that \(g(\xi^{\prime})=\xi\), and by extending \(g\) to \(\mathrm{O}^{+}(\widetilde{\mathrm{H}}(S,\mathbb{Z}))\) by the identity, it follows that \(g\circ R_{u^{\prime}}\circ g^{-1}=R_{u}\). In other words, any two such reflections are conjugate by elements of \(\mathrm{O}^{+}(\mathrm{H}^{2}(S,\mathbb{Z}))\).
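For later use, let us record the computation behind this conjugation trick. If \(u^{2}=-2\), then \(R_{u}(x)=x+(x,u)u\), and for any isometry \(g\) one has
\[(g\circ R_{u}\circ g^{-1})(x)=g\bigl(g^{-1}(x)+(g^{-1}(x),u)u\bigr)=x+(x,g(u))g(u)=R_{g(u)}(x),\]
so that \(g\circ R_{u}\circ g^{-1}=R_{g(u)}\).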
Therefore, in order to prove Proposition 4.4, we need to exhibit examples of \((m,k)\)-triples such that the generators of Proposition 4.5 arise as elements of \(\mathrm{Im}(\widetilde{\Phi}^{m,k}_{(S,v,H)})\).
By Proposition 3.9, there exist \(t\gg 0\) and a \(v\)-generic polarisation \(H=e+tf\) on the very general elliptic surface \(S\) such that
\[\mathrm{Mon}^{2}(S)\subset\mathrm{Im}(\widetilde{\Phi}^{m,k}_{(S,v,H)}).\]
Since by Corollary 3.5 we have that \(\mathrm{Mon}^{2}(S)=\mathrm{O}^{+}(\mathrm{H}^{2}(S,\mathbb{Z}))\), it follows that
\[\mathrm{O}^{+}(\mathrm{H}^{2}(S,\mathbb{Z}))\subset\mathrm{Im}(\widetilde{\Phi}^{m,k}_{(S,v,H)}). \tag{16}\]
Now, let \(\beta\in\mathrm{H}^{2}(S,\mathbb{Z})\) be a class such that \(\beta^{2}=2k-2\) and such that \(\beta\) is orthogonal to both \(f\) and \(\ell\). Let us consider the class \(u=(1,\beta-f,k)\in v^{\perp}\). Notice that \(u^{2}=-2\).
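Both assertions are direct computations with the Mukai pairing \(\bigl((r,c,s),(r^{\prime},c^{\prime},s^{\prime})\bigr)=c\cdot c^{\prime}-rs^{\prime}-r^{\prime}s\): since \(\beta^{2}=2k-2\) and \((\beta,f)=f^{2}=0\),
\[u^{2}=(\beta-f)^{2}-2k=(2k-2)-2k=-2,\qquad(u,v)=-1\cdot(-mk)-m\cdot k=0.\]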
**Lemma 4.6** ([12, Proposition 7.1]).: _If \(H=\ell+tf\) with \(t\gg 0\), then_
\[R_{u}\in\mathrm{Im}(\widetilde{\Phi}^{m,k}_{(S,v,H)}).\]
Proof.: Since \(H=\ell+tf\) with \(t\gg 0\), by Proposition 2.16 the derived equivalence \(\mathrm{FM}_{\mathcal{P}}\) induces an isomorphism
\[\mathrm{FM}_{\mathcal{P}}\colon M_{(m,0,-mk)}(S,H)\longrightarrow M_{(0,m( \ell+(k+1)f),m)}(S,H).\]
For simplicity, let us put
\[v=(m,0,-mk)\quad\text{ and }\quad\bar{v}=(0,m(\ell+(k+1)f),m).\]
Then
\[\mathrm{FM}_{\mathcal{P}}\in\mathrm{Hom}_{\mathcal{G}^{m,k}_{\mathrm{FM}}}((S,v,H),(S,\bar{v},H)).\]
The cohomological action \(\operatorname{FM}_{\mathcal{P}}^{\operatorname{H}}\) has been studied in [12, Lemma 7.2] and
\[\operatorname{FM}_{\mathcal{P}}^{\operatorname{H}}(1,\beta-f,k)=(0,\ell-(k-1)f- \beta,0).\]
Put \(a=\ell-(k-1)f-\beta\in\operatorname{H}^{2}(S,\mathbb{Z})\). Then \(a^{2}=-2\) and \(R_{a}\in\operatorname{Mon}^{2}(S)=\operatorname{O}^{+}(\operatorname{H}^{2}(S, \mathbb{Z}))\). Notice also that \((0,a,0)\in\bar{v}^{\perp}\).
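Indeed, using \((\beta,\ell)=(\beta,f)=0\) and \(\beta^{2}=2k-2\), we compute
\[a^{2}=\ell^{2}+(k-1)^{2}f^{2}+\beta^{2}-2(k-1)(\ell,f)=-2+(2k-2)-2(k-1)=-2,\]
while
\[\bigl((0,a,0),\bar{v}\bigr)=m\,a\cdot(\ell+(k+1)f)=m\bigl(-2+(k+1)-(k-1)\bigr)=0.\]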
Since
\[R_{u}=\operatorname{FM}_{\mathcal{P}}^{\operatorname{H}}\circ R_{(0,a,0)} \circ(\operatorname{FM}_{\mathcal{P}}^{\operatorname{H}})^{-1}\]
the proof will be concluded as soon as we prove the following claim.
**Claim**.: \(R_{(0,a,0)}\in\operatorname{Im}(\widetilde{\Phi}_{\operatorname{def},(S,\bar{ v},H)}^{m,k})\)_._
First of all, let us consider a deformation path
\[\alpha=(f\colon\mathcal{S}\to T,\mathcal{L},\mathcal{H},t_{1},t_{2},\gamma)\]
where
\[(\mathcal{S}_{t_{1}},\mathcal{L}_{t_{1}},\mathcal{H}_{t_{1}})=(S,m(\ell+(k+1) f),H)\quad\text{and}\quad(\mathcal{S}_{t_{2}},\mathcal{L}_{t_{2}},\mathcal{H}_{t_{2}})=(S ^{\prime},L^{\prime},H^{\prime})\]
with \(\operatorname{Pic}(S^{\prime})=\mathbb{Z}H^{\prime}\) and \(L^{\prime}=mH^{\prime}\). Put \(v^{\prime}=v_{t_{2}}\).
Up to conjugating with \(\widetilde{\Phi}(\bar{\alpha})\), it is enough to prove that \(R_{b}\) is an element of \(\operatorname{Im}(\widetilde{\Phi}^{m,k}_{\operatorname{def},(S^{\prime},v^{\prime},H^{\prime})})\), where \(b=\widetilde{\Phi}(\bar{\alpha})(0,a,0)\). This last claim follows from Lemma 3.12: in fact the triple \((S^{\prime},L^{\prime},H^{\prime})\) satisfies its hypotheses and \(R_{b}\in\operatorname{O}^{+}(\operatorname{H}^{2}(S^{\prime},\mathbb{Z}))_{H^{\prime}}\) by construction.
Combining equality (16) and Lemma 4.6, by Proposition 4.5 we have that
\[\operatorname{O}^{+}(\widetilde{\operatorname{H}}(S,\mathbb{Z}))_{v}\subset \operatorname{Im}(\widetilde{\Phi}_{(S,v,H)}^{m,k}),\]
where \(H\) is a \(v\)-generic polarisation inside the \(v\)-chamber whose closure contains the class \(f\).
To conclude the proof of Proposition 4.4 it is then enough to prove that \(\operatorname{Im}(\widetilde{\Phi}_{(S,v,H)}^{m,k})\) contains an orientation reversing isometry.
First of all, let \((S^{\prime},v^{\prime},H^{\prime})\) be an \((m,k)\)-triple such that \(\operatorname{Pic}(S^{\prime})=\mathbb{Z}.H^{\prime}\) and let us pick a non-trivial element
\[\eta\in\operatorname{Hom}_{\mathcal{G}_{\operatorname{def}}^{m,k}}((S,v,H),( S^{\prime},v^{\prime},H^{\prime})).\]
By Lemma 2.14 we know that the derived autoequivalence \(\mathsf{L}\) is a morphism in \(\operatorname{Hom}_{\mathcal{G}_{\operatorname{FM}}^{m,k}}((S^{\prime},v^{ \prime},H^{\prime}),(S^{\prime},v^{\prime}_{L},H^{\prime}))\). If we take \(L=nH^{\prime}\) for \(n\gg 0\), then by Lemma 2.15 we also know that
\[\operatorname{FM}_{\Delta}^{\vee}\in\operatorname{Hom}_{\mathcal{G}_{ \operatorname{FM}}^{m,k}}((S^{\prime},v^{\prime}_{L},H^{\prime}),(S^{\prime}, \hat{v}^{\prime}_{L},H^{\prime})),\]
so that
\[\phi=\operatorname{FM}_{\Delta}^{\vee}\circ\mathsf{L}\circ\eta\in\operatorname {Hom}_{\mathcal{G}^{m,k}}((S,v,H),(S^{\prime},\hat{v}^{\prime}_{L},H^{\prime})).\]
Notice that \(\phi^{\operatorname{H}}\) is orientation reversing (cf. proof of Proposition 2.40).
By Remark 2.19 there is a morphism \(\psi\in\operatorname{Hom}_{\mathcal{G}^{m,k}}((S^{\prime},\hat{v}^{\prime}_{L},H^{\prime}),(S,v,H))\) such that:
* \(\psi\) is obtained by concatenating morphisms in \(\mathcal{G}_{\operatorname{def}}^{m,k}\) with morphisms in \(\mathcal{G}_{\operatorname{FM}}^{m,k}\) of the form \(\mathsf{L}\) and \(\operatorname{FM}_{\Delta}\);
* \(\psi^{\operatorname{H}}\) is orientation preserving (cf. proof of Proposition 2.40 again).
Then
\[(\psi\circ\phi)^{\mathrm{H}}\in\mathrm{Im}(\widetilde{\Phi}^{m,k}_{(S,v,H)})\]
is orientation reversing and we are done.
### An injective morphism from \(\mathrm{Mon}^{2}_{\mathrm{lt}}(M_{v}(S,H))\) to \(\mathrm{Mon}^{2}(M_{w}(S,H))\)
Let \((S,v,H)\) be an \((m,k)\)-triple. Write \(v=mw\), with \(w\) primitive, and consider the \((1,k)\)-triple \((S,w,H)\).
The closed embedding
\[i_{w,m}\colon M_{w}(S,H)\longrightarrow M_{v}(S,H)\]
is defined in Remark 1.26 and by Corollary 1.29 the pullback morphism
\[i^{*}_{w,m}\colon\,\mathrm{H}^{2}(M_{v}(S,H),\mathbb{Z})\longrightarrow \mathrm{H}^{2}(M_{w}(S,H),\mathbb{Z})\]
is a similitude of lattices. Moreover, by Lemma 1.30, \(i^{*}_{w,m}\) induces an isomorphism of orthogonal groups
\[i^{\sharp}_{w,m}\colon\,\mathrm{O}(\mathrm{H}^{2}(M_{v}(S,H),\mathbb{Z})) \longrightarrow\mathrm{O}(\mathrm{H}^{2}(M_{w}(S,H),\mathbb{Z})). \tag{17}\]
More generally, if \((S_{1},v_{1},H_{1})\) and \((S_{2},v_{2},H_{2})\) are two \((m,k)\)-triples, and if we write \(v_{1}=mw_{1}\) and \(v_{2}=mw_{2}\), then we consider the bijection (see Lemma 1.31)
\[i^{\sharp}_{w_{1},w_{2},m}\colon\,\mathrm{O}(\mathrm{H}^{2}(M_{v_{1}}, \mathbb{Z}),\mathrm{H}^{2}(M_{v_{2}},\mathbb{Z}))\longrightarrow\mathrm{O}( \mathrm{H}^{2}(M_{w_{1}},\mathbb{Z}),\mathrm{H}^{2}(M_{w_{2}},\mathbb{Z})). \tag{18}\]
The next proposition shows that \(i^{\sharp}_{w_{1},w_{2},m}\) sends locally trivial parallel transport operators to parallel transport operators. In particular it follows that the restriction of \(i^{\sharp}_{w_{1},w_{2},m}\) to the subset of locally trivial parallel transport operators is an injection.
**Proposition 4.7**.: _Let \((S_{1},v_{1},H_{1})\) and \((S_{2},v_{2},H_{2})\) be two \((m,k)\)-triples with \(m>1\), and write \(v_{1}=mw_{1}\) and \(v_{2}=mw_{2}\). Then the bijection (18) maps \(\mathsf{PT}^{2}_{\mathrm{lt}}(M_{v_{1}}(S_{1},H_{1}),M_{v_{2}}(S_{2},H_{2}))\) to \(\mathsf{PT}^{2}(M_{w_{1}}(S_{1},H_{1}),M_{w_{2}}(S_{2},H_{2}))\), i.e. it restricts to an injective function_
\[i^{\sharp}_{w_{1},w_{2},m}\colon\mathsf{PT}^{2}_{\mathrm{lt}}(M_{v_{1}}(S_{1},H_{1}),M_{v_{2}}(S_{2},H_{2}))\rightarrow\mathsf{PT}^{2}(M_{w_{1}}(S_{1},H_{ 1}),M_{w_{2}}(S_{2},H_{2})).\]
Proof.: Let \(p\colon\mathcal{X}\to T\) be a locally trivial family of primitive symplectic varieties with fibres \(\mathcal{X}_{t_{1}}\cong M_{v_{1}}(S_{1},H_{1})\) and \(\mathcal{X}_{t_{2}}\cong M_{v_{2}}(S_{2},H_{2})\). By Remark 1.10 we have a relative stratification \(\mathcal{X}\supset\mathcal{X}_{1}\supset\dots\supset\mathcal{X}_{s}=:\mathcal{Y}\) that, restricted to each fibre \(\mathcal{X}_{t}\) of \(\mathcal{X}\), is the stratification of the singularities given by Proposition 1.3. By the local triviality of the family and the fact that a subspace of a Kähler space is again Kähler (see [20, II,1.3.1(i) Proposition]), the smallest stratum \(q\colon\mathcal{Y}\to T\) of this relative stratification is a smooth family of irreducible holomorphic symplectic manifolds; by Remark 1.26 we have \(\mathcal{Y}_{t_{1}}\cong M_{w_{1}}(S_{1},H_{1})\) and \(\mathcal{Y}_{t_{2}}\cong M_{w_{2}}(S_{2},H_{2})\).
Let \(\gamma\) be a continuous path in \(T\) with initial point \(t_{1}\) and final point \(t_{2}\), and let \(g\colon\,\mathrm{H}^{2}(M_{v_{1}}(S_{1},H_{1}),\mathbb{Z})\rightarrow\mathrm{H}^{2}(M_{v_{2}}(S_{2},H_{2}),\mathbb{Z})\) be the corresponding parallel transport operator in the family \(p\colon\mathcal{X}\to T\). The following claim concludes the proof of the proposition by interpreting \(i^{\sharp}_{w_{1},w_{2},m}(g)\) as a parallel transport operator from \(M_{w_{1}}(S_{1},H_{1})\) to \(M_{w_{2}}(S_{2},H_{2})\).
**Claim**.: _The isometry \(i^{\sharp}_{w_{1},w_{2},m}(g)\) is the parallel transport operator in the family \(q\colon\mathcal{Y}\to T\) along the path \(\gamma\)._
Let \(g_{\mathbb{Q}}\) be the \(\mathbb{Q}\)-linear extension of \(g\), i.e. let \(g_{\mathbb{Q}}\) be the parallel transport operator \(\mathsf{PT}_{p}(\gamma)\) along \(\gamma\) in the local system \(R^{2}p_{*}\mathbb{Q}\). By definition of \(i^{\sharp}_{w_{1},w_{2},m}\) (see Lemma 1.31), it is enough to show that \(i^{\sharp}_{w_{1},w_{2},m,\mathbb{Q}}(g_{\mathbb{Q}})\) is the parallel transport operator \(\mathsf{PT}_{q}(\gamma)\) along \(\gamma\) in the local system \(R^{2}q_{*}\mathbb{Q}\).
By local triviality, the inclusion \(i\colon\mathcal{Y}\to\mathcal{X}\) induces a morphism of local systems
\[i^{*}_{\mathbb{Q}}\colon R^{2}p_{*}\mathbb{Q}\longrightarrow R^{2}q_{*} \mathbb{Q}\]
such that \(i^{*}_{t_{1},\mathbb{Q}}=i^{*}_{w_{1},m,\mathbb{Q}}\) and \(i^{*}_{t_{2},\mathbb{Q}}=i^{*}_{w_{2},m,\mathbb{Q}}\). Notice that since \(i^{*}_{t_{1},\mathbb{Q}}\) (and \(i^{*}_{t_{2},\mathbb{Q}}\)) is an isometry (cf. Corollary 1.29), it follows that \(i^{*}_{\mathbb{Q}}\) is an isomorphism of local systems.
The claim follows since
\[i^{\sharp}_{w_{1},w_{2},m,\mathbb{Q}}(g_{\mathbb{Q}})=i^{\sharp}_{w_{1},w_{2},m,\mathbb{Q}}(\mathsf{PT}_{p}(\gamma))=i^{*}_{w_{2},m,\mathbb{Q}}\circ\mathsf{ PT}_{p}(\gamma)\circ(i^{*}_{w_{1},m,\mathbb{Q}})^{-1},\]
by definition of \(i^{\sharp}_{w_{1},w_{2},m,\mathbb{Q}}\), and
\[i^{*}_{w_{2},m,\mathbb{Q}}\circ\mathsf{PT}_{p}(\gamma)\circ(i^{*}_{w_{1},m, \mathbb{Q}})^{-1}=\mathsf{PT}_{q}(\gamma)\]
because \(i^{*}_{\mathbb{Q}}\colon R^{2}p_{*}\mathbb{Q}\to R^{2}q_{*}\mathbb{Q}\) is an isomorphism of local systems restricting to \(i^{*}_{w_{1},m,\mathbb{Q}}\) at \(t_{1}\) and to \(i^{*}_{w_{2},m,\mathbb{Q}}\) at \(t_{2}\).
**Remark 4.8**.: Let \(\Omega(T,t_{1},t_{2})\) be the set of homotopy classes of paths in \(T\) from \(t_{1}\) to \(t_{2}\), and \(\mathsf{PT}_{p}\) (resp. \(\mathsf{PT}_{q}\)) be the morphism mapping the class of a path \([\gamma]\) to the parallel transport operator associated to \(\gamma\). With this notation the claim in the proof of Proposition 4.7 states that the diagram
\[\begin{array}{ccc}\Omega(T,t_{1},t_{2})&\xrightarrow{\ \mathsf{PT}_{p}\ }&\mathsf{PT}^{2}_{\mathrm{lt}}(M_{v_{1}}(S_{1},H_{1}),M_{v_{2}}(S_{2},H_{2}))\\ \Big{\|}&&\Big{\downarrow}\ i^{\sharp}_{w_{1},w_{2},m}\\ \Omega(T,t_{1},t_{2})&\xrightarrow{\ \mathsf{PT}_{q}\ }&\mathsf{PT}^{2}(M_{w_{1}}(S_{1},H_{1}),M_{w_{2}}(S_{2},H_{2}))\end{array}\]
is commutative.
In the particular case \((S_{1},v_{1},H_{1})=(S_{2},v_{2},H_{2})\) we get the following.
**Corollary 4.9**.: _Let \((S,v,H)\) be an \((m,k)\)-triple with \(m>1\), and write \(v=mw\). Then the isomorphism (17) restricts to an injective group morphism_
\[i^{\sharp}_{w,m}\colon\operatorname{Mon}^{2}_{\operatorname{lt}}(M_{v}(S,H) )\longrightarrow\operatorname{Mon}^{2}(M_{w}(S,H)).\]
### The main results
**Theorem 4.10**.: _Let \((S,v,H)\) be an \((m,k)\)-triple and suppose that \(m>1\). Then_
\[\operatorname{Mon}^{2}_{\operatorname{lt}}(M_{v}(S,H))=\mathsf{W}(M_{v}(S,H)).\]
The case when \(v\) is primitive, i.e. \(m=1\), has been proved by Markman.
**Theorem 4.11** ([10, Theorem 1.1]).: _Let \((S,w,H)\) be a \((1,k)\)-triple. Then_
\[\operatorname{Mon}^{2}(M_{w}(S,H))=\mathsf{W}(M_{w}(S,H)).\]
We will use Markman's result in our proof; this is the reason why we have omitted this case from our statement.
In the proof we use the following notation: if \(H\) is a subgroup of a group \(G\), then \([G:H]\) stands for its index.
Proof.: We will use the simplified notation \(M_{w}\) (resp. \(M_{v}\)) instead of \(M_{w}(S,H)\) (resp. \(M_{v}(S,H)\)).
By Theorem 4.2 we have that
\[\mathsf{W}(M_{v})\subset\operatorname{Mon}^{2}_{\operatorname{lt}}(M_{v}) \subset\operatorname{O}(\operatorname{H}^{2}(M_{v},\mathbb{Z}))\]
so that
\[[\operatorname{O}(\operatorname{H}^{2}(M_{v},\mathbb{Z})):\mathsf{W}(M_{v})] \geq[\operatorname{O}(\operatorname{H}^{2}(M_{v},\mathbb{Z})):\operatorname {Mon}^{2}_{\operatorname{lt}}(M_{v})].\]
On the other hand since \(m>1\), by Lemma 1.30 and Corollary 4.9 there is an isomorphism
\[i^{\sharp}_{w,m}\colon\operatorname{O}(\operatorname{H}^{2}(M_{v},\mathbb{Z}) )\longrightarrow\operatorname{O}(\operatorname{H}^{2}(M_{w},\mathbb{Z}))\]
such that
\[i^{\sharp}_{w,m}(\operatorname{Mon}^{2}_{\operatorname{lt}}(M_{v}))\subset \operatorname{Mon}^{2}(M_{w})\]
and, by Theorem 4.11, \(\operatorname{Mon}^{2}(M_{w})=\mathsf{W}(\operatorname{H}^{2}(M_{w},\mathbb{Z}))\). It follows that
\[[\operatorname{O}(\operatorname{H}^{2}(M_{v},\mathbb{Z})):\operatorname{Mon}^ {2}_{\operatorname{lt}}(M_{v})]\geq[\operatorname{O}(\operatorname{H}^{2}(M_ {w},\mathbb{Z})):\mathsf{W}(M_{w})].\]
Finally, since \(\operatorname{H}^{2}(M_{v},\mathbb{Z})\) and \(\operatorname{H}^{2}(M_{w},\mathbb{Z})\) are abstractly isometric as lattices, and since the groups \(\mathsf{W}(M_{v})\) and \(\mathsf{W}(M_{w})\) are defined only in lattice-theoretic terms, we get the equality
\[[\operatorname{O}(\operatorname{H}^{2}(M_{v},\mathbb{Z})):\mathsf{W}(M_{v})] =[\operatorname{O}(\operatorname{H}^{2}(M_{w},\mathbb{Z})):\mathsf{W}(M_{w})],\]
which allows us to deduce that
\[[\operatorname{O}(\operatorname{H}^{2}(M_{v},\mathbb{Z})):\operatorname{Mon}^ {2}_{\operatorname{lt}}(M_{v})]=[\operatorname{O}(\operatorname{H}^{2}(M_{v},\mathbb{Z})):\mathsf{W}(M_{v})].\]
Since by Theorem 4.2 we have that
\[\mathsf{W}(M_{v})\subset\operatorname{Mon}^{2}_{\operatorname{lt}}(M_{v}),\]
we conclude that \(\operatorname{Mon}^{2}_{\operatorname{lt}}(M_{v})=\mathsf{W}(M_{v})\).
**Corollary 4.12** ([10, Lemma 4.2]).: _Let \((S,v,H)\) be an \((m,k)\)-triple. Then_
\[\operatorname{Mon}^{2}_{\operatorname{lt}}(M_{v}(S,H))\subset\operatorname{O} ^{+}(\operatorname{H}^{2}(M_{v}(S,H),\mathbb{Z}))\]
_has index \(2^{\rho(k)-1}\), where \(\rho(k)\) is the number of distinct primes in the factorisation of \(k\)._
**Remark 4.13**.: Let \((S,v,H)\) be an \((m,k)\)-triple. It is interesting to point out that the index of \(\operatorname{Mon}^{2}_{\operatorname{lt}}(M_{v}(S,H))\) in \(\operatorname{O}^{+}(\operatorname{H}^{2}(M_{v}(S,H),\mathbb{Z}))\) does not depend on \(m\). In particular we have that if \(k\) is a power of a prime, then
\[\operatorname{Mon}^{2}_{\operatorname{lt}}(M_{v}(S,H))=\operatorname{O}^{+} (\operatorname{H}^{2}(M_{v}(S,H),\mathbb{Z})).\]
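For instance, for \(k=6=2\cdot 3\) one has \(\rho(6)=2\), so the index is \(2^{2-1}=2\), whereas for \(k=8=2^{3}\) one has \(\rho(8)=1\) and the two groups coincide, as in the remark above.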
Theorem 4.10 can be analogously formulated in terms of the inclusion \(i_{w,m}\colon M_{w}(S,H)\to M_{v}(S,H)\).
**Theorem 4.14**.: _Let \((S,v,H)\) be an \((m,k)\)-triple, \(v=mw\) with \(m>1\) and \((S,w,H)\) the corresponding \((1,k)\)-triple. Then the isomorphism_
\[i^{\sharp}_{w,m}\colon\operatorname{O}(\operatorname{H}^{2}(M_{v}(S,H),\mathbb{Z}))\longrightarrow\operatorname{O}(\operatorname{H}^{2}(M_{w}(S,H),\mathbb{Z}))\]
_induces by restriction an isomorphism_
\[i^{\sharp}_{w,m}\colon\operatorname{Mon}^{2}_{\operatorname{lt}}(M_{v}(S,H))\longrightarrow\operatorname{Mon}^{2}(M_{w}(S,H)).\]
Proof.: First of all recall that \(i^{\sharp}_{w,m}=\lambda^{\sharp}_{(S,w,H)}\circ(\lambda^{\sharp}_{(S,v,H)})^{-1}\) by Lemma 1.30(2).
Since the groups \(\mathsf{W}(M_{v}(S,H))\) and \(\mathsf{W}(M_{w}(S,H))\) are defined in lattice-theoretic terms, and \(\lambda_{(S,w,H)}\) and \(\lambda_{(S,v,H)}\) are isometries, it follows that
\[i^{\sharp}_{w,m}(\mathsf{W}(M_{v}(S,H)))=\mathsf{W}(M_{w}(S,H)). \tag{19}\]
On the other hand, by Corollary 4.9 the isometry \(i^{\sharp}_{w,m}\) sends monodromy operators to monodromy operators, i.e.
\[i^{\sharp}_{w,m}(\operatorname{Mon}^{2}_{\operatorname{lt}}(M_{v}(S,H)))\subset\operatorname{Mon}^{2}(M_{w}(S,H)). \tag{20}\]
Combining this with Theorem 4.2 and Theorem 4.11, we eventually get
\[\begin{aligned}\mathsf{W}(M_{w}(S,H))&=i^{\sharp}_{w,m}(\mathsf{W}(M_{v}(S,H)))&&\text{(by equality (19))}\\ &\subseteq i^{\sharp}_{w,m}(\operatorname{Mon}^{2}_{\operatorname{lt}}(M_{v}(S,H)))&&\text{(by Theorem 4.2)}\\ &\subseteq\operatorname{Mon}^{2}(M_{w}(S,H))&&\text{(by inclusion (20))}\\ &=\mathsf{W}(M_{w}(S,H))&&\text{(by Theorem 4.11)}\end{aligned}\]
from which it follows that
\[i^{\sharp}_{w,m}(\operatorname{Mon}^{2}_{\operatorname{lt}}(M_{v}(S,H)))=\operatorname{Mon}^{2}(M_{w}(S,H))\]
and therefore the claim.
**Remark 4.15**.: Suppose that \(m^{\prime}\neq m\) are two strictly positive integers and put \(v^{\prime}=m^{\prime}w\) and \(v=mw\). There is then a natural isomorphism
\[(i^{\sharp}_{w,m^{\prime}})^{-1}\circ i^{\sharp}_{w,m}\colon\operatorname{Mon}^{2}_{\operatorname{lt}}(M_{v}(S,H))\stackrel{{\sim}}{{\longrightarrow}}\operatorname{Mon}^{2}_{\operatorname{lt}}(M_{v^{\prime}}(S,H)).\]
It follows that the isomorphism class of \(\operatorname{Mon}^{2}_{\operatorname{lt}}(M_{v}(S,H))\) does not depend on \(m\), but only on \(w\).
Since the locally trivial monodromy group is invariant along locally trivial families of primitive symplectic varieties (see Definition 1.4), we get the following corollary.
**Corollary 4.16**.: _Let \(X\) be an irreducible symplectic variety that is locally trivially deformation equivalent to a moduli space \(M_{v}(S,H)\), where \((S,v,H)\) is an \((m,k)\)-triple._
_Then \(\operatorname{Mon}^{2}_{\operatorname{lt}}(X)\) is the subgroup \(\operatorname{\mathsf{W}}(X)\subset\operatorname{O}(\operatorname{H}^{2}(X, \mathbb{Z}))\) of the orientation preserving isometries acting as \(\pm\operatorname{id}\) on the discriminant group._
Proof.: Since \(X\) is locally trivially deformation equivalent to a moduli space \(M_{v}(S,H)\) as in the statement, there exists a locally trivial parallel transport operator
\[g\colon\operatorname{H}^{2}(M_{v}(S,H),\mathbb{Z})\longrightarrow\operatorname {H}^{2}(X,\mathbb{Z}).\]
Conjugation with \(g\) gives an isomorphism of orthogonal groups
\[g^{\sharp}\colon\operatorname{O}(\operatorname{H}^{2}(M_{v}(S,H),\mathbb{Z}) )\longrightarrow\operatorname{O}(\operatorname{H}^{2}(X,\mathbb{Z}))\]
which maps \(\operatorname{Mon}^{2}_{\operatorname{lt}}(M_{v}(S,H))\) to \(\operatorname{Mon}^{2}_{\operatorname{lt}}(X)\) by definition.
Moreover, since \(g^{\sharp}\) is induced by an isometry, it also has to map \(\mathsf{W}(M_{v}(S,H))\) to \(\mathsf{W}(X)\).
Since \(\mathsf{W}(M_{v}(S,H))=\operatorname{Mon}^{2}_{\operatorname{lt}}(M_{v}(S,H))\) by Theorem 4.10, it follows that
\[\mathsf{W}(X)=\operatorname{Mon}^{2}_{\operatorname{lt}}(X).\]
Finally, let us observe that the analogue of Theorem 4.14 holds for any symplectic variety \(X\) locally trivially deformation equivalent to \(M_{v}(S,H)\).
In order to state the result, we recall that the most singular locus \(Y\) of \(X\) is an irreducible symplectic manifold (see Proposition 1.3). Let us formally define the morphism
\[i^{\sharp}_{Y,X}\colon\operatorname{Mon}^{2}_{\operatorname{lt}}(X)\longrightarrow \operatorname{Mon}^{2}(Y)\]
induced by the inclusion \(i_{Y,X}\colon Y\to X\), as follows. For every monodromy operator \(g\in\operatorname{Mon}^{2}_{\operatorname{lt}}(X)\), there exists a locally trivial family of irreducible symplectic varieties \(p\colon\mathcal{X}\to T\), a point \(\bar{t}\in T\) and a loop \(\gamma\) centred at \(\bar{t}\) such that \(\mathcal{X}_{\bar{t}}=X\) and \(g\) is the parallel transport operator \(\mathsf{PT}_{p}(\gamma)\) associated with the family \(p\) and the loop \(\gamma\). By local triviality of \(p\), the restriction of \(p\) to the most singular locus \(\mathcal{Y}\) of \(\mathcal{X}\) gives a smooth family \(q\colon\mathcal{Y}\to T\) of irreducible symplectic manifolds and we define \(i^{\sharp}_{Y,X}(g)\colon\operatorname{H}^{2}(Y,\mathbb{Z})\to\operatorname{H}^{2}(Y,\mathbb{Z})\) as the parallel transport operator \(\mathsf{PT}_{q}(\gamma)\) associated with the family \(q\) and the loop \(\gamma\). We remark that \(i^{\sharp}_{Y,X}(g)\) is well defined and its definition immediately implies
\[i^{\sharp}_{Y,X}(g)\circ i^{*}_{Y,X}=i^{*}_{Y,X}\circ g\colon\operatorname{H}^{2}(X,\mathbb{Z})\longrightarrow\operatorname{H}^{2}(Y,\mathbb{Z}). \tag{21}\]
**Corollary 4.17**.: _Let \(X\) be an irreducible symplectic variety that is locally trivially deformation equivalent to a moduli space \(M_{v}(S,H)\), where \((S,v,H)\) is an \((m,k)\)-triple. Let \(Y\subset X\) be the most singular locus and \(i_{Y,X}\colon Y\to X\) be the closed embedding. Then the morphism_
\[i^{\sharp}_{Y,X}\colon\operatorname{Mon}^{2}_{\operatorname{lt}}(X) \xrightarrow{\sim}\operatorname{Mon}^{2}(Y)\]
_is an isomorphism._
Proof.: Let \(p\colon\mathcal{X}\to T\) be a locally trivial family of irreducible symplectic varieties such that there exist two points \(t_{1},t_{2}\in T\) with \(\mathcal{X}_{t_{1}}=X\) and \(\mathcal{X}_{t_{2}}=M_{v}(S,H)\), respectively. Here \(M_{v}(S,H)\) is the irreducible symplectic variety associated to an \((m,k)\)-triple \((S,v,H)\). Let \(\mathcal{Y}\) be the most singular locus of \(\mathcal{X}\) and let \(q\colon\mathcal{Y}\to T\) be the restriction of \(p\); by Remark 1.10 the family \(q\) is a family of irreducible holomorphic symplectic manifolds. Let \(\gamma\) be a path contained in \(T\) from \(t_{1}\) to \(t_{2}\) and denote by \(\mathsf{PT}_{p}(\gamma)\colon\operatorname{H}^{2}(X,\mathbb{Z})\to\operatorname{H}^{2}(M_{v}(S,H),\mathbb{Z})\) and \(\mathsf{PT}_{q}(\gamma)\colon\operatorname{H}^{2}(Y,\mathbb{Z})\to\operatorname{H}^{2}(M_{w}(S,H),\mathbb{Z})\) the parallel transport operators associated with the path \(\gamma\) and the families \(p\) and \(q\), respectively. By construction we have
\[i^{*}_{w,m}\circ\mathsf{PT}_{p}(\gamma)=\mathsf{PT}_{q}(\gamma)\circ i^{*}_{Y, X}\colon\operatorname{H}^{2}(X,\mathbb{Z})\to\operatorname{H}^{2}(M_{w}(S,H), \mathbb{Z}). \tag{22}\]
Finally we define the group isomorphisms
\[\mathsf{PT}_{p}^{\sharp}(\gamma)\colon \operatorname{Mon}_{\operatorname{lt}}^{2}(M_{v}(S,H)) \longrightarrow\operatorname{Mon}_{\operatorname{lt}}^{2}(X)\] \[g \longmapsto(\mathsf{PT}_{p}(\gamma))^{-1}\circ g\circ\mathsf{PT}_{p }(\gamma)\]
and
\[\mathsf{PT}_{q}^{\sharp}(\gamma)\colon \operatorname{Mon}^{2}(M_{w}(S,H)) \longrightarrow\operatorname{Mon}^{2}(Y)\] \[g \longmapsto(\mathsf{PT}_{q}(\gamma))^{-1}\circ g\circ\mathsf{PT}_{ q}(\gamma).\]
By formulae (21) and (22) we deduce that
\[i_{Y,X}^{\sharp}=\mathsf{PT}_{q}^{\sharp}(\gamma)\circ i_{w,m}^{\sharp}\circ( \mathsf{PT}_{p}^{\sharp}(\gamma))^{-1}\]
and, since \(\mathsf{PT}_{q}^{\sharp}(\gamma)\), \(i_{w,m}^{\sharp}\) and \((\mathsf{PT}_{p}^{\sharp}(\gamma))^{-1}\) are isomorphisms, the morphism \(i_{Y,X}^{\sharp}\) is an isomorphism too.
|
2309.11089 | Practical Probabilistic Model-based Deep Reinforcement Learning by
Integrating Dropout Uncertainty and Trajectory Sampling | This paper addresses the prediction stability, prediction accuracy and
control capability of the current probabilistic model-based reinforcement
learning (MBRL) built on neural networks. A novel approach dropout-based
probabilistic ensembles with trajectory sampling (DPETS) is proposed where the
system uncertainty is stably predicted by combining the Monte-Carlo dropout and
trajectory sampling in one framework. Its loss function is designed to correct
the fitting error of neural networks for more accurate prediction of
probabilistic models. The state propagation in its policy is extended to filter
the aleatoric uncertainty for superior control capability. Evaluated by several
Mujoco benchmark control tasks under additional disturbances and one practical
robot arm manipulation task, DPETS outperforms related MBRL approaches in both
average return and convergence velocity while achieving superior performance
than well-known model-free baselines with significant sample efficiency. The
open source code of DPETS is available at https://github.com/mrjun123/DPETS. | Wenjun Huang, Yunduan Cui, Huiyun Li, Xinyu Wu | 2023-09-20T06:39:19Z | http://arxiv.org/abs/2309.11089v1 | Practical Probabilistic Model-based Deep Reinforcement Learning by Integrating Dropout Uncertainty and Trajectory Sampling
###### Abstract
This paper addresses the prediction stability, prediction accuracy and control capability of the current probabilistic model-based reinforcement learning (MBRL) built on neural networks. A novel approach dropout-based probabilistic ensembles with trajectory sampling (DPETS) is proposed where the system uncertainty is stably predicted by combining the Monte-Carlo dropout and trajectory sampling in one framework. Its loss function is designed to correct the fitting error of neural networks for more accurate prediction of probabilistic models. The state propagation in its policy is extended to filter the aleatoric uncertainty for superior control capability. Evaluated by several Mujoco benchmark control tasks under additional disturbances and one practical robot arm manipulation task, DPETS outperforms related MBRL approaches in both average return and convergence velocity while achieving superior performance than well-known model-free baselines with significant sample efficiency. The open source code of DPETS is available at [https://github.com/mrjun123/DPETS](https://github.com/mrjun123/DPETS).
## I Introduction
Reinforcement learning (RL) provides a biomimetic learning framework where the agent gradually learns to complete the given task by interacting with the environment without prior human knowledge [1]. As an appealing form of artificial intelligence, model-free RL, which directly learns control strategies without model knowledge, has been widely applied not only to outperform humans in video and rule-based games [2, 3], but also to autonomously control complex systems including chemical plants [4, 5], unmanned vehicles [6, 7], and robots [8, 9]. On the other hand, while model-free RL has achieved great success in simulation environments where sufficient and unbiased sampling is accessible, its real-world engineering applications remain challenging due to expensive sampling costs and complex environmental disturbances. Model-based RL (MBRL) has been proposed to tackle these issues by approximating the system dynamics during the learning procedure. Although MBRL with a proper model contributes to superior sample efficiency compared to model-free RL, accurately modeling the system dynamics is difficult in engineering scenarios where uncertain disturbances lead to large modeling errors and can therefore cripple the effectiveness of MBRL.
One feasible solution is incorporating the system uncertainties in MBRL. System uncertainties can be divided into two types: aleatoric uncertainty, which arises from the inherent randomness of the system, and epistemic uncertainty, caused by a lack of system knowledge or data. One famous probabilistic MBRL approach, the probabilistic inference for learning control (PILCO) [10], was proposed to employ Gaussian processes (GP) [11] and analytic moment-matching [12, 13] to model and propagate the epistemic uncertainty of the target system from a fully Bayesian perspective. Although PILCO achieved great sample efficiency in many traditional control tasks, it assumes that all uncertainty can be fully observed and described by the system dynamics. This assumption is not suitable for engineering scenarios where environmental disturbances are usually unobservable and frequently changing. To break this limitation, model predictive control (MPC) was employed in GP-MPC to promptly respond to the changing environment [14]. Its extensions have demonstrated great potential in unmanned surface vehicles in ocean environments [15, 16]. However, implementing these approaches in high-dimensional systems with large samples remains difficult due to their non-parametric nature: the computational complexity of GP-based MBRL grows cubically with the number of samples \(N\), at a rate of \(\mathcal{O}(N^{3})\). Additionally, the aleatoric uncertainty in GP-based MBRL is modeled by homoscedastic Gaussian distributions without considering its variability.
To address the issue of computational complexity described above, Deep Pilco [17] was developed by employing deep neural networks, whose computational complexity does not depend on the number of samples, to approximate system dynamics. The epistemic uncertainty is estimated by Monte-Carlo dropout (MC Dropout) with a theoretical guarantee, while the aleatoric uncertainty is also homoscedastically modeled. However, the prediction variance of MC Dropout depends strongly on the dropout rate and neural network size, making it difficult to consistently express the uncertainty of the target system [18]. Probabilistic ensembles with trajectory sampling (PETS) further introduced deep neural networks to MBRL with an MPC-based policy [19]. It employs a bootstrap sampling of trajectories to predict and propagate the epistemic uncertainty and utilizes multiple neural networks with Gaussian distribution output to estimate the aleatoric uncertainty. Thanks to the robustness against disturbances under the MPC framework, PETS achieved comparable performances to model-free RL in many simulation benchmarks while enjoying superior sample efficiency. Sharing a similar principle with PETS, model-based policy optimization (MBPO) further improved the sample efficiency of MBRL with long-term uncertainty
propagation by generating enhanced data from the predictive model [20]. Overall, previous works [17, 19, 20] have suggested using deep neural networks to overcome the computational burden of probabilistic MBRL but left the following issues unaddressed: 1) the uncertainty propagation based on neural networks is unstable; 2) the fitting error of neural networks is insufficiently considered; 3) the aleatoric and epistemic uncertainties are not distinguished during propagation.
In this paper, a novel and practical probabilistic MBRL approach, dropout-based probabilistic ensembles with trajectory sampling (DPETS)1, is proposed to tackle all three issues above. Following the principle of DPETS demonstrated in Fig. 1, the stability of uncertainty propagation is improved by introducing a restrictive MC Dropout in the trajectory sampling, while the fitting error caused by neural networks is alleviated by a novel training strategy. The aleatoric uncertainty, which does not exhibit the Markov property, is further filtered out during propagation to properly estimate the epistemic uncertainty in the MPC-based policy. Evaluated on several Mujoco benchmark control tasks of increasing complexity and compared with related model-based and traditional model-free baselines, DPETS consistently demonstrated significant superiority in control performance, sample efficiency and robustness against disturbances. In a practical robot arm control scenario, DPETS not only outperformed the model-free RL approaches while using \(99\%\) fewer samples but also surpassed the related MBRL approaches with over \(100\%\) higher average return and more sophisticated trajectories in robot end-effector control. All these results indicate the potential of DPETS as an emerging direction of practical MBRL approaches.
Footnote 1: Code available at [https://github.com/mrjun123/DPETS](https://github.com/mrjun123/DPETS)
According to the properties summarized in Table I, where \(\bigcirc\), \(\times\) and N/A denote the involved, uninvolved and inapplicable terms, the contributions of this work are:
1. A restrictive MC Dropout with enhanced stability and expressive capability was proposed by combining the uncertainty propagation of Deep Pilco and PETS. It extended MC Dropout to more practical scenarios.
2. We explored a novel learning strategy that reduces the prediction bias by correcting the fitting errors resulting from neural networks. This contributed to improved accuracy in multi-step prediction in probabilistic MBRL.
3. The uncertainty propagation of the MPC-based policy in MBRL was updated to filter out the aleatoric uncertainty that does not exhibit the Markov property. It effectively suppressed the rapidly expanding uncertainty in the multi-step prediction caused by environmental disturbances and therefore contributed to superior robustness and control capability against external disturbances.
The remainder of this paper is organized as follows. Section II provides the background of existing probabilistic MBRL approaches based on neural networks. Section III details the proposed DPETS, which is evaluated and analyzed in Section IV through experimental results. The conclusion is given in Section V.
## II Preliminary
### _Probabilistic MBRL with MPC-based Policy_
In this section, the probabilistic MBRL with an MPC-based policy is introduced following the demonstration in
Fig. 1: Principle of the proposed DPETS in improving the stability of uncertainty propagation, correcting the fitting error, and filtering aleatoric uncertainty.
| Approach | MPC | DNN | Fitting Error | Distinguished Uncertainties |
| --- | --- | --- | --- | --- |
| PILCO [10] | \(\times\) | \(\times\) | N/A | \(\times\) |
| GP-MPC [14] | \(\bigcirc\) | \(\times\) | N/A | \(\times\) |
| Deep Pilco [17] | \(\times\) | \(\bigcirc\) | \(\times\) | \(\times\) |
| PETS [19] | \(\bigcirc\) | \(\bigcirc\) | \(\times\) | \(\times\) |
| MBPO [20] | \(\times\) | \(\bigcirc\) | \(\times\) | \(\times\) |
| DPETS (ours) | \(\bigcirc\) | \(\bigcirc\) | \(\bigcirc\) | \(\bigcirc\) |

TABLE I: Comparison of the proposed and the related approaches
Fig. 2. The problem is modeled as a Markov decision process (MDP). State and action at time step \(t\) are defined as \(\mathbf{s}_{t}\in\mathbb{R}^{d_{s}}\), \(\mathbf{a}_{t}\in\mathbb{R}^{d_{a}}\). The next step state follows the unknown system dynamics \(\mathbf{s}_{t+1}=f(\mathbf{s}_{t},\mathbf{a}_{t})\). Given a task-related reward function \(\mathcal{R}(\mathbf{s}_{t+1},\mathbf{a}_{t})\), the agent aims to learn a policy \(\pi:\mathbf{s}_{t}\rightarrow\mathbf{a}_{t}\) that maximizes the obtained reward in the long term. Unlike model-free RL, which directly learns the value function of the MDP, probabilistic MBRL estimates the system dynamics from a Bayesian perspective:
\[\left[\mathbf{\mu}(\mathbf{s}_{t+1}),\mathbf{\Sigma}(\mathbf{s}_{t+1})\right]=\hat{f}\left(\mathbf{s}_{t},\mathbf{a}_{t}\right)+\mathbf{w} \tag{1}\]
where \(\mathbf{\mu}(\cdot)\) and \(\mathbf{\Sigma}(\cdot)\) denote the mean and variance, \(\mathbf{w}\) represents the aleatoric uncertainty caused by disturbances.
GP-MPC [14] and PETS [19] employ an MPC-based policy to promptly respond to frequently changing disturbances. Starting from the current state \(\mathbf{s}_{t}\), the MPC-based policy predicts \(H\) steps ahead based on \(\hat{f}(\cdot)\) and searches for a sequence of optimal actions to maximize the long-term reward while propagating the uncertainty:
\[\left[\mathbf{a}_{t},\ldots,\mathbf{a}_{t+H-1}\right]=\arg\max\sum_{h=0}^ {H-1}\mathbb{E}\left[\mathcal{R}\left(\mathbf{s}_{t+h+1},\mathbf{a}_{t+h}\right) \right], \tag{2}\] \[\text{s.t.}\quad\left[\mathbf{\mu}(\mathbf{s}_{t+h+1}),\mathbf{\Sigma}(\mathbf{s }_{t+h+1})\right]=\hat{f}\left(\mathbf{s}_{t+h},\mathbf{a}_{t+h}\right).\]
It is optimized by nonlinear optimization approaches such as the cross-entropy method (CEM) [21]. The MPC-based policy executes the first action \(\mathbf{a}_{t}\) and moves to the next step; this process is repeated as a closed-loop controller \(\pi:\mathbf{s}_{t}\rightarrow\mathbf{a}_{t}\) following Algorithm 1. The probabilistic model is updated with the collected samples \(\mathcal{D}\) after each episode.
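To make the CEM optimization of Eq. (2) concrete, the following is a minimal Python sketch of an MPC policy with CEM. The `model` and `reward_fn` callables are placeholders standing in for the learned dynamics mean and the task reward; the population size and iteration count are illustrative assumptions, not values from the paper.

```python
import numpy as np

def cem_mpc_policy(s_t, model, reward_fn, H, d_a,
                   n_iters=5, pop=400, n_elite=40):
    """Minimal CEM optimizer for the MPC-based policy in Eq. (2).

    `model(s, a)` is assumed to return the predicted next-state mean;
    `reward_fn(s, a)` returns the task reward. Both are placeholders.
    """
    mu = np.zeros((H, d_a))           # mean of the action sequence
    sigma = np.ones((H, d_a))         # std of the action sequence
    for _ in range(n_iters):
        # sample `pop` candidate action sequences
        acts = mu + sigma * np.random.randn(pop, H, d_a)
        returns = np.zeros(pop)
        for i in range(pop):
            s = s_t
            for h in range(H):        # roll the model forward H steps
                s_next = model(s, acts[i, h])
                returns[i] += reward_fn(s_next, acts[i, h])
                s = s_next
        elite = acts[np.argsort(returns)[-n_elite:]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0)
    return mu[0]                      # execute only the first action
```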
```
Input: Sample set with warmup samples D
Initialize the probabilistic model f̂
for episode k = 1, ..., K do
    for time t = 1, ..., T do
        s_t = Observe_State()
        a_t = MPC_Policy(s_t) following Eq. (2)
        Execute_Action(a_t)
        s_{t+1} = Observe_State()
        Expand sample set D ← D ∪ {s_t, a_t, s_{t+1}}
    Update f̂ by sample set D
```
**Algorithm 1** Framework of Probabilistic MBRL with MPC-based Policy
### _Probabilistic Model with MC Dropout_
Deep Pilco [17] built the probabilistic model by neural networks. It predicted the uncertainty by randomly sampling \(Q\) sets of dropout particles following Bernoulli distribution \(\left\{\mathbf{z}^{q}\right\}_{q=1}^{Q},\mathbf{z}^{q}=\{\mathbf{z}_{1}^{q},\ldots,\mathbf{z}_ {L}^{q}\}\):
\[\hat{\mathbf{y}}_{t}=\frac{1}{Q}\sum_{q=1}^{Q}\hat{f}_{\mathbf{W}}(\mathbf{x}_{t},\mathbf{z}^{q}) \tag{3}\]
where \(\hat{f}_{\mathbf{W}}(\cdot)\) was the neural network with weight and bias matrix \(\mathbf{W}\), \(\mathbf{x}_{t}=\{\mathbf{s}_{t},\mathbf{a}_{t}\}\) was the input vector, and \(\hat{\mathbf{y}}_{t}\) was the predicted state. Defining the first layer's input as \(\hat{\mathbf{y}}_{0}=\mathbf{x}_{t}\), the output of layer \(l=1,...,L\) was calculated as:
\[\hat{\mathbf{y}}_{l}=\phi\left(\left(\hat{\mathbf{y}}_{l-1}\circ\mathbf{z}_{l}^{q}\right) \mathbf{W}_{l}\right) \tag{4}\]
where \(\mathbf{W}_{l}\) is the weight matrix of the \(l\)-th layer, \(\circ\) denotes the element-wise product, and \(\phi(\cdot)\) is the activation function.
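The prediction of Eqs. (3)-(4) amounts to averaging several stochastic forward passes. Below is a minimal PyTorch sketch, assuming `layers` is a list of `torch.nn.Linear` modules and a ReLU activation on the hidden layers; the keep probability is an illustrative assumption.

```python
import torch

def mc_dropout_predict(layers, x_t, Q=20, p_keep=0.9):
    """Average Q stochastic forward passes, following Eqs. (3)-(4)."""
    outputs = []
    for _ in range(Q):
        y = x_t
        for l, layer in enumerate(layers):
            z = torch.bernoulli(torch.full_like(y, p_keep))  # z_l^q
            y = layer(y * z)                                 # Eq. (4)
            if l < len(layers) - 1:
                y = torch.relu(y)        # activation on hidden layers
        outputs.append(y)
    return torch.stack(outputs).mean(dim=0)                  # Eq. (3)
```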
During the training, the loss function between the ground truth \(\mathbf{y}_{t}\) and the output \(\hat{\mathbf{y}}_{t}\) was defined as the negative log-likelihood over \(Q\) sets of particles:
\[L\left(\mathbf{x}_{t},\mathbf{y}_{t}\right)=\log\sum_{q=1}^{Q}\exp\left(\frac{1}{2 \sigma^{2}}\left\|\mathbf{y}_{t}-\hat{f}_{\mathbf{W}}(\mathbf{x}_{t},\mathbf{z}^{q})\right\|^ {2}\right) \tag{5}\]
where \(\sigma\) is the manually selected homoscedastic standard deviation of the system noise \(\mathbf{w}\sim\mathcal{N}(0,\sigma^{2})\).
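Eq. (5) can be transcribed almost literally in PyTorch, as the minimal sketch below shows. `preds` is assumed to stack the \(Q\) particle passes \(\hat{f}_{\mathbf{W}}(\mathbf{x}_{t},\mathbf{z}^{q})\) produced, e.g., by the dropout sketch above.

```python
import torch

def deep_pilco_loss(preds, y_t, sigma=0.1):
    """Log-sum-exp over the squared errors of Q particle passes,
    transcribed from Eq. (5). `preds` has shape (Q, d_s); `sigma`
    is the hand-picked homoscedastic noise level."""
    sq_err = ((y_t - preds) ** 2).sum(dim=-1)   # ||y_t - f_W(x_t, z^q)||^2
    return torch.logsumexp(sq_err / (2 * sigma ** 2), dim=0)
```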
The uncertainty propagation was achieved by assuming the output as a Gaussian distribution. It can be treated as a sampling-based analytic moment-matching [12]. Generate \(P\) states from distribution \(\mathbf{s}_{t}^{p}\sim\mathcal{N}(\mathbf{\mu}_{t},\mathbf{\Sigma}_{t})\), the output distribution was estimated as:
\[\mathbf{\mu}(\mathbf{s}_{t+1}) \approx\frac{1}{PQ}\sum_{p=1}^{P}\sum_{q=1}^{Q}\hat{f}_{\mathbf{W}}( \mathbf{s}_{t}^{p},\mathbf{a}_{t},\mathbf{z}^{p,q}), \tag{6}\] \[\mathbf{\Sigma}(\mathbf{s}_{t+1}) \approx\frac{1}{PQ}\sum_{p=1}^{P}\sum_{q=1}^{Q}\hat{f}_{\mathbf{W}}( \mathbf{s}_{t}^{p},\mathbf{a}_{t},\mathbf{z}^{p,q})\hat{f}_{\mathbf{W}}(\mathbf{s}_{t}^{p},\mathbf{a }_{t},\mathbf{z}^{p,q})^{T}\] \[-\mathbf{\mu}_{t+1}\mathbf{\mu}_{t+1}^{T}+\sigma^{2}\mathbf{I}\]
where \(\mathbf{\Sigma}(\mathbf{s}_{t+1})\) mixes both the aleatoric uncertainty from noise and the epistemic uncertainty from the neural networks.
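The following is a minimal sketch of the sampling-based moment matching in Eq. (6), assuming `f` performs one stochastic dropout pass (as in the sketch above) and that `Sigma_t` is positive definite so its Cholesky factor exists.

```python
import torch

def propagate_moments(f, mu_t, Sigma_t, a_t, P=10, Q=10, sigma_n=0.01):
    """Sampling-based moment matching following Eq. (6)."""
    L = torch.linalg.cholesky(Sigma_t)
    ys = []
    for _ in range(P):
        s_p = mu_t + L @ torch.randn(mu_t.shape[0])  # s_t^p ~ N(mu, Sigma)
        for _ in range(Q):
            ys.append(f(torch.cat([s_p, a_t])))      # fresh particles z^{p,q}
    Y = torch.stack(ys)                              # (P*Q, d_s)
    mu_next = Y.mean(dim=0)
    Sigma_next = (Y.T @ Y) / Y.shape[0] - torch.outer(mu_next, mu_next) \
        + sigma_n ** 2 * torch.eye(Y.shape[1])
    return mu_next, Sigma_next
```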
### _Uncertainty Propagation by Trajectory Sampling_
PETS provided an alternative solution to propagate uncertainties [19]. It directly approximated the aleatoric uncertainty as a distribution with mean and standard deviation:
\[\left[\mathbf{\mu}_{\mathbf{W}}(\mathbf{x}_{t}),\mathbf{\Sigma}_{\mathbf{W}}(\mathbf{x}_{t})\right]= \hat{f}_{\mathbf{W}}(\mathbf{x}_{t}). \tag{7}\]
Fig. 2: Framework of probabilistic MBRL with an MPC-based policy.
The epistemic uncertainty was represented through \(B\) bootstrap ensembles of neural networks with independent weights \(\{\mathbf{W}_{1},\ldots,\mathbf{W}_{B}\}\):
\[\hat{\mathbf{y}}_{t}=\frac{1}{B}\sum_{b=1}^{B}\hat{f}_{\mathbf{W}_{b}}(\mathbf{x }_{t}), \tag{8}\] \[\hat{f}_{\mathbf{W}_{b}}(\mathbf{x}_{t})\sim\mathcal{N}\left(\mathbf{\mu}_{\bm {W}_{b}}(\mathbf{x}_{t}),\mathbf{\Sigma}_{\mathbf{W}_{b}}(\mathbf{x}_{t})\right).\]
Defining \(\mathbf{E}_{\mathbf{W}_{b}}=[\mathbf{\mu}_{\mathbf{W}_{b}}(\mathbf{x}_{t})-\mathbf{y}_{t}]\), the loss function became:
\[L\left(\mathbf{x}_{t},\mathbf{y}_{t}\right)\!=\!\!\sum_{b=1}^{B}\!\mathbf{E} _{\mathbf{W}_{b}}^{T}\mathbf{\Sigma}_{\mathbf{W}_{b}}^{-1}(\mathbf{x}_{t})\mathbf{E}_{\mathbf{W}_{b}}\! +\!\log\det\mathbf{\Sigma}_{\mathbf{W}_{b}}\left(\mathbf{x}_{t}\right). \tag{9}\]
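The per-ensemble Gaussian negative log-likelihood of Eq. (9) reduces to a few lines under a diagonal covariance, a common simplification in PETS-style implementations rather than something stated here; the sketch below makes that assumption explicit.

```python
import torch

def pets_nll_loss(mu, var, y):
    """Gaussian NLL of one ensemble member, Eq. (9), assuming a
    diagonal covariance; `mu` and `var` are the network outputs for
    a batch. The full PETS loss sums this over the B ensembles."""
    err = mu - y                                   # E_{W_b}
    nll = ((err ** 2) / var).sum(dim=-1) \
        + torch.log(var).sum(dim=-1)               # log det of diagonal Sigma
    return nll.mean()
```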
The aleatoric and epistemic uncertainties were not distinguished in PETS; each bootstrap ensemble employed \(P\) independent trajectory samples whose expectation contributed to the predicted state:
\[\mathbf{\mu}(\mathbf{s}_{t+1}) =\frac{1}{BP}\sum_{b=1}^{B}\sum_{p=1}^{P}\mathbf{\mu}_{\mathbf{W}_{b}}( \mathbf{x}_{t}^{p}), \tag{10}\] \[\mathbf{x}_{t}^{p} \sim\mathcal{N}\left(\mathbf{\mu}_{\mathbf{W}_{b}}(\mathbf{x}_{t-1}^{p}),\bm {\Sigma}_{\mathbf{W}_{b}}(\mathbf{x}_{t-1}^{p})\right).\]
## III Approach
### _Restrictive MC Dropout_
The MC dropout proposed by Deep Pilco [17] requires \(Q\) times Bernoulli sampling for one prediction. However, as shown in the left side of Fig. 3, its output uncertainty is highly dependent on the sampling results, which may unnecessarily increase the epistemic uncertainty in the data-covered space and result in unstable control strategies. To address this issue, DPETS proposes a restrictive MC Dropout with improved stability in uncertainty representation. At the start of each episode, the restrictive MC Dropout selects \(M\) sets of dropout particles from a Bernoulli distribution \(\left\{\mathbf{z}^{m}\right\}_{m=1}^{M},\mathbf{z}^{m}=\{\mathbf{z}_{1}^{m},\ldots,\mathbf{z}_{L}^{m}\}\). During the episode, DPETS randomly samples \(0.5M<Q<M\) sets of particles from the fixed \(\left\{\mathbf{z}^{m}\right\}_{m=1}^{M}\). This contributes to a superior trade-off between representing system uncertainties and avoiding the instability caused by unrestricted Bernoulli sampling. Inspired by PETS [19], given the matrix of weights and bias \(\mathbf{W}\) and the set of dropout particles \(q\), the neural network simultaneously predicts both the mean and variance:
\[[\mathbf{\mu}_{\mathbf{W}}\left(\mathbf{x}_{t},\mathbf{z}^{q}\right),\mathbf{\Sigma}_{\mathbf{W}} \left(\mathbf{x}_{t},\mathbf{z}^{q}\right)]=\hat{f}_{\mathbf{W}}\left(\mathbf{x}_{t},\mathbf{z}^{q }\right). \tag{11}\]
For a deterministic input \(\mathbf{x}_{t}\), the output \(\mathbf{y}_{t}\) is calculated by sampling \(Q\) times with different sets of particles. Compared to PETS, which models epistemic uncertainty using highly random sampling and describes aleatoric uncertainty as homoscedastic Gaussian noise \(\mathcal{N}(0,\sigma^{2})\), DPETS effectively expresses both types of uncertainty while reducing unstable predictions, as shown in the right side of Fig. 3:
\[\hat{\mathbf{y}}_{t} =\frac{1}{Q}\sum_{q=1}^{Q}\hat{f}_{\mathbf{W}}\left(\mathbf{x}_{t},\mathbf{z} ^{q}\right) \tag{12}\] \[\hat{f}_{\mathbf{W}}\left(\mathbf{x}_{t},\mathbf{z}^{q}\right) \sim\mathcal{N}\!\left(\mathbf{\mu}_{\mathbf{W}}\left(\mathbf{x}_{t},\mathbf{z}^{q }\right),\mathbf{\Sigma}_{\mathbf{W}}\left(\mathbf{x}_{t},\mathbf{z}^{q}\right)\right).\]
During the training, the loss function of neural networks using restrictive MC Dropout is defined as:
\[L(\mathbf{x}_{t},\mathbf{y}_{t})\!=\!\!\sum_{q=1}^{Q}\!\!\left(\mathbf{E}_{\mathbf{W}}^{T}\mathbf{ \Sigma}_{\mathbf{W}}^{-1}\left(\mathbf{x}_{t},\mathbf{z}^{q}\right)\mathbf{E}_{\mathbf{W}}\!+\!\log \det\!\mathbf{\Sigma}_{\mathbf{W}}(\mathbf{x}_{t},\mathbf{z}^{q})\right) \tag{13}\]
where \(\mathbf{E}_{\mathbf{W}}=[\mathbf{\mu}_{\mathbf{W}}\left(\mathbf{x}_{t},\mathbf{z}^{q}\right)-\mathbf{y}_{t}]\) is the difference between the output mean with particles set \(q\) and the ground truth.
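A minimal sketch of this restrictive sampling scheme is given below: \(M\) particle sets are frozen at the start of an episode and every subsequent prediction draws \(Q\) of them. The layer sizes and keep probability are illustrative assumptions.

```python
import torch

class RestrictiveDropout:
    """Freeze M dropout particle sets per episode; each prediction
    then draws Q of them, with 0.5*M < Q < M."""
    def __init__(self, layer_dims, M=5, p_keep=0.9):
        # {z^m}_{m=1}^M: one Bernoulli mask per layer input, fixed
        # for the whole episode
        self.masks = [
            [torch.bernoulli(torch.full((d,), p_keep)) for d in layer_dims]
            for _ in range(M)
        ]

    def sample_particles(self, Q=4):
        idx = torch.randperm(len(self.masks))[:Q]
        return [self.masks[i] for i in idx]  # Q sets drawn from the fixed M
```

In such a sketch, a new instance would be created at the start of every episode; both the prediction of Eq. (12) and the loss of Eq. (13) then reuse the same frozen masks.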
### _Fitting Error Correction_
The loss functions of Deep Pilco and PETS only consider the error between the one-step prediction and the ground truth, as shown in Eqs. (5) and (9). These loss functions do not distinguish between the errors caused by external disturbances and by the neural network approximation, and can therefore easily overfit the one-step prediction, which is heavily affected by system noises, while neglecting the fitting error of the dynamics model from a longer-term perspective. This defect may result in accumulated bias and deteriorated control capability in policies based on multiple-step predictions. To tackle this issue, DPETS employs a novel loss function that further corrects the fitting errors of neural networks by punishing incorrect predictions over two consecutive steps.
Denoting the sample over two consecutive steps \(t\) and \(t+1\) as \(\{\mathbf{s}_{t},\mathbf{a}_{t},\mathbf{s}_{t+1},\mathbf{a}_{t+1},\mathbf{s}_{t+2}\}\), \(\mathbf{x}_{t}=\{\mathbf{s}_{t},\mathbf{a}_{t}\}\), \(\mathbf{y}_{t}=\{\mathbf{s}_{t+1}\}\), \(\mathbf{y}_{t+1}=\{\mathbf{s}_{t+2}\}\), the loss function with fitting error correction is calculated in two parts following the middle of Fig. 1:
\[L_{FEC}(\mathbf{x}_{t},\mathbf{a}_{t+1},\mathbf{y}_{t},\mathbf{y}_{t+1})=L(\mathbf{x}_{t},\mathbf{y}_{t })\!+\!L^{\prime}(\mathbf{x}_{t},\mathbf{a}_{t+1},\mathbf{y}_{t+1}). \tag{14}\]
The first term focuses on the error of one-step prediction following Eq. (13). Define \(\mathbf{x}_{t+1}^{q}=\{\mathbf{\mu}_{\mathbf{W}}(\mathbf{x}_{t},\mathbf{z}^{q}),\mathbf{a}_{t+1}\}\) as the predicted mean in one-step prediction with particles set \(q\), the second term considers the corresponding error as:
\[L^{\prime}(\mathbf{x}_{t},\mathbf{a}_{t+1},\mathbf{y}_{t+1})= \tag{15}\] \[\sum_{q=1}^{Q}\left(\mathbf{E}^{\prime T}_{\mathbf{W}}\mathbf{\Sigma}_{\mathbf{W} }^{-1}\left(\mathbf{x}_{t+1}^{q},\tilde{\mathbf{z}}^{q}\right)\mathbf{E}^{\prime}{}_{\mathbf{W }}+\log\det\!\mathbf{\Sigma}_{\mathbf{W}}\left(\mathbf{x}_{t+1}^{q},\tilde{\mathbf{z}}^{q} \right)\right)\]
Fig. 3: Comparison of MC Dropout (left) and restrictive MC Dropout (right) in fitting noiseless sin functions. The training samples are shown as the black line; the output epistemic uncertainty is shown as the red region.
where \(\mathbf{E}^{\prime}_{\mathbf{W}}=\left[\mathbf{\mu}_{\mathbf{W}}\left(\mathbf{x}_{t+1}^{q},\tilde{\mathbf{z}}^{q}\right)-\mathbf{y}_{t+1}\right]\) is the difference between the output mean in the two-step prediction with particle set \(q\) and the ground truth \(\mathbf{y}_{t+1}\), and \(\tilde{\mathbf{z}}\) indicates another randomly selected set of particles, independent of \(\mathbf{z}\), for better generalization capability. During the training process with \(N\) samples, defining \((\mathbf{x}^{n},\mathbf{y}^{n})\) as the \(n\)-th sample with states and actions over two consecutive steps, the overall loss function of the neural networks is set as:
\[\mathcal{L}:=\frac{1}{N}\sum_{n=1}^{N}L_{FEC}(\mathbf{x}^{n},\mathbf{y}^{n})+\sum_{l=1 }^{L}\lambda_{l}(\|\mathbf{W}_{l}\|_{2}^{2}+\|\mathbf{b}_{l}\|_{2}^{2}). \tag{16}\]
The second term penalizes overly large weights.
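The two-step loss of Eqs. (14)-(15) can be sketched as below, assuming `net(x, z)` returns the mean and a diagonal variance under particle set `z`, with `particles_tilde` playing the role of the independent sets \(\tilde{\mathbf{z}}^{q}\); the weight decay of Eq. (16) is omitted for brevity.

```python
import torch

def fec_loss(net, x_t, a_t1, y_t, y_t1, particles, particles_tilde):
    """Two-step loss with fitting error correction, Eqs. (14)-(15)."""
    loss = 0.0
    for z, z_tilde in zip(particles, particles_tilde):
        mu, var = net(x_t, z)                 # one-step prediction
        loss = loss + (((mu - y_t) ** 2) / var).sum() \
            + torch.log(var).sum()            # Eq. (13) term
        x_t1 = torch.cat([mu, a_t1])          # feed predicted mean forward
        mu2, var2 = net(x_t1, z_tilde)        # two-step prediction
        loss = loss + (((mu2 - y_t1) ** 2) / var2).sum() \
            + torch.log(var2).sum()           # Eq. (15) term
    return loss
```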
Following PETS [19], the proposed method utilizes \(B\) ensembles of neural networks with independent weight matrices \(\{\mathbf{W}_{1},...,\mathbf{W}_{B}\}\) to further enhance the model's capability of representing epistemic uncertainty, in addition to the restrictive MC Dropout introduced in Section III-A. Each ensemble of neural networks is trained with \(Q\) independently sampled sets of particles from the fixed \(M\) sets in the current episode following Eq. (16).
### _Epistemic Uncertainty Propagation_
The existing works Deep Pilco [17] and PETS [19] propagate their predictions via Eqs. (6) and (10), considering both epistemic and aleatoric uncertainties. However, the aleatoric uncertainty, which is caused by data noises and external disturbances and is independent of the system dynamics, has a weaker Markov property; merging it with the epistemic uncertainty of the system could result in excessively large variances in multi-step prediction, which negatively affects the control performance of the policy.
We propose an efficient uncertainty propagation to filter the negative effect of aleatoric uncertainty in long-term prediction following the right side of Fig. 1. In the \(H\)-step prediction of the MPC-based policy, each ensemble of neural networks with weight matrix \(\mathbf{W}_{b}\) (the propagated states of two ensembles are shown in blue and red in Fig. 1) randomly selects \(P\) sets of particles and conducts the following prediction:
\[\left[\mathbf{\mu_{W_{b}}}\left(\mathbf{x}_{t+h}^{b,p},\mathbf{z}^{b,p}\right),\mathbf{\Sigma_{W_{b}}}\left(\mathbf{x}_{t+h}^{b,p},\mathbf{z}^{b,p}\right)\right]=\hat{f}_{\mathbf{W}_{b}}\left(\mathbf{x}_{t+h}^{b,p},\mathbf{z}^{b,p}\right) \tag{17}\]
where \(\mathbf{x}_{t+h}^{b,p}=\{\mathbf{s}_{t+h}^{b,p},\mathbf{a}_{t+h}\}\), \(\mathbf{z}^{b,p}\) is the \(p\)-th set of particles selected by ensemble \(b\), and \(h\) is the step index within the prediction horizon \(H\). The reward function in Eq. (2) at each step is calculated by the mean over all ensembles and particles to fully consider both epistemic and aleatoric uncertainties:
\[\mathcal{R}(\mathbf{s}_{t+h+1},\mathbf{a}_{t+h})\] \[=\frac{1}{BP}\sum_{b=1}^{B}\sum_{p=1}^{P}\mathcal{R}\left(\mathbf{\mu _{W_{b}}}(\mathbf{x}_{t+h}^{b,p},\mathbf{z}^{b,p}),\mathbf{\Sigma_{W_{b}}}(\mathbf{x}_{t+h}^{b, p},\mathbf{z}^{b,p})\right). \tag{18}\]
To filter out the unnecessary aleatoric uncertainty in long-term prediction, we omit the output variances in the propagation by setting the next step state to the output mean:
\[\mathbf{s}_{t+h+1}^{b,p}=\mathbf{\mu_{W_{b}}}(\mathbf{x}_{t+h}^{b,p},\mathbf{z}^{b,p}). \tag{19}\]
Only the epistemic uncertainty represented by \(B\) ensembles and \(P\) sets of particles would be passed to the next step. Please note that the uncertainty propagation introduced in this section is only employed in the MPC-based policy while the training process of \(B\) ensembles is independently conducted through \(Q\) sets of particles following Eq. (16) with full consideration of epistemic and aleatoric uncertainties.
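A minimal sketch of this mean-only propagation over Eqs. (17) and (19) follows, assuming `nets[b](x, z)` returns a (mean, variance) pair; the variance still enters the reward of Eq. (18) in a full implementation, but only the mean is handed to the next step.

```python
import torch

def propagate_epistemic(nets, particles, s_t, actions):
    """Propagate B x P trajectories with the output variance filtered
    out, following Eqs. (17) and (19)."""
    B, P = len(nets), len(particles[0])
    states = [[s_t.clone() for _ in range(P)] for _ in range(B)]
    for a in actions:                         # a_t, ..., a_{t+H-1}
        for b in range(B):
            for p in range(P):
                x = torch.cat([states[b][p], a])
                mu, _ = nets[b](x, particles[b][p])
                states[b][p] = mu             # Eq. (19): no variance passed
    return states
```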
```
Input: Sample set with warmup samples D
Initialize ensembles of models f̂_{W_b}, b = 1, ..., B
for episode k = 1, ..., K do
    Generate M fixed dropout particles {z^m}_{m=1}^M following
        Bernoulli distribution
    for time t = 1, ..., T do
        s_t = Observe_State()
        a_t = MPC_Policy(s_t)
        Execute_Action(a_t)
        s_{t+1} = Observe_State()
        if t > 0 then
            Expand sample set D ← D ∪ {s_{t-1}, a_{t-1}, s_t, a_t, s_{t+1}}
    Update f̂_{W_b}, b = 1, ..., B by sample set D based on Q times
        sampling on {z^m}_{m=1}^M following Eqs. (14), (15) and (16)

Function MPC_Policy(s_t):
    Sample dropout particles z^{b,p} from {z^m}_{m=1}^M
    Set B × P initial states s_t^{b,p} = s_t
    Optimize sequence [a_t, ..., a_{t+H-1}] by CEM to maximize the
        reward in Eq. (18), with states propagated following
        Eqs. (17) and (19)
    return a_t
```
**Algorithm 2** Learning Procedure of DPETS
### _Overview of DPETS_
In this section, we detail the whole process of the proposed DPETS following Algorithm 2. Given the sample set \(\mathcal{D}\) with warmup samples that are usually generated by random control actions, DPETS initializes \(B\) ensembles of neural networks and interacts with the target environment in \(K\) episodes. At the start of each episode, \(M\) fixed sets of particles are generated by restrictive MC Dropout. The agent then conducts a \(T\) steps rollout. At each step, it observes the current state, decides and executes the control action, and observes the next step state. Unlike the general probabilistic MBRL framework, DPETS collects the samples over two continuous steps to fulfill its loss function with fitting error correction. At the end of each episode, all \(B\) ensembles of neural networks will be independently updated by \(\mathcal{D}\) based
on \(Q\) times sampling on the fixed dropout particles following Eqs. (14), (15) and (16).
Based on the state propagation in Eqs. (17) and (19), the MPC-based policy in DPETS is conducted in parallel by \(B\) ensembles of neural networks and \(B\times P\) dropout particles. Calculating the reward function by Eq. (18), the optimization of the control sequence \([\mathbf{a}_{t},...,\mathbf{a}_{t+H-1}]\) is achieved by CEM [21] following PETS [19] to fairly evaluate the superiority of the proposed method compared with the existing work. Please note that it is straightforward to use other nonlinear optimization approaches in DPETS. In practice, the output variance of DPETS is adaptively bounded following Appendix A.1 of PETS [19].
## IV Experiment
### _Experimental Settings_
In this section, the proposed method DPETS was evaluated on six benchmark control tasks with increasing complexity: Inverted Pendulum, 7-DOF Pusher, HalfCheetah, Hopper, Ant and Walker2d, and one practical robot arm end-effector position control task (ur_ee_position). The proposed DPETS was implemented in PyTorch [22]. We selected Deep Pilco [17], PETS [19] and MBPO [20] as the MBRL baselines, and SAC [23], PPO [24] and DDPG [25] as the model-free RL baselines2. Please note that the trick of multiple pieces of training in one episode in MBPO was disabled to ensure all MBRL methods are trained once per episode for a fair comparison. The six benchmark tasks were developed based on OpenAI Gym [27] and Mujoco [28]. The practical robot manipulation task was from a robot-based simulator, robogym [29]. They were conducted with the parameters summarized in Table II. All experiments were conducted as three independent trials on a computational server with an Intel i9-12900 CPU, an NVIDIA GeForce RTX 3070 Ti GPU, 64GB memory and Ubuntu 18.04 OS.
Footnote 2: Deep Pilco was developed following [https://github.com/BrunoKM/deep-plico-torch](https://github.com/BrunoKM/deep-plico-torch), PETS and MBPO were developed based on [https://github.com/kkuang/maful-of-trials](https://github.com/kkuang/maful-of-trials) and [https://github.com/janemr/mbpo](https://github.com/janemr/mbpo). All model-free approaches were implemented by PARL [https://github.com/PaddlePaddle/PARL](https://github.com/PaddlePaddle/PARL) and PaddlePaddle [26].
### _Evaluation of Control Performances_
We first evaluated the learning capability of the proposed method on the six benchmark control tasks in Mujoco. The learning curves of DPETS and the other baselines were compared in Fig. 4. It was observed that DPETS enjoyed superiority in both average reward and convergence velocity. Among the other MBRL approaches, PETS had relatively good performances in the inverted pendulum and 7-DOF pusher tasks but converged slowly. MBPO worked in complicated scenarios like Hopper, Ant and Walker2d but could not reach the level of DPETS. Deep Pilco achieved the worst result and failed in all tasks except the inverted pendulum.
In the inverted pendulum task, DPETS significantly outperformed Deep Pilco and MBPO in the average reward through only \(1.5k\) steps (\(75\) episodes) interactions. PETS achieved a close performance with a slower convergence. Considering that all model-free approaches could not converge within \(75\) episodes, we also compared their average reward after \(200\) episodes (shown as dotted lines). DPETS outperformed DDPG, SAC and PPO while reducing over \(95\%\) interactions. This result indicated the excellent sample efficiency of DPETS.
In the 7-DOF pusher task, DPETS achieved a higher average reward than all MBRL baselines within \(15k\) steps (\(100\) episodes). Compared with the model-free baselines that converged with more than \(150k\) steps (\(1k\) episodes), DPETS outperformed SAC and PPO, while reaching a close performance to DDPG using only \(10\%\) interactions. Please note that the standard deviation in the right bottom came from PPO which started from a very low average return near \(-600\) and converged slowly.
In the HalfCheetah task, DPETS successfully outperformed all baselines within \(140k\) steps. Compared with the suboptimal policy learned by SAC with \(3000k\) steps, the proposed method achieved \(180\%\) higher average reward with \(95\%\) fewer interactions.
In the Hopper task, DPETS significantly outperformed other MBRL baselines within \(70k\) steps (the result of Deep Pilco was not displayed due to its extremely low average return). Although DPETS achieved an \(18\%\) lower average return than DDPG, which converged with more than \(3000k\) steps (while still surpassing the converged PPO and SAC with \(36.2\%\) and \(6.3\%\) higher average returns), it had a significantly higher average return than all model-free approaches within the first \(5\%\) of interactions.
In the Ant task, DPETS quickly learned the best policy with about \(8000\) average returns within \(500k\) steps while
| **Parameter** | **Inverted Pendulum** | **7-DOF Pusher** | **HalfCheetah** | **Hopper** | **Ant** | **Walker2d** | **UrEEPosition** |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Layers number (\(L\)) | 4 | 4 | 6 | 3 | 4 | 4 | 3 |
| Neuron number | 200 | 200 | 200 | 200 | 200 | 200 | 200 |
| Learning rate | \(10^{-3}\) | \(10^{-3}\) | \(10^{-3}\) | \(10^{-3}\) | \(10^{-3}\) | \(10^{-3}\) | \(10^{-3}\) |
| MPC horizon (\(H\)) | 25 | 30 | 30 | 90 | 30 | 60 | 30 |
| Rollout step (\(T\)) | 200 | 150 | 1000 | 1000 | 1000 | 1000 | 300 |
| Dropout size (\(M\)) | 5 | 5 | 5 | 5 | 5 | 5 | 5 |
| Ensembles (\(B\)) | 5 | 5 | 5 | 5 | 5 | 5 | 5 |
| Parallel sampling (\(P\)) | 4 | 4 | 4 | 5 | 4 | 4 | 4 |

TABLE II: Parameters of different control tasks
other MBRL baselines' best performances were all below \(4000\). Compared with the model-free RL baselines, it significantly outperformed the policies of DDPG, PPO and SAC that converged using six times more samples with over \(437.1\%\), \(133.6\%\) and \(7.2\%\) higher average returns.
In the last Walker2d task, the proposed DPETS consistently demonstrated its advantages over all MBRL baselines within \(400k\) steps; it outperformed MBPO, PETS and Deep Pilco with \(19.8\%\), \(277.7\%\) and \(388.1\%\) higher average returns. Compared with the best policy of SAC, which converged after \(3000k\) steps, DPETS achieved \(92.4\%\) of its average return using only \(13.3\%\) of the interactions.
### _Evaluation of Sample Efficiency_
Defining the sample efficiency as the number of episodes spent by each method to reach the lower boundary of the maximum average returns over all baselines, we evaluated it in Fig. 5, where the number of episodes was taken as the first episode reaching the lower boundary based on the average learning curve over three random trials. Please note that the methods that could not learn the corresponding task were removed from the comparison in each subfigure. In the inverted pendulum task, DPETS required only three episodes to reach the boundary while PETS and MBPO needed four and \(58\) episodes respectively. In comparison, all model-free approaches converged to the same performance using over \(100\) episodes. In the 7-DOF pusher task, DPETS reduced the required episodes by \(39\%\) and \(85.5\%\) compared to PETS and DDPG. Although SAC finally reached a slightly superior average return, it required over \(27\) times as many episodes to reach a certain level of performance. In the HalfCheetah task, DPETS reached the lower boundary of maximum average returns while reducing the used episodes by about \(14.3\%\) to \(67.6\%\) compared with other baselines. In the Hopper task, the proposed method required only nine episodes to reach a certain level. Although DDPG achieved the highest maximum average return, it required about \(40\) times as many interactions to meet the same level. In the Ant task, only SAC had a maximum average return close to DPETS but had far worse sample efficiency. It required over three times as many samples to reach the lower boundary of the maximum average returns. The same phenomenon was also observed in the Walker2d task, where DPETS converged to the lower boundary using only \(78\%\) and \(51\%\) of the samples of MBPO and SAC. Overall, the proposed DPETS showed a significant advantage in sample efficiency compared to both model-based and model-free baselines across a wide range of benchmark control tasks. Given a certain level of control performance, DPETS converged to it with the fewest interactions, which demonstrates its potential for real-world hardware where sampling can be extremely expensive.
### _Evaluation of Noise Tolerance_
To further investigate the learning capability of DPETS against external disturbances, we added Gaussian noises with a factor of \(0.05\) to the observed states in the six benchmark tasks above following [30]. According to the learning curves of average reward demonstrated in Fig. 6, all approaches required more samples and converged to a lower average reward under the additional disturbances while the proposed
Fig. 4: Learning curves of DPETS and other baselines in Mujoco benchmark tasks. The shaded region represents the corresponding standard deviation.
DPETS still enjoyed significant superiority. In the inverted pendulum task, DPETS outperformed other MBRL baselines within \(20k\) steps and reached the same performance as the model-free baselines using only \(25\%\) of the interactions. In the 7-DOF pusher task, DPETS quickly converged to the highest average reward compared with all baselines while the model-free approaches required \(20\) times more samples to reach a close performance. In the HalfCheetah task, DPETS learned the best control policy within \(150\) episodes, while the suboptimal policy learned by SAC required \(1k\) episodes to reach \(64\%\) of DPETS's average reward. In the Hopper task, DPETS was the only MBRL approach that quickly learned a policy comparable to the optimal one learned by DDPG after \(1240k\) steps. In the Ant task, DPETS quickly outperformed the suboptimal policy of SAC in the maximum average return using less than \(20\%\) of the interactions. As a comparison, the suboptimal MBRL method MBPO reached the level of PPO with a very large standard deviation. In the last Walker2d task, the proposed DPETS consistently showed better convergence and learning performance than the other MBRL approaches. Using only \(35\%\) of the interactions, it achieved a close control performance to both PPO and
Fig. 5: Number of episodes used by all compared methods to reach the lower boundary of the maximum average returns over six benchmark control tasks.
Fig. 6: Learning curves of DPETS and other baselines in Mujoco benchmark tasks with additional noises. The shaded region represents the corresponding standard deviation.
SAC while maintaining an acceptable standard deviation, which indicated a more stable learning process.
### _Ablation Test_
The average reward of the learned policy in the ablation test was summarized in Table III, where a task suffixed with N indicates additional Gaussian noises; DPETS-MC and DPETS-BE indicate DPETS using the MC Dropout from Deep Pilco and the bootstrap ensembles from PETS, respectively; and w/o FEC and w/o DU indicate the proposed method without the fitting-error-correction loss function and without distinguishing epistemic and aleatoric uncertainties during propagation, respectively. These results demonstrated that the roles of each component (restrictive MC Dropout, fitting error correction and epistemic uncertainty propagation) in DPETS become increasingly important as task complexity and external disturbances increase. In the original inverted pendulum task, the three components had little impact on the converged policy. Under Gaussian noises, MC Dropout resulted in \(2\%\) fewer returns, while bootstrap ensembles led to \(9\%\) fewer returns with a large standard deviation. An over \(10\%\) lower average reward was observed in DPETS w/o FEC and w/o DU. In the 7-DOF pusher task, DPETS had \(38\%\) fewer returns without using fitting error correction and epistemic uncertainty propagation. Under additional disturbances, DPETS achieved limited improvement (\(2\%\) higher returns), which was consistent with the results of DPETS and PETS in Fig. 6. In the original HalfCheetah task, DPETS-BE had \(14\%\) fewer average returns while other ablated approaches could not learn the task, with over \(40\%\) reduced average returns. Turning to the task under noises, only DPETS successfully learned the task, with over \(50\%\) higher average returns than all ablated approaches. In more complex scenarios such as Hopper, Ant, and Walker2d, fitting error correction and epistemic uncertainty propagation consistently contributed to a significant improvement in the maximum average returns, both in the original and noisy environments. Meanwhile, the MC Dropout and bootstrap ensembles resulted in deteriorated control performances.
Fig. 8: Predicted trajectories of the MPC-based policy in PETS and DPETS at step \(20\) in the rollout of inverted pendulum case study. The shaded region represents the predicted uncertainties.
Fig. 7: Trajectories of states of PETS and DPETS in one test rollout of inverted pendulum with additional disturbances. The shaded region represents the predicted uncertainties.
| **Task** | **DPETS** | **DPETS-MC** | **DPETS-BE** | **DPETS w/o FEC** | **DPETS w/o DU** |
| --- | --- | --- | --- | --- | --- |
| **Pendulum** | \(179.29\pm 1.26\) | \(176.65\pm 1.86\) | \(175.72\pm 1.21\) | \(175.43\pm 2.57\) | \(176.16\pm 2.19\) |
| **7-DOF Pusher** | \(-62.76\pm 7.43\) | \(-84.66\pm 8.37\) | \(-79.52\pm 25.9\) | \(-86.65\pm 24.13\) | \(-61.35\pm 8.82\) |
| **HalfCheetah** | \(25094.45\pm 14872.13\) | \(7261.7\pm 369.71\) | \(21582.52\pm 12059.06\) | \(13926.72\pm 5093.9\) | \(9263.82\pm 1025.2\) |
| **Hopper** | \(3427.88\pm 252.01\) | \(1967.51\pm 324.51\) | \(2394.6\pm 405.19\) | \(1641.47\pm 368.43\) | \(1200.54\pm 205.27\) |
| **Ant** | \(8420.22\pm 554.37\) | \(929.83\pm 4.79\) | \(844.37\pm 71.1\) | \(931.62\pm 72.35\) | \(895.28\pm 6.61\) |
| **Walker2d** | \(4151.32\pm 242.39\) | \(3417.73\pm 254.16\) | \(3572.09\pm 316.12\) | \(2949.23\pm 949.51\) | \(2322.61\pm 911.36\) |
| **Pendulum-N** | \(171.32\pm 11.09\) | \(167.74\pm 7.6\) | \(155.4\pm 15.14\) | \(143.78\pm 21.32\) | \(152.99\pm 18.51\) |
| **7-DOF Pusher-N** | \(-99.86\pm 5.65\) | \(-102.94\pm 12.67\) | \(-110.96\pm 15.34\) | \(-106.16\pm 15.34\) | \(-104.66\pm 6.38\) |
| **HalfCheetah-N** | \(11320.19\pm 2319.03\) | \(5831.79\pm 853.59\) | \(5610.18\pm 609.79\) | \(5408.26\pm 297.97\) | \(5576.09\pm 159.98\) |
| **Hopper-N** | \(2434.6\pm 216.6\) | \(1380.45\pm 104.52\) | \(2125.39\pm 587.48\) | \(1308.2\pm 125.4\) | \(1326.22\pm 287.8\) |
| **Ant-N** | \(5288.47\pm 382.37\) | \(911.38\pm 124.52\) | \(2126.32\pm 188.47\) | \(777.93\pm 74.72\) | \(867.1\pm 158.78\) |
| **Walker2d-N** | \(2439.63\pm 205.73\) | \(2084.54\pm 221.54\) | \(2379.27\pm 164.34\) | \(1702.43\pm 106.89\) | \(1302.44\pm 145.32\) |

TABLE III: Average maximum returns of DPETS with ablated components
### _Case Study_
In this subsection, we detailed one rollout of the policies learned by PETS and DPETS in the inverted pendulum (after \(100\) episodes) and HalfCheetah (after \(170\) episodes) tasks with disturbances as two case studies to further demonstrate the superiority of the proposed method compared with related MBRL approaches. The state trajectories in the inverted pendulum task were shown in Fig. 7, where the blue lines are the trajectories of the real states, the red lines are the trajectories of predicted states after \(10\) steps, and the translucent areas represent the corresponding standard deviations. It is observable that the proposed method enjoyed superior prediction accuracy compared with PETS in all dimensions. Compared with the inaccurate prediction with a high standard deviation in PETS, the proposed method significantly alleviated the prediction error while properly describing the uncertainty by the standard deviation. We analyzed the prediction of action sequences decided by the MPC-based policies of PETS and DPETS at step \(20\) in Fig. 8. Without the filtering of aleatoric uncertainty in the state propagation of the MPC-based policy, the predicted standard deviation of PETS rapidly increased over the full horizon and resulted in not only a hugely biased prediction but also an unreliable control sequence. As a comparison, DPETS successfully removed the effect of aleatoric uncertainty and focused on the epistemic uncertainty generated by the model in prediction. Based on the accurate estimation of future states, DPETS planned a more reliable control sequence to maximize the reward over the long horizon.
The superior characteristics of DPETS in control behavior and uncertainty propagation were also observed in the more challenging HalfCheetah task. As shown in Fig. 9, unlike PETS, which presented uncertainty even when the model estimation was accurate, the proposed DPETS only expressed high uncertainty in a few states where the estimation differed greatly from the observation and quickly returned to confident, near-deterministic states. DPETS learned an efficient (but not very realistic) control strategy which is about twice as efficient in terms of movement per step compared with PETS. This result was consistent with the significantly higher maximum average return over all baselines in Fig. 6. The two case studies above successfully indicated the superiority of DPETS. The probabilistic neural network model using restrictive MC Dropout and the loss function with fitting error correction enjoyed both improved accuracy in long-term prediction and a proper description of the aleatoric uncertainty. The MPC-based policy with epistemic uncertainty propagation greatly reduced the prediction error caused by the excessive inflation of aleatoric uncertainty in decision-making, significantly improving its control capability.
### _Robot Control Test_
Regarding the potential for engineering implementation, we finally evaluated DPETS in a real-robot-based simulation, robogym [29]. The target task ur_ee_position aims to control the UR5 robot arm to reach randomly generated targets from its initial state. The learning curves of all compared methods were demonstrated in Fig. 10. DPETS quickly converged to an optimal policy with about \(40\) average return within the first \(1000\) steps, while PETS only reached about \(20\) average return, and neither MBPO nor Deep Pilco could learn this task. As a comparison, all model-free baselines could not learn any meaningful policy in the first \(30000\) steps. Even after \(1000k\) interactions with the environment, none of them achieved a performance comparable to DPETS. Compared with the optimal model-free baseline DDPG, the proposed method learned a superior policy with an over \(100\%\) higher average return while reducing sample usage by \(99\%\). This
Fig. 10: Learning curves of DPETS and other baselines in UR5 end-effector position control task. The shaded region represents the corresponding standard deviation.
Fig. 9: Trajectories of states of PETS and DPETS in one test rollout of HalfCheetah with additional disturbances. The shaded region represents the predicted uncertainties.
result clearly indicated the great efficiency of DPETS in robot control.
We further investigated the characteristics of DPETS in MPC and the corresponding behaviors. Evaluating the learned policies of DPETS and PETS after \(30000\) steps' training, the average returns of the MPC prediction were summarized in Fig. 11. DPETS quickly optimized stable trajectories, converged its model uncertainty within \(6\) steps, and continuously executed the trajectories with confidence. This feature contributed to smoother control trajectories of the robot arm in the space of the end-effector position during the MPC optimization, as demonstrated in Fig. 12. In contrast, PETS struggled to converge to confident control trajectories due to its unstable uncertainty propagation, resulting in not only lower average returns with excessively large predictive uncertainty but also frequently shaking control actions of the robot arm.
## V Conclusion
In this paper, DPETS, a novel probabilistic MBRL approach based on neural networks, was proposed to tackle the issues of prediction stability, prediction accuracy and policy quality in probabilistic MBRL. DPETS stably predicted the system uncertainty by introducing a restrictive MC Dropout that naturally combines dropout and trajectory sampling. A loss function with fitting error correction was proposed to reduce the approximation error of neural networks while improving their accuracy in long-term prediction. An uncertainty propagation scheme that filters aleatoric uncertainty was further developed to enhance the control capability of the MPC-based policy. Validated on six benchmark tasks under additional disturbances and one practical robot arm control task, DPETS not only outperformed the related MBRL approaches in average returns and convergence velocity but also achieved superior control performance compared with well-known model-free RL methods with significant sample efficiency. These results indicated the potential of DPETS as a stable and sample-efficient MBRL approach to solve control problems under complicated disturbances from a probabilistic perspective.
|
2309.13846 | Controllable Operations of Edge States in Cross-One-dimensional
Topological Chains | Topological edge states are recently attracting intense interest due to their
robustness in the presence of disorder and defects. However, most approaches
for manipulating such states require global modulations of the system's
Hamiltonian. In this work, we develop a method to control edge states using
local interactions of a four-node junction between cross-one-dimensional
topological atomic chains. These junction interactions can give rise to tunable
couplings between the hybridized edge states within different geometric
symmetry, allowing us to implement robust quantum state transfer and SWAP gate
between the two topological chains, where the edge states are pair-encoded as a
single qubit. Moreover, when the atoms are precisely positioned to couple
waveguides, the correlated decay caused by the environment enables the
anti-symmetric edge states to present subradiant dynamics and thus show
extremely long coherence time. These findings open up new possibilities for
quantum technologies with topological edge states in the future. | Xian-Liang Lu, Ze-Liang Xiang | 2023-09-25T03:17:46Z | http://arxiv.org/abs/2309.13846v1 | # Controllable Operations of Edge States in Cross-One-dimensional Topological Chains
###### Abstract
Topological edge states are recently attracting intense interest due to their robustness in the presence of disorder and defects. However, most approaches for manipulating such states require global modulations of the system's Hamiltonian. In this work, we develop a method to control edge states using local interactions of a four-node junction between cross-one-dimensional topological atomic chains. These junction interactions can give rise to tunable couplings between the hybridized edge states within different geometric symmetry, allowing us to implement robust quantum state transfer and SWAP gate between the two topological chains, where the edge states are pair-encoded as a single qubit. Moreover, when the atoms are precisely positioned to couple waveguides, the correlated decay caused by the environment enables the anti-symmetric edge states to present subradiant dynamics and thus show extremely long coherence time. These findings open up new possibilities for quantum technologies with topological edge states in the future.
As a fundamental element in topological materials, the edge states predicted by the bulk-boundary correspondence emerge at the interface where the topological invariant changes [1; 2]. One of the most significant features of topological systems is their robustness against disorder and defects [3; 4; 5], which leads to topologically protected energy information flow [6; 7; 8]. In addition, the zero-energy edge states [9] in the one-dimensional (1D) topological system with chiral Hamiltonian can separate from bulk states with large bandgaps [10; 11]. These unique advantages show great potential in realizing robust quantum state transfer with 1D lattices [12; 13; 14; 15; 16; 17; 18], as well as performing topological quantum computation in superconducting wire networks [19; 20; 21], where the unpaired Majorana zero modes are encoded as qubits. Thus, achieving the quantum control of such topological edge states has become a critical issue.
The transfer and operations of multiple protected edge states are vital in robust quantum computation and large-scale quantum information processing. Previous works have developed techniques based on adiabatic protocols to drive a single edge state [15; 16; 17; 22; 23; 24], in which the operation time is generally limited by the adiabatic theorem unless accelerated strategies are employed [17]. The topological charge pump, introduced by Thouless [25], can give rise to a quantized transport of particles in each cyclic evolution [26; 27]. In recent studies, non-quantized topological pumping is exploited to transfer the edge states [28; 29], and experimental realizations have been reported in a variety of platforms including photonics [30; 31; 32; 33], elastic lattices [34], magneto- and electro-mechanical systems [35; 36], and acoustic structures [37; 38; 39]. However, due to the topological robustness, these approaches demand simultaneous modulations of system parameters to drive the edge states, and they are incapable of building multi-qubit operations with edge states. On the other hand, the quantum gates based on braiding in topological chains also require adiabatic moving of domain walls [40; 41]. Thus, it is still challenging to implement direct quantum control of topological edge states with few-body interactions and in non-adiabatic processes. All of these give rise to the following question: Can one build tunable interactions between edge states to implement such transport and quantum gates through dynamical evolution?
In this Letter, we present a concise approach to control the topological edge states with few-body local interactions, where we leverage the interactions within a four-node junction formed by two intersecting topological chains of two-level atoms to implement direct manipulations of edge states. In the topologically nontrivial phase, the paired zero-energy edge states [10; 42] emerge at the four edges of the system and are effectively controlled by the local junction interactions. When we encode the four edge states as two-qubit states, we find that the \(C_{4}\) and \(\mathbb{Z}_{2}\) geometric symmetries of the system can contribute to different qubit-qubit interactions, enabling us to implement robust state transfer and a topological SWAP gate via a non-adiabatic process. Moreover, we study the dissipative dynamics of the edge states when each atomic chain is specifically structured to couple to an individual 1D electromagnetic environment, which leads to topologically protected super/subradiance of edge states. This allows us to generate and transfer remote entanglement between edge atoms. Our proposal provides a means of manipulating edge states via local interactions and paves the way for robust qubit operations as well as long-time storage.
_Model._--We consider a topological system as depicted in Fig. 1 (a), where two identical topological chains cross at their mutual center. Each chain consists of \(2N\) identical two-level atoms to form a Su-Schrieffer-Heeger (SSH) model [43], i.e., the atoms are located in two sets of sublattices with alternated hopping rates, which can be described by a tight-binding Hamiltonian. In addition, each chain couples to a waveguide in order to induce nonlocal dissipative couplings between atoms. Such a configuration can be realized with artificial atomic
systems, such as superconducting circuits [44; 45]. The Hamiltonian of atomic chains reads \(H_{\rm tot}=H_{0}+H_{\rm XSSH}+H_{I}\), where \(H_{0}=\sum_{l=1,2}\sum_{i}\omega_{a}\sigma_{l,i}^{+}\sigma_{l,i}^{-}\) denotes the bare Hamiltonian with the transition frequency \(\omega_{a}\). \(H_{\rm XSSH}\) describes the two chains with no inter-chain hopping as \(H_{\rm XSSH}=\sum_{l=1,2}\sum_{i}(J_{1}\sigma_{l,iA}^{+}\sigma_{l,iB}^{-}+J_{2} \sigma_{l,i+1A}^{+}\sigma_{l,iB}^{-}+\text{H.c.})\), where \(l\) is the chain's index, and \(\sigma_{l,iA}^{+(-)}\) and \(\sigma_{l,iB}^{+(-)}\) are the raising (lowering) operators for atoms in the corresponding sublattices A and B of the \(i\)-th unit cell, respectively. The staggered nearest-neighbor hopping amplitudes \(J_{1}\) and \(J_{2}\) represent the inter- and intra-cell hoppings, respectively. In the single-excitation subspace, when \(J_{1}<J_{2}\) the paired edge states emerge at the two ends of each finite-sized SSH chain, which hybridize from \(|\psi_{L}\rangle\) and \(|\psi_{R}\rangle\) into the symmetric (\(|\psi_{S}\rangle\)) and antisymmetric (\(|\psi_{A}\rangle\)) edge states, with energies [42]
\[\epsilon_{S}=-\epsilon_{A}=(-1)^{N+1}\frac{J_{2}\sinh\lambda}{\sinh(N+1) \lambda}, \tag{1}\]
where \(\lambda\) is given by \(\sinh{(N\lambda)}/\sinh{[(N+1)\lambda]}=J_{1}/J_{2}\). In the inset of Fig. 1 (a), we show the configuration of the central four-node junction. The atomic emitters are coupled through tunable couplers, which contribute to the interactions between the two topological chains,
\[\begin{split} H_{I}=(K_{1}\sigma_{1,s}^{+}+K_{2}\sigma_{1,s+1}^{+ })\sigma_{2,s}^{-}+\text{H.c.}\\ +(K_{3}\sigma_{1,s}^{+}+K_{4}\sigma_{1,s+1}^{+})\sigma_{2,s+1}^{-} +\text{H.c.},\end{split} \tag{2}\]
where \(s\) and \(s+1\) are the location index of the central atoms. Without loss of generality [45], the cell number \(N\) is assumed to be odd and thus \(s=\{(N+1)/2,A\}\), \(s+1=\{(N+1)/2,B\}\). There are four well-defined edge states when we consider \(H_{I}\) as a perturbation, which can be decomposed into three parts, corresponding to edge-edge, edge-bulk, and bulk-bulk couplings. It offers different interactions between these hybridized edge states, as illustrated in Fig. 1 (b), whereas the bulk-edge coupling is suppressed due to the large gap between the edge and bulk states [11]. Additionally, this gap also protects the edge states from thermal noise, and thus we focus on the dynamics within the zero-energy manifold of edge states.
_Bright/Dark states and state transfer_--First we consider the simplest case where all the local junction interactions are identical, i.e., \(K_{i}=K\). The system now possesses \(C_{4}\) symmetry and each chain exhibits the inversion symmetry [45], leading to degenerate subspaces of odd-parity eigenstates \(|\psi_{l,2n-1}\rangle\), with \(n=1,\cdots,N\). Here, \(|\psi_{l,n}\rangle\) denotes the \(n\)-th eigenstate of SSH chain in ascending order of energy. Consequently, the symmetric edge states \(|\psi_{1S}\rangle,|\psi_{2S}\rangle\) become coupled modes owing to their even parity, as shown in Fig. 2 (a), and in contrast the anti-symmetric states \(|\psi_{1A}\rangle,|\psi_{2A}\rangle\) are uncoupled modes. Using the perturbation theory, we obtain the effective interaction Hamiltonian
\[H_{\rm eff}^{\rm id}=\widetilde{H}_{0}+g\left(|\Psi_{1S}\rangle\langle\Psi_{2 S}|+\text{H.c.}\right), \tag{3}\]
where \(\widetilde{H}_{0}=\sum_{l=1,2}2\epsilon_{S}\left(|\Psi_{lS}\rangle\langle\Psi_{ lS}|-|\Psi_{lA}\rangle\langle\Psi_{lA}|\right)\) is the unperturbed Hamiltonian, with the coupling strength
\[g=2K\eta^{2},\quad\eta=\frac{\sinh[(N+1)\lambda/2]}{\sqrt{2\sum_{i}\sinh^{2}[( N+1-i)\lambda]}}. \tag{4}\]
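As a minimal numeric sketch of Eqs. (1) and (4), the transcendental equation \(\sinh(N\lambda)/\sinh[(N+1)\lambda]=J_{1}/J_{2}\) can be solved by root finding. The parameter set below is the one quoted in the caption of Fig. 2, and the normalization sum is assumed to run over \(i=1,\ldots,N\).

```python
import numpy as np
from scipy.optimize import brentq

def edge_state_coupling(J1, J2, N, K):
    """Solve sinh(N*lam)/sinh((N+1)*lam) = J1/J2 for lam, then
    evaluate eps_S from Eq. (1) and eta, g from Eq. (4)."""
    f = lambda lam: np.sinh(N * lam) / np.sinh((N + 1) * lam) - J1 / J2
    lam = brentq(f, 1e-9, 20.0)
    eps_S = (-1) ** (N + 1) * J2 * np.sinh(lam) / np.sinh((N + 1) * lam)
    norm = 2.0 * sum(np.sinh((N + 1 - i) * lam) ** 2 for i in range(1, N + 1))
    eta = np.sinh((N + 1) * lam / 2) / np.sqrt(norm)
    return eps_S, eta, 2 * K * eta ** 2       # g = 2*K*eta^2

# parameter set quoted in the caption of Fig. 2 (J2 as the unit)
print(edge_state_coupling(J1=0.4, J2=1.0, N=5, K=0.07))
```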
We note that the second-order perturbation induced by the residual bulk-edge coupling is considerably weak when \(N\) is odd [45]. In this case, only the symmetric edge states have the transition element, which results in an oscillation in their subspace, manifesting a topological state transfer from one chain to another via the local junction in a non-adiabatic process. As shown in Fig. 2(b), this leads to the formation of bright and dark states, where we denote the edge states \(\{|\psi_{1S}\rangle,|\psi_{1A}\rangle,|\psi_{2A}\rangle,|\psi_{2S}\rangle\}\) as \(\{|\uparrow\uparrow\rangle,|\uparrow\downarrow\rangle,|\downarrow\uparrow\rangle,|\downarrow\downarrow\rangle\}\), respectively. In Fig. 2(e), we show the process of an excitation swap between the bright states of different chains with transfer time \(T_{t}=\pi/2g\), whereas the dark states keep stationary.
As regards the rotationally symmetric configuration \(K_{1}=K_{3}=K_{+}\) while \(K_{2}=K_{4}=K_{-}\) (\(K_{+}>K_{-}\)), the system remains a \(\mathbb{Z}_{2}\) residual symmetry. The effective
Figure 1: (a) Schematic of the compound topological system. Two identical atomic chains with bipartite lattice have staggered hopping amplitudes \(J_{1}\) and \(J_{2}\) respectively, and there are tunable inter-chain hoppings \(K_{i}\) at the central node. Each chain is coupled to an individual waveguide. (b) Eigenenergies of the model (with an \(\omega_{0}\) frequency shift) and the zero-energy manifold composed of the hybridized edge states in the single-excitation subspace. The interactions between edge states are mediated by tunable central network couplings \(K_{i}\).
Hamiltonian in the zero-energy manifold reads
\[H_{\rm eff}^{\rm rot}=\widetilde{H}_{0}+g_{S}|\psi_{1S}\rangle\langle\psi_{2S}|+g _{A}|\psi_{1A}\rangle\langle\psi_{2A}|+{\rm H.c.}, \tag{5}\]
where \(g_{S}=\eta^{2}(K_{+}+K_{-})\), \(g_{A}=\eta^{2}(K_{+}-K_{-})\). Fig. 2(c) shows that the edge states are shared by two chains and distribute uniformly at the four edges. This distribution, protected by \(\mathbb{Z}_{2}\) symmetry, is independent of the parameters, where the edge states are denoted by \(|\psi_{S}^{+}\rangle=(1,1,1,1)^{\rm T}/2\), \(|\psi_{S}^{-}\rangle=(1,1,-1,-1)^{\rm T}/2\) and \(|\psi_{A}^{+}\rangle=(1,-1,-1,-1)^{\rm T}/2\), \(|\psi_{A}^{-}\rangle=(1,-1,-1,1)^{\rm T}/2\) in terms of the basis \(\{|\psi_{1L}\rangle,|\psi_{1R}\rangle,|\psi_{2L}\rangle,|\psi_{2R}\rangle\}\). In this configuration, the junction interactions provide an additional interacting channel for the two-qubit exchange process through anti-symmetric edge states, resulting in their conversion from the dark to bright states, as shown in Fig. 2 (d). These two interacting subspaces correspond to two distinct uncoupled spin-exchange processes. We can perfectly control the switching on/off of the interacting channels and their strength by changing the local junction interactions. In a more general case with broken \(\mathbb{Z}_{2}\) symmetry, more interaction channels become accessible [45].
_Transitions and quantum gates._--Having a platform with tunable interactions between edge states, we investigate its effective model in the picture of two spins. We solely use the inversion symmetry to reduce its original Hamiltonian in the zero-energy manifold. The effective spin model in the rotationally symmetric scenario has an anisotropic Heisenberg Hamiltonian \(H_{\rm eff}^{\rm rot}=t\sigma_{z}\otimes\sigma_{z}+u\sigma_{x}\otimes\sigma_{x }-v\sigma_{y}\otimes\sigma_{y}\), where \(t=2\epsilon_{S}\), \(u=\eta^{2}K_{+}\), \(v=\eta^{2}K_{-}\). In the case with \(C_{4}\) symmetry, it reduces to \(u=v\), where the rotating wave term \(\sigma_{1}^{+}\sigma_{2}^{-}+\)H.c. vanishes in the spin-flipping process \(|\uparrow\downarrow\rangle\langle\downarrow\uparrow|\) while the counter-rotating wave term \(\sigma_{1}^{+}\sigma_{2}^{+}+\)H.c. dominates. When \(u\neq v\), the edge states oscillate in two decoupled subspaces \(\Pi:\{|\uparrow\uparrow\rangle,\,|\downarrow\downarrow\rangle\}\) and \(\Sigma:\{|\uparrow\downarrow\rangle,\,|\downarrow\uparrow\rangle\}\), respectively, as mentioned above, offering another degree of freedom to manipulate the topological qubit states.
We analyze the coherent dynamics in our system to illustrate the operation of edge-state qubits, specifically focusing on the SWAP gate [46]. The time evolution of an arbitrary state is given by
\[\psi(t)=\frac{1}{2}\left(\begin{array}{cccc}\mathcal{R}_{\Pi}^{+}&0&0& \mathcal{R}_{\Pi}^{-}\\ 0&\mathcal{R}_{\Sigma}^{+}&\mathcal{R}_{\Sigma}^{-}&0\\ 0&\mathcal{R}_{\Sigma}^{-}&\mathcal{R}_{\Sigma}^{+}&0\\ \mathcal{R}_{\Pi}^{-}&0&0&\mathcal{R}_{\Pi}^{+}\end{array}\right)\psi(0) \tag{6}\]
where \(\mathcal{R}_{\Pi}^{\pm}=U_{S}^{+}\pm U_{S}^{-}\) and \(\mathcal{R}_{\Sigma}^{\pm}=U_{A}^{+}\pm U_{A}^{-}\) are the matrix elements of the different subspaces, and the rotating vectors \(U_{\nu}^{\pm}(t)=e^{-iE_{\nu}^{\pm}t/\hbar}\) in the complex plane can contribute to the interference of different components, with energies \(E_{S}^{\pm}=t\pm(u+v)\) and \(E_{A}^{\pm}=-t\pm(u-v)\). The oscillation frequencies in the \(\Sigma\) and \(\Pi\) subspaces are \(\Omega_{1}=2(u-v)\) and \(\Omega_{2}=2(u+v)\), respectively. Therefore, when we adjust the junction interactions such that \(K_{+}=3K_{-}\), the unitary evolution becomes a SWAP gate at the time
\[T_{\rm SWAP}=\frac{2n\pi}{2(u+v)}=\frac{m\pi}{2(u-v)}=\frac{2k\pi}{2(t+v)}, \tag{7}\]
where \(n,m,k\in\mathbb{Z}\). As verified by numerical simulation in Fig. 3(a), the SWAP gate with different initial states is realized, and the gate fidelity \(\bar{\mathcal{F}}>0.999\) after a complete time cycle \(T_{\rm SWAP}\), where we use the average fidelity \(\bar{\mathcal{F}}=\int d\psi_{q}{\rm Tr}\{|\psi_{q}\rangle\langle\psi_{q}| \rho(t)\}\)[47] with \(\psi_{q}\) being the target state. The result is calculated by the total Hamiltonian \(H_{\rm tot}\), which shows good consistency with our analysis based on the effective Hamiltonian \(H_{\rm eff}^{\rm rot}\). Eq. (7) determines the sweet point of the junction interactions \((K_{-}^{0},K_{+}^{0})\), with \(K_{+}^{0}=3K_{-}^{0}\), \(\epsilon_{S}=2\eta^{2}(4k-1)K_{-}^{0}\). In Fig. 3(b), the high-fidelity regions are separated by different \(k\) values, and the diagram reveals the rotationally symmetric structure by exchanging \(K_{+}\leftrightarrow K_{-}\). The gate time can also be tuned by modifying the junction interactions \(K_{+}\), as depicted in Fig. 3(c), which scales as \(T_{\rm SWAP}\sim K_{+}^{-\overline{N}}\) and the fidelity maintains \(\bar{\mathcal{F}}>0.99\) within a certain range of \(K_{+}\)[45]. In the SSH chain, the edge states protected by the chiral symmetry show robustness against the imperfections of hopping rates \(J_{i}\), i.e., the off-diagonal disorder. However, the quantum gate operations and state transfer are subject to this disorder since it changes the energy of edge states [14; 16], which results in the breakdown of condition Eq. (7). In our model, the energy
Figure 2: Junction-induced tunable interactions between edge states with different geometries. Real-space wave functions of the edge states in (a) and (c) correspond to \(C_{4}\) and \(\mathbb{Z}_{2}\) symmetries, respectively. (b) and (d) show the schematic illustration of tunable spin-exchange interactions between hybridized edge states. The dark state interacting channel opens when the system breaks into \(\mathbb{Z}_{2}\) symmetry from \(C_{4}\) symmetry. Here we choose the parameters as \(J_{1}=0.4J_{2}\), with \(N=5\) for each chain. The junction interactions are \(K_{i}=0.07J_{2}\) in (a) and \(K_{1}=K_{3}=0.07J_{2}\), \(K_{2}=K_{4}=0.05J_{2}\) in (c), respectively. (e) Schematic of the edge-state transfer in a non-adiabatic process. The state transfer is simulated with the initial state \(|\psi(0)\rangle=|\psi_{1\rm S}\rangle\).
levels of the edge states are also related to the junction interactions. Thus, by tuning the junction interactions, we can still reconstruct the SWAP gate in Eq. (6) in the presence of the off-diagonal disorder [45]. In Fig. 3(d), we illustrate the gate fidelity with and without the modulation of \(K_{i}\) as a function of the disorder strength \(\delta\), where \(J_{i}\to J_{i}+\delta_{i}\) and the disorder \(\delta_{i}\in[-\delta,\delta]\) is uniformly distributed. The modulated junction interactions significantly reduce the deviation and fluctuation of the fidelity, which shows a plateau (\(\bar{\mathcal{F}}>0.99\)) even when the disorder is larger than the junction interaction \(K_{-}^{0}\).
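As a direct numerical check of Eq. (7), the sketch below picks the smallest commensurate solution \(n=m=k=1\), i.e., \(u=3v\) and \(t=3v\) with \(T_{\rm SWAP}=\pi/(4v)\) (arbitrary units, \(\hbar=1\)), exponentiates \(H_{\rm eff}^{\rm rot}\), and scans the evolution time for the best phase-insensitive overlap with SWAP:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

v = 1.0
u, t = 3.0 * v, 3.0 * v   # Eq. (7) with n = m = k = 1 (arbitrary units)
H = t * np.kron(sz, sz) + u * np.kron(sx, sx) - v * np.kron(sy, sy)

def fidelity(U, V):
    """Overlap |Tr(V^dag U)| / dim, insensitive to a global phase."""
    return abs(np.trace(V.conj().T @ U)) / U.shape[0]

Ts = np.linspace(0.01, 4.0, 2000)
fids = np.array([fidelity(expm(-1j * H * T), SWAP) for T in Ts])
i = fids.argmax()
print(f"max SWAP overlap {fids[i]:.5f} at T = {Ts[i]:.4f} (pi/(4v) = {np.pi / 4:.4f})")
```

The scan peaks at \(T\simeq\pi/(4v)\) with near-unit overlap, consistent with the analysis above; disorder robustness can be probed in the same spirit by perturbing \(J_{i}\) in the full chain Hamiltonian and re-tuning \(K_{\pm}\), as in Fig. 3(d).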
_Parity-associated super/subradiance and remote entanglement._--In practice, the quantum system is inevitably affected by the environment, and this can be characterized by a non-unitary evolution. Here, we utilize a 1D waveguide to confine the radiative emission from atoms. The interference of the collective radiation can suppress the decay of hybridized edge states through the waveguide bath [48; 49; 50]. As shown in Fig. 1 (a), each atomic chain couples to an independent waveguide, which induces the dissipative couplings and gives rise to the correlated two-body dissipative dynamics [51; 52; 53; 54],
\[\dot{\rho}(t)=-\frac{i}{\hbar}\left[H_{0}+H_{\rm nl},\rho(t)\right]+\mathcal{L }\rho, \tag{8}\]
where \(H_{\rm nl}=\sum_{l=1,2}\sum_{i,j}\hbar g_{ij}(\sigma_{li}^{+}\sigma_{lj}^{-}+\sigma_{li}^{-}\sigma_{lj}^{+})\) denotes the nonlocal interaction induced by the exchange of virtual photons, and the Lindblad superoperator [55; 56; 57; 58; 59; 60; 61]
\[\mathcal{L}\rho=\sum_{l=1,2}\sum_{i,j=1}^{2N}\gamma_{ij}\Big{(}\sigma_{li}^{-} \rho\sigma_{lj}^{+}-\frac{1}{2}\sigma_{li}^{+}\sigma_{lj}^{-}\rho-\frac{1}{2} \rho\sigma_{li}^{+}\sigma_{lj}^{-}\Big{)} \tag{9}\]
shows the correlated decay of the system. The non-local interaction and correlated decay are analytically given by [45] as \(g_{ij}=\gamma_{0}\sin\left(2\pi d_{ij}/\lambda_{a}\right)/2\) and \(\gamma_{ij}=\gamma_{0}\cos\left(2\pi d_{ij}/\lambda_{a}\right)\), respectively, where \(\gamma_{0}\) is the spontaneous emission rate into the waveguide bath. The two-body interaction \(g_{ij}\) and decay \(\gamma_{ij}\) oscillate with the interatomic distance \(d_{ij}\), scaled by the characteristic wavelength \(\lambda_{a}=2\pi c/\omega_{0}\). We assume that the hopping rate \(J_{i}\) is much larger than the atom-field coupling, such that the interaction term merely plays the role of a weak disorder. Here, we precisely design each atomic chain such that the distance between bulk atoms is \(\lambda_{a}/2\), and the edge atoms are separated from the bulk atoms by \(\lambda_{a}/4\), as shown in Fig. 4(a). In this case, the dissipative dynamics of the edge atoms decouples from that of the bulk atoms, accompanied by parity-associated super/subradiant states. The non-Hermitian Hamiltonian in the single-excitation subspace can be written as
\[H_{\rm nh}=H_{\rm SSH}-i\Gamma_{E,-}|\Gamma_{E,-}\rangle\langle\Gamma_{E,-}|- i\Gamma_{B}|\Gamma_{B}\rangle\langle\Gamma_{B}|, \tag{10}\]
where \(|\Gamma_{E,-}\rangle\) and \(|\Gamma_{B}\rangle\) denote the anti-symmetric dissipative eigenmodes for the edge and bulk atoms
Figure 3: (a) The gate fidelity of different initial states. The initial states are \(\psi_{S2}\), \(\psi_{A2}\), \(\left(\psi_{S2}+\psi_{A2}\right)/\sqrt{2}\) (blue dashed), respectively. The solid black line denotes average fidelity \(\bar{F}\). (b) The high-fidelity regions with tunable junction interactions \(K_{i}\). (c) Tunable gate time by modifying junction interactions with different values of \(J_{2}\). Here, we set \(K_{+}=3K_{-}\), and \(J_{1}\) is the optimal value to achieve the highest fidelity. (d) The gate fidelity as a function of the disorder strength \(\delta\). The solid lines correspond to the results averaged over \(10^{3}\) disorder instances with (red) and without (yellow) the modulation of \(K_{i}\), where the shaded area indicates a standard deviation for the yellow line (the fluctuation of the red line is less than \(0.01\)). Here, we take \(K_{\pm}^{0}\) at \(k=2\) regime, \(J_{1}=0.51J_{2}\) [except in (c)], and \(N=5\).
Figure 4: (a) Schematic of a structured atomic chain coupled to a 1D waveguide. (b) Effective decay rates for hybridized edge states. The solid line is calculated by using the master equation. The inset shows their parity and decay rates in the presence of disorder. (c) Entanglement of the edge atoms from highly localized edge states. The inset shows their asymptotic limit with \(N\) for \(J_{1}/J_{2}=0.3\) (blue), \(0.4\) (orange), \(0.5\) (yellow), \(0.6\) (purple), respectively. (d) Generation and transfer of the remote entanglement. The solid line shows the population of Bell states in a single SSH chain, where the red points correspond to concurrence \(C_{0}=0\), \(C_{\rm max}=0.49\). The dashed line shows the transfer of Bell state between two topological chains. Here, \(J_{1}=0.25J_{2}\), \(\gamma_{0}=0.035J_{2}\), \(N=5\) and \(N=3\) for the solid and dashed lines, respectively.
respectively, with decay rates \(\Gamma_{E,-}=2\gamma_{0}\) and \(\Gamma_{B}=(2N-2)\gamma_{0}\). The effective decay rates of the hybridized edge states are given by
\[\gamma_{A/S}=\Gamma_{E,-}|\langle\Gamma_{E,-}|\psi_{A/S}\rangle|^{2}+\Gamma_{B} |\langle\Gamma_{B}|\psi_{A/S}\rangle|^{2}, \tag{11}\]
with \(\gamma_{A}=0\), \(\gamma_{S}\simeq 2\gamma_{0}\). As shown in Fig. 4(b), the edge states with odd/even parity exhibit super/subradiance, which corresponds to a dissipative bright/dark state [62; 63; 64; 65]. Moreover, in the inset of Fig. 4(b), we demonstrate the topological stability of the super/subradiance in the presence of disorder, where the parity function is defined as \(P(\psi)=\sum_{j=1}^{2N}\left|\langle\psi|\sigma_{j}^{+}|G\rangle+\langle\psi|I\sigma_{j}^{+}|G\rangle\right|^{2}-1\). Although the disorder breaks the inversion symmetry, the parity shows robustness since the edge states are protected by the chiral symmetry of the SSH chain.
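The dissipative decoupling produced by this structured spacing can be verified numerically. The sketch below assumes one explicit set of atomic positions consistent with the geometry described above (bulk atoms \(\lambda_{a}/2\) apart, edge atoms offset by \(\lambda_{a}/4\); all values illustrative), builds \(\gamma_{ij}\), and diagonalizes it; the collective rates reproduce the bright values \(\Gamma_{B}=(2N-2)\gamma_{0}\) and \(2\gamma_{0}\), with all remaining modes dark:

```python
import numpy as np

N, lam, gamma0 = 5, 1.0, 1.0   # unit cells, wavelength lambda_a, bare rate (illustrative)
n = 2 * N

# Assumed positions: bulk atoms lambda_a/2 apart, edge atoms lambda_a/4 from the bulk.
x = np.empty(n)
x[1:-1] = lam / 4 + np.arange(n - 2) * lam / 2
x[0], x[-1] = 0.0, x[-2] + lam / 4

d = np.abs(x[:, None] - x[None, :])
gamma = gamma0 * np.cos(2 * np.pi * d / lam)     # correlated decay gamma_ij
g = 0.5 * gamma0 * np.sin(2 * np.pi * d / lam)   # coherent coupling g_ij (for completeness)

# Edge-bulk dissipative couplings vanish for this spacing ...
print("max |gamma(edge, bulk)| =", np.abs(gamma[0, 1:-1]).max())
# ... so the collective decay spectrum splits into edge and bulk sectors:
rates = np.sort(np.linalg.eigvalsh(gamma))[::-1]
print("collective rates:", np.round(rates, 6))
print("expected bright rates:", (2 * N - 2) * gamma0, "and", 2 * gamma0)
```

Which hybridized edge state inherits the dark rate depends on the sign of the edge-edge correlation, i.e., on the exact edge separation modulo \(\lambda_{a}\).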
The similarity between highly localized edge states and the Bell states \(|\Psi^{\pm}\rangle=(|e_{1}g_{2N}\rangle\pm|g_{1}e_{2N}\rangle)/\sqrt{2}\) also enables the generation of remote entangled states. By tracing out the bulk atoms, we investigate the concurrence \(\mathcal{C}(\rho)\) of the reduced density matrix \(\rho_{\text{edge}}\) of the edge states. Fig. 4(c) shows the concurrence and the inverse participation ratio (IPR) of the two edge atoms versus \(J_{1}/J_{2}\), where the IPR measures the localization of the wave functions, with \(\text{IPR}(\psi)=\sum_{i}|\psi\left(r_{i}\right)|^{4}=\sum_{i}p_{i}^{2}\) and \(p_{i}\) being the probability at the \(i\)-th site. Furthermore, both quantities approach asymptotic values, becoming independent of the cell number \(N\) as \(N\) increases [45]. In order to prepare remote entanglement, one can first excite an edge atom via a pump field, as shown in Fig. 4(a). The entangled state \(|\Psi^{-}\rangle\) decays quickly, while \(|\Psi^{+}\rangle\) shows a slight oscillation with other dark states around the steady value of the mean concurrence \(\bar{C}\simeq 0.5|\langle\psi_{S}|\Psi^{+}\rangle|^{4}\). Fig. 4(d) displays the remote entanglement transfer in the decoherence-free subspace between the two topological chains, through the tunable couplings of the edge states, which also inherits their topological robustness.
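The edge-state quantities above can be made concrete with a minimal single-excitation reconstruction (our own sketch, with illustrative parameters): diagonalize an SSH chain, extract the two near-zero hybridized edge states, and evaluate the IPR together with the mean-concurrence estimate \(\bar{C}\simeq 0.5|\langle\psi_{S}|\Psi^{+}\rangle|^{4}\) quoted above:

```python
import numpy as np

def ssh_chain(N, J1, J2):
    """Single-excitation SSH Hamiltonian: 2N sites with alternating hoppings J1, J2."""
    hop = np.where(np.arange(2 * N - 1) % 2 == 0, J1, J2)
    H = np.diag(hop, 1)
    return H + H.T

N, J2 = 5, 1.0
bell = np.zeros(2 * N)
bell[0] = bell[-1] = 1 / np.sqrt(2)   # |Psi+> restricted to the single-excitation subspace

for J1 in (0.3, 0.4, 0.5, 0.6):
    w, V = np.linalg.eigh(ssh_chain(N, J1, J2))
    edge = V[:, np.argsort(np.abs(w))[:2]]    # two near-zero hybridized edge states
    ipr = (edge ** 4).sum(axis=0)             # IPR(psi) = sum_i p_i^2
    overlap = np.abs(bell @ edge).max()       # |<psi_S|Psi+>| (the larger of the two)
    print(f"J1/J2 = {J1:.1f}:  IPR = {ipr.max():.3f},  Cbar ~ {0.5 * overlap ** 4:.3f}")
```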
_Conclusion._--In summary, we have proposed a topological system for studying controllable interactions via a local junction. The dynamics is constrained to the zero-energy manifold, providing two interacting subspaces for transitions of edge states. By tuning the junction interactions with \(C_{4}\) and \(\mathbb{Z}_{2}\) geometric symmetries, we find different interacting channels between edge states. Based on this theory, we implement robust quantum state transfer and a SWAP gate for the topological system, whose performance confirms that the effective spin model is an excellent approximation of the total Hamiltonian. Beyond the coherent dynamics, this system shows that the symmetric edge states are immune to environmental dissipation, which leads to protection against both disorder and decoherence. Our work paves the way towards controlling topological edge states with few-body local interactions in a non-adiabatic process.
_Acknowledgments._--We thank Y.-X. Liu, J.-Q. Liao and Z. Peng for stimulating discussions. This work is supported by the National Key R&D Program of China (Grant No. 2019YFA0308200), the National Natural Science Foundation of China (Grant No. 11874432).
|
2309.12580 | Flow-induced vibration of a flexible cantilever in tandem configuration | The present work investigates the fluid-structure interaction (FSI) of a
flexible cylindrical cantilever in a tandem configuration. A fully coupled
fluid-structure solver based on the three-dimensional incompressible
Navier-Stokes equations and Euler-Bernoulli beam equation is employed to
numerically examine the coupled dynamics of the cantilever. We assess the
extent to which such a flexible structure could sustain oscillations in both
subcritical and post-critical regimes of Reynolds number ($Re$).
Spatio-temporal power transfer patterns, response amplitudes, and vorticity
dynamics are quantified and compared between isolated and tandem
configurations. Results of our analysis indicate that the cantilever in tandem
configuration is prone to sustained oscillations dependent on $Re$ and the
reduced velocity parameter ($U^*$). In the subcritical $Re$ regime, the
cantilever exhibits sustained oscillations with peak transverse oscillation
amplitudes occurring within a specific range of $U^*$. Within this range, the
transverse oscillations demonstrate lock-in behavior and synchronization with
the vortex shedding frequency. The vorticity dynamics in the subcritical $Re$
regime reveal that in the tandem configuration, the presence of the upstream
cylinder significantly modifies the wake structure, delaying vortex formation
and extending the near wake. In the post-critical $Re$ regime, the cantilever
shows a broader range of sustained oscillations in terms of $U^*$, with single-
and multi-frequency dynamics driven by vortex-body interactions. The power
transfer analysis shows cyclic energy exchange patterns between the fluid and
flexible structure, with significant variations in the hydrodynamic loading
along the cantilever. The findings of this work help broaden the understanding
of sustained oscillations in flexible cantilevers and are relevant to the
design of cantilever flow sensors. | Shayan Heydari, Rajeev K Jaiman | 2023-09-22T02:15:13Z | http://arxiv.org/abs/2309.12580v2 | # Sustained oscillation of flexible cantilevers without vortex shedding
###### Abstract
The present work investigates the fluid-structure interaction (FSI) of a flexible cylindrical cantilever beam at subcritical Reynolds numbers (\(Re\)). A fully-coupled fluid-structure solver based on the three-dimensional (3D) incompressible Navier-Stokes equations and Euler-Bernoulli beam theory is employed to numerically examine the coupled dynamics of the beam. We assess the extent to which such a flexible cylindrical beam could sustain oscillations in this \(Re\) regime when it is either exposed to a steady upstream wake (i.e., tandem cylinder configuration) or subjected to an externally applied base excitation. Our results indicate that within a particular range of reduced velocity parameter (\(U^{*}\)), the beam experiences sustained oscillations in both scenarios, leading to periodic vortex shedding downstream. The mechanism governing the sustained oscillations is characterized as synchronization, during which the frequency of the cross-flow fluid loading matches the beam's first-mode natural frequency. When the beam is subjected to base excitation, the critical Reynolds number for vortex shedding (\(Re_{c}\)) is found to reduce to \(Re_{c}\approx 5\). Above this threshold, vortex shedding is found to occur by stimulating the pair of counter-rotating vortices in the near-wake region. For the tandem cylinder configuration, the beam is shown to exhibit figure-eight-shaped tip motion trajectories during its oscillatory response. However, various patterns of tip motion trajectories, including figure-eight, and chaotic-type responses, are observed when the beam is under external base excitation. The findings of this work aim to generalize our understanding of sustained oscillation in flexible cylindrical cantilevers and have relevance to the development of bio-inspired cantilever flow sensors.
## 1 Introduction
Fluid-structure interactions are ubiquitous in nature and various engineering applications. One intriguing manifestation of FSIs occurs when a flexible body with a bluff cross-section is exposed to a steady incident flow perpendicular to its length. When the Reynolds number, which is based on the body's characteristic length (\(D\)) and free-stream velocity (\(U_{0}\)), is above a critical value, often approximated as \(Re_{c}\approx 45\) (Jackson (1987)), the shedding of von Karman vortices can induce sustained oscillations in the body, a phenomenon widely known as vortex-induced vibrations (VIVs). The study of VIVs has garnered substantial attention over the past few decades primarily owing to the intricate vortex dynamics and nonlinear physics that manifest during VIVs. For a bluff body experiencing VIVs, studies have shown
that there is a particular range of system parameters, known as the synchronization (also sometimes called lock-in) regime, within which the frequency of vortex shedding deviates from Strouhal's relationship, which determines the vortex-shedding frequency of the body's stationary counterpart, and becomes equal or close to the body's first-mode natural frequency. During VIVs, the vibration amplitudes are known to exhibit bell-shaped trends as functions of the reduced velocity parameter \(U^{*}\) and have been shown to be of the order \(O(D)\) in the cross-flow direction within the synchronization regime, as reviewed by Williamson & Govardhan (2004) and Sarpkaya (2004).
Although VIVs have been extensively studied at supercritical Reynolds numbers (i.e., \(Re>Re_{c}\)), recent research has unveiled intriguing phenomena in the subcritical regime of Reynolds number (i.e., \(Re<Re_{c}\)). In particular, studies on the coupled dynamics of flexible bluff bodies, including flexible circular cylinders (Heydari _et al._ (2022); Bourguet (2020); Buffoni (2003)), and elastically-mounted rigid cylinders (Miyanawala & Jaiman (2019); Boersma _et al._ (2021); Cossu & Morino (2000); Mittal & Singh (2005); Dolci & Carmo (2019); Meliga & Chomaz (2011)), have revealed that, in some scenarios, sustained oscillations and periodic vortex shedding could occur in the subcritical regime of \(Re\). For instance, in previous research (Heydari _et al._ (2022)), we investigated the FSI of a long flexible cylindrical cantilever beam for \(Re\in[20,40]\) using high-fidelity numerical simulations. The results of our numerical experiments revealed that the flexible cantilever could experience sustained oscillations, with a periodic vortex shedding pattern evident downstream, for Reynolds numbers as low as \(Re=22\) within the synchronization regime.
The findings regarding the sustained oscillation of flexible cylindrical bodies at subcritical Reynolds numbers challenge the conventional understanding that vortex shedding exclusively occurs in the supercritical regime of \(Re\) and emphasize the need to expand our knowledge of flow-induced vibrations, especially at subcritical Reynolds numbers. Such knowledge is not only relevant to engineering applications but also holds significant promise within the context of biological flow sensing. Many biological species employ mechanosensory hairs and other flow-sensing mechanisms to detect flow features and minute changes in flow patterns (Beem & Triantafyllou (2015)). By investigating the complexities of sustained oscillations in these structures, it becomes possible to inform the design of artificial flow sensors tasked with measuring flow information and differentiating between various flow patterns. In this context, it is essential to understand the role of wake interference and base excitation on the sustained oscillation of flexible cylinders in the subcritical \(Re\) regime.
This study examines the coupled dynamics of a flexible cylindrical cantilever beam, as a canonical model of a biological flow sensor, targeting the subcritical regime of \(Re\). Two distinct configurations will be explored: (i) a tandem cylinder arrangement, where two cylinders are placed in line, and (ii) an isolated beam subjected to external base excitation. These configurations have the potential to significantly influence the oscillatory response of the beam and alter the wake dynamics in the subcritical \(Re\) regime, as demonstrated by Bourguet (2023) in the case of an elastically-mounted rigid cylinder under forced rotation. The current research is expected to expand our knowledge of sustained oscillation in flexible cantilevers and is relevant to biological flow sensing and engineered flow sensors.
## 2 Problem description and numerical methodology
To investigate the coupled dynamics of the flexible cantilever, we utilize a 3D computational framework. A schematic of the computational domain with details of the domain size and boundary conditions is given in Fig. (1a). The cantilever is taken as a circular cylinder of diameter \(D\) and length \(L=100D\). For the tandem cylinder arrangement, a rigid stationary cylinder is positioned in the upstream region at a streamwise distance \(x_{0}\) from the cantilever.
The no-slip boundary condition is enforced on the surface of the cantilever, denoted as \(\Gamma^{\rm fs}\), and the surface of the rigid stationary cylinder. A uniform flow of velocity \(\mathbf{u}^{\rm f}=(U_{0},0,0)\) is applied at the inflow surface and the slip boundary condition is imposed at the side, top, and bottom surfaces of the computational domain, as illustrated in Fig. (1a). For the outflow surface, the traction-free boundary condition, given as \(\mathbf{\sigma}^{\rm f}\mathbf{.}\mathbf{n}^{\rm f}=\mathbf{0}\), is specified, where \(\mathbf{\sigma}^{\rm f}\) is the Cauchy stress tensor for a Newtonian fluid and \(\mathbf{n}^{\rm f}\) is the unit normal vector to the outflow surface. The unsteady 3D Navier-Stokes equations are employed to predict the flow dynamics. Using an arbitrary Lagrangian-Eulerian (ALE) reference frame on the fluid domain \(\Omega^{\rm f}(t)\), the Navier-Stokes equations are written as:
\[\rho^{\rm f}\frac{\partial\mathbf{u}^{\rm f}}{\partial t}\bigg{|}_{ \hat{x}^{\rm f}}+\rho^{\rm f}(\mathbf{u}^{\rm f}-\mathbf{u}^{\rm m})\cdot\nabla\mathbf{u}^{ \rm f} =\nabla\cdot\mathbf{\sigma}^{\rm f}+\mathbf{b}^{\rm f}\ \ \ \text{on}\ \ \Omega^{\rm f}(\rm t), \tag{1}\] \[\nabla\cdot\mathbf{u}^{\rm f} =0\ \ \ \text{on}\ \ \Omega^{\rm f}(\rm t), \tag{2}\]
where \(\rho^{\rm f}\) is the density of the fluid, \(\mathbf{u}^{\rm f}=\mathbf{u}^{\rm f}(\mathbf{x}^{\rm f},t)\) and \(\mathbf{u}^{\rm m}=\mathbf{u}^{\rm m}(\mathbf{x}^{\rm f},t)\) are the fluid and mesh velocities defined for each spatial point \(\mathbf{x}^{\rm f}\in\Omega^{\rm f}(t)\), respectively, \(\mathbf{b}^{\rm f}\) is the body force applied to the fluid and \(\mathbf{\sigma}^{\rm f}\) is the Cauchy stress tensor, given as \(\mathbf{\sigma}^{\rm f}=-p\mathbf{I}+\mu^{\rm f}(\nabla\mathbf{u}^{\rm f}+(\nabla\mathbf{u}^ {\rm f})^{T})\), where \(p\) denotes the fluid pressure, and \(\mu^{\rm f}\) is the dynamic viscosity of the fluid. The first term in Eq. (1) represents the partial derivative of \(\mathbf{u}^{\rm f}\) with respect to time while the ALE referential coordinate \(\hat{x}^{\rm f}\) is kept fixed. The fluid forcing acting on the cantilever's surface is calculated by integrating the surface traction on the fluid-structure interface \(\Gamma^{\rm fs}\). The instantaneous coefficients of the lift and drag forces are then computed as:
\[C_{\rm L}=\frac{1}{\frac{1}{2}\rho^{\rm f}U_{0}^{2}DL}\int_{\Gamma^{\rm fs}}( \mathbf{\sigma}^{\rm f}\cdot\mathbf{n})\cdot\mathbf{n}_{\rm y}{\rm d}\Gamma,\ \ \ \ C_{\rm D}=\frac{1}{\frac{1}{2}\rho^{\rm f}U_{0}^{2}DL}\int_{\Gamma^{\rm fs}}( \mathbf{\sigma}^{\rm f}\cdot\mathbf{n})\cdot\mathbf{n}_{\rm x}{\rm d}\Gamma, \tag{3}\]
where \(\mathbf{n}_{\rm x}\) and \(\mathbf{n}_{\rm y}\) are the Cartesian components of the unit outward normal vector \(\mathbf{n}\). A stabilized Petrov-Galerkin finite element method is employed to discretize the fluid domain \(\Omega^{f}\) into \(n_{el}\) non-overlapping finite elements \(\Omega^{e}\) in space such that \(\Omega^{f}=\bigcup_{e=1}^{n_{el}}\Omega^{e}\). The employed computational grid, as shown in Fig. (1b), consists of unstructured prism elements, with a boundary layer mesh around the cylinders. The accuracy of the computational grid and the validity of the numerical solver have been substantiated in our prior studies (Joshi & Jaiman (2017); Heydari _et al._ (2022)).
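As an illustration of how Eq. (3) is evaluated on a discretized interface, the following schematic post-processing sketch (not the solver's actual implementation; all inputs are synthetic) sums the facet tractions \(\mathbf{\sigma}^{\rm f}\cdot\mathbf{n}\) over \(\Gamma^{\rm fs}\); as a sanity check, a uniform pressure on a closed surface yields zero net force:

```python
import numpy as np

def force_coefficients(sigma, normals, areas, rho, U0, D, L):
    """Discrete counterpart of Eq. (3): integrate sigma . n over interface facets."""
    traction = np.einsum('mij,mj->mi', sigma, normals)   # sigma . n per facet
    F = (traction * areas[:, None]).sum(axis=0)          # total fluid force
    q = 0.5 * rho * U0**2 * D * L                        # reference force scale
    return F[1] / q, F[0] / q                            # (C_L, C_D)

# Sanity check: uniform pressure p0 on a closed cylindrical slice gives zero net force.
M = 360
phi = np.linspace(0.0, 2 * np.pi, M, endpoint=False)
normals = np.stack([np.cos(phi), np.sin(phi), np.zeros(M)], axis=1)
areas = np.full(M, 2 * np.pi / M)                            # unit radius, unit height
sigma = -1.0 * np.repeat(np.eye(3)[None, :, :], M, axis=0)   # sigma = -p0*I with p0 = 1
print(force_coefficients(sigma, normals, areas, rho=1.0, U0=1.0, D=2.0, L=1.0))
```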
The flexible cantilever is taken as a slender structure with small lateral motions, hence, its dynamic response is modeled using the Euler-Bernoulli beam theory (Blevins (2016)). The structural domain \(\Omega^{\rm s}\) consists of structure coordinates \(\mathbf{x}^{\rm s}=(x,y,z)\). The transverse displacements \(\mathbf{w}^{\rm s}(z,t)\) are solved using the Euler-Bernoulli beam equation excited by a distributed unsteady fluid loading per unit length \(\mathbf{f}^{\rm s}\). Neglecting the damping and shear effects, the equation of motion of the beam is written as:
\[m\frac{\partial^{2}\mathbf{w}^{\rm s}(z,t)}{\partial t^{2}}+EI\frac{\partial^{4} \mathbf{w}^{\rm s}(z,t)}{\partial z^{4}}=\mathbf{f}^{\rm s}(z,t), \tag{4}\]
where \(m=\rho^{\rm s}A\) is the mass per unit length of the beam, with \(\rho^{\rm s}\) and \(A\) being the density of the structure and the cross-sectional area, respectively. The Young's modulus and second moment of area of the beam are denoted by \(E\) and \(I\), respectively. For the cases with base excitation, the transverse displacements \(\mathbf{w}^{\rm s}(z,t)\) are computed as:
\[\mathbf{w}^{\rm s}(z,t)=\mathbf{w}^{\rm s}_{rel}(z,t)+\mathbf{w}^{\rm s}_{b}(t), \tag{5}\]
where \(\mathbf{w}^{\rm s}_{b}(t)\) is the displacement of the beam at its base and \(\mathbf{w}^{\rm s}_{rel}(z,t)\) is the relative
displacement to the base. Hence, Eq. (4) is generalized as:
\[m\left(\frac{\partial^{2}\mathbf{w}_{rel}^{\mathrm{s}}(z,t)}{\partial t^{2}}+\frac{ \partial^{2}\mathbf{w}_{b}^{\mathrm{s}}(t)}{\partial t^{2}}\right)+EI\frac{ \partial^{4}\mathbf{w}_{rel}^{\mathrm{s}}(z,t)}{\partial z^{4}}=\mathbf{f}^{\mathrm{s}}( z,t). \tag{6}\]
The boundary conditions at the fixed end of the cantilever beam are given as \(\mathbf{w}_{rel}^{\mathrm{s}}(0,t)=0\) and \(\left.\frac{\partial\mathbf{w}_{rel}^{\mathrm{s}}(z,t)}{\partial z}\right|_{z=0}=0\). To find a solution for Eq. (6), we consider a mode superposition approach. We define the solution to the beam's relative displacement as:
\[\mathbf{w}_{rel}^{\mathrm{s}}(z,t)=\sum_{\mathrm{i=1}}^{\infty}\mathbf{\gamma}_{ \mathrm{i}}(t)S_{\mathrm{i}}(z), \tag{7}\]
in which i is the mode number, \(\mathbf{\gamma}_{\mathrm{i}}(t)\) is a vector containing unknown time functions, and \(S_{\mathrm{i}}\) is the beam's i-th natural vibration mode. For the considered cantilever beam, the natural vibration modes are taken as the sums of sine, cosine, sinh, and cosh functions written as:
\[S_{\mathrm{i}}\left(z\right)=\cosh\left(\frac{\lambda_{\mathrm{i}}z}{L}\right) -\cos\left(\frac{\lambda_{\mathrm{i}}z}{L}\right)-\sigma_{\mathrm{i}}\sinh \left(\frac{\lambda_{\mathrm{i}}z}{L}\right)+\sigma_{\mathrm{i}}\sin\left( \frac{\lambda_{\mathrm{i}}z}{L}\right), \tag{8}\]
where \(\lambda_{\mathrm{i}}\) and \(\sigma_{\mathrm{i}}\) are dimensionless parameters dependent on the mode number (see Blevins (2016) for values of \(\lambda_{\mathrm{i}}\) and \(\sigma_{\mathrm{i}}\)). In our analysis, we consider a sinusoidal base excitation given as \(\mathbf{w}_{b}^{\mathrm{s}}(t)=\mathbf{F}_{b}\sin(2\pi f_{b}t)\), where \(\mathbf{F}_{b}\) is a vector containing the components of the base excitation amplitude in the transverse directions and \(f_{b}\) is the frequency of the base excitation. Substituting Eq. (7) into Eq. (6) and considering the orthogonality conditions,
Figure 1: Problem specification: (a) schematic of the computational domain with details of the domain size and boundary conditions (For the isolated beam configuration, the upstream cylinder is omitted from the computational domain, and the beam is positioned at distances \(15D\) and \(45D\) from the inflow and outflow surfaces, respectively.), (b) the unstructured finite element grid with a close-up view of the boundary layer mesh.
Eq. (6) is rewritten as:
\[\frac{\partial^{2}\mathbf{\gamma}_{\rm i}(t)}{\partial t^{2}}+\omega_{\rm i}^{2}\mathbf{\gamma}_{\rm i}(t)=\frac{1}{mL}\int_{0}^{L}\mathbf{f}^{\rm s}(z,t)S_{\rm i}(z)\;\mathrm{d}z-\frac{1}{L}\frac{\partial^{2}\mathbf{w}_{\rm b}^{\rm s}(t)}{\partial t^{2}}\int_{0}^{L}S_{\rm i}(z)\;\mathrm{d}z, \tag{9}\]
where \(\omega_{\rm i}\) is defined as \(\omega_{\rm i}=\frac{\lambda_{\rm i}^{2}}{L^{2}}\sqrt{\frac{EI}{m}}\). The solution to Eq. (9) is obtained using the trapezoidal integration rule. The relative displacements are then calculated using Eq. (7). Finally, the beam's displacement is computed using Eq. (5). The continuity of the velocity and traction is also satisfied at the fluid-structure interface. For a comprehensive review of the numerical algorithm and implementation details, the reader is referred to Jaiman & Joshi (2022).
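The modal machinery of Eqs. (5)-(9) can be sketched compactly. The script below treats the structural part only, in a single transverse direction, with the fluid load \(\mathbf{f}^{\rm s}\) set to zero and placeholder beam and excitation values; it uses the standard clamped-free eigenvalues \(\lambda_{\rm i}\) with \(\sigma_{\rm i}=(\sinh\lambda_{\rm i}-\sin\lambda_{\rm i})/(\cosh\lambda_{\rm i}+\cos\lambda_{\rm i})\), and it assumes the usual normalization \(\int_{0}^{L}S_{\rm i}^{2}\,\mathrm{d}z=L\) implicit in Eq. (9):

```python
import numpy as np
from scipy.integrate import solve_ivp

L, EI, m = 1.0, 1.0, 1.0                        # placeholder beam properties
lam = np.array([1.8751, 4.6941, 7.8548])        # clamped-free eigenvalues (first 3 modes)
sig = (np.sinh(lam) - np.sin(lam)) / (np.cosh(lam) + np.cos(lam))
omega = (lam / L) ** 2 * np.sqrt(EI / m)        # modal angular frequencies

z = np.linspace(0.0, L, 401)
a = np.outer(lam, z) / L
S = np.cosh(a) - np.cos(a) - sig[:, None] * (np.sinh(a) - np.sin(a))

# Trapezoidal-rule spatial integral of each mode shape (right-hand side of Eq. (9))
dz = z[1] - z[0]
S_int = 0.5 * dz * (S[:, :-1] + S[:, 1:]).sum(axis=1)

Fb, fb = 0.1 * L, 0.8 * omega[0] / (2 * np.pi)  # assumed base amplitude and frequency
wb = lambda t: Fb * np.sin(2 * np.pi * fb * t)
wb_dd = lambda t: -Fb * (2 * np.pi * fb) ** 2 * np.sin(2 * np.pi * fb * t)

def rhs(t, y):                                  # y = [gamma_i, dgamma_i/dt], f^s = 0
    gam, gdot = np.split(y, 2)
    gdd = -omega ** 2 * gam - wb_dd(t) * S_int / L
    return np.concatenate([gdot, gdd])

sol = solve_ivp(rhs, (0.0, 10.0 / fb), np.zeros(6), max_step=0.01 / fb)
tip = sol.y[:3, -1] @ S[:, -1] + wb(sol.t[-1])  # Eq. (5) evaluated at z = L
print(f"tip displacement at t = {sol.t[-1]:.2f}: {tip:.4f}")
```

In the coupled solver, the forcing integral \(\int_{0}^{L}\mathbf{f}^{\rm s}S_{\rm i}\,\mathrm{d}z\) would be refreshed from the fluid tractions at every time step; here it is omitted to keep the sketch self-contained.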
The dimensionless parameters relevant to this study are mass ratio (\(m^{*}\)), Reynolds number \(Re\), and reduced velocity \(U^{*}\) defined as:
\[m^{*}=\frac{4m}{\pi D^{2}\rho^{\rm f}},\qquad Re=\frac{\rho^{\rm f}U_{0}D}{\mu^{\rm f}},\qquad U^{*}=\frac{U_{0}}{f_{\rm n}D}, \tag{10}\]
where \(f_{\rm n}\) is the first-mode natural frequency of the beam taking into account the effect of added fluid mass. In our present study, we set the mass ratio at \(m^{*}=1\), Reynolds number \(Re\leqslant 40\), and reduced velocity \(U^{*}\in[5,19]\). The reduced velocity is modified by altering the velocity of the free-stream flow while maintaining a constant Reynolds number value.
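For concreteness, the two control parameters can be set independently: choosing \(U_{0}\) fixes \(U^{*}\), and the viscosity is then re-tuned to hold \(Re\) constant. A minimal sketch of this bookkeeping (all values illustrative; the added-mass dependence of \(f_{\rm n}\) on the fluid is neglected here):

```python
rho, D, fn = 1.0, 1.0, 1.0    # illustrative fluid density, diameter, natural frequency
Re, Ustar = 40.0, 9.0         # target nondimensional operating point

U0 = Ustar * fn * D           # free-stream speed realizing the target U*
mu = rho * U0 * D / Re        # viscosity adjusted to keep Re fixed
print(f"U0 = {U0}, mu = {mu:.4f} -> Re = {rho * U0 * D / mu:.0f}, U* = {U0 / (fn * D):.0f}")
```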
## 3 Sustained oscillations at subcritical Reynolds number
The response characteristics of the cantilever beam in the tandem cylinder configuration are provided in § 3.1. Vibration amplitudes and frequencies, as a function of the amplitude and frequency of the base excitation, are presented in § 3.2. The flow visualization and the vorticity dynamics are discussed in § 3.3.
### Response dynamics of tandem cylinders
First, we investigate the coupled dynamics of the cantilever beam in the tandem cylinder configuration. The streamwise distance between the two cylinders, denoted by \(x_{0}\), is set at either \(x_{0}=5D\) or \(x_{0}=10D\). Figure (2a) shows the variation of the root-mean-square (rms) value of the cross-flow vibration amplitude (\(A_{y}^{rms}/D\)) at the free end of the cantilever for \(U^{*}\in[5,19]\) at \(Re=40\). Compared to the isolated cantilever beam, where the oscillations occur for \(U^{*}>5\) (Heydari _et al._ (2022)), the cantilever beam in the tandem arrangement starts to exhibit sustained oscillations for \(U^{*}>7\). The oscillations are present within \(U^{*}\in(7,17)\) for the studied tandem configurations. In the case of an elastically-mounted rigid cylinder in a tandem arrangement, previous research (Mysa _et al._ (2016)) has demonstrated that at \(Re=100\) the cylinder maintains a constant amplitude of oscillation, larger than its isolated counterpart, for reduced velocities \(U^{*}\geqslant 15\). However, our present results indicate that within the subcritical regime of the Reynolds number, sustained oscillations are not present at such high values of \(U^{*}\). As shown in Fig. (2a), when \(U^{*}\geqslant 17\), the cantilever beam in the tandem arrangement remains in its steady position (i.e., no sustained oscillations) at both \(x_{0}=5D\) and \(x_{0}=10D\). It is important to highlight that for the considered tandem arrangements, oscillations are absent across all studied \(U^{*}\) values when the Reynolds number is set at \(Re=30\) or below.
In all the instances where the cantilever beam exhibits sustained oscillations, there is a frequency match between the frequency of the lift coefficient and the frequency of the cross-flow oscillations, as shown in Fig. (2b). The power spectra of the cross-sectional drag and lift coefficients, along with the power spectra of the vibration amplitudes are shown in Figs. (2c)
and (2d) for the tandem cylinder configuration at \(x_{0}=5D\) and \(U^{*}=9\). As seen in these figures, there is a frequency match between the frequency of the cross-flow (streamwise) oscillation and the frequency of the lift (drag) coefficient. According to Figs. (2c) and (2d), the streamwise oscillation frequency is twice the frequency of the cross-flow vibrations. This 2:1 ratio between the frequency of the streamwise-to-cross-flow oscillations is observed across all \(U^{*}\) values where the beam exhibits sustained oscillations. A typical figure-eight type of tip motion trajectory, often associated with this 2:1 ratio, is hence evident during the
Figure 2: Response characteristics of the cantilever beam at \(Re=40\): (a) cross-flow vibration amplitude at the free end of the cantilever (i.e., \(z/L=1\)) as a function of \(U^{*}\), (b) variations of the cross-flow vibration frequency at \(z/L=1\) and the lift coefficient frequency with respect to \(U^{*}\) (The area in gray indicates \(y\)-axis values between \([0.9,1.1]\) and the dashed blue line depicts the calculated vortex-shedding frequency, determined using a Strouhal number of 0.12.), (c-d) power spectra of the vibration amplitudes and fluid loading for \(x_{0}=5D\) at \(U^{*}=9\), (e) tip motion trajectories at \(U^{*}=9\), (f) \(xy\)-plane view of the \(z\)-vorticity contours at \(z/L=1\) and \(U^{*}=11\). The flow is from left to right.
beam's oscillatory response, as shown in Fig. (2e) for the tandem cylinder configurations at \(U^{*}=9\). Within the range of \(U^{*}\) values where the beam exhibits sustained oscillations, a periodic vortex shedding pattern is evident downstream. For the studied tandem cylinder arrangements, the wake of the upstream cylinder remains nearly steady and symmetric across all examined \(U^{*}\) values, as shown in Fig. (2f) for the cases at \(U^{*}=11\). This steady upstream wake is found to delay the formation of vortices behind the cantilever beam and contribute to stabilizing the beam's oscillatory motion in the subcritical \(Re\) regime. Consequently, the range of \(U^{*}\) values associated with sustained oscillations is narrower for the tandem cylinder arrangement, compared to the isolated cantilever beam, as evident in Figs. (2a) and (2b).
### Effect of base excitation
Next, we investigate how an externally applied base excitation alters the coupled dynamics of the isolated cantilever beam. Figure (3a) demonstrates the response characteristics of the beam in terms of the rms value of the cross-flow vibration amplitude as a function of the reduced velocity parameter \(U^{*}\) in two scenarios: when the beam is not subjected to any base excitation (i.e., unperturbed configuration) and when it experiences base excitation in the cross-flow direction. In the cases with base excitation, the excitation amplitude is set at \(A_{b}/D=1.0\), and the excitation frequency is given as \(f_{b}/f_{n}=1.0\).
As shown in Fig. (3a), the beam under the specified base excitation experiences sustained oscillations of larger amplitude throughout the evaluated \(U^{*}\) range, in comparison to the unperturbed configuration. It is also observed that, in the presence of base excitation, the cross-flow vibration amplitude decreases nearly linearly with increasing \(U^{*}\), in contrast to the usual bell-shaped trend seen in the absence of base excitation. Unlike the unperturbed configuration, where the frequency of the cross-flow oscillation increases with increasing \(U^{*}\), the beam under the specified base excitation oscillates at the frequency of the base excitation for all examined \(U^{*}\) values, as illustrated in Fig. (3b). In the case of the beam under base excitation, the frequency of the lift coefficient matches the frequency of the cross-flow oscillation across all examined \(U^{*}\) values, as shown in Fig. (3b). Based on the results provided in Fig. (3b), it is evident that the imposition of a base excitation influences the wake behavior in the subcritical \(Re\) regime. More specifically, the dominant frequency of the lift coefficient is found to be dictated by the frequency of the base excitation, independent of the value of \(U^{*}\). A comprehensive examination of the wake dynamics is presented in § 3.3.
To further assess the effect of the amplitude and frequency of the base excitation on the response characteristic of the beam, we have examined the FSI of the beam at \(U^{*}=7\) and \(U^{*}=19\) under a range of base excitation conditions. The oscillatory behavior of the beam is studied for \(A_{b}/D\in[0.5,1.5]\) and \(f_{b}/f_{n}\in[0.5,1.5]\). The results, expressed as the rms value of the cross-flow vibration amplitude, are provided in Fig. (3c). As shown in Fig. (3c), there is a rise in the amplitude of cross-flow oscillation as the base excitation amplitude is increased. At \(U^{*}=7\), the beam exhibits the largest vibration amplitudes when \(f_{b}/f_{n}=1.0\). In these scenarios, the value of \(A_{y}^{rms}/D\) is approximately 1.1-2.2 times that observed in cases where the excitation frequency is either equal to \(f_{b}/f_{n}=0.5\) or \(f_{b}/f_{n}=1.5\). At \(U^{*}=19\), the value of \(A_{y}^{rms}/D\) shows minimal sensitivity to the frequency of the base excitation, resulting in comparable cross-flow vibration amplitudes. It is noteworthy that, across all examined combinations of \((A_{b}/D,f_{b}/f_{n})\), the oscillations exhibit lower amplitudes at \(U^{*}=19\) compared to those at \(U^{*}=7\). Figure (3c) also provides the values of \(A_{y}^{rms}/D\) as a function of the base excitation parameters, under conditions where the fluid-structure coupling effects are not present (i.e., uncoupled configuration). In these scenarios, the slope of \(A_{y}^{rms}/D\) with respect to \(A_{b}/D\) is found to be more pronounced, with a value of 1.1 when
\(f_{b}/f_{n}=0.5\) and \(3.2\) when \(f_{b}/f_{n}=1.5\), compared to values between \((0.6,1.1)\) when the fluid-structure coupling effects are taken into consideration.
For the cases involving fluid-structure coupling, we find that the base excitation alters the trajectory of the beam's tip motion. As depicted in Fig. (4), depending on the amplitude and frequency of the base excitation, the trajectory of the beam's tip motion manifests as various responses. At \(U^{*}=7\), the trajectories resemble figure-eight and chaotic-type responses. However, at \(U^{*}=19\), only a figure-eight type of response is seen. The distinctive characteristics observed in motion trajectories underscore the significance of base excitation in cantilever flow sensors. Essentially, each motion trajectory functions as a flow signature, aiding such sensors in the discrimination of different flow patterns and the subsequent retrieval of flow-related data.
Figure 4: Qualitative comparison of the tip motion trajectories at \(Re=40\) as a function of \(A_{b}/D\) and \(f_{b}/f_{n}\): (a) \(U^{*}=7\), (b) \(U^{*}=19\).
Figure 3: Response characteristics of the isolated beam under base excitation at \(Re=40\): (a) variation of the cross-flow vibration amplitude at \(z/L=1\) as a function of \(U^{*}\), (b) variation of the cross-flow vibration frequency at \(z/L=1\) and lift coefficient frequency as a function of \(U^{*}\) (The area in gray indicates \(y\)-axis values between \([0.9,1.1]\) and the dashed blue line depicts the calculated vortex-shedding frequency, determined using a Strouhal number of \(0.12\).), (c) variation of the cross-flow vibration amplitude as a function of \(A_{b}/D\) and \(f_{b}/f_{n}\).
### Vorticity dynamics at subcritical Re
In our earlier study (Heydari _et al._ (2022)), we demonstrated that for an isolated cantilever beam, vortex shedding could occur for \(Re\geqslant 22\) within a particular range of the reduced velocity parameter \(U^{*}\). However, in the case of the tandem cylinder arrangement, vortex shedding is found to be present only for \(Re\geqslant 40\) within a narrow range of \(U^{*}\) where the beam undergoes sustained oscillations. Additionally, for the cases involving base excitation, the results of our numerical experiments reveal that the base motion of the beam could significantly influence the critical Reynolds number for vortex shedding \(Re_{c}\). In such cases, by stimulating the pair of vortices in the near-wake region, it becomes possible to reduce \(Re_{c}\) to just above \(Re_{c}\approx 5\). Figure (5) shows the \(z\)-vorticity contours for the isolated beam experiencing base excitation at \(A_{b}/D=1.0\) and \(f_{b}/f_{n}=1.0\) across different Reynolds numbers. According to the contour plots, in the presence of base excitation it becomes possible to induce vortex shedding even at a low Reynolds number of \(Re=10\), where, in the absence of such excitation, the wake would otherwise remain steady and symmetric. It is important to note that the existence of a pair of counter-rotating vortices in the near-wake region, which typically appears for \(Re\gtrapprox 5\)(Jackson (1987)), is considered crucial to induce vortex shedding in the subcritical regime of \(Re\), irrespective of the amplitude and frequency of the base excitation. Further research is needed to pinpoint the critical values for the base excitation parameters necessary to generate vortex shedding for each \((Re,U^{*})\) combination.
## 4 Conclusions
In this paper, we utilized a high-fidelity 3D numerical framework to investigate the FSI of a flexible cylindrical cantilever beam at subcritical Reynolds numbers. The coupled dynamics of the beam was studied for two distinct configurations: (i) tandem cylinder arrangement and (ii) isolated beam under base excitation. The cantilever beam in the tandem arrangement was shown to experience sustained oscillations at \(Re=40\) for a specific range of \(U^{*}\). Figure-eight-shaped tip motion trajectories were observed during the beam's oscillatory response, with a periodic vortex shedding pattern evident downstream. Compared to an isolated cantilever beam, the beam in the tandem arrangement was shown to experience sustained oscillations for a narrower range of \(U^{*}\). Additionally, the vibrations were found to be absent across all studied \(U^{*}\) values when the Reynolds number was taken as \(Re=30\) or below. The dynamics of the isolated beam was also investigated under an externally applied base excitation. Our
Figure 5: \(xy\)-plane view of the \(z\)-vorticity contours at the mid-section of the beam at \(A_{b}/D=1.0\) and \(f_{b}/f_{n}=1.0\): (a) \(Re=5\), (b) \(Re=10\), (c) \(Re=20\), (d) \(Re=40\). (a-d) The top and bottom contours correspond to \(U^{*}=7\) and \(U^{*}=19\), respectively. The flow is from left to right.
results indicated that the beam under base excitation could experience sustained oscillations with various patterns of tip motion trajectories. The frequency of the base excitation was found to control the dominant frequency of the wake and impact the critical Reynolds number for vortex shedding, lowering it to values just above \(Re_{c}\approx 5\). The presented analysis aims to broaden our understanding of sustained oscillations in flexible cylindrical cantilevers in the subcritical regime of \(Re\) and has relevance to the development of cantilever flow sensors.
###### Acknowledgements.
The research was enabled in part through computational resources and services provided by The Digital Research Alliance of Canada ([https://alliancecan.ca/](https://alliancecan.ca/)) and the Advanced Research Computing facility at the University of British Columbia ([https://arc.ubc.ca/](https://arc.ubc.ca/)). **Funding.** The authors would like to acknowledge the Natural Sciences and Engineering Research Council of Canada (NSERC) for funding the project. **Declaration of interests.** The authors report no conflict of interest.
|
2309.05942 | End-to-End Testing of Open-Source Hardware Documentation Developed in
Large Collaborations | Large scientific collaborations, often with hundreds or thousands of members,
are an excellent opportunity for a case study in best practices implemented
while developing open source hardware. Using a publicly available design of
timing equipment for gravitational wave detectors as a case study, we evaluated
many facets of the open source hardware development, including practices,
awareness, documentation, and longevity. Two diverse student teams, composed of
high school and college students, participated in an end-to-end exercise of
testing publicly-available documented hardware that originated from more than a
decade ago. We found that the primary value of large collaborations lie in the
ability to cultivate teamwork, provide a diverse set of role-models, and
explore the possibilities of open hardware development of varying complexities.
Learning from the experiences of the student groups, we make constructive
recommendations where the open source hardware community can learn from the
collaborations and vice versa. | Melinda Yuan, Aruna Das, Sunny Hu, Aaroosh Ramadorai, Imaan Sidhu, Luke Zerrer, Jeremiah Alonzo, Daniel Jarka, Antonio Lobaccaro, Leonardo Lobaccaro, Raymond Provost, Alex Zhindon-Romero, Luca Matone, Szabolcs Marka, Zsuzsa Marka | 2023-09-12T03:39:24Z | http://arxiv.org/abs/2309.05942v1 | # End-to-End Testing of Open-Source Hardware Documentation Developed in Large Collaborations
###### Abstract
Large scientific collaborations, often with hundreds or thousands of members, are an excellent opportunity for a case study in best practices implemented while developing open source hardware. Using a publicly available design of timing equipment for gravitational wave detectors as a case study, we evaluated many facets of the open source hardware development, including practices, awareness, documentation, and longevity. Two diverse student teams, composed of high school and college students, participated in an end-to-end exercise of testing publicly-available documented hardware that originated from more than a decade ago. We found that the primary value of large collaborations lie in the ability to cultivate teamwork, provide a diverse set of role-models, and explore the possibilities of open hardware development of varying complexities. Learning from the experiences of the student groups, we make constructive recommendations where the open source hardware community can learn from the collaborations and vice versa.
open source, hardware, large collaborations
## (1) Introduction
Large international collaborations of scientists explore the frontiers of our knowledge and discover game-changing phenomena that captures the imagination of the public worldwide. Whether investigating our genetic heritage or the collision of enigmatic cosmic objects, hardware technology is used on the bleeding edge of human capabilities, often more akin to art than engineering. That is why we refer to the best of these efforts as _instrument science_. The enormous cost of these decades-long projects are often measured in billions of dollars and hundreds, even thousands of scientists--inevitably fully funded by the international taxpayer. Coordinated financial investment is critical for success in fundamental sciences and it places a welcome burden on hardware developers. As a consequence, there is a desire, or even a requirement, to produce hardware that is fully documented and _open to all_. After all, it was paid for by the people.
In addition to being an incredible resource for society as a whole, open source hardware can foster collaboration between scientific teams and the general public. Consequently, open source hardware projects can increase accessibility to and interest in science. For that reason, it is important to continuously evaluate open hardware principles to see if they truly support transparency, reproducibility, and understanding.
In order to explore the practical implications of open hardware developed in large collaborative settings, we worked with undergraduates and high school students with an interest in open-source hardware and design but limited practical experience. The motivated student teams conducted an end-to-end exercise to test the extensive and publicly available documentation written over a decade and a half ago from the design of the Laser Interferometer Gravitational-Wave Observatory (LIGO) Timing System (Bartos et al., 2010; Sullivan et al., 2023).
The LIGO detectors (Harry et al., 2010) are part of the global network of interferometers that includes Virgo (Acernese et al., 2015), GEO600 (Affeldt et al., 2014; Dooley et al., 2015), and KAGRA (Akatsu et al., 2021; Aso et al., 2013), aiming to observe gravitational waves directly. One hundred years after Albert Einstein predicted the existence of gravitational waves, the first observation was made by the Advanced LIGO detectors in Livingston, Louisiana, and Hanford, Washington. These detectors, while located at the same sites, were an upgrade to the initial LIGO detectors and the culmination of a multi-year team effort of research based on the experience of operating the original detectors for a decade. The Advanced LIGO detectors (Abbott et al., 2016) have ten times greater sensitivity and thus observe a thousand times larger volume of the Universe compared to the initial installation, which significantly increased the likelihood of gravitational wave detection.
Making discoveries requires coordinating within the global gravitational-wave detector network and with other astronomy and astrophysics observatories that can detect electromagnetic and particle counterparts of gravitational waves and thus provide a complete multi-messenger picture of cosmic events. To support the upgrade of LIGO, the initial LIGO timing system needed to be upgraded as well. A new design (Bartos et al., 2010; Sullivan et al., 2023) was made that ensures the reliable operation of the detectors and also provides precise timing information of observed gravitational wave events. The new design also aimed to strengthen both the diagnostics capability and the ability to track all synchronization errors.
The National Science Foundation (NSF), which provided funding for the design research, mandated that advanced LIGO documentation, including the timing system, be open to the public. Since open science best practices were not mainstream at the time of the design, we decided to test whether outsiders can really make use of the existing public information efficiently and, if not, what changes need to be made. As the design dates back a decade and a half, we also were able to test whether the documentation can survive large timescales.
We considered that undergraduate and high school students, new to the fields of open-source science and astrophysics, were the best proxies for outsiders as they would consider everything with fresh eyes. For that reason, we designed an end-to-end exercise in which students gained familiarity with the timing system design by simulating the process of manufacturing a board as well as converting the original design files to modern open source formats. They then followed the testing procedure for the boards as written by the original team of designers. We conducted surveys of students before and after this process to gauge whether or not the exercise had shifted their viewpoints. The students further reflected on the skills and knowledge they wish they had known prior to the exercise and that educators in academia should know before they introduce students to the field. The high school and undergraduate teams comprised six students each. Thus, we prioritize qualitative feedback rather than quantitative assessments.
The timing system has successfully provided critical information for over a hundred cosmic discoveries detected via their gravitational-wave signatures to date. From high-school students to faculty, on the order of two dozen people worldwide were involved in the timing system project at various stages of design through multiple iterations, testing, manufacturing, installation, maintenance, and remanufacturing over ten years, 2007-2017 (Bartos et al., 2010; Sullivan et al., 2023). There are several key differences between the original teams of undergraduates, graduate students, engineers and scientists who worked on the historical design and manufacturing of the timing system over ten years ago and the latest cohort of undergraduate and high school students contributing to this study. The original team's objective was far from assessing and creating open-source hardware. Instead, they were prioritizing LIGO goals and objectives and delivered a robust mission-critical system on time and on budget. Domain experts were also more closely involved during the original design process. In this new iteration of the project, scientists took more of an observer's and mentor's role and allowed the undergraduates to explore the documentation independently with a fresh eye from the open-source hardware viewpoint.
The undergraduate team also remodeled the hardware production process leading up to manufacturing. Since more than a decade had passed, they looked for supply-chain shortages, cost-optimized refinements, and obsolete items. While they did not manufacture any hardware, they obtained quotations from manufacturing firms to get a sense of the feasibility of production as well as the change in price. They also conducted a survey of electronics design software in the open-source context and looked into whether their team could contribute to design changes. For that, they needed design software that was not behind a paywall; they identified the best option and experimented with it.
The undergraduates and high school students also conducted in-depth tests. Testing the real boards involved following a step-by-step procedure outlined in the advanced LIGO Timing System documentation by checking parameters such as voltage readings and visual signals. Both the high school and undergraduate teams performed this process on the LIGO timing system's Leaf and DuoTone boards (Bartos et al., 2010a,b). The lessons they learned from this process are further summarized in this paper.
## (2) End-to-End Exercise
### Method
Figure 1 describes the end-to-end exercise designed for assessing publicly available documented hardware from the open source point of view. After introductory team meetings, all participating students took a survey that assessed their prior awareness of open-source science in general and open hardware specifically. The high school students were then tasked with collecting and organizing publicly available documentation of the timing system design and manufacturing files through online searches using the public interface of the LIGO Document Control Center (_LIGO Document Control Center_ n.d.).
The undergraduate students were charged with two tasks, which are referred to in this paper as manufacturing and making open source, described below in greater detail. Notably, the students initially had minimal knowledge of both the production and manufacturing of printed circuit boards (PCBs) as well as working with PCB design software. Subsequently, the student teams were given safety instructions regarding the use of laboratory spaces and equipment, before conducting any tests of previously manufactured boards, furthering their laboratory proficiency. Finally, at the end of the exercise, the students were given an exit survey to assess their earned experiences. The end-to-end exercise lasted for approximately half a year during the academic year. In order to facilitate teamwork, the high school students were advised by their physics teacher, who has previous experience in LIGO science. The undergraduate team participated in weekly team meetings and had access to one of the original engineers on the project on an as-needed basis.
Figure 1: Structure of the end-to-end exercise designed for assessing publicly available documented hardware from an open source point of view.
### Simulating Manufacturing
The manufacturing group was charged with obtaining quotes from PCB companies for the Leaf board. 1 By looking at the original PCB design and bill of materials files of the Leaf board, they were able to find manufacturers that (1) were domestic, (2) stated that they could provide a full turnkey solution, and (3) were RoHS (Restriction of Hazardous Substances) compliant. These companies accepted files in many different formats, the principal three being Altium, Eagle, and KiCad. The acceptance of KiCad is particularly noteworthy as it is one of the only free software suites for electronic design and, thus, the team evaluated it as the most compatible with open source principles. In the end, the team reached out to around 10 companies and received 7 responses.
Footnote 1: Leaf modules are the terminal points of the timing distribution chain in the LIGO timing system, which has a tree topology. They provide timing information through various parts of the kilometer-scale detector. See (Bartos et al., 2010b; Sullivan et al., 2023)
The quotes that the team ultimately received were typically divided into the price of bare boards (PCBs without electrical components) and assembly (PCBs that contain all the components) with several different lead times to choose from. Additionally, some manufacturers quoted tooling expenses separately. The group received a total of 5 full quotes, and Figure 2 compares the 2022 prices of these quotes, including the price of a quote from 2017 from the latest remanufacturing run of the same board. Figure 3 compares the range of lead times offered by the various manufacturers.
### Making Open Source
The group was supplied with the original PCB files used to manufacture the Leaf Board, which were designed in Altium. The archived format provided was designed using Altium 2009 (_Altium Designer_ 2009). While Altium is among the most popular software for PCB design for professionals, its cost is not conducive to open science or academia. Some engineering students can gain Altium access through their respective institutions, but this is not universal and usually expires by their date of graduation. Further, Altium maybe less available to non-engineering students. Given that the team comprised of both engineering and non-engineering students, they set out to find an open source alternative to Altium for PCB design with sufficient capabilities.
Figure 3: (Right) Range of Lead Times. Please note that while fairly long lead times are acceptable for long-term projects like large scientific collaborations, they might represent a significant burden for developers, experimenters, and startups that pride themselves on their agility and speed, either fiercely competing globally or driven by burning enthusiasm.
Figure 2: (Left) Price Comparison of Quotes Received. (Please note that the original quote is in 2017 US dollars and the new ones are in 2022 US dollars; the reader should inflate the 2017 price by about 20% for a proper comparison.) Overall, only one quote approximates the historical price and the rest of the quotes are significantly higher. This might in part signal that custom hardware manufacturing did not follow the overall inflation models in the US, becoming significantly less affordable for US-based creators.
In determining which open source software to use, the undergraduate team evaluated both price and operating system compatibility and determined that KiCad was the most suitable software for the project team. KiCad is free, compatible with both Mac and Windows operating systems, and is already in widespread use among engineers and hobbyists (_KiCad_ 2022). The students utilized the excellent tutorials and online resources available to become familiar with the KiCad software.
In their first iteration of the exercise, the students converted the Altium files into KiCad using a third-party tool (Guhring, 2013). However, they discovered that the libraries which Altium uses do not match the libraries used in KiCad, rendering the converted file unusable.
This result led the students to conduct a second iteration of the exercise, in which they eventually settled on a procedure to convert the files manually from Altium to KiCad, which is outlined in Figure 5.
It should be noted that, while this conversion process was an improvement upon the first attempt, it was also not entirely successful. The Gerber files were intact, as was most of the PCB layout, but there were several problems with this conversion that required additional manual fixing, including that (1) multi-page schematics were split into multiple schematic documents; (2) there was no link between the schematic files and the PCB layout, which can create problems when attempting to modify the design; (3) the drill hole sizes of the PCB layout were changed; (4) some wiring was missing in the schematic; and (5) there was no link from the footprints in the schematic to the library.
Figure 4: Comparison of Software Options for Electronics Design
Figure 5: Practical and generally useful process of Altium to KiCad conversion. Please note the elaborate and time-consuming nature of the process.
Once the team had converted the files into KiCad, they were tasked with replacing a custom-made electronics part in the design with a generic version with similar specifications. The custom voltage-controlled oscillator in the original design was required to fulfill LIGO frequency requirements, but retaining the custom part would have been a barrier to accessibility. Hobbyists or members of the public would not be able to buy this part, and even if they could, custom parts are significantly more expensive than their generic counterparts. Most crucially, the part is not necessary for the broader public; buying the standard available generic part is cheaper, easier, and more useful.
After selecting the new parts, the students replaced the components on the KiCad schematic, after which they reached out to one of the original designers to verify the compatibility of the substitution.
### Testing of Existing Hardware
The group then set out to test the existing hardware by following the steps from two public LIGO documents: "Test Procedure for the Timing Slave Board" (Bartos et al., 2010b) and "Test Procedure for the Clock, Gate, and DuoTone Signal Interface" (Bartos et al., 2010a). These documents were authored by the members of LIGO who designed, produced, and tested the hardware.
During this process, students encountered a series of challenges, which they carefully documented along with the respective solutions that they devised. These are outlined in the next section.
## (3) Challenges and Solutions
### Manufacturing Challenges
#### Challenge 1: Old Design and Obsolete Parts
The fact that the board manufacturing files were originally produced with an older version of Altium from 2009 led in many cases to additional questions from the production side that needed to be clarified (e.g., drill information), not a simple task for a student new to the field. Further, the team had to learn how to handle obsolete parts to which the contacted manufacturers did not have access. As a result, the companies who supplied the team with quotations requested approval of suggested replacement parts for obsolete or out-of-stock components. This process of replacing obsolete parts then necessitated that the students evaluate the data sheets of the company-recommended replacements, posing a myriad of questions. The parts recommended by the companies often had different functionalities from the original parts, making it difficult for the students to determine whether they were suitable replacements. The student group then did further research into the original parts, though information was often limited due to part discontinuation. Changes in certain components would often necessitate further changes in the bill of materials. Due to these issues, the manufacturing group was often unsure of replacement components and, therefore, reached out to the original designers to confirm their evaluation and the overall production feasibility.
These challenges in some cases yielded difficulties in communication with manufacturers. Not all companies were equally accommodating with the students, who, as stated earlier, were not the original designers but were doing their first open source hardware project. The team found that smaller companies were usually more responsive, kinder, and willing to provide clarifications and answer additional questions. The larger companies that the students corresponded with were often less willing to provide quotes if they were not guaranteed the order, and were also less accommodating regarding missing information and obsolete parts.
#### Takeaway: Contact with Original Creators is Crucial
In the case of confusion regarding documentation and part replacements, inexperienced individuals should contact those with expertise in engineering and design. The best-case scenario, of course, is contact with the original creator of the individual hardware components. The original creator is the most familiar with the design and therefore the most likely to give accurate information. Therefore, when making a hardware project open source, it is crucial for creators to include a way for potential users of the design to contact them, whether it be through email, Slack, Discord, etc. Contact with the original designers or those who keep contributing to the project can increase the longevity of open hardware endeavors.
### Testing Challenges
#### Challenge 2: Outdated Legacy Software
One issue encountered by both teams, but especially by the high school students, was working with old versions of software. The testing procedure required test firmware which relied on Altium 2009 and has not been updated since its creation. As a result, testing the boards not only required the old version of the software to be installed, but also the old version of the operating system with which the software is compatible, in this case Windows 7. This required the high school students to find an old computer with the old operating system installed, and then install the old version of Altium. While the students were ultimately able to complete the testing procedure on this old version of the software, the entire process proved not only cumbersome, but would have been virtually impossible without expertise in legacy products, which is naturally not the strength of the youngest generation.
#### Takeaway: Update Firmware of Design
The firmware of a design should be regularly updated to keep up with new versions of software as they are released. Regular updates will maintain the longevity of a design and facilitate use by new users. Alternatively, an installable snapshot of the original environment, together with instructions on installation and use, should be archived and provided, although this is a poor substitute for a live project.
#### Challenge 3: Test Documentation Written for a Narrow Audience
Large collaborations consist of people from a multitude of nationalities and education levels -- students, professors, engineers, and scientists all over the world -- thus there is an effort to make documentation accessible for all members. In our investigations, both the high school and the undergraduate team encountered difficulties when following some elements of the testing procedure of the boards because the text plausibly admitted multiple interpretations. While the documentation was written for scientists who were familiar with the software as well as with the hardware itself, for the truly untrained student its details were difficult to decipher in several instances. For example, the figures showing the board orientation were not intuitive, resulting in difficulty identifying the correct pins for the procedure. Further, the students were unfamiliar with the term "soft LED," and the language of the manual led the students to believe that the LED in question was a physical component on the PCB, when in fact it was a feature on the Altium control panel on screen. Such examples show that documentation that is obvious to the original design team, even when trained students were included in the writing, may still have accessibility issues and can be a source of confusion for those who are entirely new to a project.
#### Takeaway: Write clearly for the untrained interested mind of the future
One of the ways to ensure accessibility is to include students in the writing of documents cataloging the research process. In the long run, comprehensive documentation would allow individuals to contribute to the project without much initial training. Documentation files thus should be written assuming little prior knowledge of the project itself and should be very specific when referencing hardware components. In the process of making a project open source, creators may also benefit from having a team of undergraduates with little prior knowledge about the project test the documentation to determine its true accessibility.
## (4) Student Recommendations
The following section contains recommendations derived directly through student feedback in the form of exit surveys.
\begin{tabular}{|p{113.8pt}|p{113.8pt}|} \hline Recommendation & Student Survey Feedback \\ \hline \hline Avoid use of legacy software & _"groups that wish to test... hardware would experience difficulty if they lack access to an expert to clarify confusion or if knowledge of old software"_ \\ \hline Write detailed and explicit documentation & _"figures showing the board orientation were not intuitive,... certain indicators of the success or failure of the testing procedure were unclear."_ \\ \hline Avoid Obsolescent Parts & _"faulty hardware pieces can also prevent open source hardware from reaching its full potential. However, when maintained properly, open source hardware can be a powerful tool for growing scientists' and the general public's knowledge of the latest hardware."_ \\ & _"a project's design needs to take into account obsolescence and part replacements that have happened since the design was created. Regular maintenance and modification of the design over time to account for this will help overcome this problem."_ \\ & _"precise language in documentation is necessary for understanding instructions years later"_ \\ \hline Be conscious of affordability of parts & _"Price might be a limiting factor. Many manufacturers want larger orders so they charge more per board if you're only ordering a few. The cost of each board is also typically in the hundreds of dollars, which might deter people who are just getting into open source hardware."_ \\ \hline Use old projects as learning tools for future projects & _"Considering part obsolescence and explaining what the function of each part of the design is really important because it allows people to adapt and update your designs. I would consider using old projects for inspiration."_ \\ & _"I would consider using old projects because I see how looking retroactively allowed us to pinpoint exactly what problems to target when it comes to making the project open source."_ \\ \hline \end{tabular}
## (5) Discussion and Additional Recommendations
The introduction and exit surveys provided valuable insight into the necessary requirements for making a project fully open source. Through this end-to-end exercise, our students have provided a myriad of useful recommendations for other educators wishing to pursue an open source hardware project. In addition to those provided by our students, we also collected input from members of the LIGO-Virgo-KAGRA, IceCube, and VERITAS Collaborations2 on how to make hardware developed in large international collaborative settings more openly sourced. The recommendations from the aforementioned sources were then compiled into two documents: "Guidelines for Open Source Hardware" and "Mentoring and Training Guide," both of which are available online through the Open Source Hardware Association: [https://www.oshwa.org/](https://www.oshwa.org/) (link to documents)
Footnote 2: The LIGO Scientific Collaboration (LSC), the Virgo Collaboration and the KAGRA Collaboration, with over 2000 members together, have joined to perform gravitational wave science using their respective detectors. The IceCube Neutrino Observatory is a research facility at the South Pole in Antarctica. Over 300 scientists work together in IceCube. VERITAS is a ground-based gamma-ray instrument operating in southern Arizona; the respective collaboration has dozens of members.
We have highlighted a few key recommendations below:
* **Advocate for Inclusion of Open Source Hardware Standards in Undergraduate Curriculum:** One common theme among both of our student groups was a lack of knowledge about open source hardware in general. Despite being students of the natural sciences, most had never even worked with a PCB. Regardless of discipline, basic hardware skills are fundamental to science education and require proper advocacy from educators. OSHWA has been actively working to increase awareness of open source hardware. Our documents, which provide guidelines for academic investigators, represent one such effort. Such guidelines are intended to be used by researchers and educators alike to facilitate the incorporation of open source practices in the undergraduate curriculum. For example, the guidelines include practices such as using Git and GitHub for version control, a practice which may be taught in a class or research setting.
* **Listen to the Student Researchers:** One important takeaway from this exercise is that students know better than anyone else what they know and do not know. Therefore, it is crucial for mentors to listen to the feedback of students, even when it does not necessarily align with their own views. While we often think of a student-mentor relationship as unidirectional, with information flowing from the mentor to the student, the reality is that it is actually a mutual learning process. Especially in the world of open source projects, both the mentor and student are able to learn from each other.
* **Increase Discoverability and the Circle of Openness:** (1) Discoverability - Open source hardware projects must be search engine optimized so that they are easy to find online. This requires careful planning on the part of the creator. In addition to following open source standards, obtaining a DOI and using key words also contributes to discoverability. (2) Circle of Openness - The circle of openness refers to the group of individuals who possess the necessary knowledge to be able to access and utilize a project. Deciding on a circle of openness requires consideration of the previous knowledge required to reproduce the hardware, as well as the associated time investment and learning curve. More information regarding the circle of openness is available in Section 1 of "Guidelines for Open Source Hardware." (link) The key to the success and longevity of an open source project is accessibility to creators of all backgrounds. Our students experienced difficulties with hardware, software, and testing documentation. While some challenges are bound to arise when embarking on a new project of any kind, creators should strive to anticipate and reduce possible areas of confusion as much as possible. This includes avoiding obsolete parts, updating firmware, and writing precise documentation. A member of the Virgo collaboration made a suggestion which we thought was worthwhile to mention: to make "simplified" versions of an open hardware project that are intended for the general public. We recommend this approach as another method of increasing the Circle of Openness.
## (6) Conclusion
Our end-to-end exercise proved to be an extremely valuable resource for understanding ways to improve open source hardware. The students were able to gain awareness of the utility of open source hardware as well as its place in the overall open science ecosystem. Moreover, the educators were able to better understand the needs of their students and devise strategies to help other educators incorporate open source hardware into their program.
While the meticulous design and extensive, well-written documentation of the LIGO timing system date back over a decade and a half, their utility in this exercise sets a precedent for other hardware creators to look not only to the future, but also to the past as a source of inspiration. Only through careful scrutiny of a previous design was our team able to properly evaluate the longevity of the project. Thus, we encourage other educators and creators alike to constantly look towards past designs and assess their ability to function in a contemporary scientific setting. Only through such explorations of previous work will future projects be improved.
### Future Work
Looking towards the future, one change we hope to see is for hardware education to be integrated into the science curriculum. Open software has already made many strides in this area, as open source tools such as Python have been well integrated into the undergraduate curriculum. With the rise of open source hardware fueled by exercises such as the one described in this paper, we hope to see open hardware gain similar traction in terms of awareness and accessibility as open software.
### Acknowledgements
We appreciate the generous support of Open Source Hardware Association ([https://www.oshwa.org](https://www.oshwa.org)) and the Sloan Foundation, which awarded Dr. Zsuzsa Marka (Columbia University in the City of New York) the Open Source Hardware Trailblazer Fellowship that made this work possible. Special thanks to Zoltan Raics who was involved as an electrical engineer in the original timing boards. We also thank the LIGO-Virgo-KAGRA, IceCube, and VERITAS collaborations and all of their members.
### Funding statement
This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation. The authors also gratefully acknowledge support from Columbia University.
### Competing interests
The authors declare that they have no competing interests.
|
2309.16080 | Magnetocaloric effect in $\mathrm{Cu}_{3}$-type compounds using the
Heisenberg antiferromagnetic model in a triangular ring | In this work we present a theoretical investigation into an
antiferromagnetically coupled spin system, specifically ${\rm Cu}_{3}-X$
($\mathrm{X=As,Sb}$), which exhibits an isosceles triangular configuration or
slightly distorted equilateral triangular configuration, as previously
identified in reference {[}Phys. Rev. Lett. \textbf{96}, 107202 (2006){]}. This
system can be effectively represented by the Heisenberg model on a triangular
structure, taking into account the exchange interaction, the
Dzyaloshinskii-Moriya interaction, g-factors and external magnetic field, as
delineated in the aforementioned reference. By using numerical approach we
explore both zero-temperature and finite-temperature behaviors of a ${\rm
Cu}_{3}$-like antiferromagnetically coupled spin system. At zero temperature,
the system displays a 1/3 quasi-plateau magnetization, when the magnetic field
is varied. Moreover, we place particular emphasis on magnetic properties
including magnetization, magnetic susceptibility, entropy, and specific heat at
finite temperatures. Furthermore, we investigate the magnetocaloric effect as a
function of an externally imposed magnetic field, oriented both parallel and
perpendicular to the plane of the triangular structure. Interestingly, these
configurations demonstrate remarkably similar behavior for both orientations of
the magnetic field. Our investigation also includes an analysis of the
adiabatic curve, the Gr\"uneisen parameter, and the variation in entropy when
the magnetic field is applied or removed. The magnetocaloric effect is found to
be more prominent in the low temperature region, typically at $T\sim1$K, for both
parallel and perpendicular magnetic fields at $\sim4.5$T and $\sim5$T,
respectively. | G. A. Antonio, J. Torrico, A. S. da Mata, S. M. de Souza, Onofre Rojas | 2023-09-28T00:36:33Z | http://arxiv.org/abs/2309.16080v2 | # Magnetocaloric effect in Cu\({}_{3}\)-type compounds using the Heisenberg antiferromagnetic model in a triangular ring
###### Abstract
In this work we present a theoretical investigation into an antiferromagnetically coupled spin system, specifically Cu\({}_{3}-X\) (\(\mathrm{X=As,Sb}\)), which exhibits an isosceles triangular configuration or slightly distorted equilateral triangular configuration, as previously identified in reference [Phys. Rev. Lett. **96**, 107202 (2006)]. This system can be effectively represented by the Heisenberg model on a triangular structure, taking into account the exchange interaction, the Dzyaloshinskii-Moriya interaction, g-factors and external magnetic field, as delineated in the aforementioned reference. By using a numerical approach we explore both zero-temperature and finite-temperature behaviors of a Cu\({}_{3}\)-like antiferromagnetically coupled spin system. At zero temperature, the system displays a 1/3 quasi-plateau magnetization when the magnetic field is varied. Moreover, we place particular emphasis on magnetic properties including magnetization, magnetic susceptibility, entropy, and specific heat at finite temperatures. Furthermore, we investigate the magnetocaloric effect as a function of an externally imposed magnetic field, oriented both parallel and perpendicular to the plane of the triangular structure. Interestingly, these configurations demonstrate remarkably similar behavior for both orientations of the magnetic field. Our investigation also includes an analysis of the adiabatic curve, the Gruneisen parameter, and the variation in entropy when the magnetic field is applied or removed. The magnetocaloric effect is found to be more prominent in the low-temperature region, typically at \(T\sim 1\)K, for both parallel and perpendicular magnetic fields at \(\sim 4.5\)T and \(\sim 5\)T, respectively.
## I Introduction
The study of spin systems with antiferromagnetic coupling has drawn significant attention in the field of condensed matter physics. These systems exhibit interesting features arising from the interplay of factors such as exchange couplings, anisotropic interactions, and external magnetic fields, which can be investigated through their magnetic properties. Moreover, understanding the characteristics of these systems helps to clarify their potential applications in areas such as magnetocaloric materials and spintronics.
The Magnetocaloric Effect (MCE) is a phenomenon that has been studied extensively due to its potential applications in magnetic refrigeration and cooling technologies. Initially observed in the late 19th century, it refers to the change in temperature that occurs when a magnetic material is subjected to a varying magnetic field, a phenomenon resulting from the intrinsic magnetic properties of the material [1; 2]. The reversibility of this effect has been confirmed in later studies, sparking significant interest [2]. The MCE has been observed in a variety of materials, such as rare earth alloys, magnetic oxides, and transition metals, with notable instances of a giant magnetocaloric effect (GMCE) driven by structural transitions [3; 4]. In 1951, Darby and colleagues made a pioneering step in the field by designing a two-stage magnetocaloric regenerator using materials with different Curie points, achieving final temperatures as low as 3 mK at an induction of 0.42 T [5].
The concept of magnetic refrigeration at room temperature was introduced almost a century after the discovery of the MCE. In 1976, Brown developed an efficient refrigeration system using gadolinium, marking a significant advancement [6]. Following this, in the late 90s, Gschneidner discovered the GMCE at room temperature in gadolinium-germanium-silicon alloys (Gd-Ge-Si) [7]. Around the same time, Zimm proposed a prototype showcasing the feasibility of magnetic refrigeration near room temperature [8]. These developments led to substantial experimental and theoretical research on bulk \((\mathrm{Mn,Fe})_{2}(\mathrm{P,Si})\)-based GMCE materials [9; 10; 11; 12; 7].
Nanoscale materials with GMCE have gained attention due to their high surface-to-volume ratio, enhanced interactions, and rapid thermal response. These characteristics make them valuable for temperature control applications. Examples of such applications include a room-temperature thermal diode [13], a self-pumping magnetic cooling device using Mn-Zn ferrite nanoparticles that achieves efficient energy conservation without external energy input [14], a thermomagnetic ferrofluid-based cooling device that can effectively transfer heat over large distances [15], control of ferrofluid droplets in microfluidics [16], and a magnetostructural phase transition in Ni-Mn-Ga films showing a strong MCE at low magnetic fields [17]. Other applications involve gadolinium thick films for energy conversion mechanisms [10; 11] and biomedical applications like magnetic hyperthermia [18] and efficient drug delivery via nanocarriers [19].
Furthermore, the study of magnetic materials has attracted significant attention due to their wide range of potential technological applications in fields such as spintronics, nanoscale engineering, and biomedicine.
This has prompted investigations on \(S=1/2\) antiferromagnetic triangular spin rings, which might be ideal for observing peculiar quantum magnetization due to two doublets. Compounds investigated include spin-frustrated (VO)\({}_{3}^{6+}\)-triangle-sandwiching octadecatungstates as molecular magnets, displaying unusual magnetization jumps due to predicted half-step or 1/3-plateau magnetization [20]. Experiments on a Cu\({}_{3}\) nanomagnet revealed half-step magnetization, hysteresis loops, and an asymmetry of the magnetization between negative and positive fields under a fast-sweeping external field, which can be ascribed to an adiabatic change of magnetization [21], whereas reference [22] investigated the spin-electric coupling. \(S=1/2\) spin triangle clusters were also investigated, revealing that the magnetization behavior and spin configurations are significantly affected by the diamagnetic heteroatom (\(X=\text{As}\) and Sb) [23]. These clusters show potential for implementing spin-based quantum gates [24]. Bouammali et al. [25] explored the antisymmetric exchange in a tri-copper(II) complex, highlighting its origins, theoretical implications, and potential for more advanced electronic structure calculations. A spin-frustrated trinuclear copper complex based on triaminoguanidine demonstrates strong antiferromagnetic interactions with negligible antisymmetric exchange [26]. Several other studies have also examined triangular copper structures [27; 28; 29; 30; 31].
On the other hand, theoretical investigations that explore various properties of nanomagnets or magnetic molecular clusters, beyond experimental results, are highly significant. For instance, Kowalewska and Szalowski conducted a theoretical study of the magnetocaloric properties of \(\mathrm{V}_{6}\), a polyoxovanadate molecular magnet. Their research, using numerical diagonalization and field ensemble formalism, uncovered highly tunable magnetocaloric effects [32]. Karifova et al. studied the magnetization in antiferromagnetic spin-1/2 XXZ Heisenberg clusters, demonstrating additional magnetization plateaux due to quantum interactions and an enhanced magnetocaloric effect near magnetization shifts [33]. Reference [34] employed exact diagonalization to examine the spin-1/2 Hamiltonian for coupled isosceles Heisenberg triangles, yielding a zero-temperature quantum phase transition diagram and a magnetization profile; the authors also analyzed the thermodynamic behavior and MCE. Another theoretical study was conducted on a Cu\({}_{5}\) pentameric molecule using a spin-1/2 Heisenberg model, which explored the thermodynamic properties, phase diagram, magnetization, and magnetocaloric effects [35]. A theoretical study of the MCE in paramagnetic PrNi\({}_{2}\) revealed an unexpected inverse effect due to an anomalous increase in magnetic entropy at low temperatures [36]. Several other theoretical investigations can be found in the references cited therein [32; 33; 34; 35; 36].
In this context, a system of interest is Cu\({}_{3}-X\) (X = As, Sb), which adopts an isosceles triangular or slightly distorted equilateral triangular configuration. Previously, Choi et al. [21; 23; 24] have established that the behavior of this system can be effectively described by the Heisenberg model on a triangular structure, incorporating elements such as exchange interaction, Dzyaloshinskii-Moriya interaction, g-factors, and external magnetic fields. Exploring the magnetic properties and thermodynamic behavior of this Cu\({}_{3}\)-like spin system is important as it helps us understand its fundamental characteristics and identify potential advantages for its applications.
The article is organized as follows: in Sec. 2 we present the model and analyze some fundamental properties. In Sec. 3 we explore the main thermodynamic properties, whereas in Sec. 4 we discuss the magnetocaloric effect. Finally, in Sec. 5 we present our conclusions.
## II Model
In this work, we aim to explore the thermodynamics and magnetic properties of a triangular cluster Na\({}_{9}\)[Cu\({}_{3}\)Na\({}_{3}\)(H\({}_{2}\)O)\({}_{9}(\alpha-X\)W\({}_{9}\)O\({}_{33})_{2}\)] (where \(X=\text{As}\) and Sb) hereinafter referred to as the \(\{\text{Cu}_{3}-X\}\) system[21]. The compound under consideration contains three copper atoms, each of which loses two electrons to form a Cu(II) or Cu\({}^{+2}\) ion. The electron loss in Cu(II) ions occurs from both the 4s and one of the 3d orbitals, resulting in a single unpaired electron and a net magnetic moment with a spin of \(S=1/2\); this behavior can be adequately described by the Heisenberg model within the framework of an isosceles triangular spin ring [21; 23; 24]. Consequently, we adopt the Hamiltonian, as presented in previous work [21; 23; 24], which characterizes Cu\({}_{3}\)-like compounds, as follows
\[\mathbf{H}= \sum_{j=1}^{3}\sum_{\alpha=x,y,z}J_{j,j+1}^{\alpha}S_{j}^{\alpha} S_{j+1}^{\alpha}\] \[+\sum_{j=1}^{3}\Bigl{[}\mathbf{D}_{j,j+1}\cdot(\mathbf{S}_{j} \times\mathbf{S}_{j+1})+\mu_{B}\mathbf{S}_{j}\cdot\mathbf{g}_{j}\cdot\mathbf{ B}_{j}\Bigr{]}, \tag{1}\]
where \(S_{j}^{\alpha}\) denotes the spin-1/2 components of localized Cu\({}_{3}\)-like spin with \(\alpha=\{x,y,z\}\), and \(J_{j,j+1}^{\alpha}\) (simplified as \(J_{j}^{\alpha}\)) represents the exchange interaction parameters between site \(j\) and \(j+1\) for each component (for schematic view see reference [21; 23; 24]). The second term refers to the Dzyaloshinskii-Moriya interaction vector \(\mathbf{D}_{j,j+1}\) denoted as \(\mathbf{D}_{j,j+1}=(D_{j,j+1}^{x},D_{j,j+1}^{y},D_{j,j+1}^{z})\). The site-dependent \(g\)-factors are defined as \(\mathbf{g}_{j}=(g_{j}^{x},g_{j}^{y},g_{j}^{z})\), while the last term corresponds to the magnetic field \(\mathbf{B}\), which we assume to be independent of the spin site on the triangle. Here, \(\mu_{B}\) denotes the Bohr magneton. The specific parameters were obtained using Electron Spin Resonance (ESR) data [21; 23; 24], and these parameters are reproduced in table 1 for both compounds. It is worth mentioning that only \(\mathbf{D}_{1,2}=(D,D,D)\) is isotropic, while \(\mathbf{D}_{2,3}\) and \(\mathbf{D}_{3,1}\) contribute solely to the \(z\)-component, expressed as \(\mathbf{D}_{2,3}=\mathbf{D}_{3,1}=(0,0,D)\). Other interactions, such as the crystal field effect and magneto-crystalline
anisotropy, were disregarded in this study because their contributions are not deemed highly relevant, as supported by references[21; 31].
For convenience, we express the Hamiltonian (1) in units of kelvin (K). Hence, let us redefine \(\mu_{B}\) as \(\hat{\mu}_{B}=\frac{\mu_{B}}{k_{B}}=0.6717156644\) K/T, where \(k_{B}\) denotes the Boltzmann constant. Therefore, the magnetic field \(\mathbf{B}\) is conveniently measured in tesla (T) units. This can be equivalent to setting the Boltzmann constant as \(k_{B}=1\), implying that, for the sake of simplicity, all calculation will be expressed in units of \(k_{B}\).
## III Thermodynamics quantities
The eigenvalues of the above-mentioned Hamiltonian (1) can be obtained by direct numerical diagonalization. More details about the energy spectra can be found in references [21; 23; 24], so let us assume that the eigenvalues can be expressed as follows
\[\mathbf{U}\mathbf{H}\mathbf{U}^{-1}=\mathbf{E}=\mathrm{diag}\left(\varepsilon_{1},\varepsilon_{2},\cdots,\varepsilon_{8}\right), \tag{2}\]
where \(\mathbf{U}\) is an \(8\times 8\) matrix that diagonalizes the Hamiltonian (1). It is important to note that this matrix, which naturally depends on the Hamiltonian parameters, can only be obtained numerically for a fixed magnetic field.
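To make this step concrete, the following minimal Python sketch (ours, not part of the original work) assembles the \(8\times 8\) Hamiltonian of Eq. (1) from Kronecker products of spin-1/2 operators, using the Cu\({}_{3}-\)As parameters of Table 1, and diagonalizes it numerically. Taking the parallel field along the \(x\) axis is our illustrative assumption.

```python
import numpy as np

# Spin-1/2 operators (in units of hbar): S^alpha = sigma^alpha / 2
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, j):
    """Embed a single-site operator at site j (0, 1, 2) into the 8-dim space."""
    ops = [I2, I2, I2]
    ops[j] = op
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

# S[j][a]: component a (x, y, z) of the spin at site j
S = [[site_op(op, j) for op in (sx, sy, sz)] for j in range(3)]

# Cu3-As parameters from Table 1 (all couplings in kelvin)
J1, J1z = 4.50, 4.56            # bond (1,2)
J2, J2z = 4.03, 4.06            # bonds (2,3) and (3,1)
D = 0.529                       # Dzyaloshinskii-Moriya strength
g = [(2.25, 2.25, 2.06), (2.10, 2.10, 2.06), (2.40, 2.40, 2.06)]
MU_B = 0.6717156644             # Bohr magneton in K/T

# Per-bond exchange (J^x, J^y, J^z) and DM vectors; only D_{1,2} is isotropic
Jbond = {(0, 1): (J1, J1, J1z), (1, 2): (J2, J2, J2z), (2, 0): (J2, J2, J2z)}
Dbond = {(0, 1): (D, D, D), (1, 2): (0.0, 0.0, D), (2, 0): (0.0, 0.0, D)}

def hamiltonian(B):
    """H of Eq. (1), in kelvin, for a field B = (Bx, By, Bz) in tesla."""
    H = np.zeros((8, 8), dtype=complex)
    for (i, j), Jv in Jbond.items():
        for a in range(3):                                # exchange term
            H += Jv[a] * S[i][a] @ S[j][a]
        cross = [S[i][1] @ S[j][2] - S[i][2] @ S[j][1],   # (S_i x S_j)_x
                 S[i][2] @ S[j][0] - S[i][0] @ S[j][2],   # (S_i x S_j)_y
                 S[i][0] @ S[j][1] - S[i][1] @ S[j][0]]   # (S_i x S_j)_z
        for a in range(3):                                # DM term
            H += Dbond[(i, j)][a] * cross[a]
    for j in range(3):                                    # Zeeman term
        for a in range(3):
            H += MU_B * g[j][a] * B[a] * S[j][a]
    return H

# Example: the eight eigenvalues for a 2 T field parallel to the triangle plane
eps = np.linalg.eigvalsh(hamiltonian((2.0, 0.0, 0.0)))
```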
Thus, the partition function can symbolically be represented by:
\[\mathcal{Z}=\mathrm{tr}\left(\mathrm{e}^{-\mathbf{E}/T}\right)=\sum_{i=1}^{8 }\mathrm{e}^{-\varepsilon_{i}/T}. \tag{3}\]
Here, the eigenvalues \(\varepsilon_{i}\) (in kelvin units) depend on the Hamiltonian parameters provided in table 1, as well as the magnetic field \(\mathbf{B}\) (in tesla), while \(T\) represents the temperature of the system (in kelvin). In theory, any physical quantity can be derived from the partition function (3). However, as the eigenvalues can only be obtained numerically, physical quantities that require derivatives, such as magnetization and magnetic susceptibility among others, must be calculated with caution. Numerical derivatives may not always provide accurate results, hence it is advisable to avoid them as much as possible. Therefore, we will combine numerical and analytical calculations to safely obtain all physical quantities.
In this regard, the free energy can be denoted by the expression
\[f=-T\ln(\mathcal{Z}). \tag{4}\]
It should be noted that the free energy is also represented in units of \(k_{B}\).
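A direct transcription of Eqs. (3) and (4) is given below; shifting the spectrum by its minimum is a standard numerical precaution (our addition) that avoids underflow at low temperature without changing any observable.

```python
def partition_function(eps, T):
    """Z of Eq. (3), computed from the numerically obtained eigenvalues (kelvin)."""
    return np.sum(np.exp(-(eps - eps.min()) / T))

def free_energy(eps, T):
    """f = -T ln Z, Eq. (4), in units of k_B; the shift e0 is added back exactly."""
    e0 = eps.min()
    return e0 - T * np.log(np.sum(np.exp(-(eps - e0) / T)))
```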
### Internal energy
The first quantity we will discuss is the internal energy, as it directly influences the magnitude of the magnetocaloric effect. As previously stated, the eigenvalues of the Hamiltonian can be obtained using the parameters listed in table 1 and a fixed magnetic field. Formally, the average internal energy can be represented as:
\[U=\langle\mathbf{H}\rangle=\frac{1}{\mathcal{Z}}\mathrm{tr}\left\{\mathbf{H} \mathrm{e}^{-\mathbf{H}/T}\right\}=\frac{1}{\mathcal{Z}}\sum_{i=1}^{8} \varepsilon_{i}\mathrm{e}^{-\varepsilon_{i}/T}. \tag{5}\]
For the purposes of our discussion, we will focus on the \(\mathrm{Cu}_{3}-\mathrm{As}\) compound. The \(\mathrm{Cu}_{3}-\mathrm{Sb}\) compound exhibits analogous characteristics because the parameters given in table 1 are quite similar.
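Eq. (5) can be transcribed directly, reusing the shifted Boltzmann weights from the sketch above:

```python
def internal_energy(eps, T):
    """U = <H> from Eq. (5); the shift cancels between numerator and denominator."""
    w = np.exp(-(eps - eps.min()) / T)   # unnormalized Boltzmann weights
    return np.sum(eps * w) / np.sum(w)
```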
Figure 1a depicts the internal energy (\(U\)) as a function of temperature, assuming a constant external magnetic field parallel to the triangle plane (solid line) or perpendicular to it (dashed line). The internal energy varies slightly between the parallel and perpendicular magnetic fields, although, as the magnetic field increases, the discrepancy becomes more pronounced. In contrast, panels (b) and (c) show the internal energy as a function of the parallel and perpendicular external magnetic fields, respectively. These figures assume several fixed temperatures, as specified inside the panels. At zero temperature, we observe a significant change of internal energy at \(B_{\parallel}\approx 4.5\) T. Above this magnetic field, the system aligns entirely parallel to the magnetic field, while for \(B_{\parallel}\lesssim 4.5\) T, the configuration comprises two aligned spins and a third with opposite alignment. We observe similar behavior when the external magnetic field acts perpendicularly to the triangular plane, but the shift occurs at a slightly higher magnetic field, \(B_{\perp}\approx 5\) T. This similarity has previously been observed in energy spectra and zero-temperature magnetization [21; 23; 24]. As temperature increases, this curvature smooths out. In the absence of an external magnetic field, the spin moments are oriented randomly, leading to a higher internal energy state. When an external magnetic field is applied, the spins align with the field, reducing the internal energy of the compound. This variation of energy manifests as a change in the compound's temperature, representing the core of the magnetocaloric effect.
\begin{table}
\begin{tabular}{|r|r|r|r|} \hline Magnetic parameters & notation & \{\(\mathrm{Cu}_{3}-\mathrm{As}\)\} & \{\(\mathrm{Cu}_{3}-\mathrm{Sb}\)\} \\ \hline \hline \(J_{1,2}^{x}=J_{1,2}^{y}\) & \(J_{1}\) & 4.50 K & 4.49 K \\ \hline \(J_{1,2}^{z}\) & \(J_{1}^{z}\) & 4.56 K & 4.54 K \\ \hline \(J_{2,3}^{x}=J_{2,3}^{y}=J_{3,1}^{x}=J_{3,1}^{y}\) & \(J_{2}\) & 4.03 K & 3.91 K \\ \hline \(J_{2,3}^{z}=J_{3,1}^{z}\) & \(J_{2}^{z}\) & 4.06 K & 3.96 K \\ \hline \(D_{1,2}^{z}=D_{2,3}^{z}=D_{3,1}^{z}\) & \(D\) & 0.529 K & 0.517 K \\ \hline \(D_{1,2}^{x}=D_{1,2}^{y}\) & \(D\) & 0.529 K & 0.517 K \\ \hline \(g_{1}^{x}=g_{1}^{y}\) & \(g_{1}\) & 2.25 & 2.24 \\ \hline \(g_{2}^{x}=g_{2}^{y}\) & \(g_{2}\) & 2.10 & 2.11 \\ \hline \(g_{3}^{x}=g_{3}^{y}\) & \(g_{3}\) & 2.40 & 2.40 \\ \hline \(g_{1}^{z}=g_{2}^{z}=g_{3}^{z}\) & \(g_{z}\) & 2.06 & 2.07 \\ \hline \end{tabular}
\end{table}
Table 1: Magnetic parameters of the \(\{\mathrm{Cu}_{3}-X\}\) compounds, where \(X\) denotes either As or Sb, as extracted from reference[23].
### Entropy
Entropy calculation is relevant as it plays a crucial role in the MCE, essentially serving as the "driving force" behind both the direct and inverse MCE, which we discuss subsequently. As such, entropy is fundamental to understanding the mechanism of the MCE and is pertinent in applications such as magnetic refrigeration. Consequently, entropy can be derived from the internal energy with the following relation
\[\mathcal{S}=\frac{\langle\mathbf{H}\rangle-f}{T}. \tag{6}\]
In Fig. 2a, the entropy is illustrated in the plane of temperature (in kelvin) and parallel external magnetic field (tesla). It is worth noting that at \(B_{\parallel}\approx 4.5\) T the entropy increases very fast in the low-temperature region; this is because the region dominated by two aligned spins and one oppositely aligned spin changes to a region where all spins are aligned with the magnetic field. Similarly, panel (b) illustrates the entropy in the plane of temperature and perpendicular external magnetic field. Although the plot is quite similar to panel (a), there are slight differences, such as the strong change occurring at a slightly higher magnetic field, \(B_{\perp}\approx 5\) T. It is also worth mentioning that in the absence of an external magnetic field the ground state is two-fold degenerate, so the entropy tends to \(\mathcal{S}\rightarrow\ln(2)\) at low temperature.
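In code, Eq. (6) follows immediately from the two routines sketched above; as a quick consistency check, at zero field and low temperature the result should approach the \(\ln(2)\) limit mentioned above:

```python
def entropy(eps, T):
    """S = (<H> - f) / T in units of k_B, Eq. (6)."""
    return (internal_energy(eps, T) - free_energy(eps, T)) / T

# Consistency check: S should approach ln(2) for B = 0 at low temperature
eps0 = np.linalg.eigvalsh(hamiltonian((0.0, 0.0, 0.0)))
print(entropy(eps0, 0.05), np.log(2))
```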
### Specific heat
Specific heat is of significant importance to the magnetocaloric effect (MCE), as it fundamentally influences the amount of heat absorbed or released during the application or removal of a magnetic field. It quantifies the amount of heat required to change a compound's temperature by a certain amount. Therefore, we can use the following relation to obtain the specific heat:
\[C=\frac{\langle\mathbf{H}^{2}\rangle-\langle\mathbf{H}\rangle^{2}}{T^{2}}, \tag{7}\]
where
\[\langle\mathbf{H}^{2}\rangle=\frac{1}{Z}\sum_{i=1}^{8}\varepsilon_{i}^{2} \mathrm{e}^{-\varepsilon_{i}/T}. \tag{8}\]
It is worth mentioning that the specific heat depends analytically on temperature once the eigenvalues are found numerically.
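A sketch of Eqs. (7)-(8), again working directly with the numerically obtained eigenvalues:

```python
def specific_heat(eps, T):
    """C = (<H^2> - <H>^2) / T^2 in units of k_B, Eqs. (7)-(8)."""
    w = np.exp(-(eps - eps.min()) / T)   # the variance is invariant under the shift
    Z = np.sum(w)
    m1 = np.sum(eps * w) / Z             # <H>
    m2 = np.sum(eps**2 * w) / Z          # <H^2>
    return (m2 - m1**2) / T**2
```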
Figure 3a presents the specific heat as a function of temperature and the parallel external magnetic field \(B_{\parallel}\). An anomalous behavior is noticeable at \(B_{\parallel}\approx 4.5\) T, which manifests as an unusual peak in the low-temperature region. Additionally, two peculiar peaks appear at \(B_{\parallel}\sim 2\) T, whereas other regions with a fixed magnetic field exhibit only one anomalous peak. Panel (b) illustrates an analogous plot, albeit with the perpendicular magnetic field \(B_{\perp}\). The specific heat plots mainly resemble those in panel (a), with the exception of the absence of a minimum at \(B_{\perp}\sim 2\) T. The low-temperature anomaly occurs at \(B_{\perp}\approx 5\) T. For a system with temperature around \(T\sim 1\) K, the specific heat exhibits unusual behavior, absorbing or releasing heat more efficiently for a given temperature change. High specific heat is a beneficial property, as it can enhance the overall efficiency and effectiveness of the triangular system. From the perspective of the MCE, this translates into more efficient magnetic cooling or heating.
Figure 1: (a) Internal energy \(U\) as a function of temperature; solid lines correspond to \(B_{\perp}\), while dashed lines correspond to \(B_{\parallel}\). (b-c) Internal energy \(U\) as a function of perpendicular and parallel external magnetic field, respectively. These plots are specifically for the \(\mathrm{Cu_{3}-As}\) compound.
Figure 3: (a) Specific heat \(C\) as a function of temperature and parallel external magnetic field. (b) Specific heat \(C\) as a function of temperature and perpendicular external magnetic field. For the \(\mathrm{Cu_{3}-As}\) compound.
Figure 2: (a) Entropy \(\mathcal{S}\) in the plane of temperature (in kelvin) and parallel external magnetic field (tesla). (b) Entropy \(\mathcal{S}\) in the plane of temperature (in kelvin) and perpendicular external magnetic field (tesla). These visualizations are based on the \(\mathrm{Cu_{3}-As}\) compound.
### Magnetization
We will now discuss magnetization, which plays a key role in the MCE. The strength of the MCE is directly related to the change in magnetization of the compound in response to variations in temperature and the applied magnetic field. In our case, the magnetization can be derived without taking numerical derivatives, through the following relation
\[\left\langle\left(\frac{\partial\mathbf{H}}{\partial B_{k}}\right)\right\rangle =\frac{1}{Z}\text{tr}\left\{\mathbf{H}_{B_{k}}\text{e}^{-\mathbf{H}/T} \right\}=\frac{1}{Z}\text{tr}\left\{\tilde{\mathbf{H}}_{B_{k}}\text{e}^{- \mathbf{E}/T}\right\}, \tag{9}\]
where \(\tilde{\mathbf{H}}_{B_{k}}=\mathbf{U}\left(\frac{\partial\mathbf{H}}{\partial B _{k}}\right)\mathbf{U}^{-1}\), and \(k=\{\parallel,\perp\}\). This method is a typical procedure to avoid numerical derivatives, as the Hamiltonian can be derived in relation to \(B_{k}\) analytically. Therefore, the magnetization becomes
\[M_{k}=-\frac{1}{\mathfrak{g}_{k}}\left\langle\left(\frac{\partial\mathbf{H}}{ \partial B_{k}}\right)\right\rangle, \tag{10}\]
where \(\mathfrak{g}_{k}\) is a normalization constant chosen for convenience, defined as follows: \(\mathfrak{g}_{k}=\frac{1}{3}\sum_{i=1}^{3}g_{i}^{k}\). The value of \(\mathfrak{g}_{\parallel}\) is 2.25 for both compounds, while the values of \(\mathfrak{g}_{\perp}\) are 2.06 and 2.07 for As and Sb, respectively.
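Since only the Zeeman term of Eq. (1) depends on the field, \(\partial\mathbf{H}/\partial B_{k}\) is known analytically, and Eqs. (9)-(10) reduce to a thermal average of that operator in the eigenbasis of \(\mathbf{H}\). A sketch, again taking the parallel direction along \(x\) (our assumption) and reusing the operators defined earlier:

```python
AXIS = {'x': 0, 'y': 1, 'z': 2}

def dH_dB(direction):
    """Analytic derivative dH/dB_k = mu_B * sum_j g_j^k S_j^k (Zeeman term only)."""
    a = AXIS[direction]
    return sum(MU_B * g[j][a] * S[j][a] for j in range(3))

def gbar(direction):
    """Normalization of Eq. (10): the site-averaged g-factor (2.25 in-plane, 2.06 along z for As)."""
    a = AXIS[direction]
    return np.mean([g[j][a] for j in range(3)])

def magnetization(B, T, direction='x'):
    """M_k of Eq. (10), evaluated in the eigenbasis of H to avoid numerical derivatives."""
    eps, U = np.linalg.eigh(hamiltonian(B))
    Adiag = np.real(np.diag(U.conj().T @ dH_dB(direction) @ U))
    w = np.exp(-(eps - eps.min()) / T)
    return -np.sum(Adiag * w) / np.sum(w) / gbar(direction)
```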
Figure 4a presents the magnetization as a function of the parallel magnetic field, \(B_{\parallel}\), at various temperature values, including zero-temperature. Note that a 1/3 quasi-plateau feature appears, fading as temperature increases to around 1K. Similarly, panel (b) reports magnetization as a function of a perpendicular external magnetic field. Here, the 1/3 quasi-plateau becomes more noticeable, with effects largely mirroring those in the previous panel. Conversely, panel (c) displays magnetization as a function of temperature, considering multiple external magnetic fields parallel to the triangle plane. In this panel, the quasi-plateau region converges to \(M\sim 0.5\), with a saturated region at \(M\to 1.5\). A significant curvature change occurs at around 1 K. Lastly, panel (d) illustrates the magnetization as a function of temperature for an external magnetic field perpendicular to the triangular plane. The magnetization behavior closely resembles that in panel (c), but with a more noticeable convergence to the 1/3 quasi-plateau in low-temperature regions. Here again, the main curvature change happens approximately at 1 K.
### Magnetic Susceptibility
Magnetic susceptibility is another relevant quantity for studying the MCE, as it determines how easily a material can be magnetized or demagnetized. To obtain this quantity, we can follow a procedure similar to the previous one. Thus, the magnetic susceptibility can be given by
\[\chi_{k}=\frac{1}{\mathfrak{g}_{k}^{2}T}\left\{\left\langle\left(\frac{ \partial\mathbf{H}}{\partial B_{k}}\right)^{2}\right\rangle-\left\langle\left( \frac{\partial\mathbf{H}}{\partial B_{k}}\right)\right\rangle^{2}\right\}. \tag{11}\]
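A literal transcription of Eq. (11), reusing `dH_dB` and `gbar` from the magnetization sketch; both averages are evaluated in the eigenbasis of \(\mathbf{H}\):

```python
def susceptibility(B, T, direction='x'):
    """chi_k of Eq. (11)."""
    eps, U = np.linalg.eigh(hamiltonian(B))
    A = U.conj().T @ dH_dB(direction) @ U
    w = np.exp(-(eps - eps.min()) / T)
    Z = np.sum(w)
    a1 = np.sum(np.real(np.diag(A)) * w) / Z        # <dH/dB>
    a2 = np.sum(np.real(np.diag(A @ A)) * w) / Z    # <(dH/dB)^2>
    return (a2 - a1**2) / (gbar(direction)**2 * T)
```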
Figure 5a illustrates the magnetic susceptibility times temperature (\(T\chi_{\parallel}\)) for the Cu\({}_{3}-\)As compound, plotted against temperature (in kelvin) and the parallel external magnetic field (tesla). It is noteworthy that at \(B_{\parallel}\approx 4.5\) T, \(T\chi_{\parallel}\) maintains a constant value of around \(T\chi_{\parallel}\approx 0.3\). This means that the magnetic susceptibility depends inversely on the temperature in the low-temperature region. A similar finite value is observed for the null magnetic field. This is due to the shift from the region dominated by two aligned spins and one oppositely aligned spin to the region of complete alignment with the external magnetic field. The \(\mathrm{Cu_{3}-X}\) compound, with higher magnetic susceptibility, can be magnetized or demagnetized more readily, resulting in greater thermal energy transfer and a more significant temperature change, roughly at \(T\lesssim 1\) K. Similarly, panel (b) depicts the product of magnetic susceptibility and temperature, \(T\chi_{\perp}\), in the plane of temperature and perpendicular external magnetic field. Although the behavior is quite similar to panel (a), there are slight differences, such as the pronounced change occurring at a slightly higher magnetic field, \(B_{\perp}\approx 5\) T. Therefore, magnetic susceptibility affects the magnitude of the temperature change observed during the MCE and could play a crucial part in enhancing magnetic refrigeration systems.
Figure 4: (a) Magnetization versus parallel magnetic field at various temperatures. (b) Magnetization versus perpendicular magnetic field at the same temperatures. (c) Temperature-dependent magnetization for several parallel magnetic fields. (d) Analogously, temperature-dependent magnetization for a set of perpendicular magnetic fields. The study considers the Cu\({}_{3}-\)As compound.
## IV Magnetocaloric effect
The Magnetocaloric Effect (MCE) refers to the thermal response of a material to the change in an external magnetic field. It holds potential for practical applications such as energy-efficient cooling technologies. Accordingly, we will discuss aspects like the isentropic curve and Gruneisen parameter.
### Isentropic curve
In magnetic systems, isentropic curves or adiabatic temperature curves provide a useful tool to visualize and understand the MCE. In the context of MCE, an isentropic curve represents a process that occurs at constant entropy in a magnetic field-temperature phase diagram.
In Figure 6a, the isentropic curve for \(\left\{\mathrm{Cu_{3}-As}\right\}\) is illustrated for a parallel magnetic field. Between null magnetic field and \(B_{\parallel}\approx 4.5\) T, the system exhibits the first step of magnetization, with two spins aligned parallel to the magnetic field while the third one is aligned antiparallel. For \(B_{\parallel}\gtrsim 4.5\) T, all spins become aligned with the magnetic field. The isentropic curve shows high sensitivity at relatively low entropy, and as temperature increases, the crossover region between these two states manifests as a minimum. This minimum gradually disappears around \(T\sim 1\) K. Notably, strong slopes of the isentropic curves occur around the minimum, indicating a large MCE in this region. Similarly, in Figure 6b, the isentropic curve for \(\left\{\mathrm{Cu_{3}-As}\right\}\) is depicted for a perpendicular magnetic field. The behavior of the isentropic curve is largely analogous to the previous case, with the only difference being that the minimum occurs at approximately \(B_{\perp}\approx 5\) T. These curves provide insight into the temperature changes of a system in response to the application or removal of an external magnetic field. Additionally, the shape of the isentropic curve can provide valuable information about possible zero-temperature magnetic phase transitions in the \(\left\{\mathrm{Cu_{3}-As}\right\}\) compound.
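One practical way to trace such curves (our choice of method, not one prescribed in the original analysis) is to fix a target entropy and solve \(\mathcal{S}(B,T)=\mathcal{S}_{\mathrm{target}}\) for \(T\) at each field value; since the entropy increases monotonically with temperature at fixed field, a bracketing root-finder suffices:

```python
from scipy.optimize import brentq

def isentrope_temperature(Bx, S_target, T_lo=0.02, T_hi=20.0):
    """Temperature at which S(B, T) = S_target for a parallel field Bx (tesla, along x)."""
    eps = np.linalg.eigvalsh(hamiltonian((Bx, 0.0, 0.0)))
    return brentq(lambda T: entropy(eps, T) - S_target, T_lo, T_hi)

# Example: one isentrope crossing the B_par ~ 4.5 T region
curve = [(Bx, isentrope_temperature(Bx, 0.3)) for Bx in np.linspace(0.5, 8.0, 16)]
```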
### Gruneisen parameters
The Gruneisen parameter plays an essential role in understanding the MCE, which refers to the change in temperature of a material resulting from variations in an applied magnetic field. This effect has significant applications in magnetic cooling technologies. The Gruneisen parameter quantifies the relationship between the change in temperature of the compound and the magnetic field under constant entropy conditions. Specifically, the Gruneisen parameter \(\Gamma\) is defined as the ratio of the temperature derivative of the magnetization per mole to the molar specific heat,
\[\Gamma_{k}=-\frac{1}{C}\bigg{(}\frac{\partial M}{\partial T}\bigg{)}_{B_{k}}=-\frac{\bigg{(}\frac{\partial\mathcal{S}}{\partial B_{k}}\bigg{)}_{T}}{T\big{(}\frac{\partial\mathcal{S}}{\partial T}\big{)}_{B_{k}}}=\frac{1}{T}\bigg{(}\frac{\partial T}{\partial B_{k}}\bigg{)}_{\mathcal{S}}, \tag{12}\]
Figure 5: (a) Magnetic susceptibility times temperature, \(T\chi_{\parallel}\), in the plane of temperature and parallel external magnetic field. (b) Magnetic susceptibility \(T\chi_{\perp}\) in the plane of temperature and perpendicular external magnetic field.
Figure 6: (a) Isentropic curves of constant entropy \(\mathcal{S}\) (in units of \(k_{B}\)): temperature \(T\left(\mathrm{K}\right)\) against the parallel magnetic field \(B_{\parallel}\left(\mathrm{T}\right)\). (b) Isentropic curves: temperature \(T\left(\mathrm{K}\right)\) as a function of the perpendicular magnetic field \(B_{\perp}\left(\mathrm{T}\right)\). Both plots are for the \(\mathrm{Cu_{3}-As}\) compound.
To avoid numerical derivatives, an equivalent expression of the Gruneisen parameter can be derived
\[\Gamma_{k}=\frac{1}{\mathfrak{g}_{k}}\frac{\langle\mathbf{H}_{B_{k}}\mathbf{H} \rangle-\langle\mathbf{H}_{B_{k}}\rangle\langle\mathbf{H}\rangle}{\langle \mathbf{H}^{2}\rangle-\langle\mathbf{H}\rangle^{2}}. \tag{13}\]
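Because \(\mathbf{H}\) is diagonal in its own eigenbasis, every average in Eq. (13) reduces to a weighted sum over eigenvalues and diagonal matrix elements; a sketch reusing the helpers defined above:

```python
def grueneisen(B, T, direction='x'):
    """Gamma_k of Eq. (13)."""
    eps, U = np.linalg.eigh(hamiltonian(B))
    Adiag = np.real(np.diag(U.conj().T @ dH_dB(direction) @ U))
    w = np.exp(-(eps - eps.min()) / T)
    Z = np.sum(w)
    a1 = np.sum(Adiag * w) / Z          # <H_B>
    aH = np.sum(Adiag * eps * w) / Z    # <H_B H>
    h1 = np.sum(eps * w) / Z            # <H>
    h2 = np.sum(eps**2 * w) / Z         # <H^2>
    return (aH - a1 * h1) / (gbar(direction) * (h2 - h1**2))
```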
Further research has explored these phenomena by analyzing the adiabatic temperature change, which demonstrates a strong correlation with the magnetic entropy change[37].
In Figure 7a, the Gruneisen parameter is illustrated as a function of the parallel magnetic field \(B_{\parallel}\) for various fixed temperatures. The Gruneisen parameter shows significant changes in response to an applied magnetic field, with the most notable variations occurring at \(B_{\parallel}\sim 1\) T for temperatures around \(T\sim 1\) K. As the temperature increases, the magnitude of these changes decreases. Another region where the Gruneisen parameter becomes relevant is around \(B_{\parallel}\approx 4.5\) T, exhibiting a strong variation. However, as the temperature increases, the magnitude of the Gruneisen parameter at this field strength decreases, eventually diminishing. Panel (b) presents the equivalent quantity obtained by applying a perpendicular magnetic field \(B_{\perp}\), yielding results similar to the parallel case. Additionally, in panel (c), we depict the variation of \(\Gamma\) as a function of temperature for different external parallel magnetic fields. A significant change in the Gruneisen parameter \(\Gamma_{\parallel}\) is observed for temperatures below \(T\sim 2\) K. When the magnetic field is lower than \(B_{\parallel}\sim 2\) T, \(\Gamma_{\parallel}\) is positive, whereas for \(B_{\parallel}\gtrsim 2\) T, this parameter becomes negative. Moreover, for temperatures \(T\gtrsim 2\) K, the Gruneisen parameter decreases significantly. The final panel is similar to panel (c), but for a perpendicular magnetic field. Some differences arise, such as a stronger \(\Gamma_{\perp}\) compared to the parallel case for low magnetic fields (\(B_{\perp}\sim 0.5\) T) in the low-temperature region. Conversely, for large magnetic fields (\(B_{\perp}\sim 4.0\) T), the Gruneisen parameter \(\Gamma_{\perp}\) is weaker than in the parallel case. In conclusion, the study of Cu\({}_{3}-\)As reveals a significant Gruneisen parameter at around \(B\sim 5\) T, indicating a prominent MCE. This finding holds crucial implications for the selection and design of magnetic refrigeration systems.
Last but not least, the MCE can also be analyzed through the variation in entropy, \(\Delta\mathcal{S}=\mathcal{S}(0,T)-\mathcal{S}(B,T)\), associated with the magnetic phase. This variation occurs due to the alignment or realignment of spins, resulting in changes in the order and disorder of the Cu\({}_{3}-X\) compound and leading to a temperature change. Therefore, \(\Delta\mathcal{S}\) is an important quantity for exploring the magnetocaloric performance.
Figure 7: (a) Grüneisen parameter \(\Gamma_{\parallel}\) as a function of parallel magnetic field \(B_{\parallel}\), for a range of temperatures. (b) Grüneisen parameter \(\Gamma_{\perp}\) as a function of perpendicular magnetic field. (c) \(\Gamma_{\parallel}\) as a function of temperature for a set of parallel magnetic fields. (d) \(\Gamma_{\perp}\) as a function of temperature for a number of perpendicular magnetic fields. The magnetic field is in units of tesla, while temperature is measured in kelvin. For the Cu\({}_{3}-\)As compound.
Figure 8: (a) Entropy variation \(\Delta\mathcal{S}\) as a function of parallel magnetic field \(B_{\parallel}\), for a set of fixed temperatures. (b) Entropy variation \(\Delta\mathcal{S}\) as a function of perpendicular magnetic field. (c) \(\Delta\mathcal{S}\) as a function of temperature for a variety of parallel magnetic fields. (d) \(\Delta\mathcal{S}\) as a function of temperature for different perpendicular magnetic fields.
In Fig. 8a, we illustrate \(\Delta\mathcal{S}\) as a function of the parallel external magnetic field \(B_{\parallel}\) for several fixed temperatures. We observe that at a temperature of \(T\sim 0.1\) K, the entropy variation remains almost constant (\(\Delta\mathcal{S}\sim\ln(2)\approx 0.7\)). This is because the system is roughly doubly degenerate at null magnetic field, and the degeneracy is broken by the presence of a magnetic field. There is a slight depression at \(B_{\parallel}\approx 4.5\) T, indicating a change in the dominant phases at this magnetic field. This behavior changes significantly as the temperature increases. For \(B_{\parallel}\lesssim 4.5\) T, \(\Delta\mathcal{S}\) decreases significantly, while for \(B_{\parallel}\gtrsim 4.5\) T, it becomes larger. Panel (b) shows an analogous behavior but for the perpendicular external magnetic field \(B_{\perp}\). The only difference is that the depression occurs at \(B_{\perp}\approx 5.0\) T. Furthermore, in panel (c), we present \(\Delta\mathcal{S}\) as a function of temperature for several fixed parallel magnetic fields \(B_{\parallel}\). For magnetic fields \(B_{\parallel}\lesssim 4.5\) T, \(\Delta\mathcal{S}\) decreases monotonically. However, for stronger magnetic fields \(B_{\parallel}\gtrsim 4.5\) T, a maximum appears in \(\Delta\mathcal{S}\). Similarly, panel (d) depicts \(\Delta\mathcal{S}\) as a function of temperature, assuming a fixed perpendicular magnetic field. The behavior is mainly similar to panel (c), although \(\Delta\mathcal{S}\) does not decrease monotonically. Additionally, for strong magnetic fields, it also exhibits a maximum, as observed in the previous panel.
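The quantity \(\Delta\mathcal{S}\) follows directly from the entropy routine sketched earlier; the small helper below (ours, for illustration) evaluates it for an arbitrary field vector:

```python
def delta_S(B, T):
    """Entropy change Delta S = S(0, T) - S(B, T) in units of k_B, for B in tesla."""
    eps_zero = np.linalg.eigvalsh(hamiltonian((0.0, 0.0, 0.0)))
    eps_field = np.linalg.eigvalsh(hamiltonian(B))
    return entropy(eps_zero, T) - entropy(eps_field, T)

# Example: at T = 0.1 K and B_par = 2 T, Delta S should be close to ln(2)
print(delta_S((2.0, 0.0, 0.0), 0.1))
```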
## V Conclusions
In this paper, we conduct a theoretical exploration of the Cu\({}_{3}-X\) antiferromagnetic spin system (where X = As, Sb), which is identified by its isosceles or slightly distorted equilateral triangular configurations, as detailed in reference [21; 23; 24]. This system can be accurately depicted using the Heisenberg model on a triangular structure, incorporating factors like the exchange interaction, Dzyaloshinskii-Moriya interaction, g-factors, and external magnetic fields.
Recently, Cu\({}_{3}-X\) has garnered significant attention due to its fundamental properties [21; 23; 24]. Furthermore, the scientific community has shown a growing interest in the exploration of several magnetic compounds [25; 26; 27; 28; 29; 30; 31] due to their diverse potential in areas such as spintronics, nanotechnology, and biomedicine.
Our investigation uses a numerical approach to analyze both zero-temperature and finite-temperature behaviors of the Cu\({}_{3}\)-like spin system. At zero temperature, the system exhibits twofold degenerate energy in the absence of a magnetic field and a 1/3 quasi-plateau magnetization when the magnetic field is varied. At finite temperatures, our focus primarily lies on analyzing magnetic properties such as magnetization, magnetic susceptibility, entropy, and specific heat.
In addition, we examine the MCE in relation to an externally applied magnetic field, oriented both parallel and perpendicular to the plane of the triangular structure. The Cu\({}_{3}-X\) compound displays remarkably consistent behavior for both orientations of the magnetic field. We also extend our study to include the evaluation of the isentropic curves, the Grüneisen parameter, and the variation in entropy during the application or removal of the magnetic field. Our results confirm that the MCE is most prominent in the low-temperature region below \(T\sim 1\) K, around 4.5 T for the parallel and 5 T for the perpendicular magnetic field. This study could contribute to the research and development of nano-compounds with triangular structures, potentially improving the performance of the MCE. Such advancements may be especially intriguing for applications in the cryogenic temperature range that utilize moderate magnetic fields.
###### Acknowledgements.
G. A. A. thanks CAPES; O. R. and S. M. de Souza thank CNPq and FAPEMIG for partial financial support. A. S. M. also thanks FAPEMIG (APQ-01294-21) for partial funding.
|
2309.05771 | RawHash2: Mapping Raw Nanopore Signals Using Hash-Based Seeding and
Adaptive Quantization | Summary: Raw nanopore signals can be analyzed while they are being generated,
a process known as real-time analysis. Real-time analysis of raw signals is
essential to utilize the unique features that nanopore sequencing provides,
enabling the early stopping of the sequencing of a read or the entire
sequencing run based on the analysis. The state-of-the-art mechanism, RawHash,
offers the first hash-based efficient and accurate similarity identification
between raw signals and a reference genome by quickly matching their hash
values. In this work, we introduce RawHash2, which provides major improvements
over RawHash, including a more sensitive quantization and chaining
implementation, weighted mapping decisions, frequency filters to reduce
ambiguous seed hits, minimizers for hash-based sketching, and support for the
R10.4 flow cell version and various data formats such as POD5 and SLOW5.
Compared to RawHash, RawHash2 provides better F1 accuracy (on average by 10.57%
and up to 20.25%) and better throughput (on average by 4.0x and up to 9.9x).
Availability and Implementation: RawHash2 is available at
https://github.com/CMU-SAFARI/RawHash. We also provide the scripts to fully
reproduce our results on our GitHub page. | Can Firtina, Melina Soysal, Joël Lindegger, Onur Mutlu | 2023-09-11T18:56:48Z | http://arxiv.org/abs/2309.05771v5 | # RawHash2: Accurate and Fast Mapping of Raw Nanopore Signals using a Hash-based Seeding Mechanism
###### Abstract
Summary: Raw nanopore signals can be analyzed while they are being generated, a process known as real-time analysis. Real-time analysis of raw signals is essential to utilize the unique features that nanopore sequencing provides, enabling the early stopping of the sequencing of a read or the entire sequencing run based on the analysis. The state-of-the-art mechanism, RawHash, offers the first hash-based efficient and accurate similarity identification between raw signals and a reference genome by quickly matching their hash values. In this work, we introduce RawHash2, which provides major improvements over RawHash, including a more sensitive chaining implementation, weighted mapping decisions, frequency filters to reduce ambiguous seed hits, minimizers for hash-based sketching, and support for the R10.4 flow cell version and various data formats such as POD5. Compared to RawHash, RawHash2 provides better \(F_{1}\) accuracy (on average by \(3.44\%\) and up to \(10.32\%\)) and better throughput (on average by \(2.3\times\) and up to \(5.4\times\)).
Availability and Implementation: RawHash2 is available at [https://github.com/CMU-SAFARI/RawHash](https://github.com/CMU-SAFARI/RawHash). We also provide the scripts to fully reproduce our results on our GitHub page.
## 1 Introduction
Nanopore technology can sequence long nucleic acid molecules up to more than two million bases at high throughput [1]. As a molecule moves through a tiny pore, called a _nanopore_, ionic current measurements are generated at a certain throughput (e.g., around 450 bases per second [2, 3]). These electrical measurements, known as _raw signals_, can be used to 1) identify individual bases in the molecule with computational techniques such as _basecalling_[4] and 2) analyze raw signals directly _without_ translating them to bases [5].
Computational techniques that can analyze the raw signals while they are generated at a speed that matches the throughput of nanopore sequencing are called _real-time analysis_. Figure 1 shows the two unique benefits that real-time analysis offers. First, unlike traditional analysis pipelines in genomics, real-time analysis allows for overlapping sequencing time with analysis time as raw signals can be analyzed while they are being generated. Second, based on the analysis, computational mechanisms can stop the sequencing of a read or the entire sequencing run early without sequencing the entire molecule or the sample using techniques known as Read Until [6] and Run Until [7]. The development of accurate and fast mechanisms for real-time analysis has the potential to significantly reduce the time and cost of genome analysis.
There are several mechanisms that can perform real-time analysis of raw nanopore signals to achieve accurate and fast genome analysis [2, 3, 5, 7, 8, 9, 10, 11, 12, 13, 14, 15]. Most of these solutions have three main limitations. First, many mechanisms offer limited scalability or support on resource-constrained devices due to their reliance on either 1) deep neural networks (DNNs) for real-time base translation, which are usually computationally intensive and power-hungry [7, 16], or 2) specialized hardware such as ASICs or FPGAs [9, 10, 11]. Second, while some mechanisms can directly analyze raw signals without base translation, offering an efficient alternative for real-time analysis [2, 3], they often compromise on accuracy or performance when applied to larger genomes. Third, machine learning-based methods frequently necessitate retraining or reconfiguration [10, 12, 17], adding a layer of complexity and reducing their flexibility for general use cases, such as read mapping to any genome.
Among the existing works, RawHash [5] is the first mechanism that can accurately perform real-time analysis of raw nanopore signals for large genomes without translating them to bases. To facilitate this real-time analysis, RawHash employs a standard seed-and-extend mechanism [18] to identify similarities between raw signals and a reference genome. To achieve this, RawHash introduces the _first_ hash-based mechanism that 1) generates hash values from raw signals and 2) quickly matches these hash values with those generated from reference genomes at high accuracy. This hash-based approach obviates the need for more computationally expensive methods, such as distance computations used in a prior state-of-the-art work [3]. Despite its strengths in accuracy and performance, particularly for large genomes like the human genome, RawHash exhibits several limitations that require further improvements. First, RawHash utilizes a simple chaining algorithm, akin to that used in Sigmap [3], without incorporating penalty scores used in minimap2 [19], which constrains its ability for more sensitive mapping. Second, RawHash performs chaining on all seed hits without filtering any of these seed hits, which substantially increases the workload of the chaining algorithm due to a large number of seed hits to chain. Third, the decision-making mechanism in RawHash for mapping reads to a reference genome in real-time relies on a manual and fixed ordering of conditions based on chaining scores. A
Figure 1: Two main benefits of real-time analysis with nanopore sequencing.
more robust and statistical approach that incorporates features beyond chaining scores can provide additional insights for making more sensitive and quick mapping decisions. Fourth, while the hash-based mechanism in RawHash is compatible with existing sketching techniques such as minimizers [19, 20], strobemers [21], and fuzzy seed matching as in BLEND [22], the benefits of these techniques are unknown for raw signal analysis as they are not used in RawHash. Such evaluations could potentially provide additional insights on how to use the existing hash-based sketching techniques and reduce storage requirements while maintaining high accuracy. Fifth, RawHash lacks the support for recent advancements, including support for the R10.4 flow cell version and the new default data format of Oxford Nanopore Technologies (ONT), POD5, which supersedes the older FAST5 format. The integration of these features would enhance the adoption of RawHash.
In this work, our goal is to address the aforementioned limitations of RawHash by improving its mechanism. To this end, we propose RawHash2 to improve RawHash in five directions. First, to improve the accuracy of chaining and subsequently read mapping, we implement a more sophisticated chaining algorithm that incorporates penalty scores (as in minimap2). Second, to improve the performance of chaining by reducing its workload, RawHash2 provides a filter that removes seeds frequently appearing in the reference genome, known as a _frequency filter_. Third, we introduce a statistical method that utilizes multiple features for making mapping decisions based on their weighted scores to eliminate the need for manual and fixed conditions to make decisions. Fourth, we extend the hash-based mechanism to incorporate and evaluate the minimizer sketching technique, aiming to reduce storage requirements without significantly compromising accuracy. Fifth, to enable better adoption of RawHash2, we integrate support for R10 chemistry and the new default data format, POD5.
Compared to RawHash, our extensive evaluations on five genomes of varying sizes and six different real datasets show that RawHash2 provides higher accuracy (by 3.44% on average and 10.32% at maximum) and better read mapping throughput (by 2.3\(\times\) on average and 5.4\(\times\) at maximum). We make the following contributions:
* We propose substantial algorithmic improvements to the state-of-the-art tool, RawHash. These include more sensitive chaining with penalty scores, a frequency filter, mapping decisions based on a weighted sum of several features that can contribute to the decision, the minimizer sketching technique, the POD5 file format, and R10 chemistry support. We evaluate the benefits of these changes extensively.
* We provide the support for the R10.4 flow cell version and evaluate it with RawHash2.
* We provide support for the POD5 format and evaluate its benefits compared to the previous standard format, FAST5.
## 2 Methods
RawHash is a mechanism to find similarities between raw signals by quickly matching their hash values. We provide the details of the RawHash mechanism in Supplementary Section S1. RawHash2 provides substantial improvements over RawHash in five key directions. First, to provide more accurate mapping, RawHash2 improves the chaining algorithm in RawHash with penalty scores to generate chains from seed hits with better sensitivity. Second, to reduce the workload in chaining for improved performance, we integrate a frequency filter to quickly eliminate the seed hits that occur too frequently, based on a set frequency threshold, before they are used in the chaining process. Third, to make more accurate and quick mapping decisions, RawHash2 determines whether a read should be mapped at a specific point during sequencing by using a weighted sum of multiple features, which are derived from chaining scores and mapping quality, rather than manually checking if certain scores surpass fixed thresholds. Fourth, to reduce the storage requirements of seeds, the minimizer sketching technique is incorporated into RawHash2, taking advantage of RawHash's unique ability to integrate hash-based sketching techniques with its seed generation mechanism. Fifth, RawHash2 includes support for the latest features introduced by ONT, such as new file formats and updated flow cell versions to enable better adoption of RawHash2.
### _Chaining with Penalty Scores_
To identify the similarities between a reference genome (i.e., target sequence) and a raw signal (i.e., query sequence), the series of seed hits within close proximity in terms of their matching positions are identified using a dynamic programming (DP) algorithm, known as _chaining_. Using a chaining terminology similar to that of minimap2 [19], a seed hit between a reference genome and a raw signal is usually represented by a 3-tuple (\(x\), \(y\), \(w\)), known as an _anchor_, where \(w\) represents the length of the region that a seed spans; the start and end positions of the matching intervals in the reference genome and the raw signal are given by \([x-w+1,x]\) and \([y-w+1,y]\), respectively. The chain of anchors within close proximity is identified by calculating the optimal chain score \(f(i)\) of each anchor \(i\), where \(f(i)\) is calculated based on the predecessors of anchor \(i\) when anchors are sorted by their reference positions. To calculate the chain score \(f(i)\) with dynamic programming, RawHash performs the following computation, as used in Sigmap [3]:
\[f(i)=\max\big{\{}\max_{i>j\geq 1}\{f(i)+\alpha(j,i)\},\,w_{i}\big{\}} \tag{1}\]
where \(\alpha(j,i)=\min\big\{\min[y_{i}-y_{j},x_{i}-x_{j}],\,w_{i}\big\}\) is the length of the matching region between the two anchors. Although such a design is useful when identifying substantially fewer seed matches using a seeding technique based on distance calculation as used in Sigmap, RawHash identifies a larger number of seed matches, as it uses hash values to identify the matching region, which is usually faster than a distance calculation at the cost of reduced sensitivity.
To identify the correct mapping regions among such a large number of seed matches, RawHash2 uses a more sensitive chaining technique as used in minimap2 by integrating the gap penalty scores such that the chain score of an anchor \(i\) is calculated as shown in Equation 2:
\[f(i)=\max\big{\{}\max_{i:j\geq 1}[f(i)+\alpha(j,i)-\beta(j,i)],\,w_{i}\big{\}} \tag{2}\]
where \(\beta(j,i)=\gamma_{c}\big((y_{i}-y_{j})-(x_{i}-x_{j})\big)\) is the penalty score calculated based on the gap distance, \(l\), between a pair of anchors \(i\) and \(j\), where \(\gamma_{c}(l)=0.01\cdot w\cdot|l|+0.5\log_{2}|l|\). Based on the chain score calculation with gap costs, RawHash2 integrates similar heuristics, the mapping quality calculation, and the same complexity when calculating the chaining scores with the gap penalty as described in minimap2 [19].
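To make the recurrence in Equation 2 concrete, the following minimal Python sketch implements the \(O(n^{2})\) chaining DP with the gap penalty. The fixed seed span `w` and the absence of the band and skip heuristics used by minimap2 and RawHash2 are simplifications for illustration.

```python
import math

def chain_scores(anchors, w=16.0):
    """O(n^2) chaining DP of Eq. 2 (minimal sketch).

    anchors: list of (x, y) anchor end positions, sorted by reference
    position x; w: seed span, used as the per-anchor score and in the gap
    cost. Returns per-anchor best chain scores and backtracking pointers.
    """
    n = len(anchors)
    f = [w] * n          # f(i) is at least the anchor's own score w_i
    parent = [-1] * n
    for i in range(1, n):
        xi, yi = anchors[i]
        for j in range(i):
            xj, yj = anchors[j]
            if xj >= xi or yj >= yi:
                continue                      # predecessors must precede i
            alpha = min(yi - yj, xi - xj, w)  # matching length alpha(j, i)
            gap = (yi - yj) - (xi - xj)       # gap distance l
            beta = 0.0 if gap == 0 else 0.01 * w * abs(gap) + 0.5 * math.log2(abs(gap))
            if f[j] + alpha - beta > f[i]:
                f[i] = f[j] + alpha - beta
                parent[i] = j
    return f, parent

print(chain_scores([(10, 12), (30, 33), (70, 90), (95, 115)]))
```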
### Frequency Filters
RawHash2 introduces a two-step frequency filtering mechanism to 1) reduce the computational workload of the chaining process by limiting the number of anchors it processes and 2) focus on more unique and potentially meaningful seed hits. First, to reduce the number of queries made to the hash table for identifying seed hits, RawHash2 eliminates non-unique hash values generated from raw signals that appear more frequently than a specified threshold. Second, RawHash2 evaluates the frequency of each seed hit within the reference genome and removes those that surpass a predefined frequency threshold, which reduces the overall workload of the chaining algorithm by providing a reduced set of more unique seed hits.
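A minimal sketch of this two-step filter is shown below; the thresholds `max_ref_occ` and `max_read_occ` are illustrative placeholders rather than RawHash2's defaults.

```python
from collections import Counter

def build_filtered_index(ref_hashes, max_ref_occ=500):
    """Map each hash value to its reference positions, dropping hash values
    that occur more than max_ref_occ times in the reference."""
    counts = Counter(ref_hashes)
    index = {}
    for pos, h in enumerate(ref_hashes):
        if counts[h] <= max_ref_occ:
            index.setdefault(h, []).append(pos)
    return index

def collect_seed_hits(index, read_hashes, max_read_occ=16):
    """Skip non-unique query hash values that repeat more than max_read_occ
    times within the read, then gather the remaining seed hits."""
    counts = Counter(read_hashes)
    hits = []
    for qpos, h in enumerate(read_hashes):
        if counts[h] > max_read_occ:
            continue
        for rpos in index.get(h, ()):
            hits.append((rpos, qpos))  # anchors passed on to chaining
    return hits
```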
### Weighted Mapping Decision
RawHash performs mapping while receiving chunks of signals in real-time, as provided by nanopore sequencers. It is essential to decide if a read maps to a reference genome as quickly as possible to avoid unnecessary sequencing. The decision-making process in RawHash is based on a series of conditional checks involving chain scores. These checks are performed in a certain order and against fixed ratios and mean values, making the decision mainly rigid and less adaptive to variations.
To employ a more statistical approach that can generalize across different datasets and genomes, RawHash2 calculates a weighted sum of multiple features that can impact the mapping decision. To achieve this, RawHash2 calculates normalized ratios of various mapping quality metrics and chain scores, such as the ratio of the mapping quality to the maximum mapping quality (i.e., 60), the mapping quality ratio between the best chain and the mean quality of all chains, and the ratio of the chain score between the best and the mean score of all chains. These \(n\) ratios are combined into a weighted sum as follows: \(w_{\text{sum}}=\sum_{i=1}^{n}r_{i}\times w_{i}\), where \(r_{i}\) is the ratio of a particular metric and \(w_{i}\) is the weight assigned to that metric. The weighted sum, \(w_{\text{sum}}\), is compared against a predefined threshold value to decide if a read is considered to be mapped. RawHash2 maps a read if the weighted sum exceeds the threshold. Such a weighted sum approach allows RawHash2 to adaptively consider multiple aspects of the data and eliminates the potential effect of the ordering of these checks, achieving improved mapping accuracy while maintaining computational efficiency.
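A minimal sketch of this decision rule follows. The feature set matches the ratios described above, but the weights and threshold are illustrative placeholders rather than RawHash2's tuned values, and capping each ratio at 1 is an assumption made here to keep the sum bounded.

```python
from dataclasses import dataclass

@dataclass
class Chain:
    score: float  # chaining score
    mapq: float   # mapping quality in [0, 60]

def should_map(chains, weights=(0.4, 0.3, 0.3), threshold=0.6):
    """Weighted mapping decision over normalized feature ratios (sketch)."""
    best = max(chains, key=lambda c: c.score)
    mean_q = sum(c.mapq for c in chains) / len(chains)
    mean_s = sum(c.score for c in chains) / len(chains)
    ratios = (
        best.mapq / 60.0,               # mapq vs. the maximum mapq (60)
        best.mapq / max(mean_q, 1e-6),  # best vs. mean chain quality
        best.score / max(mean_s, 1e-6), # best vs. mean chain score
    )
    # Cap each ratio at 1 so the weighted sum stays in [0, 1]
    w_sum = sum(min(r, 1.0) * w for r, w in zip(ratios, weights))
    return w_sum >= threshold

print(should_map([Chain(120, 55), Chain(40, 10), Chain(35, 5)]))  # True
```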
### Minimizer Sketching and Fuzzy Seeding
RawHash provides the opportunity to integrate the existing hash-based sketching techniques such as minimizers [19, 20] for reduced storage requirements and fuzzy seed matching [22] for improved sensitivity in seeding.
To reduce the storage requirements of storing seeds in raw signals and due to their widespread application, RawHash2 integrates minimizers in two steps. First, RawHash2 generates hash values for seeds in both the reference genome and the raw signal. Second, within each window comprising \(w\) hash values, the minimum hash value is selected as the minimizer. These minimizer hash values serve the same purpose as in RawHash for identifying similarities via hash tables while significantly reducing the number of hash values that need to be stored and queried during the mapping process.
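The following sketch shows the classic minimizer selection over a sequence of hash values, i.e., keeping the minimum hash value within every window of `w` consecutive values; the window size is an illustrative choice.

```python
def minimizers(hashes, w=5):
    """Keep the minimum hash value within every window of w consecutive values."""
    selected = []
    for start in range(len(hashes) - w + 1):
        window = hashes[start:start + w]
        pos = start + min(range(w), key=lambda k: window[k])
        if not selected or selected[-1][0] != pos:  # consecutive windows share minimizers
            selected.append((pos, hashes[pos]))
    return selected

# Only a fraction of the hash values survives, reducing storage and queries
print(minimizers([9, 3, 7, 1, 8, 8, 2, 6], w=3))  # [(1, 3), (3, 1), (6, 2)]
```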
### Support for New Data Formats and Flow Cells
To enable better and faster adoption, RawHash2 incorporates support for 1) recent data formats for storing raw signals, namely POD5 and SLOW5 [23] as well as the existing FAST5 format, and 2) the latest flow cell versions due to two main reasons. First, transitioning from the FAST5 to the POD5 file format is crucial for broad adoption, as POD5 is the new standard file format introduced by Oxford Nanopore Technologies (ONT). Second, since no tool can map raw signals from the more recent flow cell versions (e.g., R10.4 chemistry), RawHash2 becomes the first tool that supports mapping the raw signals from R10.4 flow cells. This positions RawHash2 at the forefront of adaptability and versatility in the analysis of diverse nanopore sequencing data.
## 3 Results
### Evaluation Methodology
We implement the improvements we propose in RawHash2 directly on the RawHash implementation. Similar to RawHash, RawHash2 provides the mapping information using a standard pairwise mapping format (PAF).
We compare RawHash2 with the published state-of-the-art works UNCALLED [2], Sigmap [3], and RawHash [5] in terms of throughput, accuracy, and the number of bases that need to be processed before stopping the sequencing of a read, to estimate the benefits in sequencing time and, potentially, the cost. For throughput, we calculate the number of bases that each tool can process per second, which is essential to determine if the tool is at least as fast as the speed of sequencing from a single nanopore. In many commonly used nanopore sequencers, a nucleic acid molecule passes through a pore at around 450 bases per second [2, 3].
For accuracy, we analyze two use cases: 1) read mapping and 2) contamination analysis. To identify the correct mappings, we generate the ground truth mapping output in PAF by mapping the basecalled sequences of corresponding raw signals to their reference genomes using minimap2 [19]. We use UNCALLED pafstats to compare the mapping output from each tool with their corresponding ground truth mapping output to calculate precision (\(P=\mathit{TP}/(\mathit{TP}+\mathit{FP})\)), recall (\(R=\mathit{TP}/(\mathit{TP}+\mathit{FN})\)), and \(F_{1}\) (\(F_{1}=2\times(P\times R)/(P+R)\)) values, similar to RawHash [5]. For read mapping, we compare the tools in terms of their precision, recall, and \(F_{1}\) scores. For contamination analysis, the goal is to identify if a particular sample
is contaminated with a certain genome (or set of genomes), which makes the precision metric more important for such a use case. For this use case, we compare the tools in terms of their precision in the main paper and show the full results (i.e., precision, recall, and \(F_{1}\)) in the Supplementary Table S1.
For estimating the benefits in sequencing time and the cost per read, we identify the average sequencing length before making the mapping decision for a read. For all of our analyses, we use the default parameters of each tool as we show in Supplementary Table S7 with the real datasets we show in Supplementary Table S6. Our datasets include raw signals from both R9.4 and R10.4 flow cell versions. Although RawHash2 does not use the minimizer sketching technique by default to achieve the maximum accuracy, we evaluate the benefits of minimizers in RawHash2, which we refer to as RawHash2-Minimizer. Since RawHash2 is the only tool that can perform real-time mapping of raw R10.4 nanopore signals, we show the corresponding results when using the R10.4 dataset without comparing these results with the existing tools.
### Throughput
Figure 2 shows the throughput result of each tool. We make two key observations. First, we find that RawHash2 provides 38.2\(\times\), 8.1\(\times\), and 2.3\(\times\) better average throughput than UNCALLED, Sigmap, and RawHash, respectively. Such a speedup, specifically over the earlier work RawHash, is achieved by reducing the workload of chaining with the filtering technique and by the more sensitive chaining implementation, which allows RawHash2 to use more relaxed parameters without reducing accuracy. Second, we find that RawHash2-Minimizer reduces the computational requirements for mapping raw signals and improves the throughput by 3.3\(\times\) compared to RawHash2, while the other computational resources, such as the peak memory usage and CPU time in both indexing and mapping, and the mean time spent per read, are also significantly reduced, as shown in Supplementary Tables S3 and S4 and Supplementary Figure S2, respectively. We find that RawHash2 and RawHash2-Minimizer significantly reduce the computational overhead of mapping raw signals to reference genomes, enabling better scalability to even larger genomes.
### Accuracy
Table 1 shows the accuracy results for read mapping and contamination analysis. We make two key observations. First, compared to RawHash, RawHash2 improves the read mapping accuracy in terms of the \(F_{1}\) score on all datasets and achieves the best precision for contamination analysis. This is mainly because the more sensitive chaining implementation with penalty scores in RawHash2 can identify the correct mappings more accurately, which is mainly observed in the substantial increase in recall with a slight decrease in precision compared to RawHash. Second, RawHash2-Minimizer provides precision similar to that of RawHash2 in most cases, with an exception for the human genome. For SARS-CoV-2, RawHash2-Minimizer provides better accuracy than Sigmap. We conclude that RawHash2 and RawHash2-Minimizer can provide combined improvements in terms of both accuracy and throughput (Supplementary Figure S3), which shows the clear benefits of RawHash2 over RawHash, while the minimizer sketching technique can be competitive for particular use cases where high precision is needed, such as contamination analysis.
### Sequencing Time and Cost
Table 2 shows the average sequencing lengths in terms of bases and chunks that each tool needs to process before stopping the sequencing process of a read. Processing fewer bases can significantly help reduce the overall sequencing time and potentially the cost spent for each read by enabling better utilization of nanopores without sequencing the reads unnecessarily. We make two key observations. First, RawHash2 reduces the average sequencing length by 1.69\(\times\) compared to RawHash. This shows that RawHash2 can reduce the sequencing time and cost more than RawHash can. Second, as the genome size increases with the Green Algae and Human genome datasets, RawHash2 provides the smallest average sequencing lengths compared to all tools. We conclude that RawHash2 is the best tool to reduce the sequencing time and cost per read as it provides the smallest average sequencing lengths for longer genomes.
### Evaluating POD5 and R10.4
In Supplementary Tables S5 and S2, we show the results when using POD5 and R10.4, respectively. We make two key observations. First, we find that POD5 provides significant speedups in the total elapsed time, especially for multi-threaded analysis. This is because thread utilization increases significantly with
\begin{table}
\begin{tabular}{l l c c c c c} \hline \hline
**Dataset** & **Metric** & **UNCALLED** & **Sigmap** & **RawHash** & **RawHash2** & **RawHash2-Minimizer** \\ \hline
\multicolumn{7}{c}{Read Mapping} \\ \hline
D1 & Precision & 0.9547 & **0.9929** & 0.9868 & 0.9857 & 0.9862 \\
_SARS-CoV-2_ & Recall & **0.9910** & 0.5540 & 0.8735 & 0.8842 & 0.7080 \\
 & \(F_{1}\) & **0.9725** & 0.7112 & 0.9267 & 0.9322 & 0.8150 \\ \hline
D2 & Precision & 0.9816 & 0.9842 & 0.9573 & **0.9864** & 0.9781 \\
_E. coli_ & Recall & **0.9647** & 0.9504 & 0.9009 & 0.8934 & 0.7058 \\
 & \(F_{1}\) & **0.9731** & 0.9670 & 0.9282 & 0.9376 & 0.8674 \\ \hline
D3 & Precision & 0.9459 & 0.9356 & **0.9862** & 0.9567 & 0.9547 \\
_Yeast_ & Recall & **0.9366** & 0.9123 & 0.8412 & 0.8924 & 0.7792 \\
 & \(F_{1}\) & 0.9412 & **0.9475** & 0.9079 & 0.9244 & 0.8581 \\ \hline
D4 & Precision & 0.8836 & **0.9741** & 0.9691 & 0.9264 & 0.9198 \\
_Green Algae_ & Recall & 0.7778 & **0.8897** & 0.7015 & 0.8639 & 0.8711 \\
 & \(F_{1}\) & 0.8273 & **0.9349** & 0.8199 & 0.9501 & 0.7760 \\ \hline
D5 & Precision & 0.4657 & 0.4287 & **0.8959** & 0.8530 & 0.8111 \\
_Human HG001_ & Recall & 0.2397 & 0.2641 & 0.4054 & **0.4317** & 0.1862 \\
 & \(F_{1}\) & 0.3196 & 0.3268 & 0.5582 & **0.3759** & 0.3028 \\ \hline
\multicolumn{7}{c}{Contamination} \\ \hline
D1 and D5 & Precision & 0.9378 & 0.7556 & 0.8733 & **0.9393** & 0.9330 \\ \hline \hline
\end{tabular}
Best results are highlighted in **bold**.
\end{table}
Table 1: Mapping accuracy.
Figure 2: Throughput of each tool. Values inside the bars show the throughput ratio between each tool and a nanopore. Bars below the red line indicate failed real-time analysis.
faster read and write operations, which shifts the bottleneck in the entire workload from the read and write operations to multithreaded mapping computations. Second, we find that RawHash2 can perform accurate and fast analysis when using raw signals from R10.4, although RawHash2 achieves lower accuracy with R10.4 than with R9.4. We believe this is mainly because the parameters related to identifying events in raw signals (i.e., segmentation) are mainly optimized for R9.4, and there is still room for optimization when using R10.4. Our future work will focus on improving these segmentation parameters and techniques to achieve higher accuracy with R10.4.
We conclude that RawHash2 is the only work that can provide accurate and fast analysis when using the recent features released by ONT.
## 4 Conclusion
We introduce RawHash2, a tool that provides substantial improvements over the previous state-of-the-art mechanism, RawHash. We make five key improvements over RawHash: 1) more sensitive chaining, 2) reduced seed hits with filtering mechanisms, 3) more accurate mapping decisions with weighted decisions, 4) the first minimizer sketching technique for raw signals, and 5) integration of the recent features from ONT. We find that RawHash2 provides substantial improvements in throughput and accuracy over RawHash. We conclude that RawHash2, overall, is the best tool for mapping raw signals due to its combined benefits in throughput, accuracy, and reduced sequencing time and cost per read compared to the existing mechanisms, especially for longer genomes.
## Acknowledgments
We thank all members of the SAFARI Research Group for the stimulating and scholarly intellectual environment they provide. We acknowledge the generous gift funding provided by our industrial partners (especially by Google, Huawei, Intel, Microsoft, VMware), which has been instrumental in enabling the decade+ long research we have been conducting on accelerating genome analysis. This work is also partially supported by the Semiconductor Research Corporation (SRC), the European Union's Horizon programme for research and innovation [101047160 - BioPIM] and the Swiss National Science Foundation (SNSF) [200021213084].
|
2302.14268 | Self-Supervised Category-Level Articulated Object Pose Estimation with
Part-Level SE(3) Equivariance | Category-level articulated object pose estimation aims to estimate a
hierarchy of articulation-aware object poses of an unseen articulated object
from a known category. To reduce the heavy annotations needed for supervised
learning methods, we present a novel self-supervised strategy that solves this
problem without any human labels. Our key idea is to factorize canonical shapes
and articulated object poses from input articulated shapes through part-level
equivariant shape analysis. Specifically, we first introduce the concept of
part-level SE(3) equivariance and devise a network to learn features of such
property. Then, through a carefully designed fine-grained pose-shape
disentanglement strategy, we expect that canonical spaces to support pose
estimation could be induced automatically. Thus, we could further predict
articulated object poses as per-part rigid transformations describing how parts
transform from their canonical part spaces to the camera space. Extensive
experiments demonstrate the effectiveness of our method on both complete and
partial point clouds from synthetic and real articulated object datasets. | Xueyi Liu, Ji Zhang, Ruizhen Hu, Haibin Huang, He Wang, Li Yi | 2023-02-28T03:02:11Z | http://arxiv.org/abs/2302.14268v1 | Self-Supervised Category-Level Articulated Object Pose Estimation with Part-Level SE(3) Equivariance
###### Abstract
Category-level articulated object pose estimation aims to estimate a hierarchy of articulation-aware object poses of an unseen articulated object from a known category. To reduce the heavy annotations needed for supervised learning methods, we present a novel self-supervised strategy that solves this problem without any human labels. Our key idea is to factorize canonical shapes and articulated object poses from input articulated shapes through part-level equivariant shape analysis. Specifically, we first introduce the concept of part-level SE(3) equivariance and devise a network to learn features of such property. Then, through a carefully designed fine-grained pose-shape disentanglement strategy, we expect that canonical spaces to support pose estimation could be induced automatically. Thus, we could further predict articulated object poses as per-part rigid transformations describing how parts transform from their canonical part spaces to the camera space. Extensive experiments demonstrate the effectiveness of our method on both complete and partial point clouds from synthetic and real articulated object datasets. The project page with code and more information can be found at: equi-articulated-pose.github.io.
## 1 Introduction
Articulated object pose estimation is a crucial and fundamental computer vision problem with a wide range of applications in robotics, human-object interaction, and augmented reality Katz & Brock (2008); Mu et al. (2021); Labbe et al. (2021); Jiang et al. (2022); Goyal et al. (2022); Li et al. (2020). Different from 6D pose estimation for rigid objects Tremblay et al. (2018); Xiang et al. (2017); Sundermeyer et al. (2018); Wang et al. (2019), articulated object pose estimation requires a hierarchical pose understanding on both the object-level and part-level Li et al. (2020). This problem has been long studied on the instance level where an exact CAD model is required to understand the pose of a specific instance. Recently, there is a trend in estimating category-level object pose such that the algorithm can generalize to novel instances. Despite such merits, supervised category-level approaches always assume rich annotations that are extremely expensive to acquire Li et al. (2020); Chi & Song (2021); Liu et al. (2022). To get rid of such restrictions, we tackle this problem under a self-supervised setting instead.
Given a collection of unsegmented articulated objects in various articulation states with different object poses, our goal is to design a network that can acquire a category-level articulated object pose understanding in a self-supervised manner without any human labels such as pose annotations, segmentation labels, or reference frames for pose definition.
The self-supervised category-level articulated object pose estimation problem is highly ill-posed since it requires the knowledge of object structure and per-part poses, which are usually entangled with part shapes. Very few previous works try to solve such a problem or even similar ones. The most related attempt is the work of Li et al. (2021). It tackles the unsupervised category-level pose estimation problem but just for rigid objects. It leverages SE(3) equivariant shape analysis to disentangle the global object pose and shape information so that a category-aligned canonical object space can emerge. This way, the category-level object poses could be automatically learned by predicting a transformation from the canonical space to the camera space. Going beyond rigid objects, estimating
articulated object poses demands more than just global pose and shape disentanglement. It requires a more fine-grained disentanglement of part shape, object structure such as part adjacency relationship, joint states, part poses, and so on.
To achieve such fine-grained disentanglement, we propose to leverage part-level SE(3) equivariant shape analysis. Especially, we introduce the concept of part-level SE(3) equivariant features to equip equivariance with a spatial support. The part-level SE(3) equivariant feature of a local region should only change as its parent part transforms but should not be influenced by the transformation of other parts. This is in contrast to the object-level SE(3) equivariant feature for a local region, which is influenced by both the region's parent part and other parts. To densely extract part-level SE(3) equivariant features from an articulated shape, we propose a novel pose-aware equivariant point convolution operator. Based on such features, we are able to achieve a fine-grained disentanglement which learns three types of information from input shapes: 1) _Canonical part shapes_, which are invariant to input pose or articulation changes and are category-aligned to provide a consistent reference frame for part poses; 2) _Object structure_, which is also invariant to input pose or articulation changes and contains structural information about the part adjacency relationships, part transformation order, and joint parameters such as pivot points; 3) _Articulated object pose_, which is composed of a series of estimated transformations. Such transformations include per-part rigid transformations which assembles canonical part shapes into a canonical object shape, per-part articulated transformation which articulates the canonical object shape to match the input articulation state, and a base part rigid transformation transforming the articulated canonical object to the camera space. To allow such disentanglement, we guide the network learning through a self-supervised part-by-part shape reconstruction task that combines the disentangled information to recover the input shapes.
With the above self-supervised disentanglement strategy, our method demonstrates the possibility of estimating articulated object poses in a self-supervised way for the first time. Extensive experiments prove its effectiveness on both complete point clouds and partial point clouds from various categories covering both synthetic and real datasets. On the Part-Mobility Dataset Wang et al. (2019), our method, without the need for any human annotations, can already outperform the iterative pose estimation strategy with ground-truth segmentation masks in both complete and partial settings by a large margin, _e.g._, reducing the rotation estimation error by around 30 degrees on complete shapes and by 40 degrees on partial shapes. Besides, our method can perform on par with or even better than supervised methods like NPCS Li et al. (2020). For instance, we can achieve an average of 7.9\({}^{\circ}\) rotation estimation error on complete shapes, comparable to NPCS's 5.8\({}^{\circ}\) error. We can even outperform NPCS on some specific categories such as partial Eyeglasses. Finally, we prove the effectiveness of our part-level SE(3) equivariance design and the fine-grained disentanglement strategy in the ablation study. Our main contributions are summarized as follows:
* To our best knowledge, we are the first that tackles the self-supervised articulated object pose estimation problem.
* We design a pose-aware equivariant point convolution operator to learn part-level SE(3)-equivariant features.
* We propose a self-supervised framework to achieve the disentanglement of canonical shape, object structure, and articulated object poses.
## 2 Related Works
**Unsupervised Part Decomposition for 3D Objects.** Decomposing an observed 3D object shape into parts in an unsupervised manner is a recent interest in shape representation learning. Previous works tend to adopt a generative shape reconstruction task to self-supervise the shape decomposition. They often choose to represent parts via learnable primitive shapes Tulsiani et al. (2017); Kawana et al. (2020); Yang and Chen (2021); Paschalidou et al. (2021); Deng et al. (2020); Zhu et al. (2020); Chen et al. (2020) or non-primitive-based implicit field representations Chen et al. (2019); Kawana et al. (2021). Shape alignment is a common assumption of such methods to achieve consistent decomposition across different shapes.
**Articulated Object Pose Estimation.** Pose estimation for articulated objects aims to acquire a fine-grained understanding of target articulated objects from both the object level and the part level. The prior work Li et al. (2020) proposes to estimate object orientations, joint parameters, and per-part
poses in a fully-supervised setting. They define Articulation-aware Normalized Coordinate Space Hierarchy (ANCSH), composed of the canonical object space and a set of canonical part spaces, as a consistent representation for articulated objects to support pose estimation. In this work, we also want to estimate a hierarchy of articulation-aware object poses but in a totally unsupervised setting. Instead of hand-crafting normalized coordinate spaces, we wish to let them be automatically induced during learning.
**SE(3) Equivariant Networks.** Recently, there is a trend of pursuing SE(3)-equivariant and invariant features through network design Weiler et al. (2018); Thomas et al. (2018); Fuchs et al. (2020); Zhao et al. (2020); Chen et al. (2021). Equivariance is achieved by designing kernels Thomas et al. (2018); Fuchs et al. (2020) or designing feature convolution strategies Chen et al. (2021); Zhao et al. (2020). In this work, we design our part-level SE(3) equivariant feature network based on the Equivariant Point Network Chen et al. (2021) for articulated object pose estimation. A common SE(3) equivariant feature of a local region would be affected by both its parent part's and other parts' rigid transformations. By contrast, its part-level SE(3) equivariant feature would only be affected by its parent part.
## 3 Method
We present our method for self-supervised category-level articulated object pose estimation. We first propose to learn part-level SE(3) equivariant features through a novel pose-aware equivariant point convolution module (sec. 3.1). Based on such features, we then design a disentanglement strategy to factorize an arbitrarily posed 3D point cloud into three types of information. Such information includes a canonical shape with category-aligned pose and articulation state, the object structure describing the part adjacency and joints, as well as the articulated object pose (sec. 3.2). We find part-level SE(3) equivariant features are key to achieving the factorization above. Further, we adopt a part-by-part shape reconstruction task that combines the factorized information for shape reconstruction to self-supervise the factorization (sec. 3.3). Our method assumes a category-level setting where input shapes have the same kinematic chain. For notations frequently used in the following text, \(N\), \(C\), and \(K\) denote the number of points, the feature dimension, and the number of parts per shape, respectively.
### Part-level SE(3)-equivariant Network
We first elaborate on our part-level SE(3) equivariant network. The network \(\phi(\cdot)\) operates on a point cloud \(X=\{\mathbf{x}_{i}|1\leq i\leq N\}\) with per-point poses and outputs part-level SE(3) equivariant features for all points \(F=\{F_{i}=\phi(X)[i]|1\leq i\leq N\}\). Here, the pose of a point refers to the pose of that point's parent part. We introduce the concept of part-level equivariant features to differentiate from the object-level equivariant features in Chen et al. (2021), where the per-point feature changes equivariantly with the global transformation applied to the object. The part-level equivariant feature \(F_{i}\) of each point \(x_{i}\) changes equivariantly with the rigid transformation applied to its parent part, but remains invariant to transformations of other parts. We develop our network based on the Equivariant Point Network (EPN) Chen et al. (2021) with a novel pose-aware equivariant point convolution module to support part-level equivariance. In the following text, we briefly review EPN and then describe our pose-aware equivariant point convolution.
Figure 1: Overview of the proposed self-supervised articulated object pose estimation strategy. The method takes a complete or partial point cloud of an articulated object as input and factorizes canonical shapes, object structure, and the articulated object pose from it. The network is trained by a shape reconstruction task. **Left:** A high-level abstraction of our pipeline. **Right:** An illustration of the decomposed information for shape reconstruction. Green lines (\(\leftarrow\)) denote the iterative pose estimation process.
**Equivariant Point Network.** EPN takes a point cloud \(X\) containing \(N\) points and a rotation group \(G\) with \(|G|\) elements as input and extracts \(C\)-dimensional per-point per-rotation features, forming a feature matrix \(F\in\mathbb{R}^{N\times C\times|G|}\). \(F\) is rotationally and translationally equivariant to a specific rigid transformation group \(G_{A}\) induced by \(G\). The rotation-equivariant transformation for each rotation element \(g\in G\) in the feature domain is a corresponding permutation of \(F\) along the last dimension. The translational equivariance achieved by EPN is essentially translational invariance: simply using relative point coordinates for convolution allows \(F\) to remain the same while translating the input point cloud.
**Pose-aware Equivariant Point Convolution.** For part-level SE(3) equivariant features, we design a pose-aware point convolution strategy that operates on a point cloud with per-point poses. While conducting convolution within a group of points, our core idea is to align point poses to the pose of the group center. Since we use the pose of a point to refer to the pose of its parent part, such alignment can cancel out the influence of the varying articulation states on the geometric description of each point. Intuitively speaking, if a point comes from the same part as the group center, information is aggregated as in a normal convolution. When a point comes from a different part than the group center, pose alignment canonicalizes the articulation state so that the convolution outcome remains the same regardless of the articulation state change. Our pose-aware convolution strategy allows aggregating context information from different parts but avoids feature changes as the articulation changes. Equipping EPN with this strategy, we are able to achieve part-level equivariance, since the feature of each point only changes as its parent part transforms but remains invariant to the transformations of other parts. We then formally define our convolution operator. Taking a point cloud \(X\) and the per-point poses \(P=\{P_{i}|1\leq i\leq N\}\) as input, our convolution operator for the point \(x_{i}\)'s feature at the rotation element \(g\) is as follows:
\[(\mathcal{F}*h_{1})(x_{i},g)=\sum_{x_{j}\in\mathcal{N}_{x_{i}}} \mathcal{F}(x_{j},g\mathbf{R}_{i}\mathbf{R}_{j}^{-1})h_{1}(g(x_{i}-P_{i}P_{j} ^{-1}x_{j})), \tag{1}\]
where \(\mathcal{F}(x_{i},g)\) is an input feature function, \(h_{1}(\cdot)\) is a kernel function, \(\mathcal{N}_{x_{i}}\) is the set of points in the neighbourhood of \(x_{i}\), \(P_{i}\) and \(P_{j}\) denote the input poses of point \(x_{i}\) and point \(x_{j}\) respectively, and \(\mathbf{R}_{i}\) and \(\mathbf{R}_{j}\) are their rotation components. We prove that using the above convolution within EPN leads to part-level equivariance in Appendix A.2. We highlight that we adopt an iterative pose estimation strategy (see Appendix A.3 for details) for the per-point poses and rotations in Eq. 1, which are initialized to be identity in the first iteration.
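To make the pose-alignment step concrete, the following minimal Python sketch computes the kernel arguments \(g(x_{i}-P_{i}P_{j}^{-1}x_{j})\) of Eq. 1 for one group center; the per-point poses are assumed to be given (e.g., identity in the first iteration of the estimation loop), and the function name and data layout are illustrative.

```python
import numpy as np

def pose_aligned_offsets(x, rot, trans, center, nbr_idx, g):
    """Kernel arguments g(x_i - P_i P_j^{-1} x_j) of Eq. 1 for one group center.

    x:        (N, 3) point coordinates
    rot:      (N, 3, 3) per-point rotations R_k of the poses P_k
    trans:    (N, 3) per-point translations t_k of the poses P_k
    center:   index i of the group center
    nbr_idx:  indices j of the neighbors of x_i
    g:        (3, 3) rotation matrix of the group element
    """
    Ri, ti = rot[center], trans[center]
    offsets = []
    for j in nbr_idx:
        Rj, tj = rot[j], trans[j]
        # Apply P_i P_j^{-1} to x_j: first undo pose j, then apply pose i
        xj_aligned = Ri @ (Rj.T @ (x[j] - tj)) + ti
        offsets.append(g @ (x[center] - xj_aligned))
    return np.stack(offsets)

# With identity poses (the first iteration), this reduces to g (x_i - x_j)
x = np.random.randn(8, 3)
rot = np.tile(np.eye(3), (8, 1, 1))
trans = np.zeros((8, 3))
print(pose_aligned_offsets(x, rot, trans, 0, [1, 2, 3], np.eye(3)).shape)  # (3, 3)
```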
### Part Shape, Structure, and Pose Disentanglement
To obtain a fine-grained understanding of an articulated object, we disentangle three types of information from the input: 1) Canonical shape; 2) Object structure; 3) Articulated object pose. To be more specific, we first use the designed part-level SE(3)-equivariant network to extract per-point features from an input shape. We then leverage a self-supervised slot-attention module to group the featured points, forming a set of featured parts for the disentanglement. We predict a canonical shape for each part to induce the category-level canonical part spaces required by part pose definition. Then we disentangle structure and pose-related information that gradually transform canonical part shapes to the observed shape. First, we predict _part-assembling parameters_ to transform each canonical part shape to form the canonical object shape. After that, the _kinematic chain_, _joint parameters_ and _joint states_ are predicted to articulate the canonical object shape into the observed articulation state. Finally, a _base part rigid transformation_ is predicted to further transform the resulting articulated object to the observed shape in the camera space. We will elaborate details of the above designs in the following text.
**Part Proposal.** The part proposal module groups \(N\) points in the input shape \(X\) into \(K\) parts for per-part equivariant features extraction. It learns an invariant grouping function that maps \(X\) together with a point feature matrix \(F\) to a point-part association matrix \(\mathbf{W}\in\mathbb{R}^{N\times K}\). Specifically, we adopt an attention-pooling operation for the per-point invariant feature together with a slot attention module Locatello et al. (2020) for the grouping purpose. Based on the proposed parts, we can group points in the input shape \(X\) into \(K\) point clouds \(\{X_{i}|1\leq i\leq K\}\) and compute the per-part equivariant feature \(\{F_{i}|1\leq i\leq K\}\).
**Shape: Canonical Part Shape Reconstruction.** With per-part equivariant features, we aim to predict a canonical shape for each part which should be aligned within a certain category so that the category-level part pose can be defined. The canonical shape for each part should be invariant to every parts' rigid transformations. Thus, we adopt an SE(3)-invariant canonical shape reconstruction module constructed based on an SO(3)-PointNet module as utilized in Li et al. (2021). The reconstruction module converts per-part equivariant features \(F_{i}\) into per-part invariant features through attention pooling first and then predicts an SE(3)-invariant shape \(Z_{i}\) for each part.
**Structure: Kinematic Chain Prediction.** In addition to the canonical shape of each part, we also need to understand the kinematic chain of a shape. The kinematic chain defines how different parts are connected and the order in which they get transformed when a cascaded transformation happens, _i.e._, from chain leaves to the chain root. To estimate the kinematic chain for a given shape, we first construct an adjacency confidence graph from object parts and then extract its maximum spanning tree consisting of the set of confident adjacency edges. We set the part with the largest degree in the graph to be the root of the tree, which will also serve as the base part of the object. The transformation order is further predicted as the inverse DFS visiting order of the tree. Since the kinematic chain should not be affected by the articulated input pose, we leverage per-part SE(3)-invariant features for estimation.
**Structure: Joint Parameters Prediction.** For each pair of adjacent parts, we will then infer their joint parameters, including an invariant pivot point \(\mathbf{p}_{i,j}^{v}\) and a joint axis orientation hypothesis \(\mathbf{u}_{i}^{g}\) for each rotation element \(g\in G\). For pivot points, we treat them as invariant properties and still adopt an invariant shape reconstruction module for prediction. Specifically, we predict the pivot point \(\mathbf{p}_{i,j}^{v}\) between every two adjacent parts \(i,j\) from their equivariant features \((F_{i},F_{j})\) using an invariant shape reconstruction module Li et al. (2021). For joint axis orientations, we regress an axis orientation hypothesis \(\mathbf{u}_{i}^{g}\) for part \(i\) corresponding to each rotation group element \(g\in G\) from its equivariant feature \(F_{i}\).
**Pose: Part-assembling Parameters Prediction.** Part-assembling parameters transform the predicted canonical part shapes to assemble a canonical object shape. As parameters connecting invariant canonical shapes, they should be invariant to every parts' rigid transformations as well. Here, we simply predict a translation vector \(\mathbf{p}_{i}^{c}\in\mathbb{R}^{3}\) for each part \(i\). We predict them through invariant shape reconstruction modules from per-part equivariant feature \(\{F_{i}|1\leq i\leq K\}\). We can then assemble predicted canonical part shapes together to form the canonical object shape: \(Z=\{Z_{i}+\mathbf{p}_{i}^{c}|1\leq i\leq K\}\).
**Pose: Joint States Prediction.** Joint states describe the articulation state of an object. For each part \(i\), we predict a joint state hypothesis for each rotation element \(g\in G\) from its equivariant feature \(F_{i}\), _i.e._ a rotation angle \(\theta_{i}^{g}\) for a revolute part or a translation scalar \(s_{i}^{g}\) for a prismatic part. We can therefore articulate the canonical object shape based on the predicted kinematic chain and joint states with the base part fixed, so as to match the object articulation from the input observation.
**Pose: Base Part Rigid Transformation.** The base part rigid transformation needs to transform the articulated canonical object shape to the camera space. Since we have previously predicted joint states hypotheses for all rotation element \(g\), we will also need multiple base transformation hypotheses correspondingly. We simplify the base part transformation to be a rotation, which proves to be effective in practice. A straightforward way is to use the rotation matrix corresponding to each rotation element \(g\) as the base transformation hypothesis. We follow this idea but also predict an additional residual rotation as a refinement. By transforming the articulated shape of the canonical object shape via the predicted base part rigid transformation, we can align the resulting shape with the observed input object.
**Articulated Object Pose.** With the above predicted quantities, we can calculate per-rotation articulated object pose hypotheses for an input articulated object \(X\), including three parts: 1) translation \(\mathbf{p}_{i}^{c}\) of each part \(i\) which assembles category-aligned canonical parts into a canonical object; 2) per-rotation articulated transformation of the canonical object based upon the predicted kinematic chain, joint parameters and per-rotation joint states; 3) per-rotation base part rigid transformation which transforms the articulated canonical object into the camera space. The rigid transformation hypothesis
for each part \(i\) corresponding to each rotation element \(g\in G\) is denoted as \(P_{i}^{g}=(\mathbf{R}_{i}^{g},\mathbf{t}_{i}^{g})\). We treat them as part pose hypotheses.
### Shape Reconstruction-based Self-supervised Task
Based on the reconstructed canonical part shapes and predicted per-rotation part pose hypotheses, we can get per-rotation shape reconstruction for each part \(i\): \(\{Y_{i}^{g}=\mathbf{R}_{i}^{g}Z_{i}+\mathbf{t}_{i}^{g}|g\in G\}\). A part-by-part reconstruction task is adopted to self-supervise the network. Besides, we add a regularization term for each predicted joint so that the joint indeed connects two parts.
**Shape Reconstruction-based Self-supervised Loss.** The per-rotation shape reconstruction for the whole object can be calculated by concatenating all part reconstructions: \(Y^{g}=\{Y_{i}^{g}|1\leq i\leq K\}\). We then adopt a min-of-N loss between the input observation \(X\) and the reconstructed posed point clouds:
\[\mathcal{L}_{rec}=\min_{g\in G}d(X,Y^{g}), \tag{2}\]
where \(d:\mathbb{R}^{N_{X}\times 3}\times\mathbb{R}^{N_{Y}\times 3}\rightarrow\mathbb{R}\) denotes the distance function between two point clouds and could be unidirectional or bidirectional Chamfer Distance as an example.
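A minimal sketch of this min-of-N objective with a bidirectional Chamfer distance follows; the array shapes and the CPU-side loop over rotation elements are illustrative simplifications of a batched GPU implementation.

```python
import numpy as np

def chamfer(a, b):
    """Bidirectional Chamfer distance between point clouds a (Na, 3) and b (Nb, 3)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def min_of_n_loss(X, part_shapes, R, t):
    """L_rec = min_g d(X, Y^g), with Y^g the concatenation of posed parts.

    X:           (N, 3) observed point cloud
    part_shapes: list of K canonical part point clouds Z_i
    R, t:        (|G|, K, 3, 3) rotations and (|G|, K, 3) translations
    """
    losses = []
    for g in range(R.shape[0]):
        posed = [Z @ R[g, k].T + t[g, k] for k, Z in enumerate(part_shapes)]
        losses.append(chamfer(X, np.concatenate(posed, axis=0)))
    return min(losses)
```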
**Regularization for Joint Prediction.** Predicted joints should connect adjacent parts and support natural articulations. However, just supervising joint parameters from the reconstruction loss is not sufficient for the needs above. Therefore, we devise a point-based joint constraint term for each predicted joint \((\mathbf{u}_{i}^{g_{0}},\mathbf{p}_{i,j}^{v})\), where \(g_{0}=\text{argmin}_{g\in G}d(X,Y^{g})\) (Eq. 2). Specifically, given the predicted pivot point \(\mathbf{p}_{i,j}^{v}\) and joint orientation \(\mathbf{u}_{i}^{g_{0}}\), we randomly sample a set of points along the joint by shifting the pivot point \(\mathbf{p}_{i,j}^{v}\) along the joint axis orientation: \(P_{i,j}^{v}=\{\mathbf{p}_{i,j}^{v,k}|0\leq k\leq K^{v}\}\). The joint regularization loss term is as follows:
\[\mathcal{L}_{reg}=\sum_{(i,j)\in\mathcal{E}_{\mathcal{T}}}d(P_{i,j}^{v},Z_{i}^ {2})+d(P_{i,j}^{v},Z_{j}^{2})+d(P_{i,j}^{v},Z_{i}^{1})+d(P_{i,j}^{v},Z_{j}^{1}),\]
where \(Z_{i}^{1}\) and \(Z_{i}^{2}\) are shapes of the part \(i\) in the canonical object space before and after its articulated transformation, \(\mathcal{E}_{\mathcal{T}}\) is the set of adjacent parts, \(d(X_{1},X_{2})\) is the unidirectional Chamfer Distance function from point cloud \(X_{1}\) to \(X_{2}\).
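As an illustration of this regularizer, the sketch below samples points on each predicted joint line and pulls them toward both adjacent parts before and after articulation; the sampling count, segment half-length, and data layout are illustrative assumptions.

```python
import numpy as np

def uni_chamfer(a, b):
    """Mean squared distance from each point of a (Na, 3) to its nearest point in b (Nb, 3)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()

def joint_regularization(pivot, axis, parts_before, parts_after, edges,
                         n_samples=16, half_len=0.5):
    """L_reg sketch: sampled joint points are pulled toward both adjacent parts.

    pivot, axis:   dicts mapping an edge (i, j) to its pivot point and unit axis
    parts_before:  dict of part point clouds Z_i^1 before articulation
    parts_after:   dict of part point clouds Z_i^2 after articulation
    edges:         list of adjacent part pairs (i, j)
    """
    loss = 0.0
    for (i, j) in edges:
        shifts = np.random.uniform(-half_len, half_len, size=(n_samples, 1))
        pts = pivot[(i, j)][None, :] + shifts * axis[(i, j)][None, :]
        for Z in (parts_after[i], parts_after[j], parts_before[i], parts_before[j]):
            loss += uni_chamfer(pts, Z)
    return loss
```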
Our final self-supervised shape reconstruction loss is a linear combination of the above two loss terms: \(\mathcal{L}=\mathcal{L}_{rec}+\lambda\mathcal{L}_{reg}\), where \(\lambda\) is a hyper-parameter.
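Under the same assumptions as above (PyTorch; the sampling range `extent` and all names are illustrative), the regularization term and the combined objective can be sketched as:

```python
import torch

def unidirectional_chamfer(p, z):
    # Unidirectional Chamfer distance from point cloud p to point cloud z.
    return torch.cdist(p, z).min(dim=1).values.mean()

def joint_regularization(pivots, axes, z1, z2, edges, n_samples=16, extent=0.5):
    # pivots/axes: dicts (i, j) -> (3,) tensors; z1/z2: dicts i -> (N_i, 3) part
    # shapes before/after the articulated transformation; edges: adjacent pairs.
    loss = 0.0
    for (i, j) in edges:
        # randomly sample points along the predicted joint around the pivot
        shifts = (torch.rand(n_samples, 1) * 2.0 - 1.0) * extent
        pts = pivots[(i, j)] + shifts * axes[(i, j)]  # (n_samples, 3)
        loss = loss + sum(unidirectional_chamfer(pts, z)
                          for z in (z2[i], z2[j], z1[i], z1[j]))
    return loss

# total objective: L = L_rec + lambda * L_reg
# loss = reconstruction_loss(x, posed_parts) + lam * joint_regularization(...)
```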
Figure 2: Visualization for qualitative evaluation. For every pair of rows, the first row shows the results of our method, and the second row shows those of NPCS. Each triplet of shapes, from left to right, shows the input point cloud (**Input**), the reconstruction (**Recon.**), and the reconstructed canonical object shape (**Canon.**). **We do not assume input shape alignment but align the shapes here when drawing, just for a better view.** Please zoom in for details.
## 4 Experiments
We evaluate our method on the category-level articulated object pose estimation task (sec. 4.2) to demonstrate its effectiveness. We also test its performance on two side tasks that our network completes at the same time, namely part segmentation (sec. 4.3) and shape reconstruction (sec. 4.4).
### Datasets
Following previous literature Li et al. (2020), we choose seven categories from three datasets for evaluation on both complete shapes and rendered partial point clouds: 1) four categories from the Part-Mobility Wang et al. (2019) dataset, namely Oven, Washing Machine, Laptop (denoted as Laptop (S)), and Eyeglasses, with revolute parts; 2) one category, Drawer, with prismatic parts, from the SAPIEN dataset Xiang et al. (2020); 3) two categories from a real-world dataset, HOI4D Liu et al. (2022), namely Safe and Laptop (denoted as Laptop (R)), with revolute parts. Please refer to the Appendix B.1 for data preparation details.
### Category-level Articulated Object Pose Estimation
**Metrics.** Following Li et al. (2020), we use the following metrics to evaluate our method: 1) part-based pose metrics, namely the per-part rotation error \(R_{err}(^{\circ})\) in degrees and the per-part translation error \(T_{err}\), both reported as mean and median values; 2) joint parameter metrics, namely the joint axis orientation error \(\theta_{err}(^{\circ})\) in degrees and the joint position error \(d_{err}\), both reported as mean values. Please refer to the Appendix B.8 for details of our evaluation strategy.
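For reference, these errors reduce to standard definitions, sketched below (assumed forms; the exact evaluation protocol follows Li et al. (2020)):

```python
import numpy as np

def rotation_error_deg(r_pred, r_gt):
    # Geodesic rotation error in degrees between two 3x3 rotation matrices.
    cos = (np.trace(r_pred @ r_gt.T) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def translation_error(t_pred, t_gt):
    # Euclidean distance between predicted and ground-truth translations.
    return np.linalg.norm(t_pred - t_gt)

def axis_orientation_error_deg(u_pred, u_gt):
    # Angle between joint axes; the absolute value makes it sign-invariant.
    cos = abs(np.dot(u_pred, u_gt)) / (np.linalg.norm(u_pred) * np.linalg.norm(u_gt))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```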
**Baselines.** Since no previous work has exactly the same setting as ours, we choose NPCS Li et al. (2020), a **supervised** pose estimation method for articulated objects, and ICP, a traditional pose estimation approach, as our baseline methods. To apply them to our articulated objects with arbitrary global poses, we make the following modifications: 1) We change the backbone of NPCS to EPN Chen et al. (2021) (denoted as "NPCS-EPN") and add supervision on its discrete rotation mode selection process to make it work on our shapes with arbitrary global pose variations. We observe that NPCS without EPN fails to produce reasonable results on our data (see Appendix B.5 for details). Beyond part poses, we also add a joint prediction branch for joint parameter estimation. 2) We equip ICP with ground-truth segmentation labels (denoted as "Oracle ICP") and register each part individually for part pose estimation. Notice that Oracle ICP cannot estimate joint parameters.
**Experimental Results.** Table 1 presents the experimental results of our method and the baseline methods on complete point clouds. We defer the results on partial point clouds to Table 7 in the Appendix B.4. We can make the following observations: 1) As a self-supervised strategy, our average and per-category performance is comparable to that of the supervised baseline NPCS-EPN. We even sometimes outperform NPCS-EPN, e.g., on joint axis orientation estimation for Safe. 2) Without any human labels available during training, our method outperforms Oracle ICP, which uses ground-truth segmentation labels, by a large margin in all categories. As a further discussion, the poor performance of Oracle ICP may be caused by part symmetry, which adds ambiguity to part poses, especially when each part is registered individually. Please refer to Appendix C for more discussions. For a qualitative evaluation and comparison, we visualize the input objects, reconstructions, and the predicted canonical object shapes of our method and NPCS in Figure 2. Our method is able to reconstruct category-level aligned canonical shapes, which serve as good support for estimating category-level articulated object poses.
### Part Segmentation
**Evaluation Metric and Baselines.** The metric used for this task is Segmentation IoU (MIoU). We choose three position-based segmentation strategies, namely BAE-Net Chen et al. (2019), NSD Kawana et al. (2020), and BSP-Net Chen et al. (2020), and one motion-based segmentation method, ICP, as our baselines for this task. For BAE-Net and BSP-Net, we generate data in their implicit representation using the data generation method described in IM-NET Chen & Zhang (2019). We improve the evaluation strategy for NSD and BSP-Net to account for the global pose variation of our data (see Appendix B.3 for details).
### Shape Reconstruction
**Evaluation Metric and Baselines.** We choose Chamfer L1 as our evaluation metric for shape reconstruction. To demonstrate the superiority of part-by-part reconstruction for articulated objects over whole-shape reconstruction, we choose EPN, which treats articulated objects as rigid objects for reconstruction, as the baseline.
**Experimental Results.** As shown in Table 3, our method consistently outperforms the EPN-based whole-shape reconstruction. We suppose that part-by-part reconstruction, in which only simple parts need to be recovered, poses an easier problem for the network than recovering the whole shape at once.
Table 2: Comparison of the part segmentation performance (Segmentation IoU) of different methods on all categories. The larger, the better.
## 5 Ablation Study
In this section, we ablate several crucial designs of our method to demonstrate their effectiveness, including part-level feature accumulation, pose-aware point convolution, and joint regularization.
**Part-level Feature Accumulation.** We use a grouping module to group points into parts for part-level features in our method. To demonstrate the effectiveness of using part-level features for part shape, structure, and pose disentanglement, we ablate part-level features and only use features from the whole shape for part-level property prediction, similar to those used in Kawana et al. (2021); Chen et al. (2019). Table 5 compares their performance. For each metric, we report its per-category per-part average value. It can be observed that part-level features help with part-based property prediction, letting the network achieve better performance on all pose-related metrics.
**Pose-aware Point Convolution.** Our method contains a pose-aware equivariant feature convolution design for part-level SE(3) equivariant feature learning. To demonstrate the superiority of part-level equivariance over common global equivariance, we compare the model's performance when using part-level equivariant features (With \(\mathcal{L}_{reg}\) (Pose.)) with that when using global equivariant features (With \(\mathcal{L}_{reg}\)) in Table 4. For each metric, its per-category per-part average value is reported. The network using part-level equivariant features consistently outperforms the one using only global equivariant features on all metrics.
**Joint Regularization.** Besides the reconstruction loss, we add a joint regularization term so that predicted joints connect two adjacent parts. Beyond acquiring joint-related parameters, joint regularization also improves pose estimation performance, especially translation prediction, as shown in Table 4.
## 6 Conclusion
In this work, we propose a self-supervised strategy for category-level articulated object pose estimation that requires no annotations. Leveraging part-level SE(3) equivariant features, we propose a part shape, structure, and pose disentanglement strategy that successfully accomplishes the category-level articulated object pose estimation task. A part-by-part shape reconstruction task is adopted to self-supervise the network learning. Experiments prove the effectiveness of our method and our core ideas. This work can reduce the annotation effort required for this task and should also promote further thinking on the design of part-level equivariant networks.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c} \hline \hline Method & Seg. IoU & Mean \(R_{err}(^{\circ})\) & Median \(R_{err}(^{\circ})\) & Mean \(T_{err}\) & Median \(T_{err}\) & Joint & Chamfer L1 \\ \hline No \(\mathcal{L}_{reg}\) & 76.40 & 11.74 & 10.87 & 0.070 & 0.060 & & 0.038 \\ With \(\mathcal{L}_{reg}\) & 74.32 & 10.40 & 9.30 & 0.072 & 0.073 & 22.01/0.111 & 0.032 \\ With \(\mathcal{L}_{reg}\) (Pose.) & **76.90** & **9.21** & **8.40** & **0.052** & **0.047** & **19.72/0.103** & **0.025** \\ \hline \end{tabular}
\end{table}
Table 4: Ablation study w.r.t. the effectiveness of joint regularization for part pose estimation and the design of pose-aware equivariant feature convolution (denoted as “Pose.”). Reported values are per-category per-part average values. Please refer to the caption of Table 5 for the data format of “Joint”.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \hline Method & Oven & Washing Machine & Eyeglasses & Laptop (S) & Safe & Laptop (R) & Drawer & Avg. \\ \hline EPN Chen et al. (2021) & 0.033 & 0.051 & 0.028 & 0.029 & 0.030 & 0.028 & 0.057 & 0.036 \\ Ours & **0.025** & **0.049** & **0.025** & **0.024** & **0.026** & **0.026** & **0.045** & **0.031** \\ \hline \end{tabular}
\end{table}
Table 3: Comparison between the shape reconstruction performance of different methods on all categories. Metric used in this task is Chamfer L1. The smaller, the better.
Table 5: Ablation study w.r.t. the effectiveness of accumulating part-level features for part-based properties prediction. Reported values are per-category per-part average values on all categories. “Joint” represents joint parameter estimation errors, with the value in the format of “Mean \(\theta_{err}\)/Mean \(d_{err}\)”. |
2309.17413 | Momentum-space imaging of ultra-thin electron liquids in delta-doped
silicon | Two-dimensional dopant layers ($\delta$-layers) in semiconductors provide the
high-mobility electron liquids (2DELs) needed for nanoscale quantum-electronic
devices. Key parameters such as carrier densities, effective masses, and
confinement thicknesses for 2DELs have traditionally been extracted from
quantum magnetotransport. In principle, the parameters are immediately readable
from the one-electron spectral function that can be measured by angle-resolved
photoemission spectroscopy (ARPES). Here, buried 2DEL $\delta$-layers in
silicon are measured with soft X-ray (SX) ARPES to obtain detailed information
about their filled conduction bands and extract device-relevant properties.
This study takes advantage of the larger probing depth and photon energy range
of SX-ARPES relative to vacuum ultraviolet (VUV) ARPES to accurately measure
the $\delta$-layer electronic confinement. The measurements are made on
ambient-exposed samples and yield extremely thin ($\approx 1$ $nm$) and dense
($\approx 10^{14}$ $cm^{-2}$) 2DELs. Critically, this method is used to show
that $\delta$-layers of arsenic exhibit better electronic confinement than
$\delta$-layers of phosphorus fabricated under identical conditions. | Procopios Constantinou, Taylor J. Z. Stock, Eleanor Crane, Alexander Kölker, Marcel van Loon, Juerong Li, Sarah Fearn, Henric Bornemann, Nicolò D'Anna, Andrew J. Fisher, Vladimir N. Strocov, Gabriel Aeppli, Neil J. Curson, Steven R. Schofield | 2023-09-29T17:18:57Z | http://arxiv.org/abs/2309.17413v1 | # Momentum-space imaging of ultra-thin electron liquids in \(\delta\)-doped silicon
###### Abstract
**Abstract: Two-dimensional dopant layers (\(\delta\)-layers) in semiconductors provide the high-mobility electron liquids (2DELs) needed for nanoscale quantum-electronic devices. Key parameters such as carrier densities, effective masses, and confinement thicknesses for 2DELs have traditionally been extracted from quantum magnetotransport. In principle, the parameters are immediately readable from the one-electron spectral function that can be measured by angle-resolved photoemission (ARPES). Here, we measure buried 2DEL \(\delta\)-layers in silicon with soft X-ray (SX) ARPES to obtain detailed information about their filled conduction bands and extract device-relevant properties. We take advantage of the larger probing depth and photon energy range of SX-ARPES relative to vacuum ultraviolet (VUV) ARPES to accurately measure the \(\delta\)-layer electronic confinement. Our measurements are made on ambient-exposed samples and yield extremely thin (\(<\) 1 nm) and dense (\(\sim\)10\({}^{14}\) cm-2) 2DELs. Critically, we use this method to show that \(\delta\)-layers of arsenic exhibit better electronic confinement than \(\delta\)-layers of phosphorus fabricated under identical conditions.**
Two-dimensional (2D) quantum-confined electronic systems have long been venues for discoveries in fundamental physics and the development of new devices [1]. Technological 2D systems have traditionally consisted of planar heterostructures and field-effect devices, particularly in compound semiconductors [2]. In recent years, there has similarly emerged strong interest in 2D electron states in van der Waals systems, such as graphene, and the transition metal dichalcogenides for future nanoscale and quantum-electronic devices [3, 4, 5]. Understandably, there is also strong interest in fabricating 2D electron states in the world's leading technological semiconductor, silicon. This is largely driven by the requirements of proposed nano- and quantum-electronic applications employing atomically abrupt dopant
profiles, e.g., the famed Kane solid-state quantum computer and related designs [6, 7, 8]. 2D electron states can be created in silicon via so-called \(\delta\)-doping, which involves the physical [9] or chemical [10] deposition of dopant atoms onto a silicon surface, followed by silicon overgrowth to produce sharp, 2D doped layers (Figure 1a). At high doping concentrations, such \(\delta\)-layers yield quantum-confined 2D conductive planes with electronic properties significantly different to those of the bulk silicon host [11].
The thinnest \(\delta\)-layers prepared in silicon to date have relied on the chemical delivery of phosphorus [10], arsenic [12] or boron [13], with the resulting out-of-plane atomic distributions of dopant atoms having \(\sim\)1 nm thicknesses [14, 15, 16, 17]. The electronic thicknesses of these layers have also been estimated using quantum magnetoresistance [18], with similar results [19]. Such thicknesses are comparable to the wavelength of the conduction electrons, and the corresponding energy level quantisation was observed in planar junction tunnelling spectroscopy more than three decades ago [9, 20, 21]. Vacuum ultraviolet angle-resolved photoemission spectroscopy (VUV-ARPES) measurements of phosphorus \(\delta\)-layers in silicon have also revealed quantised states, yet the origin of these quantised states was incorrectly attributed to the more exotic degeneracy lifting mechanism, valley interference [22, 23, 24, 25]. To justify the anomalously large valley splitting energies reported, the authors cited density functional theory (DFT) calculations that were made for perfectly ideal, one-atom-thick \(\delta\)-layers. However, DFT calculations of \(\delta\)-layers with even a single atom deviation from a perfectly-thin \(\delta\)-layer show the valley splitting reduces to \(\sim\)1 meV [26]. Such small valley-splitting energies cannot presently be observed in ARPES measurements, and it has since been acknowledged that the observed splitting is due to confinement [27, 28], as first suggested in the 1980s [9, 20, 21]. Moreover, as discussed in Refs. [22, 23], the short inelastic mean free path of the ejected electrons in VUV-ARPES (\(\lambda_{e}\approx 0.5\) nm) means the signal for previous ARPES measurements [23, 28, 29] does not directly originate from the \(\delta\)-layer (that is up to \(4\lambda_{e}\) beneath the surface), but is instead a near-surface resonance enhancement that enables only a small fraction of the wavefunction to be probed [23]. Furthermore, because VUV-ARPES has limited momentum resolution along the surface normal, it was impossible to measure a corresponding momentum spread whose inverse would be the key parameter of the 2DEL, namely the electronic thickness, from which the origin and level quantisation of the 2DEL can be deduced.
In this paper, we report comprehensive soft X-ray ARPES (SX-ARPES) measurements of \(\delta\)-layers in silicon. The high photon energies of SX-ARPES (\(h\nu=300\) - \(1600\) eV) give access to a much longer electron mean free path (\(\lambda_{e}\approx 2\) nm), which permits the extraction of electrons from depths of several nanometres beneath the surface [30]. This enables us to directly probe \(\delta\)-layers underneath the native surface oxide of samples exposed to ambient after their fabrication, whilst maintaining a very sharp out-of-plane \(k_{z}\) momentum resolution, \(\Delta k_{z}\), which is equal to \(\Delta k_{z}=\lambda_{e}^{-1}\)[31]. Our experiments therefore differ qualitatively from the previous VUV-ARPES [22, 23, 24, 25]. We present, for the first time, energy and momentum maps resolved with high momentum resolution in the plane perpendicular to the \(\delta\)-layer, revealing the detailed \(\delta\)-layer band structure in the \(k_{z}\)-\(k_{\parallel}\) plane. Our measurements conclusively demonstrate that the \(\delta\)-layer band structure is non-dispersive in the plane perpendicular to the \(\delta\)-layer in a manner significantly more convincing than a previous attempt using VUV-ARPES \(k_{z}\)-binding energy
scans [22]. Moreover, exactly as for photoemission tomography of molecules [32, 33, 34], our \(k_{z}\) momentum dependencies are related via a Fourier transform to electron densities in real space, and thus measure directly the real-space thicknesses of the occupied quantised electronic states that constitute the 2DEL. We apply this method to investigate the optimisation of \(\delta\)-layer electronic thickness in silicon, and to compare \(\delta\)-layers fabricated with arsenic and phosphorus. We show that arsenic \(\delta\)-layers are significantly more electronically confined than phosphorus \(\delta\)-layers prepared under identical conditions, and we determine the carrier density via a Luttinger analysis of the Fermi surface.
Our SX-ARPES experiments feature an X-ray spot size of (10 \(\times\) 73) \(\upmu\)m\({}^{2}\), which is comparable to the size of the Hall-bars used for quantum magnetotransport measurements. Next-generation light sources together with new optics will enable SX-nanoARPES with better energy resolution and sub-micron spot sizes [35], thus providing a tool complementary to X-ray inspection of integrated circuit morphology [36, 37] and chemical composition in the sense that it will image the electrons switched in devices. While such ARPES measurements have already been conducted in the UV regime [38, 39], extension to the SX regime will offer an enhanced bulk sensitivity for probing buried heterostructures or interfaces. Although scanning microwave microscopy [40] also images the conduction electrons in devices, it does not yield their three-dimensional momentum distribution. However, SX-nanoARPES, along with the methods and analysis we present here, can do so, greatly expanding the possibilities for characterizing semiconductor nanostructures and devices.
## Background
The dynamic behaviour of conduction electrons in bulk silicon is determined by a set of 6 degenerate conduction band valleys, with minima at equivalent points in reciprocal space along the \(<\)100\(>\) directions [41]. Bulk electron doping causes these valleys to become occupied and, at high doping levels, will result in ellipsoidal Fermi surfaces, one around each minimum (Figure 1b). However, when electrons are confined to 2D planes, as for \(\delta\)-doping, the Bloch wavevector component in the \(k_{z}\) direction is no longer a good quantum number, and the energy becomes quantised into discrete levels, \(E_{n}\). The in-plane wavevector components \(k_{x}\) and \(k_{y}\) remain good quantum numbers and the electronic states can be described using the formalism of effective mass theory [42].
According to elementary quantum mechanics, the degree of confinement is governed by the potential created by the \(\delta\)-layer, the effective mass of the electrons, and the number of wavefunction nodes. Since the \(\delta\)-doping breaks the degeneracy of the six valleys, the two valleys centred at \(k_{x}=k_{y}=0\) are characterised by a single, in-plane, transverse effective mass and the quantised states are correspondingly labelled \(n\Gamma\) (where \(n\) is the subband number), while the remaining four in-plane valleys are characterised by in-plane longitudinal and transverse effective masses and are labelled \(m\Delta\) (where \(m\) is the subband number) [43, 44, 45]. Subsequently, in the direction of quantisation the \(n\Gamma\) and \(m\Delta\) subbands derive from bands with a heavy and light effective mass respectively, leading to different spectra for states derived from different valleys. The right-hand panel of Figure 1a shows a self-consistent Schrodinger-Poisson model of how the \(n=1\) and \(n=2\) wavefunctions (labelled \(1\Gamma\) and \(2\Gamma\)) for electrons
with a heavy mass bracket the \(m=1\) wavefunction (labelled \(1\Delta\)) for the lighter, and hence less confined, electron; the simulation in Figure 1a was performed using the electron density and electronic thickness extracted from our SX-ARPES measurements of a 2 nm overgrown arsenic \(\delta\)-layer, as described below. Moreover, our calculations treat the \(n\Gamma\) and \(m\Delta\) subbands as standing wave solutions that originate from the superposition of two plane waves moving with \(\pm k_{z}\) momenta, confined by the boundary of the \(\delta\)-layer and in the absence of so-called valley interference [11].
In practice, the \(\delta\)-layer wave function is characterised by an envelope function in the z-direction that decays with distance away from the \(\delta\)-layer, combined with an oscillatory Bloch wave component established by the bulk conduction states from which the \(\delta\)-layer is derived. The Fourier spectrum of such a state is peaked about the values of \(k_{z}\) corresponding to its Bloch wave origins and is oscillatory in \(k_{z}\) at multiples of the reciprocal lattice vector [30], [46], [47]. Thus, the Fermi surface picture of Figure 1b is transformed by the replacement of conduction ellipsoids with states that do not disperse in \(k_{z}\), and can be visualised, from the standpoint of an ARPES experiment, as being cylindrical or elliptic-cylindrical in shape (Figure 1c); the extent of these states in \(k_{z}\) is inversely proportional to the electronic (not chemical) real space thickness of the \(\delta\)-layer [25], [30]. A 2D system confined along \(z\) by an infinitely deep and infinitesimally narrow potential would yield states with infinitely long profiles along \(k_{z}\), while at the other extreme, for a fully three-dimensional doped system, the states should return to reside within the ellipsoidal Fermi-surfaces shown in Figure 1b. For real layers of some finite thickness, a phenomenological equation for the thickness of the layer is [30]:
\[\delta z=\frac{1}{\delta k_{z}-\delta k_{\infty}}, \tag{1}\]
where \(\delta k_{z}\) is the extent of the 2D valley state in \(k_{z}\), and \(\delta k_{\infty}\) is the corresponding length of the state for the same electron doping level in the absence of 2D confinement. We determine \(\delta k_{z}\) and \(\delta k_{\infty}\) experimentally from our SX-ARPES data by measuring the longitudinal extent of the out-of-plane (\(\Gamma\)) valley, and the in-plane (\(\Delta\)) valleys respectively. Careful measurement of these quantities and application of Equation 1 thus produces a direct measure of the electronic thickness, \(\delta z\), of the \(\delta\)-layers.
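As a minimal numerical illustration of Equation 1 (the momentum extents below are illustrative inputs, not our measured values):

```python
def electronic_thickness(dk_z, dk_inf):
    # Eq. 1: real-space electronic thickness, in angstroms when the
    # momentum-space extents are given in inverse angstroms.
    return 1.0 / (dk_z - dk_inf)

# e.g. a Gamma-valley extent of 0.350 1/A against a bulk-like reference length
# of 0.165 1/A would correspond to a ~5.4 A (0.54 nm) thick 2DEL:
print(electronic_thickness(0.350, 0.165))  # ~5.4
```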
Figure 1d summarises our results for the electronic thickness of the \(\delta\)-layer. Here we show the longitudinal extent of the in-plane and out-of-plane valleys versus their transverse extent. The data clusters into two groups, for the \(\Gamma\) and \(\Delta\) valleys, respectively. In particular, the \(\Delta\) valleys lie along a straight line characterising the ellipsoidal shape of the bulk silicon conduction band valleys (as set by the ratio of the bulk longitudinal and transverse effective masses). In stark contrast, the \(\Gamma\) valleys appear elongated in the longitudinal direction and are therefore grouped together in the top left of the plot. This lengthening of the states in \(k_{z}\) is characteristic of 2D electronic states, to be discussed further below.
### \(\delta\)-layer carrier density and Fermi-surface measurements
We fabricated \(\delta\)-layer samples using either phosphorus or arsenic as the dopant species. The _Methods_ section gives details of the sample preparations. Secondary ion mass spectrometry
(SIMS) and Hall effect measurements confirmed the anticipated highly peaked dopant distributions and dopant electrical activations for all samples (see _Supplementary Information_).
In Figure 2 we show the SX-ARPES Fermi surface maps acquired from a phosphorus (Figure 2a-d) and an arsenic (Figure 2e-h) \(\delta\)-layer. The schematic Brillouin zone diagrams at the left of the figure illustrate the planes through which each of the Fermi surface slices have been taken: Figure 2b,f show \(k_{x}\)-\(k_{z}\) slices that cut through two \(\Gamma\) and two \(\Delta\) valleys, illustrated by the purple plane in the schematics. Figure 2c,g and Figure 2d,h show \(k_{x}\)-\(k_{y}\) slices at different \(k_{z}\) values, as indicated by the green and orange planes in the schematics, respectively.
The degeneracy breaking due to \(\delta\)-layer confinement is readily apparent for both samples: the four \(\Delta\)-valleys in the \(k_{x}\)-\(k_{y}\) slices (Figure 2c,g) are uniform in size and shape, as expected, while in the \(k_{x}\)-\(k_{z}\) slices (Figure 2b,f) we find the two \(\Gamma\)-valleys (at \(\pm k_{z}\)) appear significantly larger and brighter than the \(\Delta\)-valleys. The main difference in intensity occurs because of the different in-plane effective masses of the two types of valleys, resulting in a different electronic density of states and hence measured spectral weights [44].
We can determine the 2D carrier density of the samples by analysing the area enclosed by each valley in the \(k_{x}\)-\(k_{y}\) plane; in other words, determining the total area enclosed by the four \(\Delta\) valleys in Figure 2c,g and also the \(k_{x}\)-\(k_{y}\) slice through the two \(\Gamma\) valleys, one of which is shown in Figure 2d,h. We find that the resulting total carrier density for all samples lies within the range \((0.88\pm 0.10)\times 10^{14}\) cm\({}^{-2}\), consistent with Hall effect measurements for all but one of the samples considered (see _Supplementary Information_). This concurs with our expectations, as at the self-saturation limit of \(\delta\)-doping, 1 in every 4 silicon (001) surface atoms is replaced with a dopant, corresponding to a density of \(\approx\)1.4\(\times 10^{14}\) cm\({}^{-2}\)[48]. We attribute the reduced measured carrier density to the deactivation of some donors via effects such as clustering (particularly for arsenic) [49], [50] and chemical interaction with oxygen atoms where the native oxidation of the surface and \(\delta\)-layer overlap. Furthermore, we find that the carriers are equally distributed within the \(\Gamma\) and \(\Delta\) subbands (see _Supplementary Information_), in agreement with the theoretical predictions of Ref. [42] and our own Schrödinger-Poisson modelling (Figure 1a), in contrast to previous VUV-ARPES that showed an unoccupied \(\Delta\) band [27].
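The underlying Luttinger count is simple enough to sketch directly (illustrative valley areas; \(g_{s}=2\) is the spin degeneracy):

```python
import numpy as np

def carrier_density_cm2(valley_areas_inv_A2, g_s=2):
    # n_2D = g_s * sum(A_F) / (2*pi)^2, with Fermi-surface areas in 1/A^2,
    # converted to cm^-2 (1 A^-2 = 1e16 cm^-2).
    n_inv_A2 = g_s * np.sum(valley_areas_inv_A2) / (2 * np.pi) ** 2
    return n_inv_A2 * 1e16

# four Delta valleys and two Gamma valleys with illustrative enclosed areas:
print(carrier_density_cm2([0.03] * 4 + [0.04] * 2))  # ~1.0e14 cm^-2
```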
### \(\delta\)-layer thickness determination
As discussed above, an electronically 2D \(\delta\)-layer should be dispersionless in \(k_{z}\), and therefore its \(\Gamma\) valley should be a regular cylinder, rather than ellipsoidal. In addition, the extent of the state in \(k_{z}\) provides a direct measure of the confinement thickness of the state. With this in mind, we have performed a quantitative analysis of four \(\delta\)-layer samples, as shown in Figure 3. Two of the samples were phosphorus \(\delta\)-layers and two were arsenic \(\delta\)-layers, and for each dopant species we have performed a nominal silicon overgrowth of 2 nm and 3 nm. Figure 3a summarises our approach to determine the \(\delta\)-layer confinement from the high-resolution Fermi surface maps of the \(+k_{z}\)\(\Gamma\)-valleys (Figure 3d-g), and a comparable \(+k_{y}\)\(\Delta\)-valley (Figure 3b). We note that measurements were also made on samples overgrown with 1 and 4 nm of silicon. For the former, no conduction states were observed, which we attribute to the complete oxidation of the \(\delta\)-layer when the sample was exposed to ambient for transport to the
synchrotron. For the latter, the spectral intensity of the conduction states became extremely weak, due to the electron escape depth being smaller than the \(\delta\)-layer depth, making the analysis very difficult.
We have used an automated procedure to extract the edges of the \(+k_{\mathrm{z}}\) valleys: for each horizontal line-profile cut of the Fermi surface, we find the edges of the valleys, whose positions are shown as pairs of white dots in Figure 3d-g. For the arsenic \(\delta\)-layer samples, two distinct peaks in each line-cut along \(k_{x}\) are resolved and tracked. These two peaks correspond to the cusps of the parabolic dispersion of the electrons in \(k_{x}\). For the phosphorus \(\delta\)-layer samples, the peaks along \(k_{x}\) could not be resolved directly, so instead the FWHM was measured. For each value in \(k_{z}\), the separation between these two dots along the \(k_{x}\) direction gives a measure of the Fermi wavevector, \(k_{F}\), and these values of \(k_{F}\) are plotted against \(k_{z}\) in the corresponding panels, Figure 3h-k. For each of the four \(\delta\)-layer samples, we see that \(k_{F}\) remains constant as a function of \(k_{z}\) to within the uncertainties of our measurements, demonstrating that each of the four samples is dispersionless in \(k_{z}\), as expected. For comparison, in Figure 3b,c, we apply the same analysis to one of the in-plane \(\Delta\) valleys to plot \(k_{F}\) as a function of \(k_{y}\). Here we see that \(k_{F}\) is not constant, but instead exhibits the expected dispersion corresponding to the longitudinal effective mass, from which we extract a value of \((0.90\pm 0.05)m_{e}\), in agreement with its accepted value [51].
The analysis in Figure 3h-k provides a measure of the length of these features in \(k_{z}\), i.e., \(\delta k_{z}\). We obtain the corresponding 3D width, \(\delta k_{\infty}\), from the analysis of the in-plane valley in Figure 3c. Using these values, we then extract the real-space electronic thickness of the \(\delta\)-layer using Equation 1. We find that for the arsenic \(\delta\)-layer samples, \(\delta z=5.4\pm 0.1\) Å, whereas for the phosphorus \(\delta\)-layer samples, \(\delta z=9.7\pm 4.1\) Å. A summary of the \(\delta\)-layer thickness measurements using SIMS and SX-ARPES is shown in Table 1, where the physical dopant confinement and electronic thicknesses are stated respectively. In all cases, we find that arsenic \(\delta\)-layers offer better confinement than phosphorus, achieving sub-nm electronic thicknesses. We attribute this to the smaller diffusion coefficient of arsenic in silicon [52], which, under the same preparation conditions, sustains a more confined \(\delta\)-layer than phosphorus [12]. Additionally, the \(\delta\)-layer thickness was further confirmed by directly fitting the ARPES \(k_{z}\)-response to the convolution of Lorentzian spectral functions and by taking the Fourier transform of the probability density function solutions from a Schrödinger-Poisson model of \(\delta\)-layers (see _Supplementary Information_). In all instances, mutual agreement was found.
### \(\delta\)-layer subband energies and comparison with theory
The analyses of Figure 2 and Figure 3 provide, for each of our samples, a measure of the carrier density and electronic thickness, respectively. These parameters can be used to create an electrostatic model of the \(\delta\)-layer (Figure 1a, right) that we have used as the basis of self-consistent Schrödinger-Poisson modelling of the state quantisation in \(k_{z}\) (details of the calculations can be found in the _Supplementary Information_). Based on these measured parameters, our calculations show that each of our \(\delta\)-layer samples should support \(1\Gamma\), \(2\Gamma\) and \(1\Delta\) states. Additionally, Figure 4b shows that the occupancy of the \(\delta\)-layer subbands is distributed evenly amongst the valleys, in good agreement with our experimental results [42].
To further compare these calculations with experiment, we have measured the in-plane band dispersion and \(k_{z}\) state quantisation directly. Figure 4c-f show measurements of the band dispersion, \(E_{B}(k_{x})\), taken through the centroid of the \(+k_{z}\) valley for each of the four samples discussed in Figure 3. We have performed a careful two-component fit to these data [23], analysing both iso-\(E_{B}\) and iso-\(k_{x}\) slices for each data point, as illustrated on the side and top of each panel in Figure 4c-f. Each dataset is best described by two parabolic dispersions, readily interpretable as the \(1\Gamma\) and \(2\Gamma\) states expected from the theoretical calculations. A similar analysis of the \(\Delta\) valley dispersion is provided in the _Supplementary Information_, showing in this case that only a single \(1\Delta\) state is observed experimentally. The measured binding energies of these states have been added to the theoretically predicted curves in Figure 4a, and there is good agreement between our calculated and measured band energies in each case.
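In essence, each such fit reduces to extracting a subband minimum and an in-plane effective mass from a parabola (a sketch with illustrative inputs; the actual analysis fits two components simultaneously):

```python
import numpy as np

HBAR2_OVER_ME = 7.62  # hbar^2 / m_e in eV A^2

def fit_parabolic_subband(k_x, e):
    # Fit E(k_x) = E_n + hbar^2 k_x^2 / (2 m*), with k_x in 1/A and e in eV;
    # returns the subband minimum E_n and the effective mass in units of m_e.
    a, b, c = np.polyfit(k_x, e, 2)
    return c - b ** 2 / (4 * a), HBAR2_OVER_ME / (2 * a)
```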
## Conclusions
We have presented the most comprehensive SX-ARPES measurements of dopant \(\delta\)-layers in silicon to date, and revealed that at the high arsenic densities considered, there are three flavours of electrons derived from their confinement along the transverse and longitudinal directions of the conduction band minima of bulk silicon. Our data show that the arsenic \(\delta\)-layer samples host the thinnest technological 2D electron liquids ever fabricated in silicon and are close to ideal 2D electron systems, with a thickness comparable to the silicon lattice parameter; our thinnest arsenic \(\delta\)-layer has an electronic thickness of \(0.45\pm 0.04\) nm. Moreover, we compared arsenic and phosphorus \(\delta\)-layer samples and found that in all cases, the arsenic samples outperformed the phosphorus ones in terms of two-dimensionality. All our samples are technologically relevant, having been exposed to ambient after their fabrication, demonstrating the remarkable stability of these ultra-thin, dense \(\delta\)-layer systems and the capability of SX-ARPES to fully characterise their conduction bands directly and non-destructively. The fact that we can engineer such ultrathin, high-carrier-density liquids represents yet another capability which can be exploited for new nano- and quantum-electronic applications in silicon.
## Methods
**Sample fabrication:** Silicon \(n\)-type (\(10\,\Omega\) cm) Si(001) substrates were degassed and flash annealed to \(\sim 1200^{\circ}\)C under ultra-high vacuum (\(<5\times 10^{-10}\) mbar). This procedure is known to produce atomically clean surfaces with uniform, atomically flat terraces with widths of tens to hundreds of nanometres [53]. The atomically clean and flat surfaces were exposed to a saturation dose of phosphine or arsine, and then annealed at \(350^{\circ}\)C for 2 minutes to substitutionally incorporate the dopants. The dopant layer was then encapsulated by overgrowing either 2 or 3 nm of silicon using a silicon sublimation source, with a deposition rate of 1 ML/min. During the silicon overgrowth, we controlled the temperature of the sample in three steps to maximise the dopant confinement, following the so-called locking-layer
procedure [12], [15]: the first 1.3 nm of silicon was grown at room temperature, followed by a rapid thermal anneal at 500\({}^{\circ}\)C for 15 s and a low-temperature epitaxial growth at 250\({}^{\circ}\)C for the remainder of the overgrowth. The samples were then removed from vacuum and exposed to ambient for their transport to the soft X-ray ARPES facility [54] at the Swiss Light Source.
**SX-ARPES experiments:** The ARPES measurements were performed at the soft X-ray ARPES facility [54] of the ADRESS beamline [55] at the Swiss Light Source, PSI, Switzerland. The accessible photon energy range is \(h\nu=300-1600\) eV, with a photon flux of up to \(10^{13}\) photons / s / (0.01% BW). To maximise the coherent spectral function (impaired by thermal atomic motion [56]), the experiments were performed at a base temperature of 12 K, using circularly polarised light. The combined (beamline and analyser) energy resolution varied from 50 meV at \(h\nu=400\) eV to 90 meV at around 700 eV. The photoelectron momentum \(k_{x}\) was directly measured through the emission angle along the analyser slit, \(k_{y}\) was varied through the tilt rotation, and \(k_{z}\) was varied through \(h\nu\). The angular resolution of the ARPES analyser (PHOIBOS-150) is 0.1\({}^{\circ}\). Other relevant details of the SX-ARPES experiments, including the experimental geometry, can be found in Ref. [54].
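For reference, converting photon energy to \(k_{z}\) follows the standard free-electron final-state approximation, sketched below; the inner potential \(V_{0}\) is material-dependent, and the value used here is an illustrative assumption rather than a quoted parameter:

```python
import numpy as np

def k_z_inv_A(e_kin_eV, theta_deg=0.0, v0_eV=12.0):
    # k_z = 0.5123 * sqrt(E_kin * cos^2(theta) + V_0), in 1/A with energies in eV
    # (0.5123 = sqrt(2 m_e) / hbar in these units).
    return 0.5123 * np.sqrt(e_kin_eV * np.cos(np.radians(theta_deg)) ** 2 + v0_eV)
```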
**Acknowledgements**
We acknowledge helpful discussions with Oliver Warschkow, the beamtime support provided by Alla Chikina and Niels B. M. Schroter, and the excellent technical support from Leonard Nue. The project was financially supported by the Engineering and Physical Sciences Research Council (EPSRC) project EP/M009564/1, the EPSRC Centre for Doctoral Training in Advanced Characterisation of Materials (EP/L015277/1), the Paul Scherrer Institute (PSI) and the European Union Horizon 2020 Research and Innovation Programme, within the Hidden, Entangled and Resonating Order (HERO) project (810451). Procopios Constantinou was partially supported by Microsoft Corporation.
**Data availability statement**
The data that support the findings of this study are openly available on Zenodo (zenodo.org) at [https://doi.org/10.5281/zenodo.7813819](https://doi.org/10.5281/zenodo.7813819). |
2309.08533 | Automated dermatoscopic pattern discovery by clustering neural network
output for human-computer interaction | Background: As available medical image datasets increase in size, it becomes
infeasible for clinicians to review content manually for knowledge extraction.
The objective of this study was to create an automated clustering resulting in
human-interpretable pattern discovery.
Methods: Images from the public HAM10000 dataset, including 7 common
pigmented skin lesion diagnoses, were tiled into 29420 tiles and clustered via
k-means using neural network-extracted image features. The final number of
clusters per diagnosis was chosen by either the elbow method or a compactness
metric balancing intra-lesion variance and cluster numbers. The amount of
resulting non-informative clusters, defined as those containing less than six
image tiles, was compared between the two methods.
Results: Applying k-means, the optimal elbow cutoff resulted in a mean of
24.7 (95%-CI: 16.4-33) clusters for every included diagnosis, including 14.9%
(95% CI: 0.8-29.0) non-informative clusters. The optimal cutoff, as estimated
by the compactness metric, resulted in significantly fewer clusters (13.4;
95%-CI 11.8-15.1; p=0.03) and less non-informative ones (7.5%; 95% CI: 0-19.5;
p=0.017). The majority of clusters (93.6%) from the compactness metric could be
manually mapped to previously described dermatoscopic diagnostic patterns.
Conclusions: Automatically constraining unsupervised clustering can produce
an automated extraction of diagnostically relevant and human-interpretable
clusters of visual patterns from a large image dataset. | Lidia Talavera-Martinez, Philipp Tschandl | 2023-09-15T16:50:47Z | http://arxiv.org/abs/2309.08533v1 | Automated dermatoscopic pattern discovery by clustering neural network output for human-computer interaction
###### Abstract
Background: As available medical image datasets increase in size, it becomes infeasible for clinicians to review content manually for knowledge extraction. The objective of this study was to create an automated clustering resulting in human-interpretable pattern discovery.
Methods: Images from the public HAM10000 dataset, including 7 common pigmented skin lesion diagnoses, were tiled into 29420 tiles and clustered via k-means using neural network-extracted image features. The final number of clusters per diagnosis was chosen by either the elbow method or a compactness metric balancing intra-lesion variance and cluster numbers. The amount of resulting non-informative clusters, defined as those containing less than six image tiles, was compared between the two methods.
Results: Applying k-means, the optimal elbow cutoff resulted in a mean of 24.7 (95%-CI: 16.4-33) clusters for every included diagnosis, including 14.9% (95% CI: 0.8-29.0) non-informative clusters. The optimal cutoff, as estimated by the compactness metric, resulted in significantly fewer clusters (13.4; 95%-CI 11.8-15.1; p=0.03) and less non-informative ones (7.5%; 95% CI: 0-19.5; p=0.017). The majority of clusters (93.6%) from the compactness metric could be manually mapped to previously described dermatoscopic diagnostic patterns.
Conclusions: Automatically constraining unsupervised clustering can produce an automated extraction of diagnostically relevant and human-interpretable clusters of visual patterns from a large image dataset.
Pre-peer review version 1
Footnote 1: This is the pre-peer reviewed version of the following article: _Talavera-Martinez L, Tschandl P. Automated dermatoscopic pattern discovery by clustering neural network output for human-computer interaction. J Eur Acad Dermatol Venereol. 2023_, which has been published in final form at [https://doi.org/10.1111/jdv.19234](https://doi.org/10.1111/jdv.19234). This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Use of Self-Archived Versions.
## I Introduction
In dermatology, but also in other visual medical fields, the recognition and description of specific samples of diseases is important for a precise diagnosis and for the formulation of differential diagnoses. Apart from clinical dermatology, there has been a plethora of pattern descriptions of symptoms of disease in dermatoscopy in recent decades [1], especially for the diagnosis of skin tumors and inflammatory diseases [2], which are used for teaching and diagnosis in daily practice. These descriptions were mostly based on mono- or multicentric case collections that were reviewed manually by a few authors and evaluated for possible repetitions of patterns [3, 4, 5]. As clinical image data collections increase in size [6, 7], entirely manual review for discovering diagnostic patterns is no longer a realistic scenario. In addition to an insurmountable workload, interrater disagreement may be a hindering factor in identifying and describing objective, valid and teachable pattern groups [1].
Increasingly, neural networks - especially convolutional neural networks (CNN) - are described as an aid for the diagnostic classification of medical images. In the field of dermatology, CNNs were described to have at least equal accuracy to dermatologists in experimental settings for classifying clinical and dermatoscopic images, and shown to improve physicians' diagnostic accuracy when applied in diverse interactive settings [8, 9]. Such algorithms can not only classify images but also label anatomic areas [10], rate psoriasis [11], or retrieve images similar to a case, by implicitly analyzing patterns and pattern combinations after training to categorize images into distinct classes [12]. Therefore, we hypothesize that convolutional neural networks could be helpful in the extraction of diagnostically relevant patterns in medical image collections. Recent reports have also shown the utility of unsupervised techniques when only limited labeled data are available [13].
The goal of this study was to create an automated workflow to extract diagnostically relevant pattern candidates for review by doctors and researchers, with dermatoscopic images of skin tumors as an example (Fig. 1). Eventually, from a big dataset with thousands of images, this should enable human-computer interaction and return an interpretable number of visually distinct patterns, while producing as few redundant or uninformative patterns as possible.
The approach we propose is a machine-learning pipeline that consists of extracting deep features from CNNs and applying an unsupervised clustering algorithm to these features. The clustering is constrained by a custom compactness metric that, in contrast to the well-known elbow method, should better balance retrieval of all relevant patterns while at the same time keeping redundant information low.
## II Materials and Methods
### _Data and processing_
This non-interventional retrospective study was conducted on public image data only, specifically the HAM10000 dataset [14]. This dataset is composed of 10015 dermatoscopic images
of pigmented lesions with annotations on both the diagnosis and segmentation of the lesion area [8]. To focus on patterns rather than full images, analyses were performed on a tile-level. We extracted square subregions (tiles) of an image by a sliding window with a size of 128x128 pixels with 25% overlap, discarding tiles with \(<60\%\) lesion area. In sum, 29420 tiles were extracted, Suppl. Fig. S1 shows two example cases with resulting extracted tiles, and Suppl. Table S1 the number of tiles per diagnosis. To ensure approximately equal representation of diagnoses, included nevi were limited to a random subsample of 1100 cases, and resulting tiles limited to a maximum random subsample of 850 tiles. To reduce the influence of changes in illumination color, we applied color constancy correction [15] to all tiles (Suppl. Fig. S2).
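A minimal sketch of this tiling step (numpy arrays assumed; `mask` is a binary lesion mask and all names are illustrative):

```python
def extract_tiles(image, mask, size=128, overlap=0.25, min_lesion=0.6):
    # Slide a size x size window with 25% overlap and keep tiles whose
    # mask coverage is at least 60% lesion area.
    stride = int(size * (1 - overlap))
    tiles = []
    for y in range(0, image.shape[0] - size + 1, stride):
        for x in range(0, image.shape[1] - size + 1, stride):
            if mask[y:y + size, x:x + size].mean() >= min_lesion:
                tiles.append(image[y:y + size, x:x + size])
    return tiles
```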
### _Neural network and feature extraction_
A VGG16 [16] architecture, pretrained on ImageNet data, was fine-tuned to classify tiles into one of the seven diagnoses included in the HAM10000 dataset. Training was performed with all 29420 tiles, using 70% for training and the remaining 30% for validation during a single training run, ensuring no overlap of tiles of the same image between sets. This training run was performed only as a means to parameterize the model; as knowledge discovery rather than classification accuracy was the goal, the complete training dataset was also used for the cluster analyses downstream. Data augmentation steps were flips in both horizontal and vertical directions, random 90\({}^{\circ}\) rotations, and zooms. Training was performed with a batch size of 32, using the Adam [17] optimizer, a weighted categorical cross-entropy loss, an initial learning rate of 1e-5, and an early stopping policy based on validation loss. For extracting features from image tiles by the fine-tuned model, the numerical state of the layer before the classification layer was obtained, resulting in a 1280-length vector. Neural network experiments were conducted using tensorflow [18] and python 3.8. Experiments were repeated with EfficientNet-B0 [19] and a convolutional autoencoder, with results for those two models shown in the supplementary data. For the autoencoder, we trained the model from scratch with a mean-squared error loss, and extracted the features from the flattened embedding space.
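A minimal Keras sketch of this setup follows; the 1280-unit penultimate dense layer is an assumption chosen only to match the reported feature length, as the exact classification head is not spelled out above:

```python
import tensorflow as tf

base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(128, 128, 3), pooling="avg")
x = tf.keras.layers.Dense(1280, activation="relu", name="features")(base.output)
out = tf.keras.layers.Dense(7, activation="softmax")(x)
model = tf.keras.Model(base.input, out)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="categorical_crossentropy")  # class weights passed to fit()

# after fine-tuning, read features from the layer before the classifier:
feature_extractor = tf.keras.Model(model.input,
                                   model.get_layer("features").output)
```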
### _Clustering_
The resulting extracted features are normalized and used as input to an unsupervised clustering algorithm, specifically k-means [20] with cosine distance as the distance metric. This calculation was performed using scikit-learn v1.1.2 [21] and scipy v1.9.0 [22]. To automatically obtain the optimal number of clusters without further intervention from a user, either the elbow method (optimal value as calculated by yellowbrick v1.5 [23]) or a custom compactness metric (W) was applied. The latter method is based on the assumption that each lesion, and thus also the tiles that comprise it, on average shows only one or two dermatoscopic patterns. Thus, the proposed metric measures both the similarity of the clusters to which tiles of the same image have been assigned, and the number of different clusters the tiles were assigned to with respect to the total number of clusters. The metric was implemented as follows:
\[I=\{img_{1},...,img_{M}\}\]

\[T_{q}=\{t_{1},...,t_{L}\}\]

\[C_{q}=\{c_{1},...,c_{K}\}\]
\[W=\frac{1}{M}\times\sum_{q=1}^{M}\left(\frac{K}{\min(n_{clst},L)}\times\sum_{j=1}^{L}\mathrm{cosDst}\left(\frac{1}{K}\sum_{i=1}^{K}(c_{i}),t_{j}\right)\right)\]
where \(M\) is the number of images \(I\) in the experiment, \(T_{q}\) are the \(L\) tiles of an image \(img_{q}\), \(C_{q}\) are the clusters those tiles are assigned to, \(K\) is the number of unique clusters to which \(T_{q}\) belong, and \(n_{clst}\) is the total number of clusters used in the experiment. Reiterating, the first factor of \(W\) for an image favors tiles being spread over as few clusters as possible, and the second factor favors a small cosine distance between the tiles and the common center of the clusters they are assigned to.
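A numpy sketch of this metric, following the formula directly (names are illustrative):

```python
import numpy as np
from scipy.spatial.distance import cosine

def compactness(tile_feats_per_image, labels_per_image, centers, n_clst):
    # tile_feats_per_image: list of (L_q, D) arrays; labels_per_image: list of
    # (L_q,) cluster assignments; centers: (n_clst, D) k-means cluster centers.
    scores = []
    for feats, labels in zip(tile_feats_per_image, labels_per_image):
        uniq = np.unique(labels)                 # clusters used by this image
        mean_center = centers[uniq].mean(axis=0)
        dist = sum(cosine(mean_center, t) for t in feats)
        scores.append(len(uniq) / min(n_clst, len(feats)) * dist)
    return np.mean(scores)
```

The optimal cluster number can then be selected as, e.g., the candidate count minimizing \(W\) over a search range.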
### _Classification ability_
To assess classification ability of the two clustering cutoff methods, clusters were created not only for each diagnosis separately, but also for the whole dataset spanning all diagnoses. The frequency of diagnoses contained in a resulting cluster was noted as a multi-class probability for a classification task. Test images from the ISIC 2018 challenge Task 3 [24, 25] were tiled and preprocessed as above (resulting in 10254 tiles from 1304 lesions with sufficient lesion area depicted), and probabilities of the closest cluster of each tile averaged. The top-1 class of the resulting probabilities was taken as a prediction, and accuracy as well as mean recall [24] calculated.
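A sketch of this prediction scheme (variable names and array shapes are illustrative):

```python
import numpy as np

def predict_lesion(tile_feats, centers, cluster_class_probs):
    # tile_feats: (T, D) features of one lesion's tiles; centers: (C, D) cluster
    # centers; cluster_class_probs: (C, 7) diagnosis frequencies per cluster.
    f = tile_feats / np.linalg.norm(tile_feats, axis=1, keepdims=True)
    c = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    nearest = np.argmax(f @ c.T, axis=1)   # nearest cluster in cosine distance
    probs = cluster_class_probs[nearest].mean(axis=0)
    return probs.argmax(), probs           # top-1 class and averaged probabilities
```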
Fig. 1: Processing overview - Dermatoscopic images (upper left) are used as source data, and a neural network is trained for classification on tiled lesion-area tiles. Features are extracted from lesion tiles with this trained network, on which an unsupervised k-means clustering is applied to find pattern groups. Up to the closest 7 lesion-tiles within a cluster (examples shown for clusters within the BCC class) are stored as representatives of a pattern and presented to a human reader for qualitative interpretation.
### _Manual pattern descriptions_
Top-7 tiles of clusters, created for every diagnosis in the dataset separately with VGG16 feature vectors, k-means and the described compactness metric, were inspected by a dermatologist with substantial experience in dermatoscopy (PT). Patterns were scored for redundancy, i.e. showing the same pattern as another cluster of the diagnosis, informativeness, i.e. whether any reproducible pattern can be identified, number of patterns, and previous description, i.e. whether the pattern was already identified and described in the literature. A pattern was defined as a change in color and/or structure covering the majority of the image tile.
### _Statistics_
Differences of paired values were compared using a one-sample t-test after checking normality assumptions. Statistical analyses were performed using R Statistics v4.1.0 [26], and plots were created with ggplot2 [27]. A two-sided p-value \(<.05\) was regarded as statistically significant, with a Bonferroni-Holm type correction applied.
## III Results
### _Pattern interpretability_
Applied to the clusters of every diagnosis separately, the elbow method created a mean of 24.7 (95%-CI: 16.4-33) clusters per diagnosis, whereas the compactness metric resulted in significantly fewer clusters (13.4; 95%-CI 11.8-15.1; p=0.03; Fig. 2a). The proportion of uninformative clusters was higher when using the elbow method (14.9%; 95% CI: 0.8-29.0) than when using the compactness metric (7.5%, 95% CI: 0-19.5; p=0.017; Fig. 2b).
In the qualitative interpretation of the clusters resulting from the compactness metric, at least one recognizable, consistent pattern could be identified by a dermatologist for 93.6% (88 of 94) of diagnosis-specific clusters. Identified patterns could be mapped to 53 unique known diagnostic descriptions from previous literature spanning at least 29 publications (Suppl. Table S3). Only 51 clusters could be described with one pattern alone, whereas 30 clusters encompassed two, and 7 clusters three recognizable patterns in combination. The proportion of redundant clusters within a diagnosis ranged from 0% (basal cell carcinoma and melanoma) to 27.3% (dermatofibroma and vascular lesions).
### _Retained classification performance_
When clustering was applied to the whole dataset with all diagnoses included, the elbow method resulted in a higher number of clusters than the compactness metric (42 vs. 7), as well as a higher mean recall (46.3 vs. 34.6) and accuracy (43.4%; 95%-CI 40.7-46.2 vs. 32.2%; 95%-CI 29.7-34.8) for predictions on the ISIC2018 test set. Clusters of the compactness metric were rarely able to predict actinic keratoses, and almost never dermatofibroma (Fig. 3).
## IV Discussion
With ever-growing image datasets, human interpretation of the available data becomes increasingly difficult, and herein we present an automated analysis pipeline intended to aid human-computer interaction for diagnostic marker discovery. Providing only information on the diagnosis and image area, we were able to show that the presented workflow can reproduce a major fraction of the diagnostic patterns in dermatoscopy described in the literature.
In contrast to other publications [6, 24, 28] that try to optimize the diagnostic accuracy of a neural network model, herein we propose a metric to constrain k-means clustering to optimize for human interpretability in a truly interactive human-computer interaction workflow. The proposed compactness metric reduces the information to a digestible amount, shown by the significant reduction in overall clusters (Fig. 2a), alongside a reduction of non-informative information, shown by the significant reduction of noninformative clusters (Fig. 2b). These improvements, though, come at a cost, namely a reduced diagnostic accuracy when applied in an automated classification setting. This underlines that the training proposed herein could be useful for human-computer interaction and interpretability, but not for safely predicting diagnoses as a standalone application. As biases from automated predictions of image data through neural networks are a significant problem [29], datasets should be inspected for potential biases. Although not explicitly shown in this pilot experiment, the proposed workflow may enable medical personnel and researchers to identify highly prevalent biases in a qualitative manner. It is certainly not a complete
Fig. 3: Confusion matrices showcasing performance of predictions via averaged nearest neighbor cluster-probabilities constrained by (a) compactness metric (7 clusters), or (b) the elbow method (42 clusters). Values within cells show proportions within one ground-truth class (=row).
Fig. 2: Number of overall (a) and uninformative (b) clusters per diagnosis when constraining cluster numbers via either the compactness or elbow method.
solution, as, based on the failure to classify rare classes (Fig. 3a), we hypothesize that biases affecting rare classes will likewise not be detectable.
Through qualitative analysis of the resulting clusters we found that for most it is not possible to find a single pattern that describes them; the majority needed a combination of at least two patterns (Suppl. Table S3). This finding may help in designing future annotation and pattern analysis studies, and we hypothesize that studies trying to annotate and analyze for a single structure may not be representing real patterns. Interestingly, this may be a missing link between descriptive and "metaphoric" language [1], as the former is more suitable for distinct and concise descriptions, but metaphoric language inherently tries to capture structure combinations. A further interesting insight was that the frequency of redundant clusters was not equally distributed, but higher in dermatofibroma and vascular lesions. This could stem from the fact that these diagnoses in general show less variability in their patterns, but also that the used dataset, through the small sample size for these diagnoses, does not cover the real visual variability. Finally, it is also interesting to note that by qualitatively comparing different network architectures (Suppl. Fig. S4 - S10), one can identify their differing utility for the purpose of pattern discovery. While an autoencoder mainly detects color blobs, edges, corners and curves, it focuses less on detailed structures. The top-7 tiles from clusters created using EfficientNetB0, as a representative of a modern architecture with higher diagnostic accuracy than VGG16 [19], were less homogeneous and thus harder to interpret. Thus, despite not being ideal for classification, we hypothesize that VGG16, through its inner architecture, is a good fit for extracting features of interpretable mid-level patterns useful for human-computer interaction. Future studies should show the feasibility of implementing this workflow not only for existing dermatoscopic datasets [30], but also for other imaging modalities such as dermatopathology and clinical images.
### _Limitations_
This pilot study was intended to showcase the general feasibility of the proposed process. Applicability to nonpigmented tumors, other localisations, inflammatory cases and darker skin types cannot be estimated, as those were not included in the source datasets. The process at its core analyzes substructures of dermatoscopic images; the overall lesion architecture is thus not integrated, but this could theoretically be overcome by changing the tile size and minimal lesion area. The latter is a relevant consideration when applying the workflow, as with the initially chosen tile size and lesion area constraints, some test cases depicting a very small lesion did not produce any tile.
## Acknowledgements
Lidia Talavera-Martinez was a beneficiary of the scholarship BES-2017-081264 granted by the Ministry of Economy, Industry, and Competitiveness of Spain under a program co-financed by the European Social Fund. She is also part of the R&D&i Project PID2020-113870GB-I00, funded by MCIN/AEI/10.13039/50110 0011033/.
## Data availability
Used image data are openly available at [https://doi.org/10.7910/DVN/DBW86T](https://doi.org/10.7910/DVN/DBW86T) (Harvard Dataverse). Resulting Clusters and qualitative evaluations are available in the supplementary material of this article.
|
2309.11076 | Symbolic Regression on Sparse and Noisy Data with Gaussian Processes | In this paper, we address the challenge of deriving dynamical models from
sparse and noisy data. High-quality data is crucial for symbolic regression
algorithms; limited and noisy data can present modeling challenges. To overcome
this, we combine Gaussian process regression with a sparse identification of
nonlinear dynamics (SINDy) method to denoise the data and identify nonlinear
dynamical equations. Our simple approach offers improved robustness with
sparse, noisy data compared to SINDy alone. We demonstrate its effectiveness on
a Lotka-Volterra model, a unicycle dynamic model in simulation, and hardware
data from an NVIDIA JetRacer system. We show superior performance over
baselines including 20.78% improvement over SINDy and 61.92% improvement over
SSR in predicting future trajectories from discovered dynamics. | Junette Hsin, Shubhankar Agarwal, Adam Thorpe, Luis Sentis, David Fridovich-Keil | 2023-09-20T05:44:49Z | http://arxiv.org/abs/2309.11076v2 | # GPSINDy: Data-Driven Discovery of Equations of Motion
###### Abstract
In this paper, we consider the problem of discovering dynamical system models from noisy data. The presence of noise is known to be a significant problem for symbolic regression algorithms. We combine Gaussian process regression, a nonparametric learning method, with SINDy, a parametric learning approach, to identify nonlinear dynamical systems from data. The key advantages of our proposed approach are its simplicity and its improved robustness to noisy data compared to SINDy. We demonstrate our proposed approach on a Lotka-Volterra model and a unicycle dynamic model in simulation and on an NVIDIA JetRacer system using hardware data. We demonstrate improved performance over SINDy for discovering the system dynamics and predicting future trajectories.
## I Introduction
An accurate model of dynamics plays an important role in the design and operation of robots. In many cases, it is desirable to obtain analytic expressions over black-box models, as analytic models extrapolate well beyond the training dataset and are more suitable for system analysis. One approach that has received significant attention is the Sparse Identification of Nonlinear Dynamics (SINDy) algorithm [1]. It uses symbolic regression--a least-squares-based method--to learn the system dynamics purely from data using a predefined set of candidate functions provided by the user. SINDy is simple in its approach but suffers from several potential drawbacks in practice. In particular, the accuracy of the learned solution relies heavily on the selection of proper candidate function terms, and measurement noise in the data can significantly degrade the performance of SINDy for even simple systems [2]. SINDy also requires derivative data, which may be difficult to measure directly and so must be obtained via finite differencing or other approximation methods. Approximation can add additional error, further exacerbating the difficulty of learning the system dynamics [3]. In addition, ordinary least squares does not lead to a sparse representation, and so [1] suggests using either LASSO regression or a sequential optimization procedure called Sequentially Thresholded Least Squares (STLS), where at each step the candidate library is pruned via a thresholding procedure to eliminate the terms that correspond to the smallest coefficients. This can present challenges in the case where noise is present, since some terms can become "lost" in the noise [2]. _In this work, we propose to filter the noise using Gaussian process regression before learning the system dynamics from data with SINDy._
The major advantage of SINDy lies in its ability to discover interpretable parametric models while balancing accuracy and parsimony in the learned solution. While other data-driven methods have had success in identifying models from data [4, 5, 6], insufficient data often limit the effectiveness of such techniques across diverse application areas, and the models they discover do not allow insight into the underlying structure of the system [7]. In contrast, SINDy has been applied across a wide variety of scientific disciplines to understand the underlying structure of physical phenomena [8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26]. In the field of robotics, it has been used to learn the dynamics of actuated systems for the purpose of control [27, 28, 29, 30, 31], such as learning the model of a jet engine prior to applying feedback linearization and sliding mode control [32]. SINDy is promising because it is based on a simple sparse linear regression that is highly extensible and requires comparatively less data than other model learning methods such as neural networks [7].
However, noise remains a problem. A growing body of research based on SINDy seeks to mitigate the impact of noise on identifying the correct system dynamics. Reactive SINDy [33] uses vector-valued functions with SINDy to uncover underlying biological cell structures from noisy data, but the method can only be applied if the data stem from dynamic systems in an equilibrium state. PiDL-SINDy [34] utilizes a physics-informed neural network with sparse regression to learn the system dynamics, and DSINDy [3] and a modified SINDy using automatic differentiation (AD) [35] simultaneously de-noise the data and identify the governing equations. However, PiDL-SINDy [34] and AD-SINDy [35] run into computational bottlenecks and challenges with the structure of their optimization problems. Derivative-based approaches show promise, but DSINDy makes assumptions on the structure of its function library that may not be true in practice. ESINDy [7] proposes a statistical framework to compute the probabilities of candidate functions from an ensemble of models identified from noisy data, but its approach uses an extension of STLS, which may be subject to the same issues as STLS.
Advancements in non-parametric approaches also tackle the problem of learning governing equations from noisy data. Neural networks have been used to parameterize the state derivatives through a black-box differential equation solver [36], and Gaussian processes have been used to infer parameters of linear equations from scarce and noisy observations [37] and to generate vector fields to learn nonlinear differential equations [38]. Gaussian process regression is particularly effective as an interpolation tool and at reducing the noise in measurement data [5].
Our main contribution is an application of Gaussian process regression to filter input data for symbolic regression algorithms, which helps to alleviate the issues caused by measurement noise. Our approach exploits the complementary strengths of parametric and non-parametric techniques for model identification by learning the relationship between the state and the time derivatives through Gaussian processes, and then finding the analytic expressions for how the dynamics evolve over time using SINDy. This is especially useful for noisy, real-world data that comes from actual hardware as opposed to simulation data.
We test our method on simulated data as well as measurements collected from hardware experiments. We compare our results against SINDy and a model discovered using a neural network trained on the same data and show improvement over SINDy and the neural network-trained model.
## II Problem Formulation
Consider a system characterized by unknown dynamics
\[\dot{\mathbf{x}}(t)=f(\mathbf{x}(t),\mathbf{u}(t)), \tag{1}\]
where \(t\in\mathbb{R}\), \(\mathbf{x}(t)\in\mathbb{R}^{n}\) denotes the state of the system at time \(t\), and \(\mathbf{u}(t)\in\mathbb{R}^{m}\) denotes the control input. We presume that \(f:\mathbb{R}^{n}\times\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\) in (1) is unknown, meaning we have no prior knowledge of the system dynamics or its structure. Instead, we assume that we have access to a dataset \(\mathbf{X}\) consisting of a sequence of \(r\in\mathbb{N}\) state measurements corrupted by noise and control inputs \(\mathbf{U}\) taken at discrete times \(t_{1},t_{2},\ldots,t_{r}\), given by
\[\mathbf{X} =\{\mathbf{x}(t_{1})+\mathbf{\epsilon}_{1},\mathbf{x}(t_{2})+\mathbf{\epsilon}_{2 },\ldots,\mathbf{x}(t_{r})+\mathbf{\epsilon}_{r}\} \tag{2}\] \[\mathbf{U} =\{\mathbf{u}(t_{1}),\mathbf{u}(t_{2}),\ldots,\mathbf{u}(t_{r})\},\]
where \(\mathbf{\epsilon}_{i}\sim\mathcal{N}(0,\theta_{n}^{2}I)\). We assume that the derivatives of the state with respect to time \(\dot{\mathbf{x}}(t)\in\mathbb{R}^{n}\) corresponding to \(\mathbf{x}(t)\) are not directly measurable. Thus, they must be approximated using only the available data, e.g. using (central) finite differencing. Let \(\dot{\mathbf{X}}\) be the approximate state derivatives with respect to time of the points in the dataset \(\mathbf{X}\) in (2) after applying the corresponding control input in \(\mathbf{U}\), such that \(\dot{\mathbf{X}}_{i}\) is the derivative of \(\mathbf{X}_{i}\) under control input \(\mathbf{U}_{i}\).
Intuitively, we can view \(\mathbf{X}\) and \(\dot{\mathbf{X}}\) as matrices in \(\mathbb{R}^{r\times n}\) where the \(i^{\rm th}\) row \(\mathbf{X}_{i}\) corresponds to the state at time \(t_{i}\)
\[\mathbf{X}=\begin{bmatrix}\,\mathbf{-}\,\mathbf{X}_{1}\,\mathbf{-}\\ \,\mathbf{-}\,\mathbf{X}_{2}\,\mathbf{-}\\ \,\vdots\\ \,\mathbf{-}\,\mathbf{X}_{r}\,\mathbf{-}\end{bmatrix}, \tag{3}\]
and the \(i^{\rm th}\) row \(\dot{\mathbf{X}}_{i}\) is the time derivative of \(\mathbf{X}_{i}\). Note that because the state measurements \(\mathbf{X}\) are corrupted by measurement noise, the approximation of the derivatives of the state with respect to time in \(\dot{\mathbf{X}}\) are coarse approximations of the derivative that may be highly inaccurate or exaggerated.
We assume that the dynamics can be described by a linear combination of relatively few elementary function terms such as polynomials of varying degrees, sinusoidal terms, or exponential functions. For instance,
\[\dot{\mathbf{x}}(t)=\mathbf{\Theta}(\mathbf{x}(t),\mathbf{u}(t))^{\top}\mathbf{\Xi}, \tag{4}\]
where \(\mathbf{\Theta}(\mathbf{x}(t),\mathbf{u}(t))\in\mathbb{R}^{p}\) is the candidate function library of elementary basis functions evaluated at the current state \(\mathbf{x}(t)\) and control input \(\mathbf{u}(t)\) and \(\mathbf{\Xi}\in\mathbb{R}^{p\times n}\) is a matrix of real-valued coefficients that weight the candidate function terms. For simplicity, using the dataset \(\mathbf{X}\), the applied control inputs \(\mathbf{U}\), and the state derivatives \(\dot{\mathbf{X}}\), we can write the relationship between the datasets via
\[\dot{\mathbf{X}}=\mathbf{\Theta}(\mathbf{X},\mathbf{U})^{\top}\mathbf{\Xi}. \tag{5}\]
In practice, (5) does not exactly hold as the data \(\mathbf{X}\) is corrupted by noise, and the approximation of \(\dot{\mathbf{X}}\) introduces additional error as given by
\[\dot{\mathbf{X}}=\mathbf{\Theta}(\mathbf{X},\mathbf{U})^{\top}\mathbf{\Xi}+\sigma_{n}\mathbf{Z}, \tag{6}\]
where \(\mathbf{Z}\) is a matrix of independent, identically distributed zero-mean Gaussian entries and \(\sigma_{n}\) is the magnitude of the standard deviation of the noise.
To find \(\mathbf{\Xi}\), one can use ordinary least-squares with noisy \(\mathbf{X}\) and \(\dot{\mathbf{X}}\) to find the model \(f\) from (1). However, this approach does not lead to a sparse representation, instead overfitting the model to the data and finding a solution with nonzero elements in every element of \(\mathbf{\Xi}\). Sparsity is desirable as the solution is composed of a linear combination of relatively few columns in \(\mathbf{\Theta}(\mathbf{X},\mathbf{U})\). In contrast, LASSO [39] has been shown to work well with this type of noisy data, using \(L_{1}\) regularization to promote sparsity.
**Problem 1** (LASSO for Symbolic Regression).: _We seek to solve the LASSO problem_
\[\mathbf{\xi}_{j}=\operatorname*{argmin}_{\mathbf{\xi}\in\mathbb{R}^{p}}\lVert\mathbf{ \Theta}(\mathbf{X},\mathbf{U})^{\top}\mathbf{\xi}-\dot{\mathbf{X}}_{j}\rVert_{2}+\lambda\lVert \mathbf{\xi}\rVert_{1}, \tag{7}\]
_where the optimization variable \(\mathbf{\xi}_{j}\in\mathbb{R}^{p}\) is the \(j^{\rm th}\) column of \(\mathbf{\Xi}\) from (6), \(\dot{\mathbf{X}}_{j}\) is the \(j^{\rm th}\) column of \(\dot{\mathbf{X}}\) from (6), and \(\lambda>0\) is the \(L_{1}\) regularization parameter._
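As a concrete illustration (not part of the original paper), the column-wise problem in (7) can be approximated with an off-the-shelf coordinate-descent solver. Note that scikit-learn minimizes a squared and rescaled version of the data-fit term, so its `alpha` corresponds to \(\lambda\) only up to scaling; the function name below is our own:

```python
import numpy as np
from sklearn.linear_model import Lasso

def fit_sparse_coefficients(Theta, Xdot, lam=0.1):
    # Solve one LASSO problem per state dimension: each column xi_j of Xi
    # regresses the j-th derivative column onto the candidate library Theta.
    Xi = np.zeros((Theta.shape[1], Xdot.shape[1]))
    for j in range(Xdot.shape[1]):
        model = Lasso(alpha=lam, fit_intercept=False, max_iter=100000)
        model.fit(Theta, Xdot[:, j])
        Xi[:, j] = model.coef_
    return Xi
```

Solving column by column is possible because the objective is separable across the columns of \(\mathbf{\Xi}\).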
Solving the LASSO problem yields a solution that is more sparse in representation than one found using least-squares. However, as shown in our experiments, using noisy data in (7) leads to the identification of a model that does not accurately capture the system dynamics and does not extrapolate well for prediction. Thus, mitigating the issues caused by noise is essential to identifying the correct dynamics.
## III Approach
Various methods exist for de-noising data such as discrete domain wavelet filtering [40], bandpass filtering [40], total variation regularization [41], and neural networks [42]. In this work, we propose to use a smoothing technique based on Gaussian process regression, or Kriging, to alleviate the issues caused by noise in symbolic regression and improve the accuracy of the analytical model.
Like SINDy, Gaussian process regression yields a model for relating input and output data. Unlike SINDy, Gaussian
process regression is _non_-parametric; it models a probability distribution of the data \(\dot{\mathbf{X}}\), which from (5), is a function of \(\mathbf{X}\). This distribution is described by a mean function \(m(\cdot)\) and covariance kernel function \(k(\cdot,\cdot)\), and the negative log-likelihood of the data \(\dot{\mathbf{X}}\) is given by
\[-\log p(\dot{\mathbf{X}})=\frac{1}{2}\big(\dot{\mathbf{X}}-m(\mathbf{X})\big)^{\top}\big(K+\sigma_{n}^{2}I\big)^{-1}\big(\dot{\mathbf{X}}-m(\mathbf{X})\big)+\frac{1}{2}\log\left|K+\sigma_{n}^{2}I\right|+\frac{n}{2}\log(2\pi),\quad\text{with }K=k(\mathbf{X},\mathbf{X}), \tag{8}\]
where \(\sigma_{n}\) is a tunable hyperparameter that controls noise variance. The performance of Gaussian process regression also depends heavily on the choice of the kernel. Depending on the chosen kernel, the regression can exhibit notable shortcomings in extrapolation relative to parametric regression. Specifically, the predicted mean can revert to the mean function deduced from the training dataset [5]. In this work, we consider the standard squared-exponential kernel, characterized by its tunable hyperparameters: \(\sigma_{f}\) (signal variance) and \(\sigma_{l}\) (length scale)
\[k(\mathbf{X}_{i},\mathbf{X}_{j})=\sigma_{f}^{2}\exp\big{(}-\frac{1}{2\sigma_{l}^{2}}|| \mathbf{X}_{i}-\mathbf{X}_{j}||^{2}\big{)}. \tag{9}\]
In practice, the hyperparameters are determined by minimizing (8) with respect to \(\sigma_{f}\), \(\sigma_{l}\), and \(\sigma_{n}\). To use Gaussian process regression as a de-noising tool for the purpose of discovering system dynamics from noisy data, we first assume that \(\dot{\mathbf{X}}\) was generated from a Gaussian process at training points \(\mathbf{X}\). Now, let \(\dot{\mathbf{X}}_{*}\) be a random Gaussian vector generated from a Gaussian process at desired test outputs \(\mathbf{X}_{*}\). We define the joint distribution for \(\dot{\mathbf{X}}\) and \(\dot{\mathbf{X}}_{*}\) as
\[\begin{bmatrix}\dot{\mathbf{X}}_{*}\\ \dot{\mathbf{X}}\end{bmatrix}\sim\mathcal{N}\bigg{(}\begin{bmatrix}0\\ 0\end{bmatrix},\begin{bmatrix}K(\mathbf{X}_{*},\mathbf{X}_{*})&K(\mathbf{X}_{*},\mathbf{X})\\ K(\mathbf{X},\mathbf{X}_{*})&K(\mathbf{X},\mathbf{X})+\sigma_{n}^{2}I\end{bmatrix}\bigg{)}, \tag{10}\]
where \(n\) is the number of training points, and \(n_{*}\) is the number of test points. \(K(\mathbf{X},\mathbf{X}_{*})\) denotes the \(n\times n_{*}\) matrix of the covariances evaluated at all pairs of training and test points, \(K(\mathbf{X},\mathbf{X})\) is a \(n\times n\) matrix of covariances, and likewise for \(K(\mathbf{X}_{*},\mathbf{X})\) and \(K(\mathbf{X}_{*},\mathbf{X}_{*})\). To obtain smoothed estimates of \(\dot{\mathbf{X}}\) evaluated at the test points, we condition the distribution of the training data on the test data to compute the posterior mean
\[\dot{\mathbf{X}}_{GP}=K(\mathbf{X}_{*},\mathbf{X})[K(\mathbf{X},\mathbf{X})+\sigma_{n}^{2}I]^{-1} \,\dot{\mathbf{X}}. \tag{11}\]
Likewise, the joint distribution of the training and test points for the state measurements \(\mathbf{X}\) is given by
\[\begin{bmatrix}\mathbf{X}_{*}\\ \mathbf{X}\end{bmatrix}\sim\mathcal{N}\left(\begin{bmatrix}0\\ 0\end{bmatrix},\begin{bmatrix}K(\mathbf{t}_{*},\mathbf{t}_{*})&K(\mathbf{t}_{*},\mathbf{t})\\ K(\mathbf{t},\mathbf{t}_{*})&K(\mathbf{t},\mathbf{t})+\theta_{n}^{2}I\end{bmatrix}\right),\]
where \(K(\mathbf{t},\mathbf{t}_{*})\) denotes the \(n\times n_{*}\) matrix of the covariances and similarly for \(K(\mathbf{t},\mathbf{t})\), \(K(\mathbf{t}_{*},\mathbf{t})\) and \(K(\mathbf{t}_{*},\mathbf{t}_{*})\). \(\theta_{n}\) is the noise variance hyperparameter for \(\mathbf{X}\). The kernel for \(\mathbf{X}\) is characterized by its tunable hyperparameters: \(\theta_{f}\) (signal variance) and \(\theta_{l}\) (length scale)
\[k(t_{i},t_{j})=\theta_{f}^{2}\exp\big(-\frac{1}{2\theta_{l}^{2}}||t_{i}-t_{j}||^{2}\big). \tag{12}\]
To calculate smoothed estimates of \(\mathbf{X}\) evaluated at the test points \(\mathbf{t}_{*}\), we compute the posterior mean
\[\mathbf{X}_{GP}=K(\mathbf{t}_{*},\mathbf{t})[K(\mathbf{t},\mathbf{t})+\theta_{n}^{2}I]^{-1}\,\mathbf{X}. \tag{13}\]
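A minimal numpy sketch of this smoothing step follows (illustrative only; the hyperparameters \(\sigma_{f},\sigma_{l},\sigma_{n}\) are passed in here, whereas in practice they are obtained by minimizing the negative log-likelihood, and the function names are our own):

```python
import numpy as np

def sq_exp_kernel(A, B, sf, sl):
    # Squared-exponential kernel as in (9)/(12): sf^2 exp(-||a-b||^2/(2 sl^2)).
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return sf**2 * np.exp(-0.5 * d2 / sl**2)

def gp_posterior_mean(X_train, Y_train, X_test, sf, sl, sn):
    # Posterior mean K(X*, X)[K(X, X) + sn^2 I]^{-1} Y. Inputs are 2-D arrays
    # of shape (num_points, dim): use Y = Xdot with state inputs for (11), or
    # time stamps (reshaped to (n, 1)) as inputs with Y = X for (13).
    K = sq_exp_kernel(X_train, X_train, sf, sl) + sn**2 * np.eye(len(X_train))
    K_star = sq_exp_kernel(X_test, X_train, sf, sl)
    return K_star @ np.linalg.solve(K, Y_train)
```

In practice a small jitter is often added to the diagonal of `K` for numerical stability when \(\sigma_{n}\) is very small.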
Now that we have shown how to obtain \(\mathbf{X}_{GP}\) and \(\dot{\mathbf{X}}_{GP}\), we move forward to solve the problem in (7).
### _GPSINDy: Symbolic Regression with GP Denoising_
First, we minimize the negative log-likelihood of the state \(\mathbf{X}\) with respect to the hyperparameters \(\mathbf{\theta}=[\theta_{f},\theta_{l},\theta_{n}]\), assuming its mean function \(m(t)=0\). The negative log-likelihood for \(\mathbf{X}\) is given by
\[-\log p(\mathbf{X})=\frac{1}{2}\mathbf{X}^{\top}\big(K+\theta_{n}^{2}I\big)^{-1}\mathbf{X}+\frac{1}{2}\log\left|K+\theta_{n}^{2}I\right|+\frac{n}{2}\log(2\pi),\quad\text{with }K=k(t,t). \tag{14}\]
Then, we condition the distribution of the training data on the test data to obtain the posterior mean of the state evaluated at the test points \(\mathbf{t}_{*}\) according to (13).
Next, we minimize the negative log-likelihood of the state derivative \(\dot{\mathbf{X}}\) with respect to the hyperparameters \(\sigma_{f}\), \(\sigma_{l}\), and \(\sigma_{n}\) as in (14). Again, we assume a mean function \(m(\mathbf{X})=0\). Then, we compute the posterior mean \(\dot{\mathbf{X}}_{GP}\):
\[\dot{\mathbf{X}}_{GP}=K(\mathbf{X}_{GP*},\mathbf{X}_{GP})[K(\mathbf{X}_{GP},\mathbf{X}_{GP})+\sigma_ {n}^{2}I]^{-1}\,\dot{\mathbf{X}}. \tag{15}\]
Finally, we update the LASSO problem from (7) to use the smoothed states \(\mathbf{X}_{GP}\) and derivatives \(\dot{\mathbf{X}}_{GP}\) when solving for the coefficients of the system dynamics
\[\mathbf{\xi}_{j}=\operatorname*{argmin}_{\mathbf{\xi}\in\mathbb{R}^{p}}\lVert\mathbf{ \Theta}(\mathbf{X}_{GP},\mathbf{U})^{\top}\mathbf{\xi}-\dot{\mathbf{X}}_{GP,j}\rVert_{2}+ \lambda\lVert\mathbf{\xi}\rVert_{1}, \tag{16}\]
where \(\dot{\mathbf{X}}_{GP,j}\) represents the \(j^{\rm th}\) column of \(\dot{\mathbf{X}}_{GP}\).
**Remark 1** (Distributed Optimization for LASSO).: _LASSO can be computationally expensive for large data sets. Fortunately, the objective function in (16) is separable, making it suitable for computational acceleration via splitting methods. One such method, the Alternating Direction Method of Multipliers (ADMM) [43], handles processing of large data sets by splitting its primary variable into two parts and then updating each part in an alternating fashion._
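For reference, a bare-bones scaled-form ADMM iteration for this subproblem might look as follows (a sketch, not the authors' implementation; here `A` plays the role of the evaluated candidate library and `b` a single derivative column):

```python
import numpy as np

def soft_threshold(v, kappa):
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def lasso_admm(A, b, lam, rho=1.0, iters=1000):
    # min_x 0.5*||A x - b||_2^2 + lam*||z||_1  subject to  x = z.
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))  # factor once, reuse below
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
    return z
```

Caching the Cholesky factor is what makes the per-iteration cost low: only the cheap soft-thresholding and triangular solves are repeated.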
There is a \(\lambda\) that represents a desirable trade-off between complexity and accuracy, which can be represented by an "elbow" in the Pareto front [44]. To set the sparsity parameter in (16), we can use cross-validation to balance model complexity (determined by the number of nonzero coefficients in \(\mathbf{\Xi}\)) with accuracy. Cross-validation is used to find the best hyperparameters for a machine learning model by iteratively splitting the data into multiple training and test sets (folds). The model's performance is evaluated on the test set for different hyperparameter values to find the set that yields the best performance, as shown in [45].
We perform cross-validation with LASSO to achieve a sparse solution with the best model fit based on the dataset. We use ADMM to solve the LASSO problem in (16) to
discover the dynamics for an unknown system using noisy measurements, and we call this method GPSINDy.
## IV Experiments & Results
We showcase the efficacy of our proposed approach, GPSINDy, across a spectrum of models: the Lotka-Volterra model for benchmarking, a nonholonomic model (emphasizing unicycle dynamics), and real-world data sourced from the NVIDIA JetRacer system. This latter test underscores GPSINDy's robustness in handling noisy datasets from real-world hardware. For baselines, we compare our method with the SINDy algorithm [1] and a neural network (NN) based method termed NNSINDy. In the NNSINDy approach, we first refine the noisy data using a NN followed by symbolic regression employing LASSO as detailed in (7). This NN consists of two fully connected layers with 32 hidden neurons and ReLU activations and is trained on the same dataset as GPSINDy using the ADAM optimizer [46]. For all experiments, we refer to the ground-truth coefficients as \(\mathbf{\Xi}_{\text{GT}}\) and the learned coefficients as \(\mathbf{\Xi}_{\text{Learned}}\).
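A sketch of the NNSINDy denoising network as described (two fully connected layers, 32 hidden neurons, ReLU, Adam); the class and function names and the number of epochs are our assumptions, and `X`, `Xdot` are expected as `torch` tensors:

```python
import torch
import torch.nn as nn

class DerivNet(nn.Module):
    # Maps a (noisy) state measurement to an estimate of its time derivative.
    def __init__(self, n_states):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_states, 32), nn.ReLU(),
                                 nn.Linear(32, n_states))

    def forward(self, x):
        return self.net(x)

def train_derivnet(X, Xdot, epochs=2000, lr=1e-3):
    model = DerivNet(X.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), Xdot)
        loss.backward()
        opt.step()
    return model
```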
_Experimental Setup:_ In all of the experiments, unless otherwise specified, we choose the candidate function library \(\mathbf{\Theta}(\mathbf{X},\mathbf{U})\) such that it consists of polynomial terms up to \(3^{\text{rd}}\) order, sinusoidal terms (\(\sin\) and \(\cos\)), and combinations of the polynomial and sinusoidal terms. For example,
\[\mathbf{\Theta}(\mathbf{X},\mathbf{U})=\left[\begin{array}{cccccc}\vert&\vert&\vert&&\vert&\\ 1&\mathbf{X}&\mathbf{X}^{P_{2}}&\cdots&\sin(\mathbf{U})&\cdots\\ \vert&\vert&\vert&&\vert&\end{array}\right], \tag{17}\]
where \(\mathbf{X}^{P_{2}}\) denotes higher-order polynomial terms. While we have chosen \(\mathbf{\Theta}\) in this manner for the dynamical systems under consideration in our experiments, it is important to note that in practical applications, the candidate function library \(\mathbf{\Theta}(\mathbf{X},\mathbf{U})\) is often broadened to encompass a more diverse set of nonlinear basis functions relevant to the specific system. Additionally, we elect to use LASSO as in (16) to maintain consistency and provide a fair point of comparison across all experiments. STLS has been shown to yield better results for noise-free data [2], but in our experiments, this approach failed to yield meaningful results in the presence of noise. For the Gaussian process regression used in the experiments, we use the squared exponential kernel and optimize the hyperparameters via maximum likelihood to smooth the measurements.
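One possible construction of such a library is sketched below (an illustrative subset only; the exact set of columns, including mixed polynomial-sinusoidal products, is a modeling choice, and the function name is our own):

```python
import numpy as np

def build_library(X, U):
    # Stack candidate functions column-wise as in (17): a constant, the states
    # and inputs, their element-wise powers up to 3rd order, and sin/cos terms.
    Z = np.hstack([X, U])
    cols = [np.ones((Z.shape[0], 1)), Z, Z**2, Z**3, np.sin(Z), np.cos(Z)]
    return np.hstack(cols)
```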
### _Lotka-Volterra Model (Predator/Prey)_
We first consider the problem of identifying the equations of motion for the Lotka-Volterra model [47], which can be used to model the population of predator and prey species over time. The dynamics of the system are given by
\[\dot{x}_{1}=ax_{1}-bx_{1}x_{2},\qquad\dot{x}_{2}=-cx_{2}+dx_{1}x_{2}, \tag{18}\]
where \(x_{1}\) represents the size of the prey population and \(x_{2}\) represents the size of the predator population, \(a=1.1\) and \(b=0.4\) describe the prey growth rate and the effect of predation upon the prey population, and \(c=1.0\) and \(d=0.4\) describe the predator's death rate and the growth of predators based on prey population.
We first simulated the system for \(30\)s using the deterministic system dynamics in (18) at discrete time steps \(t\in\{t_{1},t_{2},\ldots,t_{r}\}\) from an initial condition \(x_{0}=\left[10,5\right]^{\top}\) with a sampling interval of \(0.1\)s. We collected the deterministic states \(\mathbf{x}(t)\) and standardized the data, i.e. normalized it to have zero mean and unit variance. In Gaussian process regression, it is common to standardize, as scaled data is more useful for hyperparameter optimization. Additionally, the kernel matrix needs to be inverted in (14) and may become ill-conditioned if the data are not properly scaled [48]. We computed the derivatives \(\dot{\mathbf{x}}(t)\) from the standardized data using the true dynamics and then added noise \(\mathbf{\epsilon}\sim\mathcal{N}\left(0,\sigma^{2}\right)\) on top of the standardized \(\mathbf{x}(t)\) and \(\dot{\mathbf{x}}(t)\) to simulate measurement noise, thereby obtaining \(\mathbf{X}\) and \(\dot{\mathbf{X}}\). We set aside the last 20% of the simulated data for validation purposes and used the rest for training. We smoothed \(\mathbf{X}\) and \(\dot{\mathbf{X}}\) using Gaussian process regression as described in (13) and (11) to obtain \(\mathbf{X}_{GP}\) and \(\dot{\mathbf{X}}_{GP}\) from the training data and then computed \(\mathbf{\Theta}(\mathbf{X}_{GP})\) from (17). Finally, using \(\dot{\mathbf{X}}_{GP}\) and \(\mathbf{\Theta}(\mathbf{X}_{GP})\), we solved the \(L_{1}\)-regularized least-squares problem in (16) using \(\lambda=0.1\). For the baseline NNSINDy, we trained a neural network to predict \(\dot{\mathbf{X}}\) given the observations of \(\mathbf{X}\) using the training data. We then performed the same LASSO regression as in GPSINDy to discover the coefficients.
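The data-generation step is straightforward to reproduce; a sketch with `scipy` follows (noise is added directly here for brevity, whereas the procedure above standardizes the data first):

```python
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, x, a=1.1, b=0.4, c=1.0, d=0.4):
    # Predator-prey dynamics from (18).
    x1, x2 = x
    return [a * x1 - b * x1 * x2, -c * x2 + d * x1 * x2]

t_eval = np.arange(0.0, 30.0, 0.1)                     # 0.1 s sampling, 30 s
sol = solve_ivp(lotka_volterra, (0.0, 30.0), [10.0, 5.0], t_eval=t_eval)
sigma = 0.05
X = sol.y.T + sigma * np.random.randn(*sol.y.T.shape)  # noisy measurements
```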
We first compare the learned coefficients \(\mathbf{\Xi}\) between SINDy and our proposed approach. The coefficients are shown in Table I. We can see that the estimates for the parameters \(a,b,c\), and \(d\) obtained by GPSINDy are generally a closer approximation of the true underlying dynamics and that the coefficient matrix \(\mathbf{\Xi}\) learned by GPSINDy is also more sparse than the one learned by SINDy. As expected, this is because our approach uses Gaussian processes to estimate \(\dot{\mathbf{X}}\), which is a smoother approximation of the true derivatives of \(\mathbf{X}\) than the one corrupted by noise.
We also quantitatively compare the performance of SINDy, NNSINDy, and GPSINDy on data corrupted by different
\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
 & \multicolumn{2}{c}{Ground Truth} & \multicolumn{2}{c}{SINDy} & \multicolumn{2}{c}{GPSINDy} \\
\cline{2-7}
\(\Theta\) term & \(\dot{x}_{1}\) & \(\dot{x}_{2}\) & \(\dot{x}_{1}\) & \(\dot{x}_{2}\) & \(\dot{x}_{1}\) & \(\dot{x}_{2}\) \\
\hline
\(x_{1}\) & 1.1 & 0.0 & 1.108 & **0.0** & **1.097** & **0.0** \\
\(x_{2}\) & 0.0 & -1.0 & **0.0** & **-0.997** & **0.0** & -0.980 \\
\(x_{1}x_{2}\) & -0.4 & 0.4 & **-0.397** & 0.382 & -0.358 & **0.396** \\
\(x_{2}x_{2}\) & 0.0 & 0.0 & **0.0** & **0.0** & **0.0** & -0.005 \\
\(\cos(x_{1})\) & 0.0 & 0.0 & **0.0** & 0.016 & -0.049 & **0.0** \\
\(x_{1}\cos(x_{1})\) & 0.0 & 0.0 & **0.0** & -0.005 & **0.0** & **0.0** \\
\(x_{1}x_{2}\sin(x_{1})\) & 0.0 & 0.0 & -0.003 & **0.0** & **0.0** & **0.0** \\
\(x_{1}x_{2}\sin(x_{2})\) & 0.0 & 0.0 & **0.0** & -0.007 & **0.0** & **0.0** \\
\(x_{1}x_{2}\sin(x_{2})\) & 0.0 & 0.0 & 0.003 & **0.0** & **0.0** & **0.0** \\
\(x_{1}x_{1}\cos(x_{1})\) & 0.0 & 0.0 & **0.0** & -0.009 & **0.0** & **-0.003** \\
\(x_{1}x_{1}\cos(x_{2})\) & 0.0 & 0.0 & -0.001 & **0.0** & **0.0** & **0.0** \\
\hline \hline
\end{tabular}
\end{table} TABLE I: **GPSINDy learns better coefficients for the predator-prey model.** In this table we compare the coefficients learned by SINDy and GPSINDy with the ground-truth coefficients for the predator-prey model. The bold values show the best learned coefficients compared to the ground-truth coefficients for both \(\dot{x}_{1}\) and \(\dot{x}_{2}\).
levels of noise as shown in Figure 1. The results show GPSINDy consistently outperforms the baselines across all noise magnitudes, highlighting its robustness in dealing with noisy data. While SINDy is effective at low noise levels, it struggles at higher levels. NNSINDy, constrained by limited data, fails to effectively learn model coefficients. Furthermore, Figure 2 reveals that trajectories derived from GPSINDy's learned coefficients align more closely with the ground truth than those of SINDy and NNSINDy. Notably, even though neural networks usually demand vast data volumes, NNSINDy's trajectory for \(\mathbf{x}_{2}(t)\) aligns well initially. However, it eventually deviates for both \(\mathbf{x}_{1}(t)\) and \(\mathbf{x}_{2}(t)\).
### _Unicycle Dynamics (Simulation)_
We now consider the nonholonomic unicycle system. For this experiment, we produced simulation data based on the dynamics of a unicycle system
\[\begin{split}\dot{x}_{1}=x_{3}\cos(x_{4}),&\dot{x }_{2}=x_{3}\sin(x_{4}),\\ \dot{x}_{3}=u_{1},&\dot{x}_{4}=u_{2}.\end{split} \tag{19}\]
We fix the control input as a function of time, such that \(u_{1}(t)=\sin(t)\) and \(u_{2}(t)=\frac{1}{2}\cos(t)\). The control inputs were chosen to be deterministic functions of time for experimental purposes; in practice, any function that perturbs the dynamics can be chosen. As before, we simulated the system for \(30\)s using the deterministic system dynamics and sampled the system at discrete time steps \(t\in\{t_{1},t_{2},\ldots,t_{r}\}\) with a sampling interval of \(0.1\)s from an initial condition \(\mathbf{x}_{0}=[0,0,0.5,0.5]^{\top}\). Unlike prior setups, we opted not to standardize the states \(\mathbf{x}(t)\), since the data spread was already apt for Gaussian process regression.
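The corresponding simulation sketch, under the same assumptions as the predator-prey example (the inputs enter the dynamics as the fixed functions of time given above):

```python
import numpy as np
from scipy.integrate import solve_ivp

def unicycle(t, x):
    # Unicycle dynamics (19) with u1(t) = sin(t) and u2(t) = cos(t)/2.
    u1, u2 = np.sin(t), 0.5 * np.cos(t)
    return [x[2] * np.cos(x[3]), x[2] * np.sin(x[3]), u1, u2]

t_eval = np.arange(0.0, 30.0, 0.1)
sol = solve_ivp(unicycle, (0.0, 30.0), [0.0, 0.0, 0.5, 0.5], t_eval=t_eval)
```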
We determined the state derivatives, \(\dot{\mathbf{X}}(t)\), using the ground-truth dynamics. Measurement noise was emulated by adding Gaussian noise, \(\mathbf{\epsilon}\sim\mathcal{N}\left(0,\sigma^{2}\right)\), to the true dynamics to obtain \(\mathbf{X}\) and \(\dot{\mathbf{X}}\). As per convention, the last 20% of the simulated data was reserved for validation, with the remainder utilized for training. We smoothed \(\mathbf{X}\) and \(\dot{\mathbf{X}}\) using (13) and (15) to derive \(\mathbf{X}_{GP}\) and \(\dot{\mathbf{X}}_{GP}\) from the training dataset. We then computed the function library \(\mathbf{\Theta}(\mathbf{X}_{GP},\mathbf{U})\) and finally optimized the \(L_{1}\)-regularized least-squares problem in (16) with \(\dot{\mathbf{X}}_{GP}\) and \(\mathbf{\Theta}(\mathbf{X}_{GP},\mathbf{U})\) using \(\lambda=0.1\) to identify the GPSINDy model. We note that we used polynomial terms only up to \(1^{\text{st}}\) order in \(\mathbf{\Theta}(\mathbf{X}_{GP},\mathbf{U})\), as each method failed to identify the true coefficients when \(3^{\text{rd}}\) order terms were included, even when using the deterministic states.
We conduct a quantitative comparison of the performance of SINDy, NNSINDy, and GPSINDy on data affected by various noise levels, as depicted in Figure 3. Notably, GPSINDy consistently outperforms the other methods at higher noise levels, underscoring its resilience against noisy measurements. The performance of NNSINDy and GPSINDy remains relatively consistent between the predator-prey and unicycle systems, but there is a marked degradation in the performance of SINDy for the unicycle system. As the noise levels increase, the efficacy of SINDy diminishes. However, the coefficients learned by all methods in Figure
Fig. 1: **GPSINDy outperforms baselines in learning model coefficients for the Predator-Prey model under noisy measurements.** This plot contrasts the coefficients learned by SINDy (blue), GPSINDy (orange), and NNSINDy (green) across varying noise levels for \(\sigma\), i.e. magnitude of standard deviation. The x-axis represents \(\sigma\) varying from \(0.050\) to \(0.25\). The y-axis quantifies the mean-squared error between the ground-truth coefficients (\(\mathbf{\Xi}_{\text{GT}}\)) and the learned coefficients (\(\mathbf{\Xi}_{\text{Learned}}\)). Lower overall error is better. Each experiment evaluates coefficients learned from noisy measurements of \(\mathbf{X}\) and \(\dot{\mathbf{X}}\) with trials repeated over 40 seeds for each \(\sigma\).
Fig. 2: **Trajectories based on GPSINDy-coefficients closely align with ground truth trajectory for the predator-prey model.** We qualitatively compare the trajectories generated using coefficients learned from SINDy (blue), NNSINDy (green) and GPSINDy (orange) against the ground truth (black) trajectory for the predator-prey model. The coefficients were learned using data with added noise of \(\mathbf{\epsilon}\sim\mathcal{N}\left(0,\sigma^{2}=0.05^{2}\right)\). The x-axis shows the time and y-axis shows the corresponding states of the predator-prey model. The top figure depicts the \(x_{1}\) trajectories, while the bottom depicts the \(x_{2}\) trajectories.
3 have relatively large error compared to the results shown in Figure 1, suggesting significant model mismatch with the true unicycle system dynamics. Nevertheless, this experiment demonstrates the ability of GPSINDy to learn more accurate model coefficients even when faced with intricate model dynamics, such as the nonholonomic unicycle system, while using noise-corrupted measurements.
### _JetRacer Hardware Demonstration_
We also tested our method on real hardware data collected on an NVIDIA JetRacer, a \(1/10\)-scale high-speed car. We actuated the car to drive in a figure-8 made up of two circles, \(3\mathrm{m}\) in diameter. The nominal time for each lap is \(5.5\mathrm{s}\), resulting in a nominal velocity of \(3.4\mathrm{m}\,\mathrm{s}^{-1}\). VICON sensors captured \(22.85\mathrm{s}\) of the system's motion at discrete timesteps of \(0.2\mathrm{s}\), and the control inputs \(\mathbf{U}\) delivered to the robotic system were saved at the same sampling rate. We define the state \(\mathbf{X}_{i}\) at time \(t_{i}\) to be the measured \(x_{1}\) and \(x_{2}\) position in \(\mathrm{m}\), forward velocity \(v\) of the car (with respect to its own frame) in \(\mathrm{m}\,\mathrm{s}^{-1}\), and heading angle \(\phi\) (with respect to a global frame) in \(\mathrm{rad}\). Each state measurement was then stacked as in (3) to gather \(\mathbf{X}\). Subsequently, we applied central finite differencing to numerically approximate \(\dot{\mathbf{X}}\).
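The central-differencing step can be sketched as follows (equivalent to `np.gradient(X, dt, axis=0)`; the one-sided differences at the boundaries are an assumption, as the boundary handling is not specified above):

```python
import numpy as np

def central_difference(X, dt):
    # Second-order central differences in the interior; first-order one-sided
    # differences at the first and last samples.
    Xdot = np.empty_like(X)
    Xdot[1:-1] = (X[2:] - X[:-2]) / (2.0 * dt)
    Xdot[0] = (X[1] - X[0]) / dt
    Xdot[-1] = (X[-1] - X[-2]) / dt
    return Xdot
```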
As in previous experiments, we split the dataset into a training and a testing set. We obtained \(\mathbf{X}_{GP}\) and \(\dot{\mathbf{X}}_{GP}\) by smoothing \(\mathbf{X}\) and \(\dot{\mathbf{X}}\) and computed \(\mathbf{\Theta}(\mathbf{X}_{GP},\mathbf{U})\). We then solved the \(L_{1}\)-regularized least-squares problem in (16) to obtain the GPSINDy dynamics model. To achieve the best model fit, we tuned \(\lambda\) individually for SINDy and GPSINDy via cross-validation. We started at \(\lambda=10^{-6}\) and incremented logarithmically until it reached \(1\). Then, we increased \(\lambda\) in increments of 10 until all of the coefficients were effectively set to 0. We propagated the dynamics for each \(\lambda\) and, at the end, selected the \(\lambda\) for each \(\dot{\mathbf{X}}\) that fit the data best.
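A hypothetical version of this sweep is sketched below (the grid mirrors the description above; for brevity the scoring function uses one-step library residuals on the held-out split, whereas the procedure above propagates full trajectories before choosing \(\lambda\)):

```python
import numpy as np

def sweep_lambda(Theta_tr, Xdot_tr, Theta_val, Xdot_val, fit):
    # 'fit' is any solver for (16) returning Xi, e.g. the LASSO sketch shown
    # earlier; lambda runs logarithmically from 1e-6 to 1, then in steps of 10.
    lams = np.concatenate([np.logspace(-6, 0, 25),
                           np.arange(10.0, 110.0, 10.0)])
    def score(lam):
        Xi = fit(Theta_tr, Xdot_tr, lam)
        return np.linalg.norm(Theta_val @ Xi - Xdot_val)
    return min(lams, key=score)
```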
In Figure 4, we compare the performance of the GPSINDy- and SINDy-learned dynamics models using the testing dataset from the JetRacer. Both GPSINDy and SINDy capably trace the figure-8 loop navigated by the car. However, the figure shows that GPSINDy, having learned more accurate model coefficients, yields a trajectory that aligns more closely with the ground truth. For the JetRacer run in Figure 4, we juxtapose the trajectories produced by GPSINDy and SINDy against the ground-truth trajectories. **The \(l_{2}\) error norm between the x and y coordinates for SINDy on the testing data is \(1.4\mathrm{m}^{2}\), while for GPSINDy it is reduced to \(0.23\mathrm{m}^{2}\)**. This outcome underscores GPSINDy's adeptness at modeling dynamics from noise-afflicted real-world data.
## V Conclusions & Future Work
In this paper, we proposed an approach to mitigate the issue of noisy data in sparse symbolic regression algorithms such as SINDy. We used Gaussian process regression to smooth noisy measurements and then used LASSO to achieve sparsity and improved model fit with the data over SINDy. We demonstrated our approach on a Lotka-Volterra system, on a simulated unicycle system, and on noisy data taken from hardware experiments using an NVIDIA JetRacer system. Our results show that Gaussian process smoothing significantly improves the task of nonlinear system identification for SINDy. Future work should augment the cost function of SINDy to minimize the log-likelihood while simultaneously learning the nonlinear terms using ADMM, provide a more thorough comparison with existing approaches, and conduct testing on different robotic systems, such as quadrupeds or quadcopters.
Fig. 4: **GPSINDy Trajectories Align Closely with Ground Truth for the Real JetRacer System.** The plot contrasts trajectories predicted from SINDy (blue) and GPSINDy (orange) with the ground truth (black) based on collected JetRacer data. The axes denote the JetRacer’s Cartesian coordinates. The plot demonstrates that the trajectories derived from GPSINDy-learned coefficients exhibit the closest match to the ground truth while SINDy’s trajectory diverges, reflecting a mismatch between the true JetRacer system dynamics and the model identified by SINDy.
Fig. 3: **GPSINDy outperforms baselines in learning model coefficients for simulated unicycle dynamics under noisy measurements.** This plot contrasts the coefficients learned by SINDy (blue), GPSINDy (orange), and NNSINDy (green) across varying noise levels. The x-axis represents the noise level \(\sigma\) varying from \(0.05\) to \(0.25\). The y-axis quantifies the mean-squared error between the ground-truth coefficients (\(\mathbf{\Xi}_{\text{GT}}\)) and the learned coefficients (\(\mathbf{\Xi}_{\text{Learned}}\)); lower overall error is better. Each experiment evaluates coefficients learned under noisy measurements of \(\mathbf{X}\) and \(\dot{\mathbf{X}}\), with trials repeated over 40 seeds for each noise level.
2309.16719 | From quantum electrodynamics to a geometric gauge theory of classical
electromagnetism | A relativistic version of the correspondence principle, a limit in which
classical electrodynamics may be derived from QED, has never been clear,
especially when including gravitational mass. Here we introduce a novel
classical field theory formulation of electromagnetism, and then show that it
approximates QED in the limit of a quantum state which corresponds to a
classical charged continua. Our formulation of electromagnetism features a
Lagrangian which is gauge invariant, includes a classical complex field from
which a divergenceless four-current may be derived, and reproduces all aspects
of the classical theory of charged massive continua without any quantum
effects. Taking a geometric approach, we identify the four-current as being in
the direction of extremal phase velocity of the classical field; the field
equations of motion determine this phase velocity as being equal to the mass,
which makes the rest density proportional to the squared modulus of the field. | Adam Marsh | 2023-09-14T17:23:14Z | http://arxiv.org/abs/2309.16719v1 | # From quantum electrodynamics to a geometric gauge theory of classical electromagnetism
###### Abstract
A relativistic version of the correspondence principle, a limit in which classical electrodynamics may be derived from QED, has never been clear, especially when including gravitational mass. Here we introduce a novel classical field theory formulation of electromagnetism, and then show that it approximates QED in the limit of a quantum state which corresponds to a classical charged continua. Our formulation of electromagnetism features a Lagrangian which is gauge invariant, includes a classical complex field from which a divergenceless four-current may be derived, and reproduces all aspects of the classical theory of charged massive continua without any quantum effects. Taking a geometric approach, we identify the four-current as being in the direction of extremal phase velocity of the classical field; the field equations of motion determine this phase velocity as being equal to the mass, which makes the rest density proportional to the squared modulus of the field.
###### Contents
* 1 Introduction
* 1.1 Motivation
* 1.2 Geometry
* 1.3 Overview
* 2 Geometric \(U(1)\) gauge theory
* 2.1 A geometric view of gauge theory
* 2.2 The gauge potential
* 2.3 Inner products
* 2.4 The spacetime gradient
* 2.5 Matter field rotation
* 2.6 Quasi-gauge electromagnetism
* 2.7 Klein-Gordon theory
* 3 Matter field electromagnetism
* 3.1 The Lagrangian
* 3.2 Matter field equations of motion
* 3.3 Gauge potential equations of motion
* 3.4 Geometry of the four-current
* 3.5 Metric equations of motion
* 3.6 Noether's theorem
* 4 From QED to classical electromagnetism
* 4.1 The QED Lagrangian
* 4.2 Plane wave solutions
* 4.3 Electron packets
* 4.4 Classical four-current configuration
* 4.5 Spinor component equations of motion
* 4.6 From the QED to the MFEM Lagrangian
* 5 Summary and discussion
* 5.1 Summary of results
* 5.2 The discrete four-current
* 5.3 Multiple matter fields
## 1 Introduction
### Motivation
The (quantized, minimally coupled) Dirac equation describes (relativistic, quantum mechanical) electrons (and positrons) interacting electromagnetically. The Dirac spinor matter field comprises four complex components smoothly defined at each point of spacetime; its values as operators provide a description of multiple particles via quantum electrodynamics (QED). The Dirac equation may be derived from the Dirac Lagrangian, which is invariant under \(U(1)\) gauge transformations.
Maxwell's equations describe (relativistic, classical) charged massive continua interacting electromagnetically. The four-current is a divergenceless four-vector field on spacetime whose direction is that of the continuum of particles at that point, and whose length is the number of particles per unit space-like volume orthogonal to that direction. Maxwell's equations are usually derived from a Lagrangian which includes this four-current, does not include a matter field, and is not invariant under \(U(1)\) gauge transformations.
While the classical electromagnetic gauge potential equation of motion may be obtained from QED by the stationary phase approximation, there is no way to obtain a classical matter field yielding a classical four-current using this approximation, since the scalars obtained must anti-commute (i.e. they are Grassmann numbers).
Our aim in this paper is to construct an alternative Lagrangian for classical charged massive continua interacting electromagnetically which (1) is based upon a classical matter field whose equations of motion yield a classical four-current, (2) is invariant under gauge transformations,
and (3) may be obtained as a limit of quantum electrodynamics. An additional goal is a detailed presentation of this alternative theory in terms of both real and complex geometry.
### Geometry
The evolution of gauge theory was from its beginning geometric and tied to electromagnetism. Weyl [6] first introduced the concept of gauge invariance as local spacetime scale invariance (thus the name) in an attempt to unify general relativity with electromagnetism, later repurposing it more successfully [7] as an extension of the global phase invariance of the wave function in Dirac theory to local phase invariance, which yields Maxwell's equations. The resulting "gauge principle," formulated as a procedure for extending global symmetries to local symmetries, was then generalized to higher dimensional complex "rotations" by Yang and Mills [8].
This set of ideas was eventually given an even more geometric formulation in the language of fiber bundles (see e.g. [1]), which also accounts for global considerations. A scalar matter field (a generic term to encompass both scalar and operator field values) is defined as a section of a complex vector bundle over a (possibly curved) spacetime manifold; in analogy with tangent vector fields, the gauge potential is a connection defining parallel transport of these vectors, the field strength is the curvature of this connection, and a gauge transformation is a change of the frame defining the matter field components.
In the viewpoint adopted here, this geometric formulation positions the matter field as the primary quantity, with the gauge potential defining its parallel transport. Classical electromagnetism however, despite being the prototypical gauge theory, is typically described as a gauge theory with the four-current inserted by hand as an external quantity, while the notions of a gauge potential and field strength are preserved even in the absence of an associated matter field. Moreover, the Lagrangian is not gauge invariant, which from a geometric point of view is nonsensical. We will therefore refer to this formulation absent a matter field as "quasi-gauge electromagnetism."
### Overview
We would like to formulate the classical theory of charged massive continua as a gauge theory including a matter field. This matter field should determine a divergenceless four-current; but unlike in quantum theory, its equations of motion should not result in solutions with quantum characteristics. In particular, we should have no need to interpret the "on-shell" matter fields which satisfy these equations in terms of particles or probabilities; instead, they should determine a general classical four-current.
In the following we provide a geometric description of such a theory, which is based upon a gauge-invariant Lagrangian which includes a matter field, and which, like quasi-gauge electromagnetism, reproduces the full classical theory of charged massive continua, without any quantum characteristics. In addition, QED is shown to simplify to this theory under certain conditions. We will refer to this theory as "matter field electromagnetism."
In Section 2 we summarize our geometric view of gauge theory and describe various geometrical quantities associated with our \(U(1)\) gauge theory from both the real and complex viewpoints, taking pains to construct a consistent picture in the real case. Section 3 defines our Lagrangian for matter field electromagnetism, in terms of both geometry and complex algebra, and derives the equations of motion along with the results of Noether's theorem. Section 4 constructs a quantum state corresponding to a classical current and shows that, as desired, QED
in the limit of such a state results in the matter field and equations of motion from matter field electromagnetism.
Throughout the paper we will use natural units, where the constants \(c=G=\hbar=1\), and the mostly pluses spacetime metric signature, where in an orthonormal frame the metric is \(g_{\mu\nu}=\mathrm{diag}\left(-1,1,1,1\right)\).
## 2 Geometric \(U(1)\) gauge theory
### A geometric view of gauge theory
A common way to describe a scalar gauge theory is in terms of a multiplet of (usually complex) matter fields \(\Phi^{a}\) and a Lagrangian which is written in terms of these fields along with their coordinate derivatives \(\partial_{\mu}\Phi^{a}\). The Lagrangian is then noted to be invariant under a global gauge transformation, a complex matrix transformation of the \(\Phi^{a}\) treated as vector components, where the matrix is an element of a matrix group, usually \(U(1)\) or \(SU(n)\) (hereafter referred to as simply \(SU(n)\)). The principle of minimal coupling then prescribes the introduction of a gauge potential to replace coordinate derivatives with gauge covariant derivatives. This promotes the global gauge invariance to a local gauge invariance, wherein the Lagrangian is invariant under multiplication by the matter field at each point by a different but smoothly varying matrix in \(SU(n)\).
The geometric view of scalar gauge theory we take here (detailed in [4]) is based upon a vector bundle \(\left(E,M,X\right)\) over spacetime \(M\), with the fiber \(X\cong\mathbb{C}^{n}\) called the internal space. A matter field is a section of \(E\), or equivalently an \(X\)-valued \(0\)-form (function) on \(M\). A choice of gauge in a region of \(M\) is a frame, a choice of orthonormal basis for the fiber \(X_{p}\) at each point \(p\) in the region, which allows us to express the matter field as a gauge-dependent \(\mathbb{C}^{n}\)-valued \(0\)-form we denote \(\vec{\Phi}\), with components \(\Phi^{a}\) smoothly defined at each point. A gauge transformation is a change of frame, which in a choice of gauge corresponds to a matrix element smoothly defined at each point which we denote \(\check{\gamma}^{-1}\in SU\left(n\right)\); the check decoration indicates a matrix value, and the element is defined as an inverse matrix applied to the basis so that the components \(\Phi^{a}\) at each point in the region transform as \(\vec{\Phi}\rightarrow\check{\gamma}\vec{\Phi}\). A gauge transformation is thus a "rotation" of the internal space basis at each point, or more precisely a linear transformation which leaves the complex inner product on the fiber \(X\) invariant, along with the complex volume element \(\mathsf{e}_{1}\wedge\cdots\wedge\mathsf{e}_{n}\) for \(n>1\); here \(\mathsf{e}_{a}\) is a complex orthonormal basis of \(X\), and we use a sans serif font for internal space basis vectors to distinguish them from spacetime tangent space basis vectors.
In analogy with the tangent bundle, we introduce a connection defining parallel transport, an \(su(n)\)-valued \(1\)-form we denote
\[\check{\Gamma}\equiv-iq\check{A}\equiv-iqA^{a}{}_{b\mu}, \tag{2.1}\]
where the gauge potential \(A^{a}{}_{b\mu}\) is thus a hermitian matrix-valued \(1\)-form with a Greek spacetime index. Geometrically, \(-iqA^{a}{}_{b\mu}\) is the \(a^{\mathrm{th}}\) component of the difference between the frame \(\mathsf{e}_{b}\) and its parallel transport in the direction \(e_{\mu}\).
The gauge covariant derivative, which is geometrically the infinitesimal difference between \(\vec{\Phi}\) and its parallel transport, can then be written in terms of forms and components as
\[\begin{split}\mathrm{D}\vec{\Phi}&=\mathrm{d}\vec{ \Phi}+\check{\Gamma}\vec{\Phi},\\ \mathrm{D}_{\mu}\Phi^{a}&=\partial_{\mu}\Phi^{a}-iqA ^{a}{}_{b\mu}\Phi^{b}.\end{split} \tag{2.2}\]
The connection defines an \(su(n)\)-valued curvature 2-form written in terms of the field strength \(\check{F}\) as \(-iqF^{a}{}_{b\mu\nu}\), where
\[\begin{split}\check{F}&\equiv\mathrm{d}\check{A}-iq\check{A}\wedge\check{A},\\ F^{a}{}_{b\mu\nu}&=\partial_{\mu}A^{a}{}_{b\nu}-\partial_{\nu}A^{a}{}_{b\mu}-iq\left(A^{a}{}_{c\mu}A^{c}{}_{b\nu}-A^{a}{}_{c\nu}A^{c}{}_{b\mu}\right).\end{split} \tag{2.3}\]
Under a gauge transformation \(\check{\gamma}^{-1}\), these quantities transform as follows:
\[\begin{split}\vec{\Phi}&\rightarrow\check{\gamma} \vec{\Phi}\\ \check{F}&\rightarrow\check{\gamma}\check{F}\check{ \gamma}^{-1}\\ \check{A}&\rightarrow\check{\gamma}\check{A}\check{ \gamma}^{-1}+\frac{i}{q}\check{\gamma}\mathrm{d}\check{\gamma}^{-1}\end{split} \tag{2.4}\]
In the case of \(U(1)\) gauge theory, the matter field is complex-valued and denoted \(\Phi\), while the gauge potential \(A_{\mu}\) is real-valued. A gauge transformation is then usually written \(\gamma^{-1}\equiv e^{-iq\Lambda}\), under which the above relations simplify to:
\[\begin{split}\mathrm{D}_{\mu}\Phi&=\partial_{\mu} \Phi-iqA_{\mu}\Phi\\ F_{\mu\nu}&=\partial_{\mu}A_{\nu}-\partial_{\nu}A _{\mu}\\ \Phi&\to e^{iq\Lambda}\Phi\\ F_{\mu\nu}&\to F_{\mu\nu}\\ A_{\mu}&\to A_{\mu}+\partial_{\mu}\Lambda\end{split} \tag{2.5}\]
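As a quick check not spelled out above, these relations make the covariance of the derivative explicit: under \(\Phi\to e^{iq\Lambda}\Phi\) and \(A_{\mu}\to A_{\mu}+\partial_{\mu}\Lambda\),

\[\begin{split}\mathrm{D}_{\mu}\Phi&\to\partial_{\mu}\big(e^{iq\Lambda}\Phi\big)-iq\left(A_{\mu}+\partial_{\mu}\Lambda\right)e^{iq\Lambda}\Phi\\ &=e^{iq\Lambda}\left(\partial_{\mu}\Phi+iq\partial_{\mu}\Lambda\,\Phi-iqA_{\mu}\Phi-iq\partial_{\mu}\Lambda\,\Phi\right)\\ &=e^{iq\Lambda}\,\mathrm{D}_{\mu}\Phi,\end{split}\]

so that real scalars built from the covariant derivative and its conjugate, such as \(\left(\mathrm{D}_{\mu}\Phi\right)^{*}\mathrm{D}^{\mu}\Phi\), are gauge invariant; the invariance of \(F_{\mu\nu}\) is likewise immediate, since the symmetric \(\partial_{\mu}\partial_{\nu}\Lambda\) terms cancel in the antisymmetrized derivative.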
Figure 2.1: A gauge theory is a bundle over spacetime \(M\) with fiber a vector space \(X\), and matter field \(\vec{\Phi}\) a section of this bundle. If we choose a gauge, an orthonormal basis for each \(X_{p}\), the matter field can be written in terms of components \(\Phi^{a}\), with a gauge transformation being a smoothly defined change of orthonormal basis for each \(X_{p}\). The gauge covariant derivative in a given gauge can then be written \(\mathrm{D}_{\mu}\Phi^{a}=\partial_{\mu}\Phi^{a}+\Gamma^{a}{}_{b\mu}\Phi^{b}=\partial_{\mu}\Phi^{a}-iqA^{a}{}_{b\mu}\Phi^{b}\). Geometrically, \(\varepsilon\mathrm{D}_{\mu}\Phi^{a}=\vec{\Phi}\big|_{p+\varepsilon e_{\mu}}-\left\|{}_{\varepsilon e_{\mu}}\vec{\Phi}\right|_{p}\) is the difference between the matter field \(\vec{\Phi}\big|_{p+\varepsilon e_{\mu}}\) at a point infinitesimally displaced in the direction \(e_{\mu}\) from \(p\) and its parallel transport from \(p\), which we denote \(\left\|{}_{\varepsilon e_{\mu}}\vec{\Phi}\right|_{p}\).
### The gauge potential
In \(U(1)\) gauge theory, the matter field value at each point in spacetime is a complex number \(\Phi\in\mathbb{C}\), which from our geometric viewpoint is the single complex component of an intrinsic complex vector in a given gauge (choice of unit length complex vector). A gauge transformation (new choice of unit length complex vector) \(\Phi\to e^{iq\Lambda}\Phi\), or infinitesimally \(\Phi\to\Phi+iq\Lambda\Phi\), changes the complex component, but does not change the intrinsic complex vector. Note that a choice of unit length complex vector is also a choice of volume element, which gauge transformations are only allowed to alter in one complex dimension; i.e. in one dimension we consider \(U(1)\) instead of \(SU(1)\).
Another unique feature of \(U(1)\) gauge theory is that \(U(1)\cong SO(2)\), the complete group of rotations in the decomplexified space, instead of a subgroup \(SU(n)\subset SO(2n)\) of these rotations. The matter field at each point may therefore be viewed as a real vector \(\vec{\Phi}\in\mathbb{R}^{2}\), with a gauge transformation a new choice of real orthonormal basis for the internal space which leaves \(\vec{\Phi}\) unchanged but transforms its components \(\Phi^{a}\). We will adopt this view as our primary one, and will take it quite literally; but we will utilize the complex notation in parallel with the vector notation, since electromagnetism is almost universally expressed this way, and the definition of the gauge potential \(A_{\mu}\) is dependent upon the complex viewpoint.
We associate the rotation of a real vector \(\vec{\Phi}\in\mathbb{R}^{2}\) by \(\theta\) radians with the complex multiplication \(e^{i\theta}\Phi\) in \(\mathbb{C}\). This rotation is usually depicted as being in the counterclockwise direction, since the real axis is usually depicted as the horizontal axis. Therefore, going forward we will use the word "rotation" to mean "counterclockwise rotation in this depiction" or "rotation from the positive real towards the positive imaginary axis."
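As a quick numeric illustration of this identification (our own, with arbitrary sample values):

```python
# Multiplication by e^{i theta} in C matches the standard counterclockwise
# rotation matrix acting on the decomplexified components (Re z, Im z).
import numpy as np

theta, z = 0.7, 1.3 - 0.4j
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

w = np.exp(1j*theta)*z                      # rotate as a complex number
v = R @ np.array([z.real, z.imag])          # rotate as a real vector
assert np.allclose([w.real, w.imag], v)
```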
Now, \(-iqA_{\mu}\) is the imaginary number which when multiplied by a unit basis vector gives the infinitesimal difference between the basis vector at a displaced point and its parallel transport. Since this displacement is infinitesimal and rotates a unit vector, the rotation is by \(-qA_{\mu}\) radians, so that we can view \(q\) as a conversion factor between units, i.e. a rotational multiplier specifying internal space radians per unit spacetime length for unit \(A_{\mu}\). We may then write the (counterclockwise instantaneous) angular velocity (relative to parallel transport) of the internal space frame (gauge) in the \(e_{\mu}\) direction as
\[A_{\mu}^{\circlearrowleft}\equiv-qA_{\mu} \tag{2.6}\]
in radians per coordinate length; a positive gauge potential value then corresponds to a clockwise frame rotation if \(q\) is positive.
### Inner products
The frame rotations of the previous section lead us to define the real inner product on the internal space \(\mathbb{R}^{2}\cong\mathbb{C}\) as the real part of the complex inner product, \(\left\langle\vec{\Phi},\vec{\Psi}\right\rangle_{\mathbb{R}}=\operatorname{Re}\left\langle\Phi,\Psi\right\rangle_{\mathbb{C}}\); in particular, the squared length of a matter field vector may then be written
\[\left\langle\vec{\Phi},\vec{\Phi}\right\rangle_{\mathbb{R}} =\left\|\vec{\Phi}\right\|^{2}=\Phi^{a}\Phi_{a} \tag{2.7}\] \[=\left\langle\Phi,\Phi\right\rangle_{\mathbb{C}} =\Phi^{*}\Phi=\left|\Phi\right|^{2},\]
where \(\left|\Phi\right|\) is the modulus and \(\Phi^{*}\) is the complex conjugate of the complex number \(\Phi\). For the field strength, we have
\[\left\langle F,F\right\rangle =\frac{1}{2}F_{\mu\nu}F^{\mu\nu} \tag{2.8}\] \[=\frac{1}{2}\left(\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu} \right)\left(\partial^{\mu}A^{\nu}-\partial^{\nu}A^{\mu}\right)\] \[=\partial_{\mu}A_{\nu}\partial^{\mu}A^{\nu}-\partial_{\mu}A_{\nu }\partial^{\nu}A^{\mu},\]
where we recall the \(k\)-form inner product relation \(\left\langle\varphi,\psi\right\rangle_{\text{form}}=\frac{1}{k!}\varphi_{\mu_ {1}\dots\mu_{k}}\psi^{\mu_{1}\dots\mu_{k}}\) and in the last line we swap dummy indices and combine terms.
Figure 2.2: In a given choice of gauge, the \(so(2)\)-valued connection \(\check{\Gamma}\) is a matrix which applied to any frame vector \(\vec{\mathfrak{e}}_{a}\) yields the difference between the frame vector \(\left.\vec{\mathfrak{e}}_{a}\right|_{p+\varepsilon e_{\mu}}\) at the infinitesimally displaced point and the parallel transported frame vector \(\big\|_{\varepsilon e_{\mu}}\vec{\mathfrak{e}}_{a}\). In the complex view, the connection is \(-iqA\), a complex number which when multiplied by any frame vector yields the difference between the frame vector at the new point and the parallel transported frame vector, again as a complex number. For example, the internal space frame vector \(\vec{\mathfrak{e}}_{2}\) corresponds to \(i\), so that the difference is \(-iqAi=qA\), which is a positive real number if \(q\) and \(A\) are. \(q\) may therefore be viewed as a conversion factor which in the chosen units makes \(qA_{\mu}\) the clockwise angular velocity of the frame (relative to parallel transport) in the \(e_{\mu}\) direction. Note that if \(\mu\neq 0\) this “velocity” will be in radians per unit distance instead of radians per unit time.
We can write the "squared length" of the gauge covariant derivative as
\[\begin{split}\left\langle\mathrm{D}_{\mu}\vec{\Phi},\mathrm{D}_{\mu} \vec{\Phi}\right\rangle&\equiv\left(\mathrm{D}^{\mu}\Phi\right)^{* }\mathrm{D}_{\mu}\Phi\\ &=\left(\partial^{\mu}\Phi^{*}+iqA^{\mu}\Phi^{*}\right)\left( \partial_{\mu}\Phi-iqA_{\mu}\Phi\right)\\ &=\partial^{\mu}\Phi^{*}\partial_{\mu}\Phi+iqA^{\mu}\left(\Phi^{* }\partial_{\mu}\Phi-\Phi\partial_{\mu}\Phi^{*}\right)+q^{2}\left|\Phi\right|^{ 2}A_{\mu}A^{\mu}.\end{split} \tag{2.9}\]
If we define
\[\begin{split}\Phi&\equiv\left|\Phi\right|e^{i\varphi} \\ \Rightarrow\mathrm{D}_{\mu}\Phi&=\frac{\Phi}{\left| \Phi\right|}\mathrm{D}_{\mu}\left|\Phi\right|+i\Phi\mathrm{D}_{\mu}\varphi\\ &=\partial_{\mu}\Phi-iqA_{\mu}\Phi\\ &=\frac{\Phi}{\left|\Phi\right|}\partial_{\mu}\left|\Phi\right|+i \Phi\partial_{\mu}\varphi-iqA_{\mu}\Phi,\end{split} \tag{2.10}\]
we see that geometrically
\[\begin{split}\mathrm{D}_{\mu}\left|\Phi\right|&= \partial_{\mu}\left|\Phi\right|,\\ \mathrm{D}_{\mu}\varphi&=\partial_{\mu}\varphi-qA_{ \mu}\end{split} \tag{2.11}\]
are the difference in length and angle between the matter field and its parallel transport per unit distance in the direction \(e_{\mu}\). In terms of these quantities, we have
\[\begin{split}\left\langle\mathrm{D}_{\mu}\Phi,\mathrm{D}_{\mu} \Phi\right\rangle&=\left(\frac{\Phi^{*}}{\left|\Phi\right|} \mathrm{D}^{\mu}\left|\Phi\right|-i\Phi^{*}\mathrm{D}^{\mu}\varphi\right) \left(\frac{\Phi}{\left|\Phi\right|}\mathrm{D}_{\mu}\left|\Phi\right|+i\Phi \mathrm{D}_{\mu}\varphi\right)\\ &=\left\langle\mathrm{D}_{\mu}\left|\Phi\right|,\mathrm{D}_{\mu} \left|\Phi\right|\right\rangle+\left|\Phi\right|^{2}\left\langle\mathrm{D}_{ \mu}\varphi,\mathrm{D}_{\mu}\varphi\right\rangle.\end{split} \tag{2.12}\]
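The decomposition (2.12) is easily checked symbolically. In the following sketch (our own; the point values of \(\left|\Phi\right|\), \(\varphi\), their partial derivatives, and \(A_{\mu}\) for a single index \(\mu\) are treated as independent real symbols), the squared covariant derivative splits into the stated length and angle pieces:

```python
# Check of (2.9)-(2.12) at a point, one index mu: with Phi = r e^{i phi},
# |D_mu Phi|^2 = (dr)^2 + r^2 (dphi - q A)^2.
import sympy as sp

r, dr, ph, dph, q, A = sp.symbols('r dr phi dphi q A', real=True)

# D_mu Phi = d_mu Phi - i q A_mu Phi = (dr + i r (dphi - q A)) e^{i phi}
DPhi = (dr + sp.I*r*(dph - q*A))*sp.exp(sp.I*ph)

lhs = sp.simplify(sp.expand(DPhi*sp.conjugate(DPhi)))   # |D_mu Phi|^2
rhs = dr**2 + r**2*(dph - q*A)**2
assert sp.simplify(lhs - rhs) == 0
```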
### The spacetime gradient
\(\left\langle\mathrm{D}_{\mu}\varphi,\mathrm{D}_{\mu}\varphi\right\rangle\) is the squared magnitude of the four-vector \(\mathrm{D}^{\mu}\varphi\), which is a "spacetime gradient." Since we find no ready reference, we characterize the direction of this four-vector here. For any non-null spacetime four-vector \(V^{\mu}\), we may choose an orthonormal basis for the tangent space for which \(V=Le_{\parallel}\), with \(\left\langle e_{\parallel},e_{\parallel}\right\rangle=\pm 1\) and any other basis four-vector \(e_{\perp}\) orthogonal to \(V\). For any unit four-vector \(\left\langle B,B\right\rangle=\pm 1\) which is a boost of \(e_{\parallel}\) in the same part of the light cone, we have
\[\begin{split} B&=e_{\parallel}+be_{\parallel}+ce_{ \perp}\\ \Rightarrow V_{\mu}B^{\mu}&=\pm L\left(1+b\right)\end{split} \tag{2.13}\]
for positive numbers \(b\) and \(c\) (see Figure 2.3); \(e_{\parallel}\) is therefore the direction \(U\) for which the absolute value \(\left|V_{\mu}U^{\mu}\right|\) is at a minimum under boosts. Similarly, for any unit four-vector \(\left\langle R,R\right\rangle=\pm 1\) which is a rotation of \(e_{\parallel}\) in the same part of the light cone (excluding \(-e_{\parallel}\)), we have
\[\begin{split} R&=e_{\parallel}-re_{\parallel}+se_{ \perp}\\ \Rightarrow V_{\mu}R^{\mu}&=\pm L\left(1-r\right) \end{split} \tag{2.14}\]
for positive numbers \(r<2\) and \(s\); \(e_{\parallel}\) is therefore the direction \(U\) for which the absolute value \(\left|V_{\mu}U^{\mu}\right|\) is at a maximum under rotations.
With these results in hand, we may geometrically describe \(\mathrm{D}^{\mu}\varphi\) as pointing in the direction \(U^{\mu}\) in which \(|\mathrm{D}_{U}\varphi|=|U^{\mu}\mathrm{D}_{\mu}\varphi|\) is smallest under boosts and largest under rotations (the latter corresponding to the usual gradient description as the "direction of steepest ascent"). In the interest of brevity we may say that \(\mathrm{D}^{\mu}\varphi\) points in the direction \(U\) in which \(\mathrm{D}_{U}\varphi\) is extremal; in particular, \(\mathrm{D}_{U^{\perp}}\varphi=0\) for any \(U^{\perp}\) orthogonal to \(U\). We may then say that \(|\langle\mathrm{D}_{\mu}\varphi,\mathrm{D}_{\mu}\varphi\rangle|\) is the squared angular difference between the matter field and its parallel transport per unit distance in the direction in which this difference is extremal.
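A numeric illustration of this extremal characterization (our own construction, mostly pluses signature, using a space-like example vector):

```python
# For space-like V = 2 e_par, |V.U| grows under boosts of e_par and shrinks
# under rotations, so e_par is the extremal direction described above.
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
dot = lambda a, b: a @ eta @ b

V = np.array([0.0, 2.0, 0.0, 0.0])
e_par = np.array([0.0, 1.0, 0.0, 0.0])

for w in np.linspace(0.1, 2.0, 5):       # boosts of e_par in the t-x plane
    B = np.array([np.sinh(w), np.cosh(w), 0.0, 0.0])
    assert abs(dot(V, B)) > abs(dot(V, e_par))

for th in np.linspace(0.1, 1.5, 5):      # rotations of e_par in the x-y plane
    R = np.array([0.0, np.cos(th), np.sin(th), 0.0])
    assert abs(dot(V, R)) < abs(dot(V, e_par))
```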
Note that \(\mathrm{D}_{\mu}\Phi\) has values which are complex and therefore not ordered, disallowing the above interpretation, and there will in general be different directions in which the change in the modulus and phase are extremal.
### Matter field rotation
The imaginary part of the complex inner product is just the real inner product with one of the arguments rotated by \(\pi/2\), i.e.
\[\begin{split}\mathrm{Im}\left\langle\Psi,\Phi\right\rangle_{\mathbb{C}}&=\mathrm{Re}\left(-i\left\langle\Psi,\Phi\right\rangle_{\mathbb{C}}\right)\\&=\left\langle\vec{\Psi},-\overline{(i\Phi)}\right\rangle_{\mathbb{R}}\\&=\left\langle\overline{(i\Psi)},\vec{\Phi}\right\rangle_{\mathbb{R}}.\end{split}\]

Here \(\overline{(i\Phi)}\) denotes the real vector in \(\mathbb{R}^{2}\cong\mathbb{C}\) corresponding to \(i\Phi\).
Figure 2.3: For a time-like vector \(V_{\mathrm{t}}\) and a space-like vector \(V_{\mathrm{s}}\), the unit vector \(U\) parallel to \(V\) is the unit vector for which \(|V_{\mu}U^{\mu}|\) is a minimum compared to any boosted unit vector \(B\), and is a maximum compared to any rotated unit vector \(R\).
Since the real inner product is geometrically a projection, we may express the lengths of the components of \(\vec{\Psi}\) parallel and perpendicular to \(\vec{\Phi}\) as
\[\begin{split}\vec{\Psi}\|^{\vec{\Phi}}&=\frac{\left< \vec{\Psi},\vec{\Phi}\right>_{\mathbb{R}}}{\left\|\vec{\Phi}\right\|}=\frac{ \operatorname{Re}\left<\Psi,\Phi\right>_{\mathbb{C}}}{\left|\Phi\right|}\\ &=\frac{\left(\Phi^{*}\Psi+\Phi\Psi^{*}\right)}{2\left|\Phi \right|},\\ \vec{\Psi}^{\perp\vec{\Phi}}&=\frac{\left<\vec{\Psi},\overline{\left(i\Phi\right)}\right>_{\mathbb{R}}}{\left\|\vec{\Phi}\right\|} =\frac{-\operatorname{Im}\left<\Psi,\Phi\right>_{\mathbb{C}}}{\left|\Phi\right|} \\ &=\frac{-i\left(\Phi^{*}\Psi-\Phi\Psi^{*}\right)}{2\left|\Phi \right|}.\end{split} \tag{2.15}\]
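A numeric spot check of these projections (our own, with arbitrary sample values) against elementary planar geometry:

```python
# The complex projection formulas (2.15) versus ordinary components of Psi
# along the unit vector u parallel to Phi and its normal n.
import numpy as np

Phi, Psi = 1.0 + 2.0j, -0.7 + 0.4j
par = (np.conj(Phi)*Psi + Phi*np.conj(Psi)) / (2*abs(Phi))
perp = -1j*(np.conj(Phi)*Psi - Phi*np.conj(Psi)) / (2*abs(Phi))

u = np.array([Phi.real, Phi.imag]) / abs(Phi)   # unit vector along Phi
n = np.array([-u[1], u[0]])                     # u rotated by +pi/2
v = np.array([Psi.real, Psi.imag])

assert np.isclose(par.real, v @ u) and np.isclose(par.imag, 0)
assert np.isclose(perp.real, v @ n) and np.isclose(perp.imag, 0)
```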
In particular, defining
\[\begin{split}\operatorname{D}_{\mu}^{\parallel}\vec{\Phi}& \equiv\left(\operatorname{D}_{\mu}\vec{\Phi}\right)^{\parallel \vec{\Phi}}\\ \operatorname{D}_{\mu}^{\perp}\vec{\Phi}&\equiv \left(\operatorname{D}_{\mu}\vec{\Phi}\right)^{\perp\vec{\Phi}},\end{split}\]
we have
\[\begin{split} 2\left\|\vec{\Phi}\right\|\operatorname{D}_{\mu}^{ \parallel}\vec{\Phi}&=2\left<\operatorname{D}_{\mu}\vec{\Phi}, \vec{\Phi}\right>_{\mathbb{R}}\\ &=2\operatorname{Re}\left<\operatorname{D}_{\mu}\Phi,\Phi\right> _{\mathbb{C}}\\ &=\Phi^{*}\operatorname{D}_{\mu}\Phi+\Phi\left(\operatorname{D}_{ \mu}\Phi\right)^{*}\\ &=\Phi^{*}\partial_{\mu}\Phi+\Phi\partial_{\mu}\Phi^{*}\\ &=\partial_{\mu}\left|\Phi\right|^{2},\\ 2\left\|\vec{\Phi}\right\|\operatorname{D}_{\mu}^{\perp}\vec{\Phi}& =2\left<\operatorname{D}_{\mu}\vec{\Phi},\overline{\left(i\Phi \right)}\right>_{\mathbb{R}}\\ &=2\operatorname{Im}\left<\Phi,\operatorname{D}_{\mu}\Phi\right> _{\mathbb{C}}\\ &=-i\left(\Phi^{*}\operatorname{D}_{\mu}\Phi-\Phi\left( \operatorname{D}_{\mu}\Phi\right)^{*}\right)\\ &=-i\left(\Phi^{*}\partial_{\mu}\Phi-\Phi\partial_{\mu}\Phi^{*}- 2iq\left|\Phi\right|^{2}A_{\mu}\right).\end{split} \tag{2.16}\]
As expected, since it measures the change in the length of \(\vec{\Phi}\), we can see that \(\operatorname{D}_{\mu}^{\parallel}\vec{\Phi}\) is independent of \(A_{\mu}\), which affects only rotations; it may also be written
\[\begin{split}\operatorname{D}_{\mu}^{\parallel}\vec{\Phi}& =\partial_{\mu}\left|\Phi\right|\\ &=\operatorname{D}_{\mu}\left|\Phi\right|.\end{split} \tag{2.17}\]
Now, \(\mathrm{D}_{\mu}^{\perp}\vec{\Phi}\) is the length of the component perpendicular to \(\vec{\Phi}\) of the difference between \(\vec{\Phi}\) and its parallel transport in the \(e_{\mu}\) direction, i.e. it is the distance by which \(\vec{\Phi}\) moves due to rotation, ignoring the change in its length. Since the rotation is infinitesimal, the sine is equal to the angle in radians; hence the angular velocity of \(\vec{\Phi}\) relative to parallel transport per unit distance in the \(e_{\mu}\) direction is
\[\begin{split}\mathrm{D}_{\mu}^{\circlearrowleft}\vec{\Phi}&\equiv\frac{\mathrm{D}_{\mu}^{\perp}\vec{\Phi}}{\left\|\vec{\Phi}\right\|}\\&=\frac{\mathrm{Im}\left\langle\Phi,\mathrm{D}_{\mu}\Phi\right\rangle_{\mathbb{C}}}{\left|\Phi\right|^{2}}\\&=\frac{\mathrm{Im}\left\langle\Phi,\partial_{\mu}\Phi\right\rangle_{\mathbb{C}}}{\left|\Phi\right|^{2}}-qA_{\mu}\\&=\frac{\partial_{\mu}^{\perp}\vec{\Phi}}{\left\|\vec{\Phi}\right\|}+A_{\mu}^{\circlearrowleft}\\&=\partial_{\mu}^{\circlearrowleft}\vec{\Phi}+A_{\mu}^{\circlearrowleft}\end{split} \tag{2.18}\]
in radians per coordinate length. Note that as expected, this is the counterclockwise angular velocity of the components in the chosen gauge plus the counterclockwise angular velocity of the coordinate axes relative to parallel transport. We can also see this by using (2.10), whereby
the third line above yields
\[\mathrm{D}_{\mu}^{\circlearrowleft}\vec{\Phi}=\partial_{\mu}\varphi+A_{\mu}^{\circlearrowleft}=\mathrm{D}_{\mu}\varphi. \tag{2.19}\]

Writing the perpendicular part of the gauge covariant derivative in terms of components, we also have

\[\left\langle\mathrm{D}_{\mu}^{\perp}\vec{\Phi},\mathrm{D}_{\mu}^{\perp}\vec{\Phi}\right\rangle=\mathrm{D}_{\mu}\Phi^{a}\mathrm{D}^{\mu}\Phi_{a}-\left(\Phi_{c}\Phi^{c}\right)^{-1}\Phi^{a}\left(\mathrm{D}_{\mu}\Phi_{a}\right)\Phi^{b}\left(\mathrm{D}^{\mu}\Phi_{b}\right). \tag{2.20}\]

Finally, since the area of an infinitesimal circular sector is half its squared radius times its angle, we may define

\[\mathrm{D}_{\mu}^{\circ}\vec{\Phi}\equiv\frac{1}{2}\left\|\vec{\Phi}\right\|^{2}\mathrm{D}_{\mu}^{\circlearrowleft}\vec{\Phi}, \tag{2.21}\]

the internal space area swept out by \(\vec{\Phi}\) relative to parallel transport per unit distance in the direction \(e_{\mu}\).
### Quasi-gauge electromagnetism
Classical electromagnetism, excluding mass and gravity, is usually defined (using natural units and the mostly pluses metric signature in flat spacetime) as a \(U(1)\) gauge theory with no matter field and Lagrangian
\[L_{\widehat{\mathrm{EM}}}\equiv A_{\mu}J_{q}^{\mu}-\frac{1}{4}F_{\mu\nu}F^{\mu \nu}, \tag{2.22}\]
where
\[J_{q}^{\mu}\equiv qJ^{\mu}\equiv q\rho_{0}U^{\mu} \tag{2.23}\]
is the electromagnetic four-current, which may be defined in terms of the charge \(q\) and the matter four-current \(J\), which in turn may be defined in terms of the matter rest density \(\rho_{0}\) and the unit time-like four-vector \(U\) in the direction of the four-current. The charge \(q\) may be viewed as charge per unit particle number (theoretically fractional for a continuum), in which case "particle number" replaces "matter" above.
The equations of motion (EOM) which result from varying the gauge potential are (see Section 3.3, or for a treatment in terms of forms see Section 4.1 in [3]) Maxwell's equations
\[J_{q}^{\nu} =\nabla_{\mu}F^{\nu\mu} \tag{2.24}\] \[\Rightarrow\nabla_{\nu}J_{q}^{\nu} =\nabla_{\nu}\nabla_{\mu}F^{\nu\mu}=0, \tag{2.25}\]
where \(\nabla\) is the Levi-Civita covariant derivative, so that (2.25) implies \(J_{q}\) is divergenceless. These EOM are used to excuse the fact that the Lagrangian is not gauge invariant (and is therefore ill-defined from the geometric point of view), since under a gauge transformation \(e^{-iq\Lambda}\) we have
\[A_{\mu} \to A_{\mu}+\partial_{\mu}\Lambda \tag{2.26}\] \[\Rightarrow L_{\widehat{\mathrm{EM}}} \to L_{\widehat{\mathrm{EM}}}-J_{q}^{\mu}\partial_{\mu}\Lambda\] \[=L_{\widehat{\mathrm{EM}}}-\nabla_{\mu}\left(J_{q}^{\mu}\Lambda \right)+\Lambda\nabla_{\mu}J_{q}^{\mu},\]
where the first extra term is a divergence and therefore does not change the equations of motion, while the second vanishes via the EOM. Varying the metric (see Section 3.5) identifies the Hilbert stress energy momentum (SEM) tensor as
\[T^{\mu\nu}_{\text{EM}}\equiv F^{\mu\lambda}F^{\nu}{}_{\lambda}-\frac{1}{4}F^{ \lambda\sigma}F_{\lambda\sigma}g^{\mu\nu}. \tag{2.27}\]
Following e.g. [2], we may include gravity and associate a mass \(m\) as well as a charge with the four-current by defining
\[L_{\text{G\"{EM}}}\equiv-m\sqrt{-J^{\mu}J^{\nu}g_{\mu\nu}}+qA_{\mu}J^{\mu}- \frac{1}{4}g^{\mu\nu}g^{\lambda\sigma}F_{\mu\lambda}F_{\nu\sigma}+\frac{1}{16 \pi}R, \tag{2.28}\]
where \(R\) is the spacetime scalar curvature. The new terms leave the gauge potential EOM unchanged, but upon variation of the metric yield a Hilbert SEM tensor with a time-like dust term
\[T^{\mu\nu}_{\text{GEM}}\equiv m\rho_{0}U^{\mu}U^{\nu}+T^{\mu\nu}_{\text{EM}}. \tag{2.29}\]
This tensor is proportional to the Einstein tensor and hence must be divergenceless, which (see Section 3.5) yields the equations of geodesic deviation
\[m\rho_{0}\left(\nabla_{U}U\right)^{\mu}=F^{\mu}{}_{\nu}J^{\nu}_{q}, \tag{2.30}\]
better known as the Lorentz force law.
### Klein-Gordon theory
For reference, we here summarize Klein-Gordon theory (complex, minimally coupled, and in flat spacetime; sometimes then called scalar electrodynamics or scalar QED), which is usually defined (again using natural units and the mostly pluses metric signature) as a \(U(1)\) gauge theory with a complex scalar matter field and Lagrangian
\[L_{\text{KG}}\equiv-\left(\text{D}_{\mu}\Phi\right)^{*}\text{D}^{\mu}\Phi-m^{2 }\Phi^{*}\Phi-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}. \tag{2.31}\]
This is of course usually considered as a quantum theory, but here we want to explore whether it can provide at least inspiration for a model of classical electromagnetism.
Expanding the first and last terms (see Section 3.1), we find the EOM from varying the gauge potential are
\[-iq\left(\Phi^{*}\text{D}^{\nu}\Phi-\Phi\left(\text{D}^{\nu}\Phi\right)^{*} \right)=\nabla_{\mu}F^{\nu\mu}, \tag{2.32}\]
allowing us to identify the divergenceless matter four-current
\[J^{\nu}\equiv-i\left(\Phi^{*}\text{D}^{\nu}\Phi-\Phi\left(\text{D}^{\nu}\Phi \right)^{*}\right), \tag{2.33}\]
in terms of which (2.32) is Maxwell's equations. Varying the matter field, however, results in the Klein-Gordon equation
\[\text{D}_{\mu}\text{D}^{\mu}\Phi=m^{2}\Phi, \tag{2.34}\]
from which point it is difficult to proceed classically. Firstly, these EOM do not constrain the particle density associated with \(J\) to be positive, and secondly, we have no way to eliminate \(\Phi\) from the Lagrangian in order to obtain the classical Hilbert SEM tensor \(T^{\mu\nu}_{\rm GEM}\) of (2.29).
We nevertheless may obtain some results which might act as inspiration for a classical theory by taking \(A=0\). Then (free complex) Klein-Gordon theory no longer includes electromagnetism, but the associated Klein-Gordon equation
\[\partial_{\mu}\partial^{\mu}\Phi=m^{2}\Phi \tag{2.35}\]
has a general solution which is a linear combination of plane wave solutions
\[\Phi_{P}\equiv\left|\Phi\right|e^{iP_{\mu}x^{\mu}}, \tag{2.36}\]
where the complex modulus \(\left|\Phi\right|\) and the four-vector \(P\) are constant. Taking derivatives, we have
\[\begin{split}\partial_{\mu}\Phi_{P}&=iP_{\mu}\Phi _{P}\\ \Rightarrow\partial_{\mu}\partial^{\mu}\Phi_{P}&=-P _{\mu}P^{\mu}\Phi_{P}=m^{2}\Phi_{P}\\ \Rightarrow P^{\mu}&=mU^{\mu},\end{split} \tag{2.37}\]
where \(U\) is again a unit time-like four-vector, while
\[\begin{split} J^{\mu}_{P}&=2\left|\Phi\right|^{2}P ^{\mu}\\ \Rightarrow\rho_{0}&=2m\left|\Phi\right|^{2}.\end{split} \tag{2.38}\]
Geometrically, these are matter fields which rotate in the time-like \(P^{\mu}\) direction with an angular velocity of \(m\) radians per unit proper time, and whose constant length squared is proportional to the rest density. Moreover, on-shell for a plane wave the Lagrangian may be written as
\[L_{\rm KG-P}=-\frac{1}{2}m\rho_{0}U_{\mu}U_{\nu}g^{\mu\nu}-\frac{1}{2}m\rho_{0}, \tag{2.39}\]
which upon varying the metric (see Section 3.5) yields a time-like dust Hilbert SEM tensor of \(m\rho_{0}U^{\mu}U^{\nu}\).
This is all quite suggestive, but faces two issues as inspiration for constructing a classical gauge theory of electromagnetism: (1) we eliminated the electromagnetic interaction to get solutions, and (2) we cannot construct a general composite classical four-current by combining plane wave solutions, since their phases interfere with each other, i.e. they are quantum in nature. In Section 3 which follows, we arrive at an alternative Lagrangian which avoids these issues while taking advantage of the above observations.
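Before proceeding, the plane wave observations above are easy to verify symbolically; the following sympy sketch (our own construction, in 1+1 dimensions with \(A=0\)) confirms that \(\partial_{\mu}\partial^{\mu}\Phi_{P}=-P_{\mu}P^{\mu}\Phi_{P}\), so that (2.35) forces \(P_{\mu}P^{\mu}=-m^{2}\) as in (2.37), and that \(J_{\mu}=2\left|\Phi\right|^{2}P_{\mu}\) as in (2.38):

```python
# Free plane wave checks for (2.35)-(2.38) in 1+1 dimensions, mostly pluses.
import sympy as sp

t, x, E, p = sp.symbols('t x E p', real=True)
a = sp.symbols('a', positive=True)                # |Phi|
eta = sp.diag(-1, 1)
X, P = sp.Matrix([t, x]), sp.Matrix([E, p])       # coordinates and P^mu
Pdn = eta*P                                       # P_mu

Phi = a*sp.exp(sp.I*(Pdn.T*X)[0])                 # plane wave (2.36)

# box Phi = -(P.P) Phi, so box Phi = m^2 Phi is equivalent to P.P = -m^2:
box = sum(eta[mu, mu]*sp.diff(Phi, X[mu], 2) for mu in range(2))
assert sp.simplify(box/Phi + (Pdn.T*P)[0]) == 0

# J_mu = -i(Phi* d_mu Phi - Phi d_mu Phi*) = 2 a^2 P_mu, per (2.33) and (2.38):
J = [-sp.I*(sp.conjugate(Phi)*sp.diff(Phi, X[mu])
            - Phi*sp.diff(sp.conjugate(Phi), X[mu])) for mu in range(2)]
assert all(sp.simplify(J[mu] - 2*a**2*Pdn[mu]) == 0 for mu in range(2))
```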
## 3 Matter field electromagnetism
### The Lagrangian
With the geometric quantities of Section 2 in hand, we now define matter field electromagnetism (MFEM) (again using natural units and the mostly pluses metric signature) as a geometric \(U(1)\) gauge theory with Lagrangian
\[\begin{split} L_{\rm EM}&\equiv-\left\langle{\rm D }_{\mu}^{\perp}\vec{\Phi},{\rm D}_{\mu}^{\perp}\vec{\Phi}\right\rangle-m^{2} \left\langle\vec{\Phi},\vec{\Phi}\right\rangle-\frac{1}{2}\left\langle F,F \right\rangle\\ &=\frac{1}{4\left|\Phi\right|^{2}}\left(\Phi^{*}{\rm D}_{\mu}\Phi -\Phi\left({\rm D}_{\mu}\Phi\right)^{*}\right)\left(\Phi^{*}{\rm D}^{\mu} \Phi-\Phi\left({\rm D}^{\mu}\Phi\right)^{*}\right)-m^{2}\left|\Phi\right|^{2} -\frac{1}{4}F_{\mu\nu}F^{\mu\nu}.\end{split} \tag{3.1}\]
Geometrically, the absolute value of the dynamical term is the squared perpendicular distance between the matter field and its parallel transport per unit distance in the direction in which this distance is extremal. Its negative can be written in terms of various quantities as
\[\begin{split}-L_{\text{EM-D}}&=\left\|\mathrm{D}_{\mu}^{\perp}\vec{\Phi}\right\|^{2}\\&=\left\|\vec{\Phi}\right\|^{2}\left\langle\mathrm{D}_{\mu}^{\circlearrowleft}\vec{\Phi},\mathrm{D}_{\mu}^{\circlearrowleft}\vec{\Phi}\right\rangle\\&=\left\|\mathrm{D}_{\mu}\vec{\Phi}\right\|^{2}-\frac{1}{\left\|\vec{\Phi}\right\|^{2}}\left\langle\mathrm{D}_{\mu}\vec{\Phi},\vec{\Phi}\right\rangle_{\mathbb{R}}\left\langle\mathrm{D}^{\mu}\vec{\Phi},\vec{\Phi}\right\rangle_{\mathbb{R}}\\&=\mathrm{D}_{\mu}\Phi^{a}\mathrm{D}^{\mu}\Phi_{a}-\left(\Phi_{c}\Phi^{c}\right)^{-1}\Phi^{a}\left(\mathrm{D}_{\mu}\Phi_{a}\right)\Phi^{b}\left(\mathrm{D}^{\mu}\Phi_{b}\right)\\&=\frac{1}{\left|\Phi\right|^{2}}\mathrm{Im}\left\langle\mathrm{D}_{\mu}\Phi,\Phi\right\rangle_{\mathbb{C}}\,\mathrm{Im}\left\langle\mathrm{D}^{\mu}\Phi,\Phi\right\rangle_{\mathbb{C}}\\&=\left|\Phi\right|^{2}\left(\partial_{\mu}\varphi-qA_{\mu}\right)\left(\partial^{\mu}\varphi-qA^{\mu}\right).\end{split} \tag{3.2}\]
Note that if \(\mathrm{D}_{\mu}^{\parallel}\vec{\Phi}=0\), i.e. if the matter field has constant length, then \(\mathrm{D}_{\mu}^{\perp}\vec{\Phi}=\mathrm{D}_{\mu}\vec{\Phi}\) and our Lagrangian is identical to the Klein-Gordon Lagrangian.
In order to vary the gauge potential, we must express the Lagrangian explicitly in terms of \(A_{\mu}\). Expanding out the gauge covariant derivatives yields a dynamical term for gauge potential variations of
\[\begin{split} L_{\text{EM-D}}&=\frac{1}{4\left| \Phi\right|^{2}}\left\|\Phi^{*}\partial_{\mu}\Phi-\Phi\partial_{\mu}\Phi^{*}-2 iq\left|\Phi\right|^{2}A_{\mu}\right\|^{2}\\ \Rightarrow L_{\text{EM-D}}\left(A_{\mu}\right)&=- iqA_{\mu}\left(\Phi^{*}\partial^{\mu}\Phi-\Phi\partial^{\mu}\Phi^{*}\right)-q^{2} \left|\Phi\right|^{2}A_{\mu}A^{\mu}.\end{split} \tag{3.3}\]
Note that this is identical to that of the Klein-Gordon dynamical term
\[\begin{split} L_{\text{KG-D}}&=-\left(\partial_{\mu }\Phi-iqA_{\mu}\Phi\right)\left(\partial^{\mu}\Phi^{*}+iqA^{\mu}\Phi^{*}\right) \\ \Rightarrow L_{\text{KG-D}}\left(A_{\mu}\right)&=- iqA_{\mu}\left(\Phi^{*}\partial^{\mu}\Phi-\Phi\partial^{\mu}\Phi^{*}\right)-q^{2} \left|\Phi\right|^{2}A_{\mu}A^{\mu},\end{split} \tag{3.4}\]
which means we will end up with the same gauge potential EOM.
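This claim can be confirmed directly; in the sketch below (our own, with the point values of \(\Phi\), its partial derivative for one index \(\mu\), and \(A_{\mu}\) as symbols), the two dynamical terms differ only by an \(A_{\mu}\)-independent quantity, so they yield the same gauge potential EOM:

```python
# The MFEM dynamical term (3.3) and the KG dynamical term (3.4) have identical
# A-dependence: their difference does not involve A at all.
import sympy as sp

q, A, c, d, dc, dd = sp.symbols('q A c d dc dd', real=True)
Phi, dPhi = c + sp.I*d, dc + sp.I*dd           # Phi and d_mu Phi at a point
DPhi = dPhi - sp.I*q*A*Phi
mod2 = sp.expand(Phi*sp.conjugate(Phi))        # |Phi|^2

iJ = sp.conjugate(Phi)*DPhi - Phi*sp.conjugate(DPhi)
L_mfem = sp.expand(iJ*iJ/(4*mod2))             # MFEM dynamical term, one index
L_kg = sp.expand(-DPhi*sp.conjugate(DPhi))     # KG dynamical term, one index

assert sp.simplify(sp.diff(L_mfem - L_kg, A)) == 0
```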
### Matter field equations of motion
The Euler-Lagrange equation may be written
\[\frac{\partial L}{\partial\Phi}=\mathrm{D}_{\mu}p_{\Phi}^{\mu} \tag{3.5}\]
in terms of the canonical momentum
\[p_{\Phi}^{\mu}\equiv\frac{\partial L}{\partial\left(\mathrm{D}_{\mu}\Phi \right)}. \tag{3.6}\]
In keeping with our geometrical viewpoint, we opt to work here in terms of the real vector components \(\Phi^{a}\), but in Section 3.6 we derive the same results in terms of \(\Phi\) and \(\Phi^{*}\) as independent
quantities, as is more common. Recalling (2.20), we have
\[L_{\rm EM}\left(\Phi^{a},{\rm D}_{\mu}\Phi^{a}\right) =-{\rm D}_{\mu}\Phi^{a}{\rm D}^{\mu}\Phi_{a}+\left(\Phi_{c}\Phi^{c} \right)^{-1}\Phi^{a}\left({\rm D}_{\mu}\Phi_{a}\right)\Phi^{b}\left({\rm D}^{\mu }\Phi_{b}\right)-m^{2}\Phi_{a}\Phi^{a},\] \[\frac{\partial L_{\rm EM}}{\partial\Phi^{d}} =-2\left(\Phi_{c}\Phi^{c}\right)^{-2}\Phi_{d}\Phi^{a}\left({\rm D }_{\mu}\Phi_{a}\right)\Phi^{b}\left({\rm D}^{\mu}\Phi_{b}\right)\] \[\quad+2\left(\Phi_{c}\Phi^{c}\right)^{-1}\Phi^{a}\left({\rm D}_{ \mu}\Phi_{a}\right)\left({\rm D}^{\mu}\Phi_{d}\right)-2m^{2}\Phi_{d},\] \[p_{\Phi^{d}}^{\mu} =\frac{\partial L_{\rm EM}}{\partial\left({\rm D}_{\mu}\Phi^{d} \right)} =-2{\rm D}^{\mu}\Phi_{d}+2\left(\Phi_{c}\Phi^{c}\right)^{-1}\Phi^{a }\left({\rm D}^{\mu}\Phi_{a}\right)\Phi_{d} \tag{3.7}\] \[\Rightarrow{\rm D}_{\mu}p_{\Phi^{d}}^{\mu} =-2{\rm D}_{\mu}{\rm D}^{\mu}\Phi_{d}-4\left(\Phi_{c}\Phi^{c} \right)^{-2}\Phi^{b}\left({\rm D}_{\mu}\Phi_{b}\right)\Phi^{a}\left({\rm D}^{ \mu}\Phi_{a}\right)\Phi_{d}\] \[\quad+2\left(\Phi_{c}\Phi^{c}\right)^{-1}{\rm D}_{\mu}\Phi^{a} \left({\rm D}^{\mu}\Phi_{a}\right)\Phi_{d}\] \[\quad+2\left(\Phi_{c}\Phi^{c}\right)^{-1}\Phi^{a}\left({\rm D}_{ \mu}{\rm D}^{\mu}\Phi_{a}\right)\Phi_{d}+2\left(\Phi_{c}\Phi^{c}\right)^{-1} \Phi^{a}\left({\rm D}^{\mu}\Phi_{a}\right){\rm D}_{\mu}\Phi_{d}.\]
But
\[2\left(\Phi_{c}\Phi^{c}\right)^{-1}\Phi^{a}\left({\rm D}_{\mu}{\rm D}^{\mu} \Phi_{a}\right)\Phi_{d}=2\left({\rm D}_{\mu}{\rm D}^{\mu}\Phi_{d}\right) \tag{3.8}\]
since applying each side to \(\Phi^{d}\) yields the same result. Therefore the Euler-Lagrange equation is
\[\begin{split}-2m^{2}\Phi_{d}&=-2\left(\Phi_{c}\Phi^{c}\right)^{-2}\Phi^{b}\left(\mathrm{D}_{\mu}\Phi_{b}\right)\Phi^{a}\left(\mathrm{D}^{\mu}\Phi_{a}\right)\Phi_{d}+2\left(\Phi_{c}\Phi^{c}\right)^{-1}\mathrm{D}_{\mu}\Phi^{a}\mathrm{D}^{\mu}\Phi_{a}\Phi_{d}\\&=-2\left(\Phi_{c}\Phi^{c}\right)^{-1}\Phi_{d}\left(-\mathrm{D}_{\mu}\Phi^{a}\mathrm{D}^{\mu}\Phi_{a}+\left(\Phi_{c}\Phi^{c}\right)^{-1}\Phi^{a}\left(\mathrm{D}^{\mu}\Phi_{a}\right)\Phi^{b}\left(\mathrm{D}_{\mu}\Phi_{b}\right)\right),\end{split} \tag{3.9}\]

so that recalling (2.20) and (3.2) we have

\[-m^{2}=\left\langle\mathrm{D}_{\mu}^{\circlearrowleft}\vec{\Phi},\mathrm{D}_{\mu}^{\circlearrowleft}\vec{\Phi}\right\rangle. \tag{3.10}\]

Since this squared magnitude is negative, the four-vector \(\mathrm{D}_{\mu}^{\circlearrowleft}\vec{\Phi}\) is time-like; denoting by \(U\) the unit four-vector in its direction, the matter field EOM become

\[\mathrm{D}_{U}^{\circlearrowleft}\vec{\Phi}=\pm m. \tag{3.11}\]
### Gauge potential equations of motion
From (3.3) and (2.8), the Lagrangian for gauge potential variations is
\[\begin{split} L_{\text{EM}}\left(A_{\mu},\partial_{\nu}A_{\mu} \right)&=-iqA_{\mu}\left(\Phi^{*}\partial^{\mu}\Phi-\Phi \partial^{\mu}\Phi^{*}\right)-q^{2}\left|\Phi\right|^{2}A_{\mu}A^{\mu}\\ &\quad-\frac{1}{2}\left(\partial_{\mu}A_{\nu}\partial^{\mu}A^{ \nu}-\partial_{\mu}A_{\nu}\partial^{\nu}A^{\mu}\right).\end{split} \tag{3.12}\]
Our dynamical term does not depend on the derivative of the gauge potential, so the electromagnetic four-current will be \(\partial L/\partial A_{\nu}\) as is usual in other EM gauge theories. Explicitly,
\[\begin{split}\frac{\partial L}{\partial A_{\nu}}& =-iq\left(\Phi^{*}\partial^{\nu}\Phi-\Phi\partial^{\nu}\Phi^{*} \right)-2q^{2}\left|\Phi\right|^{2}A^{\nu}\\ &=-iq\left(\Phi^{*}\text{D}^{\nu}\Phi-\Phi\left(\text{D}^{\nu} \Phi\right)^{*}\right),\\ p_{A_{\nu}}^{\mu}\equiv\frac{\partial L}{\partial\left(\partial _{\mu}A_{\nu}\right)}&=-\left(\partial^{\mu}A^{\nu}-\partial^{ \nu}A^{\mu}\right)\\ &=-F^{\mu\nu},\end{split} \tag{3.13}\]
yielding an Euler-Lagrange equation of
\[iq\left(\Phi^{*}\text{D}^{\nu}\Phi-\Phi\left(\text{D}^{\nu}\Phi\right)^{*} \right)=\nabla_{\mu}F^{\mu\nu}, \tag{3.14}\]
which identifies the electromagnetic four-current as
\[J_{q}^{\nu}\equiv-iq\left(\Phi^{*}\text{D}^{\nu}\Phi-\Phi\left(\text{D}^{\nu} \Phi\right)^{*}\right). \tag{3.15}\]
As in (2.25) \(J_{q}\) is therefore divergenceless when the gauge potential EOM are satisfied; reversing the indices of \(F\) yields the usual form of Maxwell's equations
\[J_{q}^{\nu}=\nabla_{\mu}F^{\nu\mu}. \tag{3.16}\]
### Geometry of the four-current
The matter four-current may be written in terms of the various quantities we have defined as
\[\begin{split} J_{\mu}&=-i\left(\Phi^{*}\text{D}_{ \nu}\Phi-\Phi\left(\text{D}_{\nu}\Phi\right)^{*}\right)\\ &=-i\left(\Phi^{*}\partial_{\mu}\Phi-\Phi\partial_{\mu}\Phi^{*} -2iq\left|\Phi\right|^{2}A_{\mu}\right)\\ &=2\left\|\vec{\Phi}\right\|\text{D}_{\mu}^{\perp}\vec{\Phi}\\ &=2\left\|\vec{\Phi}\right\|^{2}\text{D}_{\mu}^{\perp}\vec{\Phi} \\ &=4\text{D}_{\mu}^{\circ}\vec{\Phi}\\ &=2\left|\Phi\right|^{2}\left(\partial_{\mu}\varphi+A_{\mu}^{ \preclearrowright}\right)\\ &=2\text{Im}\left\langle\text{D}_{\mu}\Phi,\Phi\right\rangle_{ \mathbb{C}}.\end{split} \tag{3.17}\]
The fourth expression in (3.17) explicitly identifies the direction of the four-current at a given point as being the direction in which the matter field angular velocity (relative to parallel
transport) is extremal. Recall from Section 3.2 that the matter field EOM require the unit length four-vector \(U^{\mu}\) in this direction to be time-like. We may then write
\[\begin{split}J_{\mu}&=\rho_{0}U_{\mu}\\ \Rightarrow\rho_{0}&=-U^{\mu}J_{\mu}\\&=-2\left\|\vec{\Phi}\right\|^{2}\mathrm{D}_{U}^{\circlearrowleft}\vec{\Phi}\\&=-4\mathrm{D}_{U}^{\circ}\vec{\Phi},\end{split} \tag{3.18}\]
so that geometrically the matter per volume orthogonal to the four-current is proportional to the internal space area swept out clockwise by the matter field per unit proper time along the four-current.
Since the matter field EOM are \(\mathrm{D}_{U}^{\circlearrowleft}\vec{\Phi}=\pm m\), and particle density must be positive, we henceforth only consider the negative value
\[\mathrm{D}_{U}^{\circlearrowleft}\vec{\Phi}=-m \tag{3.19}\]
to be physical, so that we have
\[\rho_{0} =2\left\|\vec{\Phi}_{\mathrm{EL}}\right\|^{2}m \tag{3.20}\] \[\Rightarrow J^{\mu} =2\left\|\vec{\Phi}_{\mathrm{EL}}\right\|^{2}mU^{\mu}\]
in terms of the on-shell matter field \(\vec{\Phi}_{\mathrm{EL}}\), whose rotation relative to parallel transport in the direction \(U\) is always clockwise. The reversed sign matter field EOM may be associated with a positive particle density by defining the electromagnetic four-current with the opposite sign, which is equivalent to simply flipping the sign of \(q\). Thus positively charged four-currents rotate clockwise, while negatively charged four-currents rotate counterclockwise.
We presently see an immediate distinction with the classical theory of continua: the metric dependence of \(\rho_{0}\) and \(J^{\mu}\). In the classical theory, \(\rho_{0}\) and \(J^{\mu}\) both depend upon the metric such that the four-current density \(\mathfrak{J}\equiv J\sqrt{-\det\left(g_{\mu\nu}\right)}\) is metric-independent. In MFEM, \(\rho_{0}\) has a different metric dependency and \(J\) is a metric-independent 1-form, except on-shell where \(\rho_{0}\) is metric-independent and \(J\) is proportional to a unit vector. This will be explained in Section 5.2 when viewing MFEM as an approximation of QED.
It is important to note that despite the appearance of \(m\) in the on-shell expression for \(J\), it is the expression for the matter (particle number) four-current; the electromagnetic four-current is \(J_{q}\equiv qJ\), just as the mass four-current is \(J_{m}\equiv mJ\), which will therefore have a factor \(m^{2}\). We also may note that we can now write the negative dynamical term of the Lagrangian on-shell as
\[-L_{\mathrm{EM-D-EL}} =\frac{1}{4\left\|\vec{\Phi}_{\mathrm{EL}}\right\|^{2}}\left\langle J,J\right\rangle \tag{3.21}\] \[=\frac{1}{2}\frac{m}{\rho_{0}}\left\langle J,J\right\rangle\] \[=\frac{1}{2}m\rho_{0}\left\langle U,U\right\rangle,\]
which is the form of a Lagrangian for relativistic dust.
### Metric equations of motion
As noted in the previous section, both the four-current 1-form \(J_{\nu}=-i\left(\Phi^{*}\mathrm{D}_{\nu}\Phi-\Phi\left(\mathrm{D}_{\nu}\Phi\right) ^{*}\right)\) and the rest density \(\rho_{0}=2\left\|\vec{\Phi}_{\mathrm{EL}}\right\|^{2}m\) are metric-independent, so that for an on-shell matter field we use (3.21) and (3.20) to write the action as
\[S_{\mathrm{EM}}\left(g_{\mu\nu}\right)=\int\left(-\frac{1}{2}\frac{m}{\rho_{0} }J_{\mu}J_{\nu}g^{\mu\nu}-\frac{1}{2}m\rho_{0}-\frac{1}{4}g^{\mu\nu}g^{\lambda \sigma}F_{\mu\lambda}F_{\nu\sigma}\right)\sqrt{g}\mathrm{d}^{4}x. \tag{3.22}\]
Recalling that \(\delta g^{\mu\nu}=-g^{\mu\lambda}g^{\nu\sigma}\delta g_{\lambda\sigma}\) and \(\delta\left(\sqrt{g}\right)=\frac{1}{2}\sqrt{g}g^{\mu\nu}\delta g_{\mu\nu}\), the variation of the action yields
\[\begin{split}\delta S_{\mathrm{EM}}\left(g_{\mu\nu}\right)& =\frac{1}{2}\int\left(\frac{m}{\rho_{0}}J^{\mu}J^{\nu}\sqrt{g}- \frac{m}{\rho_{0}}J_{\lambda}J_{\sigma}g^{\lambda\sigma}\frac{1}{2}\sqrt{g}g^ {\mu\nu}-m\rho_{0}\frac{1}{2}\sqrt{g}g^{\mu\nu}\right)\delta g_{\mu\nu} \mathrm{d}^{4}x\\ &\quad-\frac{1}{4}\int\left(2g^{\lambda\sigma}F_{\mu\lambda}F_{ \nu\sigma}\sqrt{g}\delta g^{\mu\nu}+F^{\lambda\sigma}F_{\lambda\sigma}\frac{1} {2}\sqrt{g}g^{\mu\nu}\delta g_{\mu\nu}\right)\mathrm{d}^{4}x\\ &=\frac{1}{2}\int\left(m\rho_{0}U^{\mu}U^{\nu}+T^{\mu\nu}_{ \mathrm{EM}}\right)\delta g_{\mu\nu}\sqrt{g}\mathrm{d}^{4}x,\end{split} \tag{3.23}\]
where \(T^{\mu\nu}_{\mathrm{EM}}\) is defined per (2.27) and the quantity in parentheses in the last line defines the Hilbert SEM tensor, which is equal to \(T^{\mu\nu}_{\mathrm{GEM}}\) from (2.29), matching that of quasi-gauge EM as desired.
This implies the Lorentz force law, completing our equivalence with classical electromagnetism, as we quickly review (see e.g. [2]). \(T^{\mu\nu}_{\mathrm{GEM}}\) is proportional to the Einstein tensor and hence must be divergenceless, which yields the equations of geodesic deviation
\[\begin{split}\nabla_{\nu}T^{\mu\nu}_{\mathrm{GEM}}& =\nabla_{\nu}\left(m\rho_{0}U^{\mu}U^{\nu}+g_{\lambda\sigma}F^{ \mu\lambda}F^{\nu\sigma}-\frac{1}{4}F^{\lambda\sigma}F_{\lambda\sigma}g^{\mu \nu}\right)\\ &=mU^{\mu}\nabla_{\nu}J^{\nu}+m\rho_{0}U^{\nu}\nabla_{\nu}U^{\mu} \\ &\quad+g_{\lambda\sigma}F^{\mu\lambda}\nabla_{\nu}F^{\nu\sigma}+g _{\lambda\sigma}F^{\nu\sigma}\nabla_{\nu}F^{\mu\lambda}-\frac{1}{2}F^{\lambda \sigma}\nabla_{\nu}F_{\lambda\sigma}g^{\mu\nu}\\ &=m\rho_{0}\left(\nabla_{U}U\right)^{\mu}+F^{\mu}{}_{\sigma} \nabla_{\nu}F^{\nu\sigma}\\ &\quad+\frac{1}{2}F^{\lambda\sigma}g^{\mu\nu}\left(\nabla_{ \lambda}F_{\nu\sigma}-\nabla_{\sigma}F_{\nu\lambda}-\nabla_{\nu}F_{\lambda \sigma}\right)\\ \Rightarrow m\rho_{0}\left(\nabla_{U}U\right)^{\mu}& =F^{\mu}{}_{\sigma}J^{\sigma}_{q},\end{split} \tag{3.24}\]
where in the penultimate equality we use \(\nabla_{\nu}J^{\nu}=0\) and the anti-symmetry of \(F\), yielding the three terms in parentheses which vanish due to the second Bianchi identity, and in the last line we use the gauge potential EOM. \(U\) is the unit four-vector in the direction of the proper time \(\tau\) of the four-current, so that at a point in flat spacetime this equation becomes
\[\partial_{\tau}P^{\mu}=qF^{\mu}{}_{\sigma}U^{\sigma}, \tag{3.25}\]
where \(P=mU\). In an inertial frame we then have components
\[\begin{split}\partial_{\tau}\left(E\quad p^{x}\quad p^{y}\quad p ^{z}\right)&=q\begin{pmatrix}0&E^{x}&E^{y}&E^{z}\\ E^{x}&0&B^{z}&-B^{y}\\ E^{y}&-B^{z}&0&B^{x}\\ E^{z}&B^{y}&-B^{x}&0\end{pmatrix}\begin{pmatrix}\gamma\\ \gamma v^{x}\\ \gamma v^{y}\\ \gamma v^{z}\end{pmatrix}\\ \Rightarrow\partial_{t}\left(\mathbf{p}\right)&=q\left(\mathbf{E}+ \mathbf{v}\times\mathbf{B}\right),\end{split} \tag{3.26}\]
where \(\gamma\) is the Lorentz factor, we use \(\partial_{\tau}=\gamma\partial_{t}\), and recall that \(\mathbf{p}\equiv\gamma m\mathbf{v}\) is the relativistic momentum, which in the non-relativistic limit is just the momentum.
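A numeric spot check of this component reduction (our own, with arbitrary sample values in natural units):

```python
# q F^mu_nu U^nu reproduces gamma q (E + v x B) in the spatial slots of (3.26).
import numpy as np

E = np.array([0.3, -1.2, 0.7])
B = np.array([0.5, 0.4, -0.9])
v = np.array([0.1, 0.2, -0.3])
q, gamma = 1.5, 1/np.sqrt(1 - v @ v)

Ex, Ey, Ez = E
Bx, By, Bz = B
F = np.array([[0.0,  Ex,  Ey,  Ez],
              [ Ex, 0.0,  Bz, -By],
              [ Ey, -Bz, 0.0,  Bx],
              [ Ez,  By, -Bx, 0.0]])
U = gamma*np.array([1.0, *v])

dP = q*F @ U                                  # d/dtau of (E, p), per (3.25)
assert np.allclose(dP[1:], gamma*q*(E + np.cross(v, B)))
```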
### Noether's theorem
In order to take advantage of the simplicity of complex gauge transformations, we here re-derive the matter field EOM in complex notation. As with similar theories, we may vary \(\Phi\) and \(\Phi^{*}\) as independent quantities. Using the complex second expression of (3.1), we write the Lagrangian for matter field variations in terms of the four-current as
\[\begin{split} L_{\text{EM}}\left(\Phi,\text{D}_{\mu}\Phi\right)& =\frac{1}{4\Phi^{*}\Phi}\left(iJ_{\mu}\right)\left(iJ^{\mu}\right)-m ^{2}\Phi^{*}\Phi,\\ iJ_{\mu}&=\Phi^{*}\text{D}_{\mu}\Phi-\Phi\left( \text{D}_{\mu}\Phi\right)^{*}.\end{split} \tag{3.27}\]
Calculating derivatives then yields
\[\begin{split}\frac{\partial L_{\text{EM}}}{\partial\Phi}& =\frac{J_{\mu}J^{\mu}\Phi^{*}}{4\left(\Phi^{*}\Phi\right)^{2}}- \frac{iJ^{\mu}\left(\text{D}_{\mu}\Phi\right)^{*}}{2\Phi^{*}\Phi}-m^{2}\Phi^{ *},\\ p_{\Phi}^{\mu}&=\frac{\partial L_{\text{EM}}}{ \partial\left(\text{D}_{\mu}\Phi\right)}&=\frac{\Phi^{*}iJ^{ \mu}}{2\Phi^{*}\Phi}\\ \Rightarrow\text{D}_{\mu}\left(p_{\Phi}^{\mu}\right)& =-\frac{\Phi^{*}iJ^{\mu}}{2\left(\Phi^{*}\Phi\right)^{2}}\partial_{ \mu}\left|\Phi\right|^{2}+\frac{\left(\text{D}_{\mu}\Phi\right)^{*}iJ^{\mu}}{2 \Phi^{*}\Phi}+\frac{\Phi^{*}i\nabla_{\mu}J^{\mu}}{2\Phi^{*}\Phi},\end{split} \tag{3.28}\]
where we use the facts that D is \(\partial\) if applied to a scalar and \(\nabla\) if applied to a vector, and \(\text{D}_{\mu}\left(\Phi^{*}\right)=\left(\text{D}_{\mu}\Phi\right)^{*}\). If we multiply the Euler-Lagrange equation by \(\Phi\), we arrive at
\[-m^{2}\Phi^{*}\Phi=-\frac{J_{\mu}J^{\mu}}{4\Phi^{*}\Phi}-\frac{iJ^{\mu}}{2 \Phi^{*}\Phi}\partial_{\mu}\left|\Phi\right|^{2}+\frac{\left(\text{D}_{\mu} \Phi\right)^{*}\Phi iJ^{\mu}}{\Phi^{*}\Phi}+\frac{i}{2}\nabla_{\mu}J^{\mu}. \tag{3.29}\]
But using (2.16) we have
\[\begin{split}\left\langle\text{D}_{\mu}\Phi,\Phi\right\rangle_{ \mathbb{C}}iJ^{\mu}&=\left(i\text{Re}\left\langle\text{D}_{\mu} \Phi,\Phi\right\rangle_{\mathbb{C}}-\text{Im}\left\langle\text{D}_{\mu}\Phi, \Phi\right\rangle_{\mathbb{C}}\right)J^{\mu}\\ &=\left(\frac{i}{2}\partial_{\mu}\left|\Phi\right|^{2}+\frac{1}{2} J_{\mu}\right)J^{\mu},\end{split} \tag{3.30}\]
so that the Euler-Lagrange equation multiplied by \(\Phi\) is
\[-m^{2}\Phi^{*}\Phi=\frac{J_{\mu}J^{\mu}}{4\Phi^{*}\Phi}+\frac{i}{2}\nabla_{ \mu}J^{\mu}, \tag{3.31}\]
whose real and imaginary parts yield
\[\begin{split}-m^{2}&=\left\langle\mathrm{D}_{\mu}^{\circlearrowleft}\vec{\Phi},\mathrm{D}_{\mu}^{\circlearrowleft}\vec{\Phi}\right\rangle,\\ 0&=\nabla_{\mu}J^{\mu},\end{split} \tag{3.32}\]

matching the matter field EOM of Section 3.2 along with a divergenceless four-current. If we instead treat \(\Phi^{*}\) as the varied quantity, the canonical momentum is

\[p_{\Phi^{*}}^{\mu}=\frac{\partial L_{\text{EM}}}{\partial\left(\mathrm{D}_{\mu}\Phi^{*}\right)}=-\frac{\Phi iJ^{\mu}}{2\Phi^{*}\Phi}, \tag{3.33}\]
with an Euler-Lagrange equation multiplied by \(\Phi^{*}\) of
\[-m^{2}\Phi^{*}\Phi=\frac{J_{\mu}J^{\mu}}{4\Phi^{*}\Phi}-\frac{i}{2} \nabla_{\mu}J^{\mu}, \tag{3.34}\]
whose second term we note has a reversed sign from (3.31). Again, we obtain the same results if we use the equivalent Euler-Lagrange expression \(\partial L/\partial\Phi=\partial_{\mu}\left(\partial L/\partial\left(\partial_ {\mu}\Phi\right)\right)\) in flat spacetime.
With the matter field EOM confirmed and a complex canonical momentum in hand, we may apply Noether's theorem in complex form. The MFEM Lagrangian is invariant under global infinitesimal gauge transformations, which in complex notation transform the matter field according to
\[\begin{split}\Phi&\to e^{iq\varepsilon}\Phi\\ &=\Phi+iq\varepsilon\Phi,\end{split} \tag{3.35}\]
yielding a corresponding Noether current of
\[\begin{split} iq\Phi p^{\mu}_{\Phi}&=-\frac{1}{2}J^ {\mu}_{q}\\ \Rightarrow\nabla_{\mu}J^{\mu}_{q}&=0.\end{split} \tag{3.36}\]
Thus the four-current is divergenceless if the matter field EOM are satisfied, even if the gauge potential EOM are not; this is commonly described in flat spacetime as "global gauge invariance results in conservation of charge." This result implies that the second EOM in (3.32) is redundant, since it will be true for any matter field for which the action vanishes upon its variation.
## 4 From QED to classical electromagnetism
In this section we arrive at MFEM as a limit of QED by setting up a quantum state configuration that corresponds to a classical four-current, i.e. a configuration similar to the "in" state when defining the scattering matrix, except assumed to hold locally for all spacetime. Explicitly, this configuration will consist of spatially separated electron wave packets interacting only via an electromagnetic field which is too weak for pair production or bound states. Our treatment is for spin up electrons, but spin down electrons and positrons of both spins may be treated similarly.
We begin with a summary of the relevant aspects of QED to fix notation and conventions. Our presentation is a bit idiosyncratic but allows for calculations without the usual clutter of integrals and sums.
### The QED Lagrangian
Quantum electrodynamics (in flat spacetime) is defined by quantizing the classical field theory of the Dirac Lagrangian (minimally coupled, using natural units and the mostly pluses metric signature)
\[L_{\text{DIRAC}}\equiv-\text{Re}\left(\overline{\psi}\gamma^{\mu }\text{D}_{\mu}\psi\right)-m\overline{\psi}\psi-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}, \tag{4.1}\]
where in the first term we only take the negative real part, \(\psi\) is a complex four-component spinor matter field, \(\gamma^{\mu}\) are the Dirac matrices, and we follow Weinberg[5] in defining \(\overline{\psi}\equiv\psi^{\dagger}i\gamma^{0}\) as the Dirac adjoint of the matter field, where \(\psi^{\dagger}\) is the Hermitian conjugate. The Dirac matrices are a complex matrix representation of an arbitrary constant orthonormal spacetime dual frame, with matrix multiplication the action of Clifford multiplication.
Expanding the first and last terms of the Lagrangian, we find the EOM from varying the gauge potential are
\[q\overline{\psi}i\gamma^{\nu}\psi=\nabla_{\mu}F^{\nu\mu}, \tag{4.2}\]
allowing us to identify the real divergenceless matter four-current
\[J^{\nu}\equiv\overline{\psi}i\gamma^{\nu}\psi, \tag{4.3}\]
in terms of which (4.2) is Maxwell's equations.
We note that this expression determines the components of \(J\), so that for example if in our inertial frame the only non-zero component is \(J^{0}\), then we may write \(J^{\nu}=\overline{\psi}i\gamma^{0}\psi U^{\nu}\), where \(U\) is the unit vector in the \(x^{0}\) direction; this expression then holds in any inertial frame, since under a Lorentz transformation the spinor \(\psi\) is multiplied by the matrix representation of this transformation to make it so. We also note that the particle density associated with \(J\) may be shown to be \(\psi^{\dagger}\psi\), which is positive, and this definition allows us to write the Lagrangian as
\[L_{\text{DIRAC}}=-\text{Re}\left(\overline{\psi}\gamma^{\mu}\partial_{\mu} \psi\right)+qJ^{\mu}A_{\mu}-m\overline{\psi}\psi-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}, \tag{4.4}\]
which omitting the terms in \(\psi\) is identical to the quasi-gauge EM Lagrangian.
Varying the matter field results in the Dirac equation
\[\gamma^{\mu}\text{D}_{\mu}\psi=-m\psi. \tag{4.5}\]
The Dirac operator \(\gamma^{\mu}\text{D}_{\mu}\) is the "square root" of the Laplacian, i.e. generalized to the gauge covariant derivative we have
\[\left(\gamma^{\mu}\text{D}_{\mu}\right)\left(\gamma^{\nu}\text{D}_{\nu} \right)=\gamma^{\mu}\gamma^{\nu}\text{D}_{\mu}\text{D}_{\nu}=\eta^{\mu\nu} \text{D}_{\mu}\text{D}_{\nu}=\text{D}^{\mu}\text{D}_{\mu}, \tag{4.6}\]
using the properties of the Dirac matrices \(\gamma^{\mu}\) (see [4]). Thus the components of \(\psi\) satisfy
\[0 =\left(\gamma^{\mu}\text{D}_{\mu}+m\right)\psi \tag{4.7}\] \[\Rightarrow 0 =\left(\gamma^{\mu}\text{D}_{\mu}-m\right)\left(\gamma^{\mu} \text{D}_{\mu}+m\right)\psi\] \[=\left(\text{D}^{\mu}\text{D}_{\mu}-m^{2}\right)\psi,\]
the Klein-Gordon equation.
Varying the frame results in the on-shell Hilbert SEM tensor
\[T^{\mu\nu}_{\text{DIRAC}}=\frac{1}{2}\text{Re}\left(\overline{\psi}\gamma^{ \mu}\text{D}^{\nu}\psi+\overline{\psi}\gamma^{\nu}\text{D}^{\mu}\psi\right)+ T^{\mu\nu}_{\text{EM}}. \tag{4.8}\]
### Plane wave solutions
The canonically quantized Dirac field is based on the free quantum Dirac equation
\[\gamma^{\mu}\partial_{\mu}\hat{\psi}=-m\hat{\psi}, \tag{4.9}\]
where \(\hat{\psi}\) is now the time-dependent Dirac spinor operator in the Heisenberg picture.
This equation has plane wave solutions, which we express using the mostly pluses chiral basis from [5], in which the Dirac matrices are
\[\gamma^{0}\equiv-i\begin{pmatrix}0&I\\ I&0\end{pmatrix},\;\gamma^{i}\equiv-i\begin{pmatrix}0&\sigma_{i}\\ -\sigma_{i}&0\end{pmatrix}, \tag{4.10}\]
where \(\sigma_{i}\) are the Pauli matrices. With this choice we may define an (operator-valued) spin up electron plane wave solution of four-momentum \(P=mU\) by aligning \(\gamma^{0}\) and the coordinate \(x^{0}\) with \(U\), so that \(e^{iP_{\mu}x^{\mu}}=e^{-imx^{0}}\) and we write
\[\begin{split}\hat{\psi}_{P}&\equiv\frac{1}{\sqrt{2}}\begin{pmatrix} 1\\ 0\\ 1\\ 0\end{pmatrix}\hat{a}_{P}e^{iP_{\mu}x^{\mu}}\\ &\equiv u\hat{a}_{P}e^{-imx^{0}},\end{split} \tag{4.11}\]
where \(\hat{a}_{P}\) is postulated to be the annihilation operator for a spin up single electron state \(\left|P\right\rangle\) of four-momentum \(P\). The spinor \(u\) satisfies
\[\begin{split} u^{\dagger}u&=1,\\ \gamma^{0}u&=-iu,\\ u^{\dagger}\gamma^{0}\gamma^{\nu\neq 0}u&=0,\end{split} \tag{4.12}\]
so that we may verify that the free Dirac equation is satisfied:
\[\begin{split}\gamma^{\mu}\partial_{\mu}\hat{\psi}_{P}& =\gamma^{0}u\partial_{0}\hat{a}_{P}e^{-imx^{0}}\\ &=-um\hat{a}_{P}e^{-imx^{0}}\\ &=-m\hat{\psi}_{P}\end{split} \tag{4.13}\]
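The basis properties used above are easy to confirm numerically; the following sketch (our own) checks the Clifford algebra \(\left\{\gamma^{\mu},\gamma^{\nu}\right\}=2\eta^{\mu\nu}\) in this mostly pluses chiral basis together with the \(u\) relations (4.12):

```python
# Chiral-basis gamma matrices (4.10) and the spinor relations (4.12).
import numpy as np

I2 = np.eye(2)
sig = [np.array([[0, 1], [1, 0]]),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]])]

g = [-1j*np.block([[0*I2, I2], [I2, 0*I2]])]                # gamma^0
g += [-1j*np.block([[0*I2, s], [-s, 0*I2]]) for s in sig]   # gamma^i
eta = np.diag([-1, 1, 1, 1])

for mu in range(4):
    for nu in range(4):
        anti = g[mu] @ g[nu] + g[nu] @ g[mu]
        assert np.allclose(anti, 2*eta[mu, nu]*np.eye(4))   # Clifford algebra

u = np.array([1, 0, 1, 0])/np.sqrt(2)
assert np.isclose(u.conj() @ u, 1)                          # u†u = 1
assert np.allclose(g[0] @ u, -1j*u)                         # gamma^0 u = -i u
assert all(np.isclose(u.conj() @ g[0] @ g[nu] @ u, 0) for nu in (1, 2, 3))
```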
Keeping \(\gamma^{0}\) constant, a spin up electron plane wave solution of arbitrary four-momentum \(K\) may be written
\[\hat{\psi}_{K}=\Lambda_{K}u\hat{a}_{K}e^{iK_{\mu}x^{\mu}}, \tag{4.14}\]
where \(\Lambda_{K}\) is the matrix representation of the Lorentz transformation which aligns \(\gamma^{0}\) with \(K\). The general free solution \(\hat{\psi}\) is then a (complex) linear combination (integral) of such plane wave solutions, along with similar solutions for spin down electrons and positrons of both spins, and if \(\left|0\right\rangle\) is the ground state of the free theory, then we have
\[\left\langle 0\middle|\hat{\psi}\middle|P\right\rangle=ue^{-imx^{0}}. \tag{4.15}\]
In the interacting theory, we choose inertial coordinates and define an (operator-valued) plane wave solution for a spin up electron of four-momentum \(P=mU=(E_{\mathbf{p}},\mathbf{p})\) at time \(t_{0}\) to be
\[\hat{\psi}_{\mathbf{p}}=\Lambda_{\mathbf{p}}u\hat{a}_{\mathbf{p}}\left(t_{0} \right)e^{i\mathbf{p}\cdot\mathbf{x}}, \tag{4.16}\]
where \(\hat{a}_{\mathbf{p}}\left(t_{0}\right)\) is now postulated to be the annihilation operator for a spin up single electron state \(|\mathbf{p}\rangle_{t_{0}}\) of momentum \(\mathbf{p}\) at time \(t_{0}\). The general solution at time \(t_{0}\) is again a linear combination of such plane wave solutions, so that if \(|\Omega\rangle\) is the ground state of the interacting theory and \(\gamma^{0}\) and \(t=x^{0}\) are aligned with \(U\), we have
\[\left\langle\Omega\Big{|}\hat{\psi}\Big{|}\mathbf{p}\right\rangle_{t_{0}}=ue^{ -imt_{0}}. \tag{4.17}\]
### Electron packets
We may now construct a spin up electron packet. We define the quantum state \(|\phi_{\mathbf{p}}\rangle_{t_{0}}\) to be an integral of electron states \(|\mathbf{k}\rangle_{t_{0}}\) with momenta clustered around \(\mathbf{p}\) such that the spinor-valued wave packet
\[\begin{split}\phi_{\mathbf{p}}\left(\mathbf{x},t_{0}\right)& \equiv\left\langle\Omega\Big{|}\hat{\psi}\Big{|}\phi_{\mathbf{p}} \right\rangle_{t_{0}}\\ &\sim\left|\phi_{\mathbf{p}}\left(\mathbf{x},t_{0}\right)\right|ue ^{-imt_{0}}\end{split} \tag{4.18}\]
is smooth and normalized, i.e. its modulus is close to a smooth envelope whose square integrated over space is unity. Propagation in time is defined by
\[\begin{split}\gamma^{\mu}\mathrm{D}_{\mu}\phi_{\mathbf{p}}\left(\mathbf{x},t\right)&=\left\langle\Omega\Big{|}\gamma^{\mu}\mathrm{D}_{\mu}\hat{\psi}\Big{|}\phi_{\mathbf{p}}\right\rangle_{t_{0}}\\&=-\left\langle\Omega\Big{|}m\hat{\psi}\Big{|}\phi_{\mathbf{p}}\right\rangle_{t_{0}}\\&=-m\phi_{\mathbf{p}}\left(\mathbf{x},t\right),\end{split} \tag{4.19}\]
i.e. \(\phi_{\mathbf{p}}\left(\mathbf{x},t\right)\) satisfies the Dirac equation.
It is important to note that a Gaussian wave packet at \(t_{0}\) will not remain so as it propagates in time, even in the free theory due to the Lorentz factors in the \(|\mathbf{k}\rangle_{t_{0}}\). For fermionic ladder operators we are also constrained to only have one plane wave for each four-momentum value; however, if the packet is in an unbound state, this has no practical effect since \(\mathbf{k}\) can take a continuum of values.
Recalling (4.12), the matter four-current operator for \(\hat{\psi}_{\mathbf{p}}\) is
\[\begin{split}\hat{J_{P}}^{\nu}&=\overline{\hat{ \psi}}_{P}i\gamma^{\nu}\hat{\psi}_{P}\\ &=-u^{\dagger}\hat{a}_{P}^{\dagger}e^{-iP_{\mu}x^{\mu}}\gamma^{0} \gamma^{\nu}u\hat{a}_{P}e^{iP_{\mu}x^{\mu}}\\ \Rightarrow\hat{J_{\mathbf{p}}^{0}}\left(t_{0}\right)& =-\hat{a}_{\mathbf{p}}^{\dagger}\left(t_{0}\right)e^{-imt_{0}} \gamma^{0}\gamma^{0}\hat{a}_{\mathbf{p}}\left(t_{0}\right)e^{imt_{0}}\\ &=\hat{a}_{\mathbf{p}}^{\dagger}\left(t_{0}\right)\hat{a}_{ \mathbf{p}}\left(t_{0}\right),\end{split} \tag{4.20}\]
the number operator, and the other components vanish due to our alignment of \(\gamma^{0}\). When computing the matter four-current, the term for positrons is reversed to also give the number operator, so that at time \(t_{0}\) we have
\[\begin{split}\left\langle\phi_{\mathbf{p}}\Big{|}\hat{J}^{0}\Big{|}\phi_{\mathbf{p}}\right\rangle_{t_{0}}&\sim\left|\phi_{\mathbf{p}}\left(\mathbf{x},t_{0}\right)\right|^{2}\left\langle\phi_{\mathbf{p}}|\phi_{\mathbf{p}}\right\rangle_{t_{0}}\\&\sim\overline{\phi}_{\mathbf{p}}\left(\mathbf{x},t_{0}\right)i\gamma^{0}\phi_{\mathbf{p}}\left(\mathbf{x},t_{0}\right)\left\langle\phi_{\mathbf{p}}|\phi_{\mathbf{p}}\right\rangle_{t_{0}}.\end{split} \tag{4.21}\]
For arbitrary \(\gamma^{0}\) we then have
\[\left\langle\phi_{\mathbf{p}}\Big{|}\hat{J}^{\nu}\Big{|}\phi_{\mathbf{p}}\right\rangle_{t_{0}}\sim\overline{\phi}_{\mathbf{p}}\left(\mathbf{x},t_{0}\right)i\gamma^{\nu}\phi_{\mathbf{p}}\left(\mathbf{x},t_{0}\right)\left\langle\phi_{\mathbf{p}}|\phi_{\mathbf{p}}\right\rangle_{t_{0}}. \tag{4.22}\]
### Classical four-current configuration
We now define a quantum state configuration that corresponds to a classical continuous four-current. We make the following assumptions:
1. **Durable localized packets**: The quantum state at any time corresponds to some number of non-overlapping packets \(\sum_{n}\phi_{\mathbf{p}_{n}}\), each of whose four-momenta changes slowly from packet to packet. The state evolves in time such that the packets remain smooth, localized, and separated (a nontrivial assumption per the previous section), and without pair production.
2. **Smooth packet distribution**: Spacetime can be split up into cells, each of which contains an integral number of packets with approximately equal four-momenta \(P=mU\), such that the change in the number of packets and the change in momentum per unit of space are both much smaller than the change in the phase of the packets per unit time.
3. **Classical gauge potential**: The electromagnetic state is dominated at larger length scales by the classical solution, i.e. the stationary phase approximation holds, so that the classical gauge potential equation of motion also holds for real-valued \(A_{\mu}\).
Within any cell we may align \(t=x^{0}\) and \(\gamma^{0}\) with \(U\), and choose a time \(t_{0}\) near the center of a cell. The first two assumptions mean that for each packet in the cell we may make the approximation
\[\phi_{\mathbf{p}}\left(\mathbf{x},t\right)\sim\left|\phi_{\mathbf{p}}\left( \mathbf{x},t_{0}\right)\right|ue^{-im(t-t_{0})}. \tag{4.23}\]
By choosing the Weyl gauge (\(A_{0}=0\)) in the cell, we therefore can neglect any change in \(\phi_{\mathbf{p}}\) except in the \(U\) direction, since
\[\begin{split}\gamma^{\mu}\mathrm{D}_{\mu}\phi_{\mathbf{p}}& \sim-im\gamma^{0}\left|\phi_{\mathbf{p}}\left(\mathbf{x},t_{0} \right)\right|ue^{-im(t-t_{0})}+\gamma^{j}\mathrm{D}_{j}\phi_{\mathbf{p}}\\ &=-m\phi_{\mathbf{p}}+\gamma^{j}\mathrm{D}_{j}\phi_{\mathbf{p}} \end{split} \tag{4.24}\] \[\Rightarrow\gamma^{j}\mathrm{D}_{j}\phi_{\mathbf{p}} \sim 0.\]
The Dirac equation approximately satisfied by each \(\phi_{\mathbf{p}}\) is then
\[\gamma^{0}\mathrm{D}_{U}\phi_{\mathbf{p}}=-m\phi_{\mathbf{p}}, \tag{4.25}\]
which we will call the Dirac packet equation. Note that \(U\) will be different for each cell.
Again using the first two assumptions, we may then define a spinor-valued field \(\phi\) on all of spacetime by "smearing" the \(\phi_{\mathbf{p}_{n}}\) across cells; more precisely, we again choose inertial coordinates and Dirac matrices which align \(x^{0}\) and \(\gamma^{0}\) with \(U\) and in a space-like hyperplane across each cell \(s\equiv(\mathbf{x},t_{0})\) define
\[\begin{split}\phi_{s}&\equiv\frac{1}{\sqrt{2V_{s}}} \int_{V_{s}}\sum_{n}\phi_{\mathbf{p}_{n}}\left(\mathbf{x},t_{0}\right)\mathrm{ d}^{3}x\\ &\sim\frac{1}{\sqrt{2V_{s}}}ue^{-imt_{0}}\sum_{n}\sqrt{2}\int_{V_ {s}}\left|\phi_{\mathbf{p}_{n}}\left(\mathbf{x},t_{0}\right)\right|^{2} \mathrm{d}^{3}x\\ &=\sqrt{\rho_{0}}ue^{-imt_{0}},\end{split} \tag{4.26}\]
where we assume a Gaussian envelope and at a given time
\[\rho_{0}\equiv\frac{N_{s}}{V_{s}} \tag{4.27}\]
is the number of wave packets in the space-like hyperplane of the cell divided by the volume of the hyperplane, i.e. it is the matter rest density. We then smoothly interpolate \(\mathbf{p}\) and \(\rho_{0}\) between cells to arrive at a globally defined \(\phi\). At any point we therefore have
\[\overline{\phi}i\gamma^{0}\phi =\overline{\phi}\phi\] \[=\rho_{0},\]
and the Dirac packet equation, which is only dependent upon the phase, remains valid at any point as
\[\gamma^{0}\mathrm{D}_{U}\phi=-m\phi. \tag{4.28}\]
Figure 4.1: The quantum state configuration that corresponds to a classical continuous four-current is a smooth distribution of durable localized packets. By dividing spacetime into cells and aligning \(t=x^{0}\) and \(\gamma^{0}\) with \(P=mU\), we may express each packet at time \(t_{0}\) as \(\left\langle\Omega\middle|\hat{\psi}\middle|\phi_{\mathbf{p}}\right\rangle_{t_ {0}}\sim\left|\phi_{\mathbf{p}}\left(\mathbf{x},t_{0}\right)\right|ue^{-imt_ {0}}\). In the figure the value of the modulus is represented by the shading of each packet, while the packet phase is depicted as rotating counterclockwise as \(t\) increases. Assuming the packet momenta are approximately equal within a cell lets us define a spinor-valued \(\phi_{s}\) per cell, which may be interpolated to yield a global spinor-valued \(\phi\). Since the change in each \(\phi_{\mathbf{p}}\) and therefore \(\phi\) is overwhelmingly due to the change in phase, the Dirac packet equation \(\gamma^{0}\mathrm{D}_{U}\phi=-m\phi\) is approximately satisfied at any point, where \(U=\partial/\partial t\).
Using the last assumption and (4.22), each packet approximately satisfies the equation of motion
\[\begin{split} q\left\langle\phi_{\mathbf{p}}\Big{|}\hat{J}^{\nu} \Big{|}\phi_{\mathbf{p}}\right\rangle_{t}&=\nabla_{\mu}F^{\nu\mu} \left\langle\phi_{\mathbf{p}}|\phi_{\mathbf{p}}\right\rangle_{t}\\ \Rightarrow q\overline{\phi}_{\mathbf{p}}i\gamma^{\nu}\phi_{ \mathbf{p}}&\sim\nabla_{\mu}F^{\nu\mu}.\end{split} \tag{4.29}\]
The construction of \(\phi\) makes the field homogeneous within each cell, and therefore so is \(F\). Aligning \(\gamma^{0}\) with \(P\) in a cell, we can then integrate over space to arrive at
\[\begin{split}\int_{V_{s}}\sum_{n}q\overline{\phi}_{\mathbf{p}}i \gamma^{\nu}\phi_{\mathbf{p}}\mathrm{d}^{3}x&\sim\int_{V_{s}} \nabla_{\mu}F^{\nu\mu}\mathrm{d}^{3}x\\ \Rightarrow q\int_{V_{s}}\sum_{n}\left|\phi_{\mathbf{p}n}\left( \mathbf{x},t_{0}\right)\right|^{2}\mathrm{d}^{3}x&\sim\nabla_{ \mu}F^{\nu\mu}\int_{V_{s}}\mathrm{d}^{3}x\\ \Rightarrow qN_{s}&=V_{s}\nabla_{\mu}F^{\nu\mu}\\ \Rightarrow q\rho_{0}&=\nabla_{\mu}F^{\nu\mu}\\ &=q\overline{\phi}i\gamma^{\nu}\phi,\end{split} \tag{4.30}\]
i.e. \(\phi\) satisfies the classical Dirac gauge potential equations of motion, so that the continuum of packets propagates according to Maxwell's equations. Thus we see that the Dirac packet equation determines \(\phi\), with \(U\) determined by Maxwell's equations.
Putting this all together, the equations of motion for the spinor-valued field \(\phi\) may be derived from the Lagrangian
\[L_{\mathrm{DIRAC}-\phi}\equiv-\mathrm{Re}\left(\overline{\phi}\gamma^{\mu}U_{ \mu}\mathrm{D}_{U}\phi\right)-m\overline{\phi}\phi-\frac{1}{4}F_{\mu\nu}F^{ \mu\nu}. \tag{4.31}\]
### Spinor component equations of motion
We now show that the MFEM equations of motion are satisfied by the rest frame components of the spinor-valued \(\phi\). At any point in spacetime we can align \(\gamma^{0}\) with \(P\) and write
\[\begin{split}\Phi&\equiv\sqrt{\rho_{0}}e^{-imt_{0}}\\ \Rightarrow\phi&=\frac{1}{\sqrt{2}}\begin{pmatrix} \Phi\\ 0\\ \Phi\\ 0\end{pmatrix}\\ \Rightarrow J^{0}&=\overline{\phi}i\gamma^{0}\phi\\ &=\overline{\phi}\phi\\ &=\left|\Phi\right|^{2}\\ &=\rho_{0}.\end{split} \tag{4.32}\]
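The chain of identities in (4.32) can be checked numerically; the sketch below assumes conventions the text leaves implicit (a mostly-plus signature with \((\gamma^{0})^{2}=-1\), an off-diagonal basis for \(\gamma^{0}\), and \(\overline{\phi}=\phi^{\dagger}i\gamma^{0}\)):

```python
import numpy as np

# Illustrative check of (4.32); the conventions below (mostly-plus signature,
# gamma^0 = -i * offdiag(I, I) so that (gamma^0)^2 = -1, and
# phibar = phi^dagger i gamma^0) are assumptions, not fixed by the text.
I2, Z2 = np.eye(2), np.zeros((2, 2))
gamma0 = -1j * np.block([[Z2, I2], [I2, Z2]])

rho0, t0, m = 0.7, 1.3, 1.0
Phi = np.sqrt(rho0) * np.exp(-1j * m * t0)        # rest-frame component
phi = np.array([Phi, 0, Phi, 0]) / np.sqrt(2)     # spinor of (4.32)

phibar = phi.conj() @ (1j * gamma0)
print(np.allclose(phibar @ (1j * gamma0) @ phi, rho0))  # phibar i gamma^0 phi = rho_0
print(np.allclose(phibar @ phi, rho0))                  # phibar phi = rho_0
```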
This may be contrasted with the MFEM result \(\rho_{0}=2m\left|\Phi\right|^{2}\), and means that the gauge potential equations of motion (4.30) may be written
\[q\left|\Phi\right|^{2}U^{\nu}=\nabla_{\mu}F^{\nu\mu}. \tag{4.33}\]
If we now left multiply the Dirac packet equation by \(\overline{\phi}\) and note that the right side is real, we have
\[\begin{split}\operatorname{Re}\left(\overline{\phi}\gamma^{0} \mathrm{D}_{U}\phi\right)&=-m\overline{\phi}\phi\\ \Rightarrow\operatorname{Re}\left(\phi^{\dagger}i\gamma^{0} \gamma^{0}\mathrm{D}_{U}\phi\right)&=\operatorname{Re}\left(-i \Phi^{*}\mathrm{D}_{U}\Phi\right)\\ &=\operatorname{Im}\left\langle\Phi,\mathrm{D}_{U}\Phi\right\rangle _{\mathbb{C}}\\ &=-\left|\Phi\right|^{2}m\\ \Rightarrow-m&=\frac{\operatorname{Im}\left\langle\Phi, \mathrm{D}_{U}\Phi\right\rangle_{\mathbb{C}}}{\left|\Phi\right|^{2}}\\ &=\mathrm{D}_{U}^{\times}\vec{\Phi}.\end{split} \tag{4.34}\]
Thus we see that for the spinor-valued field \(\phi\) which represents the "smeared" distribution of electron packets, the Dirac equation is equivalent to the MFEM EOM for the rest frame components \(\Phi\). Also note that there is only one sign of solution, but that building a positron packet would have resulted in an opposite sign in the EOM; thus both theories include anti-particles.
Lastly, at any point in spacetime we can again align \(x^{0}\) and \(\gamma^{0}\) with \(U\) and choose the Weyl gauge so that counting only non-zero components we have
\[\begin{split}\overline{\phi}\gamma^{\mu}\mathrm{D}^{\nu}\phi& =\phi^{\dagger}i\gamma^{0}\gamma^{\mu}\eta^{00}\partial_{0}\phi\\ &=im\phi^{\dagger}i\gamma^{0}\gamma^{\mu}\phi\\ &=-m\phi^{\dagger}\gamma^{0}\gamma^{0}\phi\\ &=m\left|\Phi\right|^{2}\\ \Rightarrow\overline{\phi}\gamma^{\mu}\mathrm{D}^{\nu}\phi& =m\rho_{0}U^{\mu}U^{\nu},\end{split} \tag{4.35}\]
where in the third line we have used (4.12). Thus for an on-shell matter field the Hilbert SEM tensor using (4.8) is
\[\begin{split} T_{\mathrm{DIRAC}-\phi}^{\mu\nu}&= \frac{1}{2}\mathrm{Re}\left(\overline{\phi}\gamma^{\mu}\mathrm{D}^{\nu}\phi+ \overline{\phi}\gamma^{\nu}\mathrm{D}^{\mu}\phi\right)+T_{\mathrm{EM}}^{\mu \nu}\\ &=m\rho_{0}U^{\mu}U^{\nu}+T_{\mathrm{EM}}^{\mu\nu},\end{split} \tag{4.36}\]
which includes the desired time-like dust term.
### From the QED to the MFEM Lagrangian
The gauge potential and matter field EOM we have detailed for the spinor component in the previous section may be extracted from the Lagrangian
\[\begin{split} L_{\mathrm{DIRAC}-\Phi}&=- \operatorname{Im}\left\langle\Phi,U^{\mu}\mathrm{D}_{\mu}\Phi\right\rangle_{ \mathbb{C}}-m\Phi^{*}\Phi-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}\\ &=-\left\|\vec{\Phi}\right\|^{2}\mathrm{D}_{U}^{\times}\vec{\Phi}- m\left\|\vec{\Phi}\right\|^{2}-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}\end{split} \tag{4.37}\]
where \(\Phi\) is now an arbitrary complex-valued field to be determined by the equations of motion, and we have also expressed the Lagrangian in terms of the real vector valued field \(\vec{\Phi}\). This Lagrangian, however, has an extra free variable: the time-like unit vector \(U\) at each point, which is parallel to the four-current. It is determined by the gauge potential EOM, but we would like to eliminate it to obtain a standard gauge theory Lagrangian.
We may eliminate \(U\) by taking advantage of the geometric observations of Section 2.4 and squaring the distinct parts of the dynamical and mass terms in the Lagrangian. This results in the MFEM Lagrangian:
\[L_{\mathrm{DIRAC}-\Phi}\xrightarrow{\text{squared}}L_{\mathrm{MFEM}}=-\mathrm{D}_{\mu}\Phi^{*}\mathrm{D}^{\mu}\Phi-m^{2}\Phi^{*}\Phi-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}. \tag{4.38}\]
If we write \(J=\rho_{0}U\), these EOM imply that
\[\left\|\vec{\Phi}\right\|^{2}=\frac{\rho_{0}}{2m}, \tag{5.5}\]
which allows us to write the on-shell Hilbert SEM tensor as
\[T^{\mu\nu}=\frac{1}{2}m\rho_{0}U^{\mu}U^{\nu}+F^{\mu\lambda}F^{\nu}{}_{\lambda} -\frac{1}{4}F^{\sigma\lambda}F_{\sigma\lambda}g^{\mu\nu}, \tag{5.6}\]
whereupon taking the divergence of both sides yields the equations of geodesic deviation
\[m\rho_{0}\left(\nabla_{U}U\right)^{\mu}=F^{\mu}{}_{\sigma}J^{\sigma}_{q}, \tag{5.7}\]
which is equivalent to the Lorentz force law.
One may define a QED quantum state configuration that corresponds to a classical continuous four-current: a smooth distribution of durable localized packets which follow the classical gauge potential EOM. A spinor-valued field whose EOM may be derived from
\[L_{\text{DIRAC}-\phi}\equiv-\text{Re}\left(\overline{\phi}\gamma^{\mu}U_{\mu} \text{D}_{U}\phi\right)-m\overline{\phi}\phi-\frac{1}{4}F_{\mu\nu}F^{\mu\nu} \tag{5.8}\]
is defined by smearing this state across spacetime; the complex-valued rest frame components of this spinor then satisfy the MFEM EOM, which may be derived from
\[L_{\text{DIRAC}-\Phi}=-\left(\mathrm{D}_{U}^{\times}\vec{\Phi}-m\right) \left\|\vec{\Phi}\right\|^{2}-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}. \tag{5.9}\]
This Lagrangian includes an extra free variable \(U\) which is determined by the gauge potential EOM; squaring each of the terms in brackets eliminates \(U\) and yields the MFEM Lagrangian.
### The discrete four-current
The four-current \(J=\rho_{0}U\) corresponding to classical matter continua is defined as a four-vector whose direction \(U\) at each point is tangent to the worldline of the continua and whose "length" \(\sqrt{-\left\langle J,J\right\rangle}=\rho_{0}\) is equal to the particle number rest density \(\rho_{0}\). The rest density is the worldline rest frame particle number per unit volume (i.e. the infinitesimal particle number per unit space-like hypersurface orthogonal to \(U\)), which like \(U\) is metric-dependent, so that \(J\) is as well, and these dependencies are such that the four-current density \(\mathfrak{J}\equiv J\sqrt{-\det\left(g_{\mu\nu}\right)}\) is independent of the metric.
The curious fact that the \(\rho_{0}\) defined in MFEM (as in Dirac and Klein-Gordon theory) is _not_ metric dependent is here explained by the QED approximation; \(\rho_{0}\) is describing a unit volume of matter that is discrete, not continuous. This means that while \(\rho_{0}\) depends upon the chosen units, it does not change with an infinitesimal change of the metric, since an infinitesimal change in the unit space-like hypersurface orthogonal to \(U\) does not alter the integral number of packets enclosed.
Note that the various expressions for the four-current
\[\overline{\psi}i\gamma^{\nu}\psi \rightarrow\overline{\phi}i\gamma^{\nu}\phi\] \[\rightarrow\Phi^{*}U^{\nu}\Phi\]
are all metric-dependent, since they involve the frame; moreover, their dependency remains consistent with discrete matter, since they compensate for altered time-like lengths under metric variations to keep \(\sqrt{-J^{\mu}J^{\nu}g_{\mu\nu}}=\rho_{0}\) constant. The transition to the MFEM Lagrangian results in a four-current which is a metric-independent 1-form, with \(J^{\mu}=g^{\mu\nu}J_{\nu}\) no longer transforming like a discrete four-current under metric variations. However, unlike the various expressions for the rest density
\[\psi^{\dagger}\psi \rightarrow\phi^{\dagger}\phi\] \[\rightarrow\Phi^{*}\Phi\]
which are metric-independent, the rest density \(-2\left\|\vec{\Phi}\right\|^{2}\mathrm{D}_{U}^{\times}\vec{\Phi}\) in MFEM from (3.18) is metric-dependent via the unit vector \(U\); when applying the matter field EOM in (3.20), we instead arrive at the on-shell four-current \(J^{\mu}=2\left|\Phi_{\mathrm{EL}}\right|^{2}mU^{\mu}\), which regains its consistency with discrete matter.
### Multiple matter fields
Our matter field configuration is associated with a single four-current defined on all of spacetime. Unlike in quantum theory, where there is a single field and interacting particles are identified as Fourier components of this field which may interfere, here we have a single continuum four-current with no explicit reference to constituent particles.
We may also consider multiple matter fields, each of which has its own terms in the Lagrangian and is associated with a separate four-current; each matter field is a separate section of the vector bundle, but all these sections are associated with the same principal bundle with a single parallel transport and curvature, and with the same spacetime base space with a single metric. These four-currents will gravitationally interact as relativistic dust via the spacetime metric, and also electromagnetically interact via the electromagnetic gauge potential according to the EOM \(\sum_{i}qJ_{i}^{\nu}=\nabla_{\mu}F^{\nu\mu}\).
Note that all matter fields are associated with the same charge per particle number since they share a single field strength term, but each may be associated with its own mass per particle number; thus the mass per charge of each matter field is arbitrary.
One may also define a QED quantum state configuration that corresponds to multiple classical continuous four-currents, as long as the packets remain discrete, localized, and smoothly distributed. This results in multiple spinor-valued fields whose multiple rest frame components correspond to multiple MFEM matter fields.
|
2309.10368 | Worst-Case and Smoothed Analysis of the Hartigan-Wong Method for k-Means
Clustering | We analyze the running time of the Hartigan-Wong method, an old algorithm for
the $k$-means clustering problem. First, we construct an instance on the line
on which the method can take $2^{\Omega(n)}$ steps to converge, demonstrating
that the Hartigan-Wong method has exponential worst-case running time even when
$k$-means is easy to solve. As this is in contrast to the empirical performance
of the algorithm, we also analyze the running time in the framework of smoothed
analysis. In particular, given an instance of $n$ points in $d$ dimensions, we
prove that the expected number of iterations needed for the Hartigan-Wong
method to terminate is bounded by $k^{12kd}\cdot poly(n, k, d, 1/\sigma)$ when
the points in the instance are perturbed by independent $d$-dimensional
Gaussian random variables of mean $0$ and standard deviation $\sigma$. | Bodo Manthey, Jesse van Rhijn | 2023-09-19T07:06:40Z | http://arxiv.org/abs/2309.10368v3 | # Worst-Case and Smoothed Analysis of Hartigan's Method for \(\mathbf{k}\)-Means Clustering
###### Abstract
We analyze the running time of Hartigan's method, an old algorithm for the \(k\)-means clustering problem. First, we construct an instance on the line on which the method can take \(2^{\Omega(n)}\) steps to converge, demonstrating that Hartigan's method has exponential worst-case running time even when \(k\)-means is easy to solve. As this is in contrast to the empirical performance of the algorithm, we also analyze the running time in the framework of smoothed analysis. In particular, given an instance of \(n\) points in \(d\) dimensions, we prove that the expected number of iterations needed for Hartigan's method to terminate is bounded by \(k^{12kd}\cdot\mathrm{poly}(n,k,d,1/\sigma)\) when the points in the instance are perturbed by independent \(d\)-dimensional Gaussian random variables of mean \(0\) and standard deviation \(\sigma\).
_Jesse van Rhijn_: Supported by NWO grant OCENW.KLEIN.176.
## 1 Introduction
Clustering is an important problem in computer science, from both a practical and a theoretical perspective. On the practical side, identifying clusters of similar points in large data sets has relevance to fields ranging from physics to biology to sociology. Recent advances in machine learning and big data have made the need for efficient clustering algorithms even more apparent. On the theoretical side, clustering problems continue to be a topic of research from the perspective of approximation algorithms, heuristics, and computational geometry.
Perhaps the best-studied clustering problem is that of \(k\)-means clustering. In this problem, one is given a finite set of points \(\mathcal{X}\subseteq\mathbb{R}^{d}\) and an integer \(k\). The goal is to partition the points into \(k\) subsets, such that the sum of squared distances of each point to the centroid of its assigned cluster, also called its cluster center, is minimized.
Despite great effort to devise approximation algorithms for \(k\)-means clustering, the method of choice remains Lloyd's method [11]. This method starts with an arbitrary choice of centers, and assigns each point to its closest center. The centers are then moved to the centroids of each cluster. On the next iteration, each point is again reassigned to its closest center, and the process repeats.
It is not hard to show that this process strictly decreases the objective function whenever either a cluster center changes position, or a point is reassigned. Hence, no clustering can show up twice during an execution of this algorithm. Since the number of partitions of \(n\) points into \(k\) sets is at most \(k^{n}\), the process must eventually terminate.
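For reference, a minimal sketch of Lloyd's method as just described (illustrative code, not from the paper):

```python
import numpy as np

def lloyd(points, centers, max_iter=100):
    """Assign each point to its nearest center, then move each center to the
    centroid of its cluster; repeat until the centers stop moving."""
    for _ in range(max_iter):
        # Assignment step: nearest center per point.
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Update step: centroids of the induced clusters.
        new_centers = np.array([
            points[labels == i].mean(axis=0) if np.any(labels == i) else centers[i]
            for i in range(len(centers))
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```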
Although Lloyd's method has poor approximation performance both in theory and in practice [2], its speed has kept it relevant to practitioners. This is in startling contrast to its worst-case running time, which is exponential in the number of points [15].
To close the gap between theory and practice, Arthur et al. have shown that Lloyd's method terminates in expected polynomial time on perturbed point sets, by means of a smoothed analysis [1]. This provides some theoretical justification for the use of Lloyd's method in practice.
Another, less well-known heuristic for clustering is Hartigan's method [8]. In this method, one proceeds point-by-point. Given an arbitrary clustering, one checks whether there exists a point that can be reassigned to a different cluster, such that the objective function decreases. If such a point exists, it is reassigned to this new cluster. If no such points exist, the algorithm terminates and the clustering is declared locally optimal.
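A minimal sketch of Hartigan's method as just described (illustrative code; it recomputes the objective by brute force, whereas Section 2 derives a closed-form expression for the gain of a move):

```python
import numpy as np

def phi(points, labels, k):
    """k-means objective: total squared distance to cluster centroids."""
    total = 0.0
    for i in range(k):
        C = points[labels == i]
        if len(C):
            total += np.sum((C - C.mean(axis=0)) ** 2)
    return total

def hartigan_step(points, labels, k):
    """Look for a single point whose reassignment strictly decreases the
    objective; perform the first such move found and return True, or
    return False if the clustering is locally optimal."""
    base = phi(points, labels, k)
    for idx in range(len(points)):
        i = labels[idx]
        if np.count_nonzero(labels == i) <= 1:
            continue  # emptying a cluster never improves (Lemma 5 below)
        for j in range(k):
            if j == i:
                continue
            labels[idx] = j
            if phi(points, labels, k) < base - 1e-12:
                return True      # keep the improving move
            labels[idx] = i      # revert
    return False

# Run to a local optimum:
# while hartigan_step(points, labels, k): pass
```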
Although at first sight Hartigan's method might seem like a simpler version of Lloyd's method, it is qualitatively different. If Lloyd's method reassigns a point \(x\) from cluster \(i\) to cluster \(j\), then \(x\) must be closer to the center of cluster \(j\) than to that of cluster \(i\). In Hartigan's method, this is not true; \(x\) may be reassigned even when there are no cluster centers closer to \(x\) than its current center. This can be beneficial, as Telgarsky & Vattani showed that Hartigan's method is more powerful than Lloyd's method [14].
To be precise, every local optimum of Hartigan's method is also a local optimum of Lloyd's method, while the converse does not hold. Telgarsky & Vattani moreover performed computational experiments, which show that Hartigan's method not only tends to find better clusterings than Lloyd's, but also has a similar running time on practical instances. Despite these promising results, theoretical knowledge of Hartigan's method is lacking.
In this paper, we aim to advance our understanding of this heuristic. Our contributions are twofold. First, we construct an instance on the line on which Hartigan's method can take \(2^{\Omega(n)}\) iterations to terminate. Considering that \(k\)-means clustering can be solved exactly in polynomial time in \(d=1\), this shows that the worst-case running time of Hartigan's method
is very poor even on easy instances. This is in contrast to Lloyd's method, where all known non-trivial lower bounds require \(d\geq 2\).
**Theorem 1**.: _For each \(m\in\mathbb{N}_{\geq 2}\) there exists an instance of \(k\)-means clustering on the line with \(n=4m-3\) points and \(k=2m-1\) clusters on which Hartigan's method can take \(2^{\Omega(n)}\) iterations to converge to a local optimum._
Second, we attempt to reconcile Theorem 1 with the observed practical performance of Hartigan's method. We perform a smoothed analysis of its running time, in which each point in an arbitrary instance is independently perturbed by a Gaussian random variable of variance \(\sigma^{2}\).

**Theorem 2**.: _Let \(n,k,d\in\mathbb{N}\), and assume \(4kd\leq n\). Fix a set of \(n\) points \(\mathcal{Y}\subseteq[0,1]^{d}\), and assume that each point in \(\mathcal{Y}\) is independently perturbed by a \(d\)-dimensional Gaussian random variable with mean \(0\) and standard deviation \(\sigma\), yielding a new set of points \(\mathcal{X}\). Then the expected running time of Hartigan's method on \(\mathcal{X}\) is bounded by_
\[O\bigg{(}\frac{k^{12kd+5}d^{11}n^{12.5+\frac{1}{2}}\ln^{4.5}(nkd)}{\sigma^{8}} \bigg{)}=k^{12kd}\cdot\mathrm{poly}(n,k,d,1/\sigma).\]
This is a first step to settling a conjecture by Telgarsky & Vattani that Hartigan's method, like Lloyd's, should have polynomial smoothed running time.
## 2 Preliminaries and Notation
Given vectors \(x,y\in\mathbb{R}^{d}\), we write \(\langle x,y\rangle\) for the standard Euclidean inner product on \(\mathbb{R}^{d}\), and \(\|x\|=\sqrt{\langle x,x\rangle}\) for the standard norm.
Given a set of \(k\) clusters \(\mathcal{C}=\{\mathcal{C}_{1},\ldots,\mathcal{C}_{k}\}\), a configuration of a cluster \(\mathcal{C}_{i}\in\mathcal{C}\) is an assignment of a set of points to \(\mathcal{C}_{i}\). We will denote the clusters by calligraphic letters, and their configurations by regular letters; i.e., the configuration of \(\mathcal{C}_{i}\) will be denoted \(C_{i}\). This distinction is sometimes useful. For the majority of this paper, however, we will not make this distinction explicitly, and will refer to both a cluster and its configuration interchangeably by regular letters.
Given a finite set of points \(S\subseteq\mathbb{R}^{d}\), we define the center of mass of \(S\) as
\[\mathrm{cm}(S)=\frac{1}{|S|}\sum_{x\in S}x.\]
With this definition, we can formally define the objective function of \(k\)-means. Let \(C=\{C_{i}\}_{i=1}^{k}\) be a partition of a finite set of points \(\mathcal{X}\subseteq\mathbb{R}^{d}\). Then the objective function of \(k\)-means is
\[\Phi(C)=\sum_{i=1}^{k}\sum_{x\in C_{i}}\|x-\mathrm{cm}(C_{i})\|^{2}=\sum_{i=1} ^{k}\Phi(C_{i}),\]
where we define \(\Phi(C_{i})=\sum_{x\in C_{i}}\|x-\mathrm{cm}(C_{i})\|^{2}\). We will also refer to \(\Phi(C)\) as the potential function.
For both the worst-case and smoothed complexity bounds, we need to analyze the improvement of a single iteration. Thus, we need a simple expression for this quantity. Lemmas 3 and 4 allow us to obtain such an expression. These results were already obtained by Telgarsky & Vattani [14].
**Lemma 3**.: _Let \(S\) and \(T\) be two disjoint nonempty sets of points in \(\mathbb{R}^{d}\). Then_
\[\Phi(S\cup T)-\Phi(S)-\Phi(T)=\frac{|S|\cdot|T|}{|S|+|T|}\cdot\|\operatorname{ cm}(S)-\operatorname{cm}(T)\|^{2}.\]
**Lemma 4**.: _Let \(S\) and \(T\) be two disjoint nonempty sets of points in \(\mathbb{R}^{d}\) with \(|S|>1\). Suppose we move a point \(x\in S\) from \(S\) to \(T\). Then_
\[\Phi(S\setminus\{x\})+\Phi(T\cup\{x\})-\Phi(T)-\Phi(S)=\frac{|T|}{|T|+1}\| \operatorname{cm}(T)-x\|^{2}-\frac{|S|}{|S|-1}\|\operatorname{cm}(S)-x\|^{2}.\]
Let \(C\) be some clustering of \(\mathcal{X}\). Suppose in some iteration of Hartigan's, we move \(x\in C_{i}\) to \(C_{j}\). Let the gain of this iteration be denoted \(\Delta_{x}(C_{i},C_{j})\). Then Lemma 4 tells us that
\[\Delta_{x}(C_{i},C_{j})=\frac{|C_{i}|}{|C_{i}|-1}\|x-\operatorname{cm}(C_{i}) \|^{2}-\frac{|C_{j}|}{|C_{j}|+1}\|x-\operatorname{cm}(C_{j})\|^{2}.\]
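The gain formula can be sanity-checked against a direct recomputation of the potential (an illustrative sketch with arbitrary random data):

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.normal(size=(6, 3))   # cluster C_i (x will leave it)
T = rng.normal(size=(4, 3))   # cluster C_j (x joins it)
x = S[0]

def phi(C):
    return np.sum((C - C.mean(axis=0)) ** 2)

# Closed-form gain of moving x from C_i to C_j (Lemma 4):
gain = (len(S) / (len(S) - 1)) * np.sum((x - S.mean(axis=0)) ** 2) \
     - (len(T) / (len(T) + 1)) * np.sum((x - T.mean(axis=0)) ** 2)

# Direct recomputation of the potential difference:
direct = phi(S) + phi(T) - phi(S[1:]) - phi(np.vstack([T, x]))
print(np.allclose(gain, direct))  # True
```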
At first sight, it seems like Lemma 4 leaves open the possibility that a cluster is left empty. The following lemma shows that this can never happen.
**Lemma 5**.: _No iteration can leave a cluster empty._
Proof.: Suppose before an iteration, \(C_{i}=\{x\}\) for some \(x\in\mathcal{X}\), and after the iteration \(C_{i}^{\prime}=\emptyset\) and \(C_{j}^{\prime}=C_{j}\cup\{x\}\), i.e. \(x\) is moved from cluster \(i\) to cluster \(j\). The gain of this iteration is then (Lemma 3)
\[\Phi(C_{i})+\Phi(C_{j})-\Phi(\emptyset)-\Phi(C_{j}\cup\{x\})=\Phi(C_{j})-\Phi( C_{j}\cup\{x\})=-\frac{|C_{j}|}{|C_{j}|+1}\|x-\operatorname{cm}(C_{j})\|^{2}\leq 0,\]
since \(\operatorname{cm}(C_{i})=x\) and \(\Phi(\emptyset)=0\). Since every iteration must improve the clustering, this concludes the proof.
## 3 Exponential Lower Bound
In this section, we construct a family of \(k\)-means instances on the line on which Hartigan's method can take an exponential number of iterations before reaching a local optimum. To be precise, we prove the following theorem.
**Theorem 1** (Restated).: _For each \(m\in\mathbb{N}_{\geq 2}\) there exists an instance of \(k\)-means clustering on the line with \(n=4m-3\) points and \(k=2m-1\) clusters on which Hartigan's method can take \(2^{\Omega(n)}\) iterations to converge to a local optimum._
The construction we employ is similar to the construction used by Vattani for Lloyd's method [15]. However, Hartigan's method only reassigns a single point in each iteration, and we are free to choose which point we reassign. Moreover, we are even free to choose which cluster we move a point to if there are multiple options. This allows us to simplify the construction and embed it in a single dimension, rather than the plane used by Vattani.
We define a set of \(m\) gadgets \(G_{i}\), \(i\in\{0,\dots,m-1\}\). Each gadget except for the "leaf" gadget \(G_{0}\) consists of four points, and has two clusters \(G_{i}(\mathcal{C}_{0})\) and \(G_{i}(\mathcal{C}_{1})\) associated with it. Moreover, each gadget except \(G_{0}\) has three distinguished states, called "morning", "afternoon", and "asleep". The leaf gadget only has two states, "awake" and "asleep".
During the morning state, a gadget \(G_{i}\) watches \(G_{i-1}\). If \(G_{i-1}\) falls asleep, then it is awoken by \(G_{i}\); this is achieved by moving a point of \(G_{i}\) to one of the clusters of \(G_{i-1}\). This
allows \(G_{i-1}\) to perform a sequence of iterations, which ends with \(G_{i-1}\) back in its morning state.
Meanwhile, \(G_{i}\) performs a sequence of iterations that transition it to its afternoon state. During the afternoon state, it once more watches \(G_{i-1}\). When the latter falls asleep, \(G_{i}\) once again wakes \(G_{i-1}\), and transitions itself to its asleep state.
The leaf gadget \(G_{0}\), as it does not watch any gadgets, only ever awakens and immediately falls asleep again.
We end the sequence of iterations once gadget \(m-1\) falls asleep. Observe that with this construction, \(G_{i}\) falls asleep twice as often as \(G_{i+1}\). With the condition that \(G_{m-1}\) falls asleep once, we obtain a sequence of at least \(2^{m-1}\) iterations. With \(n=4m-3\), this yields Theorem 1.
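This doubling can be illustrated with a short counting sketch (illustrative only; it is not part of the proof):

```python
def sleep_counts(m):
    """If G_{m-1} falls asleep once and each G_i falls asleep twice as often
    as G_{i+1}, then gadget G_i falls asleep 2^(m-1-i) times."""
    return [2 ** (m - 1 - i) for i in range(m)]

print(sleep_counts(5))       # [16, 8, 4, 2, 1]
print(sum(sleep_counts(5)))  # 31 >= 2^(5-1)
```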
For space reasons, we only describe the instance and the exponential-length sequence here. The proof that this sequence is improving, which completes the proof of Theorem 1, is deferred to the full version.
### Formal Construction
We now give a detailed construction of a unit gadget, \(G\). All gadgets except for \(G_{0}\) are scaled and translated versions of \(G\). The unit gadget is a tuple \(G=(S,\mathcal{C}_{0},\mathcal{C}_{1})\), where \(S=\{a,b,p,q\}\subseteq\mathbb{R}\), and \(\mathcal{C}_{0}\) and \(\mathcal{C}_{1}\) are two clusters. The positions of the points in \(S\) are given in Table 1. In addition, the gadget is depicted schematically in Figures 1 and 2. Note that the relative positions of the points in these figures do not correspond to Table 1, but are chosen for visual clarity.
We remark that the points in Table 1 are not simply chosen by trial-and-error. As will be explained shortly, we can obtain from our construction a series of inequalities that must be satisfied by the points in \(S\). We then obtained these points by solving the model
\[\min a^{2}+b^{2}+p^{2}+q^{2}+f^{2}+t_{0}^{2}\] s.t. each move decreases the clustering cost, \[a,b,p,q,f,t_{0}\in\mathbb{Z}\]
using Gurobi [7]. The first constraint here amounts to satisfying a series of inequalities of the form \(\Delta_{x}(A,B)>0\) for \(x\in S(G)\) and \(A,B\) subsets of the points in a gadget and its neighboring gadgets. For space reasons, we defer their derivation and verification to the full version. The objective function here is purely chosen so that Gurobi prefers to choose small integers in the solution.
To construct \(G_{i}\) from the unit gadget (for \(i\geq 1\)), we scale the unit gadget by a factor \(5^{i-1}\), and translate it by \(t_{i}=\sum_{j=0}^{i-1}5^{j}t_{0}\), where \(t_{0}=8\). Since each gadget only ever exchanges points with its neighbors in the sequence we are about to construct, it will suffice in proving Theorem 1 to consider only iterations involving \(G_{i}\), \(G_{i-1}\) and \(G_{i+1}\) for some fixed \(i>2\). For the leaf gadget, we simply have \(G_{0}=(S_{0},\mathcal{C}_{0})\), where \(S_{0}=\{f\}=\{0\}\).
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} Point & \(a\) & \(b\) & \(p\) & \(q\) & \(f\) & \(t_{0}\) \\ \hline Position & 9 & 6 & 5 & 13 & 0 & 8 \\ \hline \end{tabular}
\end{table}
Table 1: Positions of the points in \(S(G)\), the leaf point \(f\), and the translation vector \(t_{0}\) between gadgets \(G_{1}\) and \(G_{2}\).
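Taking the scaling and translation rule above literally, the point positions of each gadget can be generated as follows (an illustrative helper, not the linked implementation):

```python
# Unit gadget positions from Table 1; G_i (i >= 1) is the unit gadget scaled
# by 5^(i-1) and translated by t_i = sum_{j=0}^{i-1} 5^j * t_0 with t_0 = 8.
UNIT = {"a": 9, "b": 6, "p": 5, "q": 13}

def gadget_points(i, t0=8):
    scale = 5 ** (i - 1)
    shift = sum(5 ** j for j in range(i)) * t0
    return {name: scale * pos + shift for name, pos in UNIT.items()}

print(gadget_points(1))  # {'a': 17, 'b': 14, 'p': 13, 'q': 21}
print(gadget_points(2))  # {'a': 93, 'b': 78, 'p': 73, 'q': 113}
```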
Before we go on to construct an improving sequence of exponential length, we define the earlier-mentioned states. For ease of notation, we will refer to the points of \(G_{i}\) as \(a_{i}\), \(b_{i}\), and so on, and to the clusters of \(G_{i}\) as \(\mathcal{C}_{0}(G_{i})\) and \(\mathcal{C}_{1}(G_{i})\). Then we say the state of \(G_{i>0}\) is:
* asleep, if \(C_{0}(G_{i})=\{b_{i}\}\) and \(C_{1}(G_{i})=\{a_{i},q_{i}\}\) (in this state, \(p_{i}\) is in some cluster of \(G_{i-1}\));
* morning, if \(C_{0}(G_{i})=\{p_{i},q_{i},b_{i}\}\) and \(C_{1}(G_{i})=\{a_{i}\}\);
* afternoon, if \(C_{0}(G_{i})=\{b_{i}\}\) and \(C_{1}(G_{i})=\{p_{i},q_{i},a_{i}\}\).
For the leaf gadget, we say its state is:
* asleep, if \(C_{0}(G_{0})=\{f\}\);
* awake, otherwise.
We now explicitly determine a sequence of iterations of exponential length. In the proof of Theorem 1, we show that this sequence is improving. To analyze the sequence, we consider the perspective of \(G_{i}\) as it wakes up \(G_{i-1}\) and falls asleep; and then as it is awoken by \(G_{i+1}\). We first consider only the case that \(G_{i-1}\neq G_{0}\). See Figure 1 and Figure 2 for a schematic depiction of the sequence described below.
#### Morning.
We start with \(G_{i}\) in the morning state, and \(G_{i-1}\) asleep. To wake up \(G_{i-1}\), the point \(p_{i}\) moves to \(\mathcal{C}_{1}(G_{i-1})\), which currently contains \(a_{i-1}\) and \(q_{i-1}\). This triggers the wakeup phase of \(G_{i-1}\); we will analyze this phase later from the perspective of \(G_{i}\). When the wakeup phase completes, \(\mathcal{C}_{1}(G_{i-1})\) contains \(a_{i-1}\) and \(p_{i}\), and \(p_{i}\) moves to \(\mathcal{C}_{1}(G_{i})\). Subsequently \(q_{i}\) moves from \(\mathcal{C}_{0}(G_{i})\) to \(\mathcal{C}_{1}(G_{i})\). Observe that this puts \(G_{i}\) into the afternoon state.
#### Afternoon.
In this state, \(G_{i}\) is once again watching \(G_{i-1}\). Once the latter falls asleep, \(p_{i}\) moves from \(\mathcal{C}_{1}(G_{i})\) to \(\mathcal{C}_{1}(G_{i-1})\), which triggers another wakeup phase of \(G_{i-1}\). Additionally, this move causes \(G_{i}\) to fall asleep. Thus, at the end of the wakeup phase of \(G_{i-1}\), we have \(G_{i+1}\) wake up \(G_{i}\).
Figure 1: Schematic depiction of the interactions between \(G_{i}\) and \(G_{i-1}\) during the morning and afternoon phases of \(G_{i}\).
#### Waking up.
First, the point \(p_{i+1}\) joins \(\mathcal{C}_{1}(G_{i})\). Next, \(p_{i}\) moves from \(\mathcal{C}_{1}(G_{i-1})\) to \(\mathcal{C}_{0}(G_{i})\). Then, \(q_{i}\) moves from \(\mathcal{C}_{1}(G_{i})\) to \(\mathcal{C}_{0}(G_{i})\), and finally, \(p_{i+1}\) leaves \(\mathcal{C}_{1}(G_{i})\), and joins either \(\mathcal{C}_{1}(G_{i+1})\) (if \(G_{i+1}\) was in the morning state when waking up \(G_{i}\)) or \(\mathcal{C}_{0}(G_{i+1})\) (if \(G_{i+1}\) was in the afternoon state; in this case, the move of \(p_{i+1}\) occurs during the wakeup phase of \(G_{i+1}\)).
#### Leaf gadget.
The leaf gadget does not watch or wake up any other gadgets. It only wakes up when \(p_{1}\) moves into \(\mathcal{C}_{0}(G_{0})\), and falls asleep again when \(p_{1}\) moves back to a cluster of \(G_{1}\).
#### Initialization.
The sequence starts with all gadgets in the morning state, except for \(G_{0}\), which is asleep.
At every step, we have the gadget with the smallest index that is not asleep wake up the gadget that it is watching. From this sequence of iterations, we can retrieve a series of inequalities, each of which encodes the condition that the gain of every iteration must be positive. To prove Theorem 1, we must show that the points in Table 1 satisfy these inequalities.
An implementation of the sequence described above is provided in the following link: [https://pastebin.com/raw/RhZyut4X](https://pastebin.com/raw/RhZyut4X).
## 4 Smoothed Analysis
For a smoothed analysis, the first hope might be to straightforwardly adapt a smoothed analysis of Lloyd's algorithm, e.g. that of Arthur, Manthey and Röglin [1]. On closer inspection, however, such analyses strongly rely on a couple of properties of Lloyd's method that are not valid in Hartigan's.
First, in Lloyd's algorithm the hyperplane that bisects two cluster centers also separates their corresponding clusters, since every point is always assigned to the cluster center closest
Figure 2: Schematic depiction of the interactions between \(G_{i}\), \(G_{i-1}\) and \(G_{i+1}\) during the wakeup phase of \(G_{i}\). Note that the final state of \(G_{i}\) corresponds to the first state depicted in Figure 1.
to itself. Second, the two stages of Lloyd's algorithm, moving the cluster centers and reassigning points, both decrease the potential. Neither of these properties is satisfied by iterations of Hartigan's method. Hence, any analysis that relies on either property cannot be easily repurposed.
Instead, we will use a different technique, more closely related to the analysis of the Flip heuristic for Max-Cut with squared Euclidean distances by Etscheid and Röglin [6]. The main result we will work towards in this section is stated in Theorem 2.
### Technical Preliminaries
Let \(\mathcal{Y}\subseteq[0,1]^{d}\) be a set of \(n\) points. Throughout the remainder, we will denote by \(\mathcal{X}\) the set of points obtained by perturbing each point in \(\mathcal{Y}\) independently by a \(d\)-dimensional Gaussian vector of mean \(0\) and standard deviation \(\sigma\leq 1\). Note that this last assumption is not actually a restriction. If \(\sigma>1\), we scale down the set \(\mathcal{Y}\) so that \(\mathcal{Y}\subseteq[0,1/\sigma]^{d}\), and subsequently perturb the points by Gaussian variables with \(\sigma=1\). Since the number of iterations required to terminate is invariant under scaling of the input point set, this is equivalent to the original instance.
Our analysis is based on the standard technique of proving that it is unlikely that a sequence of iterations decreases the potential function by a small amount. For this technique to work, we additionally require the potential function to be bounded from above and from below with sufficiently high probability. Since it is obvious that the potential is non-negative for any clustering, it is enough to guarantee that the perturbed point set \(\mathcal{X}\) lies within the hypercube \([-D/2,D/2]^{d}\) for some finite \(D\). To that end, we have the following lemma.
**Lemma 6**.: _Let \(D=\sqrt{2n\ln(nkd)}\). Then \(\mathbb{P}(\mathcal{X}\nsubseteq[-D/2,D/2]^{d})\leq k^{-n}\)._
Similar results to Lemma 6 can be found in previous works on the smoothed analysis of algorithms on Gaussian-perturbed point sets [13, 1]. The only difference in our version is the value of \(D\). Hence, we omit the proof.
Lemma 6 allows us to assume that all points lie within \([-D/2,D/2]^{d}\) after the perturbation. Formally, we must take into account the failure event that any point lies outside this hypercube. However, since the probability of this event is at most \(k^{-n}\), this adds only a negligible \(+1\) to the smoothed complexity bound which we prove in Theorem 2. We therefore ignore the failure event in the sequel.
We need to show that we can approximate the gain of an iteration if we have a good approximation to the cluster centers. Recall that \(\Delta_{x}(C_{i},C_{j})\) is the gain of moving a point \(x\) from \(C_{i}\) to \(C_{j}\). Since we wish to use approximations to the centers of \(C_{i}\) and \(C_{j}\), it is convenient to define the variable
\[\Delta_{x}^{|C_{i}|,|C_{j}|}(a,b)=\frac{|C_{i}|}{|C_{i}|-1}\|x-a\|^{2}-\frac{ |C_{j}|}{|C_{j}|+1}\|x-b\|^{2}.\]
This variable is the gain that would be incurred if the centers of \(C_{i}\) and \(C_{j}\), with fixed sizes \(|C_{i}|\) and \(|C_{j}|\), were \(a\) and \(b\). Indeed, note that \(\Delta_{x}^{|C_{i}|,|C_{j}|}(\operatorname{cm}(C_{i}),\operatorname{cm}(C_{j} ))=\Delta_{x}(C_{i},C_{j})\). When their intended values are clear from context, we will often omit the superscripts \(|C_{i}|\) and \(|C_{j}|\) from \(\Delta_{x}^{|C_{i}|,|C_{j}|}(a,b)\).
### Approximating Iterations
Before we begin with the analysis proper, we provide a rough outline of our analysis. Suppose we tile the hypercube \([-D/2,D/2]^{d}\) with a rectangular grid of spacing \(\epsilon\). Then any point
in \([-D/2,D/2]^{d}\) is at a distance of at most \(\sqrt{d}\epsilon\) from some grid point. Since we need the positions of the cluster centers \(c_{i}=\operatorname{cm}(C_{i})\) for \(i\in[k]\), we guess \(k\) grid points \(c^{\prime}_{i}\) for their positions. If we guess correctly, meaning \(c^{\prime}_{i}\) is the grid point closest to \(c_{i}\) for each \(i\in[k]\), then we can approximate the gain \(\Delta\) of an iteration by replacing the cluster centers with these grid points in the formula for \(\Delta\) (Lemma 7).
The price for this approximation is a union bound over all choices of the grid points. However, we can compensate for this by noticing that, when we move a point between clusters, we know exactly how the cluster centers move. Thus, if the guessed grid points are good approximations, we can obtain new good approximations by moving them the same amount. Thus, we only need to guess once, and can use this guess for a sequence of iterations. Then we can bound the probability that all iterations in this sequence yield a small improvement.
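As a small illustration of the guessing step (a sketch under the stated grid construction; `nearest_grid_point` is our name, not the paper's):

```python
import numpy as np

def nearest_grid_point(x, eps, D):
    """Snap x in [-D/2, D/2]^d to the nearest point of a rectangular grid of
    spacing eps; coordinate-wise rounding gives error at most eps/2 per axis."""
    return np.clip(np.round(x / eps) * eps, -D / 2, D / 2)

d, D, eps = 3, 10.0, 0.01
rng = np.random.default_rng(1)
x = rng.uniform(-D / 2, D / 2, size=d)
g = nearest_grid_point(x, eps, D)
print(np.linalg.norm(x - g) <= np.sqrt(d) * eps)  # True
```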
**Lemma 7**.: _Suppose the point \(x\) moves from cluster \(i\) to cluster \(j\). Let \(C_{i}\) and \(C_{j}\) denote the configurations of these clusters before this move, and let \(c_{i}=\operatorname{cm}(C_{i})\) and \(c_{j}=\operatorname{cm}(C_{j})\). Let \(c^{\prime}_{i}\) and \(c^{\prime}_{j}\) be two points such that \(\|c_{i}-c^{\prime}_{i}\|,\|c_{j}-c^{\prime}_{j}\|\leq\epsilon\) for some \(0\leq\epsilon\leq\sqrt{d}D\). Then_
\[|\Delta_{x}(C_{i},C_{j})-\Delta_{x}(c^{\prime}_{i},c^{\prime}_{j})|\leq 9 \sqrt{d}D\epsilon.\]
_In particular, \(\Delta_{x}(C_{i},C_{j})\in(0,\epsilon]\) implies \(|\Delta_{x}(c^{\prime}_{i},c^{\prime}_{j})|\leq 10\sqrt{d}D\epsilon\)._
Proof.: Observe that
\[\|x-c_{i}\|^{2}=\|x-c^{\prime}_{i}+c^{\prime}_{i}-c_{i}\|^{2}=\|x-c^{\prime}_{ i}\|^{2}+\|c_{i}-c^{\prime}_{i}\|^{2}+2\langle c^{\prime}_{i}-c_{i},x\rangle.\]
Thus,
\[\Delta_{x}(C_{i},C_{j})=\Delta_{x}(c^{\prime}_{i},c^{\prime}_{j}) +\frac{|C_{i}|}{|C_{i}|-1}\big{(}\|c_{i}-c^{\prime}_{i}\|^{2}+2\langle c^{ \prime}_{i}-c_{i},x\rangle\big{)}\\ -\frac{|C_{j}|}{|C_{j}|+1}\big{(}\|c_{j}-c^{\prime}_{j}\|^{2}+2 \langle c^{\prime}_{j}-c_{j},x\rangle\big{)}.\]
By the Cauchy-Schwarz inequality, \(|\langle c^{\prime}_{i}-c_{i},x\rangle|\leq\epsilon\cdot\|x\|\). Moreover, since \(x\in[-D/2,D/2]^{d}\), we have \(\|x\|\leq\sqrt{d}D\). Similarly, \(\|c^{\prime}_{i}-c_{i}\|\leq\epsilon\leq\sqrt{d}D\) by assumption.
Moving \(\Delta_{x}(c^{\prime}_{i},c^{\prime}_{j})\) to the left and taking an absolute value, we then obtain
\[|\Delta_{x}(C_{i},C_{j})-\Delta_{x}(c^{\prime}_{i},c^{\prime}_{j})|\leq\left( \frac{|C_{i}|}{|C_{i}|-1}+\frac{|C_{j}|}{|C_{j}|+1}\right)\cdot 3\sqrt{d}D\epsilon.\]
To finish the proof, observe that by Lemma 5 the first term inside the parentheses is at most 2, while the second term is bounded by 1. We then have that \(\Delta_{x}(C_{i},C_{j})\in(0,\epsilon]\) implies \(\Delta_{x}(c^{\prime}_{i},c^{\prime}_{j})\in(-9\sqrt{d}D\epsilon,(9\sqrt{d}D +1)\epsilon]\), which yields the lemma.
In the following, we fix a set \(A\subseteq\mathcal{X}\) of active points which will move during a sequence of Hartigan's method. We also fix the configuration of the active points, the sizes of the clusters \(|C_{i}|\) for \(i\in[k]\), and the order \(\pi:A\to[|A|]\) in which the points move. Observe that these data also fix the sizes of the clusters whenever a new point moves.
While performing a sequence of iterations, the cluster centers move. Hence, even if we have a good approximation to a cluster center, it may not remain a good approximation after the iteration. However, if we know which points are gained and lost by each cluster, then we can compute new good approximations to the cluster centers from the old approximations. The following lemma captures this intuition.
**Lemma 8**.: _Let \(t_{1}\), \(t_{2}\) be two iterations of Hartigan's method in a sequence in which the points \(A\subseteq\mathcal{X}\) move, with \(t_{1}<t_{2}\). Suppose between \(t_{1}\) and \(t_{2}\), cluster \(i\) loses the points \(S_{-}\) and gains the points \(S_{+}\). Let \(c_{i}(t)\) denote the cluster center of cluster \(i\) before \(t\) takes place, and let \(C_{i}^{t}\) denote its configuration before \(t\). Let \(c_{i}^{\prime}(t_{1})\in\mathbb{R}^{d}\), and \(c_{i}^{\prime}(t_{2})=\frac{|C_{i}^{t_{1}}|}{|C_{i}^{t_{2}}|}c_{i}^{\prime}(t_ {1})+\frac{1}{|C_{i}^{t_{2}}|}\Big{(}{\sum_{x\in S_{+}}x-\sum_{x\in S_{-}}x} \Big{)}\). Then_
\[\|c_{i}^{\prime}(t_{2})-c_{i}(t_{2})\|=\frac{|C_{i}^{t_{1}}|}{|C_{i}^{t_{2}}|} \cdot\|c_{i}^{\prime}(t_{1})-c_{i}(t_{1})\|.\]
_Moreover, if \(\|c_{i}^{\prime}(0)-c_{i}(0)\|\leq\epsilon\), then \(\|c_{i}^{\prime}(t_{j})-c_{i}(t_{j})\|\leq 2|A|\epsilon\) for all \(j\in[|A|]\)._
Proof.: Since the center of a cluster is defined as its center of mass, we can write
\[|C_{i}^{t_{2}}|\operatorname{cm}(C_{i}^{t_{2}})=\sum_{x\in C_{i}^{t_{1}}\cup S _{+}\setminus S_{-}}x=|C_{i}^{t_{1}}|\operatorname{cm}(C_{i}^{t_{1}})+\sum_{x \in S_{+}}x-\sum_{x\in S_{-}}x.\]
Thus,
\[|C_{i}^{t_{2}}|c_{i}(t_{2})=|C_{i}^{t_{1}}|c_{i}(t_{1})+\sum_{x\in S_{+}}x- \sum_{x\in S_{-}}x.\]
Observe then that
\[\|c_{i}^{\prime}(t_{2})-c_{i}(t_{2})\|=\frac{|C_{i}^{t_{1}}|}{|C_{i}^{t_{2}}|} \cdot\|c_{i}^{\prime}(t_{1})-c_{i}(t_{1})\|.\]
This proves the first claim. To prove the second claim, we obtain by telescoping
\[\|c_{i}(t_{j})-c_{i}^{\prime}(t_{j})\|=\frac{|C_{i}^{0}|}{|C_{i}^{t_{j}}|} \cdot\|c_{i}(0)-c_{i}^{\prime}(0)\|\leq(|A|+1)\epsilon\leq 2|A|\epsilon,\]
since at most \(|A|\) points are active during any subsequence.
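The update rule and the norm identity of Lemma 8 can be checked numerically (an illustrative sketch with arbitrary data):

```python
import numpy as np

rng = np.random.default_rng(2)
C1 = rng.normal(size=(8, 2))                 # cluster i at time t1
lost, gained = C1[:2], rng.normal(size=(3, 2))
C2 = np.vstack([C1[2:], gained])             # cluster i at time t2
c1, c2 = C1.mean(axis=0), C2.mean(axis=0)

c1_approx = c1 + 0.01 * rng.normal(size=2)   # some approximation of c1
# Update rule of Lemma 8:
c2_approx = (len(C1) / len(C2)) * c1_approx \
    + (gained.sum(axis=0) - lost.sum(axis=0)) / len(C2)

lhs = np.linalg.norm(c2_approx - c2)
rhs = (len(C1) / len(C2)) * np.linalg.norm(c1_approx - c1)
print(np.allclose(lhs, rhs))  # True
```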
### Analyzing Sequences
We now know that we can closely approximate the gain of a sequence of iterations, provided that we have good approximations to the cluster centers at the start of the sequence. The next step is then to show that there is only a small probability that such an approximate sequence improves the potential by a small amount. For that, we first require the following technical lemma.
**Lemma 9**.: _Let \(X\) be a \(d\)-dimensional Gaussian random variable with arbitrary mean and standard deviation \(\sigma\leq 1\), and let \(Z=a\|X\|^{2}+\langle v,X\rangle\) for fixed \(a\in\mathbb{R}\setminus\{0\}\) and \(v\in\mathbb{R}^{d}\). Then the probability that \(Z\) falls in an interval of size \(\epsilon\leq 1\) is bounded from above by \(O\Big{(}\frac{1}{|a|\sigma^{2}}\sqrt{\frac{\epsilon}{d}}\Big{)}\)._
Proof.: Let \(Z_{i}=aX_{i}^{2}+v_{i}X_{i}\), so that \(Z=\sum_{i=1}^{d}Z_{i}\). We define the auxiliary variable \(\bar{Z}_{i}=Z_{i}+v_{i}^{2}/(4a)\). Since \(a\) and \(v\) are fixed, the densities of \(Z\) and \(\bar{Z}\) are identical up to translation, and so we can analyze \(\bar{Z}\) instead. Observe that \(\bar{Z}_{i}/a=\big{(}X_{i}+\frac{v_{i}}{2a}\big{)}^{2}\). Thus, \(\bar{Z}/a\) is equal in distribution to \(\|Y\|^{2}\), where \(Y\) is a \(d\)-dimensional Gaussian variable with mean \(\mu+v/(2a)\), with \(\mu\) the mean of \(X\), and variance \(\sigma^{2}\). We see then that \(\bar{Z}/a\) has the density of a non-central chi-squared distribution.
For \(\lambda\geq 0\), denote by \(f(x,\lambda,d)\) the non-central \(d\)-dimensional chi-squared density with non-centrality parameter \(\lambda\) and standard deviation \(\sigma\). Then [10]
\[f(x,\lambda,d)=\sum_{i=0}^{\infty}\frac{e^{-\lambda/2}(\lambda/2)^{i}}{i!}f(x,0, d+2i).\]
Now observe that \(f(x,0,d)\) is bounded from above by \(O(1/(\sqrt{d}\sigma^{2}))\) for \(d\geq 2\). Adding in the scaling factor of \(1/|a|\) then yields the lemma for \(d\geq 2\), since \(\epsilon\leq\sqrt{\epsilon}\) for \(\epsilon\leq 1\).
For \(d=1\), we have
\[f(x,0,1)=\frac{1}{\sqrt{2\pi\sigma^{2}}}\cdot\frac{e^{-\frac{x}{2\sigma^{2}}}} {\sqrt{x/\sigma^{2}}}.\]
Let \(I\) be an interval of size \(\epsilon\). Then
\[\mathbb{P}(\|Y\|^{2}\in I)=\int_{I}f(x,\lambda,1)\,\mathrm{d}x\leq\sum_{i=1}^ {\infty}\frac{e^{-\lambda/2}(\lambda/2)^{i}}{i!}\int_{I}f(x,0,1+2i)\,\mathrm{ d}x+\int_{I}f(x,0,1)\,\mathrm{d}x.\]
The first term is bounded by \(O(\epsilon/\sigma^{2})\) by the same argument we used for \(d\geq 2\). For the second term, we use the expression for \(f(x,0,1)\) above to bound the integral as
\[\int_{I}f(x,0,1)\,\mathrm{d}x\leq\frac{1}{\sqrt{2\pi\sigma^{2}}}\int_{0}^{ \epsilon}\frac{e^{-\frac{x}{2\sigma^{2}}}}{\sqrt{x/\sigma^{2}}}\,\mathrm{d}x=O( \sqrt{\epsilon}/\sigma)=O(\sqrt{\epsilon}/\sigma^{2}),\]
where the final bound follows since \(\sigma\leq 1\). This proves the lemma for \(d=1\) when we again add in the scaling factor \(1/|a|\).
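A quick Monte Carlo illustration of Lemma 9 (not a proof; all parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
d, sigma, a, eps = 4, 0.5, 2.0, 0.01
mu, v = rng.normal(size=d), rng.normal(size=d)

X = mu + sigma * rng.normal(size=(10**6, d))
Z = a * np.sum(X**2, axis=1) + X @ v

z0 = np.median(Z)  # place the eps-interval where Z is dense
p_hat = np.mean((Z >= z0) & (Z <= z0 + eps))
bound_scale = np.sqrt(eps / d) / (abs(a) * sigma**2)
print(p_hat, bound_scale)  # the estimate should be well below the lemma's scale
```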
With Lemma 9, we can show that a single fixed approximate iteration is unlikely to yield a small improvement.

**Lemma 10**.: _Let \(a,b\in\mathbb{R}^{d}\) be fixed. Let \(\Delta_{x}(a,b)\) be the improvement of the first move of \(x\) in the sequence, if the cluster centers in this iteration are located at \(a\) and \(b\). Let \(I\) be an interval of size \(\epsilon\leq 1\). Then_
\[\mathbb{P}(\Delta_{x}(a,b)\in I)=O\bigg{(}\frac{n}{\sigma^{2}}\sqrt{\frac{ \epsilon}{d}}\bigg{)}.\]
Proof.: By Lemma 4, we have
\[\Delta_{x}(a,b)=\frac{|C_{i}|}{|C_{i}|-1}\|x-a\|^{2}-\frac{|C_{j} |}{|C_{j}|+1}\|x-b\|^{2}\\ =\bigg{(}\frac{|C_{i}|}{|C_{i}|-1}-\frac{|C_{j}|}{|C_{j}|+1} \bigg{)}\|x\|^{2}+\bigg{\langle}2\bigg{(}\frac{|C_{j}|}{|C_{j}|+1}b-\frac{|C_ {i}|}{|C_{i}|-1}a\bigg{)},x\bigg{\rangle}\\ +\frac{|C_{i}|}{|C_{i}|-1}\|a\|^{2}-\frac{|C_{j}|}{|C_{j}|+1}\|b \|^{2},\]
where \(|C_{i}|\) and \(|C_{j}|\) denote the sizes of clusters \(i\) and \(j\) before the iteration, and we assume \(x\) moves from cluster \(i\) to cluster \(j\).
Since the sizes of the clusters as well as \(a\) and \(b\) are fixed, the last term in the above is fixed, and hence we may disregard it when analyzing \(\mathbb{P}(\Delta_{x}(a,b)\in I)\). Since \(x\) is a Gaussian random variable, we can apply Lemma 9 to find
\[\mathbb{P}(\Delta_{x}(a,b)\in I)=O\Bigg{(}\bigg{(}\frac{|C_{i}|}{|C_{i}|-1}- \frac{|C_{j}|}{|C_{j}|+1}\bigg{)}^{-1}\cdot\frac{1}{\sigma^{2}}\cdot\sqrt{ \frac{\epsilon}{d}}\Bigg{)}.\]
It remains to bound the quantity in the inner brackets from below. Since each cluster is bounded in size by \(n\), we have
\[\frac{|C_{i}|}{|C_{i}|-1}-\frac{|C_{j}|}{|C_{j}|+1}\geq\frac{n}{n-1}-\frac{n}{n+ 1}=\frac{2n}{(n-1)(n+1)}\geq\frac{1}{n},\]
and we are done.
As stated at the start of the analysis, analyzing a single iteration is not enough to prove Theorem 2. The following lemma extends Lemma 10 to a sequence of iterations, given a fixed point set \(A\subseteq\mathcal{X}\) that moves in the sequence.

**Lemma 11**.: _Fix an active set \(A\) and starting cluster sizes \(|C_{i}|\) for \(i\in[k]\). Moreover, fix an order \(\pi:A\to[|A|]\) in which the points in \(A\) move, i.e., \(\pi(x)<\pi(y)\) means \(x\) moves for the first time before \(y\) moves for the first time. Let \(\Delta\) denote the minimum improvement of a sequence satisfying these hypotheses over all possible configurations of \(\mathcal{X}\setminus A\). Then for \(\epsilon\leq 1\),_
\[\mathbb{P}(\Delta\leq\epsilon)\leq\left(\frac{2D}{\epsilon}\right)^{kd}\cdot \left(\frac{O(1)\cdot k^{|A|}\cdot\sqrt{d}Dn|A|\sqrt{\epsilon}}{\sigma^{2}} \right)^{|A|}.\]
Proof.: For \(x\in A\), let \(\Delta_{x}\) denote the improvement of the first move of \(x\in A\). We label the points in \(A\) as \((x_{1},\ldots,x_{|A|})\) according to \(\pi\). Let \(\Delta=(\Delta_{i})_{i=1}^{|A|}\).
To compute the vector \(\Delta\), we would need to know the configuration and positions of the points \(P=\mathcal{X}\setminus A\), since these are required to compute the \(k\) cluster centers. However, if we had approximations to the cluster centers in every iteration corresponding to the entries of \(\Delta\), then we could compute an approximation to \(\Delta\) by Lemma 7.
Since the cluster centers are convex combinations of points in \([-D/2,D/2]^{d}\), we know that the cluster centers at the start of \(S\) must also lie in \([-D/2,D/2]^{d}\). Thus, there exist grid points \(c^{\prime}_{i}\) (\(i\in[k]\)) within a distance \(\sqrt{d}\epsilon\) of the initial cluster centers.
Knowing these grid points, we would like to apply Lemma 8 in order to update the approximate cluster centers whenever a new point moves. We then need to know the points gained and lost by each cluster between first moves of each \(x\in A\). Observe that to obtain this information, it suffices to know the configuration of the active points before the first move of each \(x\in A\). Thus, we fix these configurations.
We collect the gain of each first move of a point in \(A\), where we replace the cluster centers by these approximations, into a vector \(\Delta^{\prime}\). By the reasoning above and by Lemma 7, if there exist initial cluster centers \(c_{i}\) (\(i\in[k]\)) such that \(\Delta_{x}\in(0,\epsilon]\) for all \(x\in A\), then there exist grid points \(c^{\prime}_{i}\), such that \(|\Delta^{\prime}_{x}|\leq 20|A|dD\epsilon\) for all \(x\in A\).
By this reasoning, it suffices to obtain a bound on \(\mathbb{P}(\bigcap_{x\in A}|\Delta^{\prime}_{x}|\leq 20|A|dD\epsilon)\). We can then take a union bound over these events for all \((D/\epsilon+1)^{kd}\leq(2D/\epsilon)^{kd}\) choices of \(c^{\prime}_{i}\) for \(i\in[k]\), and a union bound over the configuration of \(A\) before the first move of each \(x\in A\).
To show that \(\mathbb{P}(\bigcap_{x\in A}|\Delta^{\prime}_{x}|\leq 20|A|dD\epsilon)\) is bounded as desired, we consider the following algorithm; a schematic code rendition is given after the listing.
1. Set \(t=1\).
2. Reveal \(x_{t}\), and compute \(\Delta_{x_{t}}(c^{\prime}_{i_{t}},c^{\prime}_{j_{t}})\), where \(x_{t}\) moves from \(C_{i_{t}}\) to \(C_{j_{t}}\).
3. If \(|\Delta_{x_{t}}(c^{\prime}_{i_{t}},c^{\prime}_{j_{t}})|>20|A|dD\epsilon\), then return GOOD and halt.
4. If \(t=|A|\), return BAD.
5. Update the positions of the approximate cluster centers using Lemma 8.
6. Continue executing moves in the sequence until we encounter the first move of \(x_{t+1}\). Observe that the information we fixed before executing this algorithm suffices to compute approximations to the cluster centers whenever a new point moves.
7. Set \(t\gets t+1\) and go to step 2.
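A schematic rendition of the loop above (illustrative names only; the per-step centers are assumed to be maintained via the update rule of Lemma 8):

```python
import numpy as np

def reveal_and_test(moves, threshold):
    """moves: one tuple per first move of an active point, holding the point x,
    the approximate centers of its source/target clusters, and the two cluster
    sizes at that moment. Returns GOOD as soon as one approximate gain is large."""
    for x, c_src, c_tgt, n_src, n_tgt in moves:
        gain = (n_src / (n_src - 1)) * float(np.sum((x - c_src) ** 2)) \
            - (n_tgt / (n_tgt + 1)) * float(np.sum((x - c_tgt) ** 2))
        if abs(gain) > threshold:
            return "GOOD"
    return "BAD"
```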
The sequence of iterations improves the potential by at most \(\epsilon\) only if the above algorithm returns \(\mathsf{BAD}\). We now argue that
\[\mathbb{P}(\mathsf{BAD})\leq\Big{(}O(1)\cdot\sqrt{d}Dn|A|\sqrt{\epsilon}/ \sigma^{2}\Big{)}^{|A|}.\]
Let \(\mathsf{BAD}_{t}\) be the event that the above algorithm loops for at least \(t\) iterations. Then \(\mathbb{P}(\mathsf{BAD})=\mathbb{P}(\mathsf{BAD}_{|A|})\). Since \(\mathbb{P}(\mathsf{BAD}_{t}\mid\neg\mathsf{BAD}_{t-1})=0\), we can immediately conclude that for all \(t\in\{2,\ldots,|A|\}\),
\[\mathbb{P}(\mathsf{BAD}_{t})=\mathbb{P}(\mathsf{BAD}_{t}\mid\mathsf{BAD}_{t-1} )\mathbb{P}(\mathsf{BAD}_{t-1}).\]
By Lemma 10, we have \(\mathbb{P}(\mathsf{BAD}_{t}\mid\mathsf{BAD}_{t-1})\leq O(1)\cdot\sqrt{d}Dn|A| \sqrt{\epsilon}/\sigma^{2}\). Thus, \(\mathbb{P}(\mathsf{BAD}_{t})\) is bounded as claimed.
Taking a union bound over all choices of the approximate grid points at the start of the sequence yields the factor \((2D/\epsilon)^{kd}\). Finally, we must take a union bound over the configuration of \(A\) before the first move of each \(x\in A\), yielding a factor \(k^{|A|^{2}}\), which concludes the proof.
Armed with Lemma 11, we can bound the probability that there exists a sequence in which a fixed number of points moves, which improves the potential by at most \(\epsilon\).
**Lemma 12**.: _Let \(\Delta_{\min}\) denote the minimum improvement of any sequence of moves in which exactly \(4kd\) distinct points switch clusters. Then for \(\epsilon\leq 1\),_
\[\mathbb{P}(\Delta_{\min}\leq\epsilon)\leq\left(\frac{O(1)\cdot k^{8kd+4}d^{10} D^{5}n^{8+\frac{1}{2}}\epsilon}{\sigma^{8}}\right)^{kd}.\]
Proof.: Fix an active set \(A\) of \(4kd\) distinct points, an order \(\pi:A\to[|A|]\) in which the points in \(A\) move, and the sizes of the clusters at the start of the sequence.
We have by Lemma 11
\[\mathbb{P}(\Delta(S)\leq\epsilon)\leq\left(\frac{2D}{\epsilon}\right)^{kd} \left(\frac{O(1)\cdot k^{2kd}\cdot d^{3/2}Dn\sqrt{\epsilon}}{\sigma^{2}} \right)^{4kd}=\left(\frac{O(1)\cdot d^{6}\cdot k^{8kd}\cdot D^{5}n^{4} \epsilon}{\sigma^{8}}\right)^{kd}.\]
We conclude the proof by a union bound over the choices of \(A\), \(\pi\), and the sizes of the clusters at the start of the sequence, which yields a factor of at most \((4kd)^{4kd}\cdot n^{4kd+1}\).
With Lemma 12, we are in a position to prove the main result of this section. The proof is essentially mechanical, following techniques used in many previous smoothed analyses [1, 3, 4, 5, 6, 13].
**Theorem 2** (Restated).: _Let \(n,k,d\in\mathbb{N}\), and assume \(4kd\leq n\). Fix a set of \(n\) points \(\mathcal{Y}\subseteq[0,1]^{d}\), and assume that each point in \(\mathcal{Y}\) is independently perturbed by a \(d\)-dimensional Gaussian random variable with mean \(0\) and standard deviation \(\sigma\), yielding a new set of points \(\mathcal{X}\). Then the expected running time of Hartigan's method on \(\mathcal{X}\) is bounded by_
\[O\Bigg{(}\frac{k^{12kd+5}d^{11}n^{12.5+\frac{1}{2}}\ln^{4.5}(nkd)}{\sigma^{8}} \Bigg{)}=k^{12kd}\cdot\operatorname{poly}(n,k,d,1/\sigma).\]
Proof.: First, we recall that the point set \(\mathcal{X}\) is contained in \([-D/2,D/2]^{d}\). This yields an upper bound for the value of the potential function for the initial clustering \(C\),
\[\Phi(C)=\sum_{i=1}^{k}\sum_{x\in C_{i}}\|x-\operatorname{cm}(C_{i})\|^{2}\leq kndD ^{2}.\]
We divide the sequence of iterations executed by Hartigan's method into contiguous disjoint blocks during which exactly \(4kd\) distinct points move. By Lemma 12, we know that the probability that any such block yields a bad improvement is small.
Let \(T\) be the number of such blocks traversed by the heuristic before we reach a local optimum. Then
\[\mathbb{P}(T\geq t)\leq\mathbb{P}\bigg{(}\Delta_{\min}\leq\frac{ kndD^{2}}{t}\bigg{)}\leq\min\Biggl{\{}1,\frac{O(1)\cdot k^{8kd+5}d^{11}D^{7}n^{9+ \frac{1}{2}}}{\sigma^{8}}\cdot\frac{1}{t}\Biggr{\}}.\]
This probability becomes nontrivial when
\[t>\left\lceil\frac{O(1)\cdot k^{8kd+5}d^{11}D^{7}n^{9+\frac{1}{2}}}{\sigma^{8 }}\right\rceil=:t^{\prime}.\]
Observe that \(t^{\prime}=\Omega(kndD^{2})\), justifying our use of Lemma 12 above. Thus, we find
\[\mathbb{E}(T)=\sum_{t=1}^{k^{n}}\mathbb{P}(T\geq t)\leq t^{\prime}+t^{\prime} \cdot\sum_{t=t^{\prime}}^{k^{n}}\frac{1}{t}\leq t^{\prime}+t^{\prime}\cdot \int_{t^{\prime}}^{k^{n}}\frac{1}{t}\,\mathrm{d}t\leq t^{\prime}+t^{\prime} \cdot\ln(k^{n}).\]
The upper limit of \(k^{n}\) to the sum is simply the number of possible clusterings of \(n\) points into \(k\) sets, which is a trivial upper bound to the number of iterations. To conclude, we observe that any block in which exactly \(4kd\) distinct points move has a length of at most \(k^{4kd}\), as otherwise some clustering would show up twice. Thus, we multiply \(\mathbb{E}(T)\) by \(k^{4kd}\) to obtain a bound for the smoothed complexity. Finally, we insert the value of \(D=\sqrt{2n\ln(nkd)}\).
## 5 Discussion
Theorems 1 and 2 provide some of the first rigorous theoretical results concerning Hartigan's method that have been found since Telgarsky & Vattani explored the heuristic in 2010 [14]. Of course, many interesting open questions still remain.
Theorem 1 establishes the existence of exponential-length sequences on the line, but leaves open the possibility that a local optimum may be reachable more efficiently by a different improving sequence. To be precise: given an instance of \(k\)-means clustering on the line and an initial clustering, does there always exist a sequence of iterations of Hartigan's method of length \(\operatorname{poly}(n,k)\) starting from this clustering and ending in a local optimum? Although the \(d=1\) case appears very restricted at first sight, this question seems surprisingly difficult to answer.
In addition, the construction we use in Theorem 1 requires \(k=\Theta(n)\) clusters. This opens up the question of whether similar worst-case constructions can be made using fewer, perhaps even \(O(1)\), clusters. Note that this is not true for Lloyd's method, since the number of iterations of Lloyd's method is bounded by \(n^{O(kd)}\)[9], which is polynomial for \(k,d\in O(1)\).
Theorem 2 entails, to our knowledge, the first step towards settling the conjecture by Telgarsky & Vattani [14] that Hartigan's method has polynomial smoothed complexity. Our result is reminiscent of the smoothed complexity bound of Lloyd's method obtained in 2009
by Manthey & Röglin [12], which is \(k^{kd}\cdot\operatorname{poly}(n,1/\sigma)\). In the case of Lloyd's method, the smoothed complexity was later settled to \(\operatorname{poly}(n,k,d,1/\sigma)\)[1].
Observe that our bound is polynomial for constant \(k\) and \(d\), and even for \(kd\log k\in O(\log n)\). While this is certainly an improvement over the trivial upper bound of \(k^{n}\), it falls short of a true polynomial bound. We hope that our result can function as a first step to a \(\operatorname{poly}(n,k,d,1/\sigma)\) smoothed complexity bound of Hartigan's method.
We remark that the exponents in the bound in Theorem 2 can be easily improved by a constant factor for \(d\geq 2\). The reason is that in Lemma 9, the factor \(\sqrt{\epsilon}\) emerges from the \(d=1\) case, while for \(d\geq 2\) this would be \(\epsilon\). We chose to combine these cases for the sake of keeping the analysis simple, as we expect the bound in Theorem 2 would be far from optimal regardless.
The main challenges to improving this bound are twofold. First, we must take a union bound over the configuration of the active points each time we apply Lemma 8, yielding factors of \(k^{O(kd)}\). Second, we must analyze sequences in which \(\Theta(kd)\) points move in order to guarantee a significant potential decrease. This incurs a factor of the length of such a sequence, which is another source of a factor \(k^{O(kd)}\).
One avenue for resolving this problem might be to analyze shorter sequences in which a significant number of points move. Angel et al. used such an approach in their analysis of the Flip heuristic for Max-Cut. They identify in any sequence \(L\) of moves a shorter subsequence \(B\), such that the number of unique vertices that flip in \(B\) is linear in the length of \(B\). The major challenge is then to find sufficient independence in such a short subsequence, which in our case seems difficult, as we need to compensate for a factor \(\epsilon^{-kd}\) in Lemma 11.
|
2309.03472 | Perceptual Quality Assessment of 360$^\circ$ Images Based on Generative
Scanpath Representation | Despite substantial efforts dedicated to the design of heuristic models for
omnidirectional (i.e., 360$^\circ$) image quality assessment (OIQA), a
conspicuous gap remains due to the lack of consideration for the diversity of
viewing behaviors that leads to the varying perceptual quality of 360$^\circ$
images. Two critical aspects underline this oversight: the neglect of viewing
conditions that significantly sway user gaze patterns and the overreliance on a
single viewport sequence from the 360$^\circ$ image for quality inference. To
address these issues, we introduce a unique generative scanpath representation
(GSR) for effective quality inference of 360$^\circ$ images, which aggregates
varied perceptual experiences of multi-hypothesis users under a predefined
viewing condition. More specifically, given a viewing condition characterized
by the starting point of viewing and exploration time, a set of scanpaths
consisting of dynamic visual fixations can be produced using an apt scanpath
generator. Following this vein, we use the scanpaths to convert the 360$^\circ$
image into the unique GSR, which provides a global overview of gazed-focused
contents derived from scanpaths. As such, the quality inference of the
360$^\circ$ image is swiftly transformed to that of GSR. We then propose an
efficient OIQA computational framework by learning the quality maps of GSR.
Comprehensive experimental results validate that the predictions of the
proposed framework are highly consistent with human perception in the
spatiotemporal domain, especially in the challenging context of locally
distorted 360$^\circ$ images under varied viewing conditions. The code will be
released at https://github.com/xiangjieSui/GSR | Xiangjie Sui, Hanwei Zhu, Xuelin Liu, Yuming Fang, Shiqi Wang, Zhou Wang | 2023-09-07T04:10:30Z | http://arxiv.org/abs/2309.03472v1 | Perceptual Quality Assessment of 360\({}^{\circ}\) Images Based on Generative Scanpath Representation
###### Abstract
Despite substantial efforts dedicated to the design of heuristic models for omnidirectional (_i.e._, 360\({}^{\circ}\)) image quality assessment (OIQA), a conspicuous gap remains due to the lack of consideration for the diversity of viewing behaviors that leads to the varying perceptual quality of 360\({}^{\circ}\) images. Two critical aspects underline this oversight: the neglect of viewing conditions that significantly sway user gaze patterns and the overreliance on a single viewport sequence from the 360\({}^{\circ}\) image for quality inference. To address these issues, we introduce a unique generative scanpath representation (GSR) for effective quality inference of 360\({}^{\circ}\) images, which aggregates varied perceptual experiences of multi-hypothesis users under a predefined viewing condition. More specifically, given a viewing condition characterized by the starting point of viewing and exploration time, a set of scanpaths consisting of dynamic visual fixations can be produced using an apt scanpath generator. Following this vein, we use the scanpaths to convert the 360\({}^{\circ}\) image into the unique GSR, which provides a global overview of gazed-focused contents derived from scanpaths. As such, the quality inference of the 360\({}^{\circ}\) image is swiftly transformed to that of GSR. We then propose an efficient OIQA computational framework by learning the quality maps of GSR. Comprehensive experimental results validate that the predictions of the proposed framework are highly consistent with human perception in the spatiotemporal domain, especially in the challenging context of locally distorted 360\({}^{\circ}\) images under varied viewing conditions. The code will be released at [https://github.com/xiangjieSui/GSR](https://github.com/xiangjieSui/GSR).
Omnidirectional images, perceptual quality assessment, virtual reality.
## I Introduction
Virtual reality (VR) photography endeavors to capture or recreate a spherical natural scene into omnidirectional (_i.e._, 360\({}^{\circ}\) ) images. 360\({}^{\circ}\) images offer a vast interactive space, making them particularly appealing in the realm of metaverse applications. However, one of the main barriers to the broader adoption of VR photography is the significant loss of quality that 360\({}^{\circ}\) images undergo during the processes of image capture, projection, compression, and transmission [1, 2]. Consequently, investigating the factors that influence the perceptual quality of 360\({}^{\circ}\) images has emerged as a significant research topic.
Researchers have devoted considerable efforts to analyzing the _global_ distortions that uniformly affect the quality of 360\({}^{\circ}\) images, such as compression, Gaussian blur, and noise [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. Numerous subjective user studies have been conducted on degraded 360\({}^{\circ}\) images [1, 2, 8], leading to the development of several objective omnidirectional image quality assessment (OIQA) models [3, 4, 5, 6, 7, 8, 9, 10, 11]. A family of explicit methods aims to extend the 2D full-reference IQA model for the OIQA task [12, 13, 14, 15] by considering the stretch ratio of the equirectangular projection [13, 14] or the spherical properties of 360\({}^{\circ}\) images [12, 15]. However, since comparing entire omnidirectional images is inconsistent with human perception in 360\({}^{\circ}\) scenes due to the limited field of view (FoV), a more effective approach is to evaluate the perceptual quality of 360\({}^{\circ}\) images by combining quality scores of viewport images [3, 4, 5, 6, 7, 8, 9, 10, 11].
Unfortunately, these methods largely overlook the diversity of viewing behaviors that leads to variations in the perceived quality of 360\({}^{\circ}\) images, as shown in Fig. 1 (a). This oversight manifests in two key aspects. First, two viewing conditions that significantly sway user gaze patterns are frequently overlooked. Specifically, the _starting point of viewing_ and the _exploration time_ can significantly impact the scanpath patterns, which in turn affect the perceived quality of 360\({}^{\circ}\) images, particularly when images are locally distorted [16, 17]. For instance, users may fail to detect distortions if they start viewing from a distortion-free area with a short exploration time. As such, the perceptual quality might vary significantly among users under different viewing conditions. Second, the majority of OIQA methods predominantly relies on a single fixed viewport sequence from the 360\({}^{\circ}\) image to predict its perceptual quality, as shown in Fig. 1 (b). However, the deterministic quality evaluation relying on one fixed sequence cannot explain the probabilistic viewing behaviors with randomness, thereby potentially leading to prediction bias.
Although a few studies have attempted to address these issues [11, 16, 17], they still manifest certain limitations. Sui _et al_. [16] proposed an OIQA method by integrating the quality scores of multiple viewport sequences generated using human scanpaths. However, this approach poses significant computational challenges due to the necessity of extracting a multitude of viewport images. Fang _et al_. [17] introduced an OIQA model that adapts to different viewing conditions by embedding the positions of the uniformly sampled viewports and the prompts of starting point and exploration time. However, this model did not consider users' viewing behaviors that can significantly impact the perceptual quality. Recently, Wu _et al_.
[11] proposed an OIQA model that incorporates a strategy for sampling multiple viewport sequences based on patch-wise entropy. However, such a method may not be adequate to model complex viewing behaviors that are impacted by a variety of factors, _e.g_., scene semantics and kinematic constraints [18].
In this study, we introduce a unique generative scanpath representation (GSR) for OIQA. The underlying principle of the GSR is to aggregate dynamic perceptual experiences of multi-hypothesis users. The GSR conversion is operated under a predefined viewing condition which is characterized by the starting point and exploration time. More specifically, given an arbitrary starting point and exploration time, a set of scanpaths [19] can be produced. This aligns with the philosophy that users' viewing behaviors exhibit significant variation due to the diversity of individual experiences and preferences [16]. Subsequently, the GSR is constructed by aggregating multiple small gazed-focused areas, as shown in Fig. 1 (c). As such, the quality inference of 360deg image can be swiftly transformed to that of GSR which realistically incorporates the viewing behaviors. Our contributions are summarized as follows:
* We develop a unique GSR representation of 360deg images, which is created by using a set of realistic scanpaths generated under a predefined viewing condition. As such, we bridge the gap between viewing conditions and quality assessment in 360deg scenes.
* We provide a comprehensive global overview of dynamic perceptual experiences of multi-hypothesis users with the GSR representation. As such, we can make a thorough quality assessment from a variety of experiences.
* We design a novel computational framework for OIQA based on GSR. Comprehensive experiments demonstrate that our model is much more accurate than advanced OIQA models with less computational complexity.
## II Related Work
### _Objective Quality Assessment of 2D Images and Videos_
PSNR and SSIM [21] are the two most popular full-reference measures for both image and video quality assessment, and they have also triggered the development of quality models, including knowledge-driven and data-driven methods [22, 23]. For knowledge-driven methods, researchers proposed to model properties of the human visual system, such as structural similarity [24, 25, 26], contrast sensitivity functions [27], free energy theory [28, 29], and information theory [30, 31]. Data-driven methods were designed with various feature engineering schemes and learning strategies. Support vector regression (SVR) based on hand-crafted features was one of the most representative methods, aiming to model natural scene statistics [32, 33, 34, 35]. Furthermore, different advanced deep learning techniques have been utilized to regress image or video quality scores, such as convolutional neural network (CNN) models [36, 37, 38, 39], transformer-based models [40, 41], meta-learning-based models [42], learning-to-rank models [43, 44], and self-supervised learning models [45, 46, 47].
### _Objective Quality Assessment of Omnidirectional Images_
Current OIQA methods can be roughly classified into three categories based on the planes they operate on - 2D plane [8, 13, 14, 20], sphere [12, 15], and viewport [3, 4, 5, 6, 9, 10, 11, 16, 17], as listed in Table I. The first two categories attempted to extend 2D-IQA methods for OIQA by compensating for the non-uniform sampling caused by sphere-to-plane projection, _e.g_., equirectangular projection. Methods in the 2D plane, with typical examples being WS-PSNR [14] and CPP-PSNR [13], weighed local signal errors by their positions. Methods in the sphere, such as S-PSNR [12] and S-SSIM [15], attempted to compute the local quality by uniformly sampling signals on the spherical domain. The last category computed the local quality on the viewport domain. The viewport sampling strategies of these methods can be classified into three categories: predetermined rules [3, 5, 6, 7, 17], key points [4, 9], and scanpaths [10, 11, 16]. As indicated in Table I, the trend in the design of the OIQA model tends to involve scanpaths that represent the human viewing behaviors in 360deg scenes. However, the effectiveness and practicality of such methods are limited due to the overlook
Fig. 1: (a) An illustration of diverse perception experiences in the 360° image. Diverse viewing behaviors lead to varying perceptual quality. (b) The typical scheme of current OIQA methods involves predicting the perceptual quality of 360° images by combining the predicted quality scores of a single fixed viewport sequence. However, the deterministic quality evaluation relying on one fixed sequence cannot explain the probabilistic viewing behaviors with randomness, thereby potentially leading to prediction bias. (c) The proposed computational framework. Our method can generate a dynamic GSR sequence given a predefined starting point and an exploration time, and cover multiple sections of the 360° scene at each time instance. Therefore, the predictions of our method can be highly consistent with human perception.
of viewing conditions [10], the reliability of viewing behavior modeling [11], and the time-consuming process of viewport extraction [16].
### _Scanpath Prediction of Omnidirectional Images_
Scanpath prediction of 360\({}^{\circ}\) images aims to produce realistic dynamic gaze behavior based on the human visual perception mechanism. Existing scanpath prediction methods for 360\({}^{\circ}\) images can be divided into two categories: saliency-based methods [48, 49, 50] and generative methods [19, 51, 52]. Generally speaking, the former first produced a saliency map reflecting the degree of importance of each pixel to the human visual system and then sampled the time-ordered gaze points from the saliency map to form a scanpath. The latter took advantage of generative networks, _e.g._, Generative Adversarial Networks (GANs), to directly produce scanpaths based on 360\({}^{\circ}\) images. However, such methods did not model the time-dependence of scanpaths and neglected viewing conditions that have a crucial impact on user viewing behavior [16, 17, 18]. Recently, Sui _et al._[19] proposed a deep Markov model for scanpath prediction of 360\({}^{\circ}\) images, which focused on modeling time-dependent attention in 360\({}^{\circ}\) scenes. More importantly, it was developed with a high degree of interactivity and flexibility, enabling the assignment of specific starting points and exploration times to generate scanpaths.
## III The Proposed GSR Representation
The GSR aggregates varied perceptual experiences of multi-hypothesis users under a predefined viewing condition. To this end, the GSR is first designed to adapt to a predefined viewing condition by using a scanpath generator to generate scanpaths based on a given starting point and exploration time. This allows the collection of quality opinions from multi-hypothesis users under a specific viewing condition, similar to the methodology of subjective user studies [16, 53]. To reduce redundant calculations in quality inference for viewport images, which are caused by the high probability of overlap among the FoVs of different users (see Fig. 2 (a)), we propose to "downscale" the 360\({}^{\circ}\) image to a GSR sequence that realistically incorporates viewing behaviors. Inspired by the finding in visual neuroscience that visual detail is captured primarily by the fovea (_i.e._, the center of gaze) [54], the quality inference is conducted on the aggregation of small gaze-focused areas at each time instance (see Fig. 2 (b)). As such, a 360\({}^{\circ}\) image is swiftly transformed to a GSR sequence which realistically incorporates viewing behaviors.
### _Scanpath Generator_
We produce realistic scanpaths using an effective scanpath prediction model which is capable of accurately replicating human gaze behaviors. For 360deg scenes, the generator should have two essential features:
* _Adaptability_: it should be able to adapt to any predefined viewing condition, with the capacity to generate scanpaths from any starting point and for any exploration time.
* _Generativity_: it should be consistent with probabilistic viewing behaviors, with the capacity of generating different scanpaths for a given 360deg image.
The _adaptability_ enables the produced GSR to be relevant and effective in inferring perceptual quality by adapting to predefined viewing conditions. Furthermore, the _generativity_ allows for a diverse range of viewing behaviors to be taken into account for a thorough quality inference. An additional benefit of using a generative model is that the resulting scanpaths
and corresponding GSR sequence can vary with each iteration, implying that new training samples are produced throughout training, which acts as an implicit form of data augmentation [55]. The effectiveness of such a strategy is demonstrated in Sec. V-E.
In this study, we use ScanDMM [19] as the scanpath generator within our framework, which takes two inputs: 1) a 360\({}^{\circ}\) image \(\mathbf{I}\), which is in the form of 2D equirectangular projections; 2) a viewing condition \(\mathbf{\Omega}=\{\mathcal{P}_{1},\ T\}\) that includes a starting point \(\mathcal{P}_{1}=(y_{1},x_{1})\) and an exploration time \(T\), where \((y_{1},x_{1})\) indicates the normalized 2D coordinate at the initial moment with values in the range of \([0,1]\). Given the two inputs, ScanDMM produces a scanpath through the generative process of the Markov model. This process is carried out in parallel to create \(N\) plausible scanpaths:
\[\hat{\mathcal{P}}_{1:T}^{1:N}=\mathcal{G}(\mathbf{I},\mathbf{\Omega}). \tag{1}\]
\(\mathcal{G}\) is the ScanDMM model. \(\hat{\mathcal{P}}_{1:T}^{1:N}=\{\{(y_{t}^{n},x_{t}^{n})\}_{t=1}^{T}\}_{n=1}^{N}\) is a set of \(N\) generated scanpaths, where \((y_{t}^{n},x_{t}^{n})\) represents the predicted gaze point of \(n\)-th hypothetical user at time instant \(t\).
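For illustration, a toy stand-in for \(\mathcal{G}\) that satisfies the two required properties can be written as a bounded random walk on normalized coordinates. This is only an assumption-laden placeholder: it is content-blind and does not reproduce ScanDMM [19], whose generative process is a deep Markov model conditioned on the image; the function name, step size, and defaults below are ours.

```python
import numpy as np

def toy_scanpaths(start=(0.5, 0.5), T=20, N=49, step=0.05, rng=None):
    """Content-blind stand-in for G in Eq. (1): a bounded random walk on
    normalized (y, x) coordinates. It is adaptive (any start point and
    exploration time) and generative (each call yields new paths), but it
    ignores the image, unlike ScanDMM [19]."""
    rng = np.random.default_rng() if rng is None else rng
    paths = np.zeros((N, T, 2))
    paths[:, 0] = start                                   # P_1 = (y_1, x_1)
    for t in range(1, T):
        paths[:, t, 0] = np.clip(
            paths[:, t - 1, 0] + step * rng.standard_normal(N), 0.0, 1.0)
        paths[:, t, 1] = (paths[:, t - 1, 1]
                          + step * rng.standard_normal(N)) % 1.0  # longitude wraps
    return paths          # shape (N, T, 2): N hypothetical users, T fixations
```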
### _GSR Conversion_
Herein, we detail the process of converting a 360\({}^{\circ}\) image into a GSR sequence using the generated scanpaths. Given a predicted gaze position as the center, we extract a mini-patch with a small size of \((\mathbf{P}_{h}\times\mathbf{P}_{w})\) from the image. To account for the overstretch inherent in the equirectangular projection, the patch extraction process is executed using the spherical convolution [56] which adaptively wraps the kernel around the sphere (see Fig. 3 (b)). More specifically, the spherical convolution retrieves the spherical coordinates of sampling points by using a kernel created on the spherical tangent domain. Then, these spherical coordinates are projected to the 2D plane to access the pixel values. By repeating the patch extraction process over each hypothetical user, a set of \(N\) mini-patches can be obtained at each time instance. These patches are subsequently organized to construct a GSR sequence (denoted as \(V_{1:T}\)), where each GSR consists of \(\sqrt{N}\) rows and columns of patches (see Fig. 2 (b)). Additionally, to capture the temporal quality variation of the independent mini-patch sequence, we impose a constraint on their positions to ensure alignment across time, as suggested in [57]. This is done as if an individual mini-patch sequence were a mini-video.
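A simplified sketch of the conversion at a single time instant is given below. For brevity it crops patches directly on the equirectangular plane, whereas the actual converter samples them in the spherical tangent domain via spherical convolution [56]; the helper name and clipping behavior are our simplifications. Applying it to the \(N\) gaze points at each \(t=1,\ldots,T\) yields the sequence \(V_{1:T}\).

```python
import numpy as np

def gsr_frame(image, gaze_points, patch=32):
    """Tile N mini-patches, centered at the gaze points of one time
    instant, into a sqrt(N) x sqrt(N) grid (224 x 224 for patch=32, N=49).
    Simplification: crops are taken on the equirectangular plane instead
    of the spherical tangent domain used in the paper."""
    H, W, C = image.shape
    n = int(np.sqrt(len(gaze_points)))
    grid = np.zeros((n * patch, n * patch, C), dtype=image.dtype)
    for idx, (y, x) in enumerate(gaze_points):
        cy, cx = int(y * H), int(x * W)
        y0 = int(np.clip(cy - patch // 2, 0, H - patch))
        x0 = int(np.clip(cx - patch // 2, 0, W - patch))
        r, c = divmod(idx, n)
        grid[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = \
            image[y0:y0 + patch, x0:x0 + patch]
    return grid
```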
## IV The proposed computational framework
In this section, we provide an overview of the proposed OIQA framework. The framework presented in Fig. 3 consists of two components: the GSR converter and the quality evaluator. The former transforms a 360\({}^{\circ}\) image into a GSR sequence. The latter extracts quality-aware features from the GSR sequence and regresses them to a quality score.
Our approach presents 360\({}^{\circ}\) images using GSR sequences, which contain essential spatiotemporal information of perceptual experiences for quality inference. As such, we leverage an advanced backbone network in video tasks to learn spatiotemporal quality of the GSR sequences. Overall, with the resulting GSR sequence of a 360\({}^{\circ}\) image, we maximize the following likelihood function:
\[\boldsymbol{\alpha}^{*}=\arg\max_{\boldsymbol{\alpha}}p(q|\mathcal{M}( \mathcal{Q}(V_{1:T}));\boldsymbol{\alpha}), \tag{2}\]
where \(q\) is the ground-truth quality score and \(\boldsymbol{\alpha}\) denotes the learnable parameters in the network. \(\mathcal{Q}\) and \(\mathcal{M}\) are the quality evaluator and Multilayer perceptron (MLP) layer, respectively. The quality inference of the 360\({}^{\circ}\) image is achieved by entering the GSR sequence into the quality evaluator and the MLP layer:
\[\hat{q}=\mathcal{M}(\mathcal{Q}(V_{1:T})), \tag{3}\]
where the \(\hat{q}\) is the predicted quality score. In this study, we use the X-Clip-B/32 [58] as the quality evaluator. The last MLP layer of X-Clip-B/32 is substituted with the one tailored for video quality assessment [57]. We refer to the proposed model as GSR-X.
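In code, the composition \(\mathcal{M}(\mathcal{Q}(V_{1:T}))\) of Eq. (3) amounts to a video backbone followed by a small regression head. The PyTorch sketch below is schematic: `backbone` stands in for X-Clip-B/32 (or the Video Swin-T / ConvNeXt-T variants used later), and the feature dimension and head sizes are illustrative assumptions rather than the authors' exact architecture.

```python
import torch.nn as nn

class GSRQualityModel(nn.Module):
    """Schematic of Eq. (3): a video backbone Q followed by an MLP head M."""
    def __init__(self, backbone, feat_dim=512):
        super().__init__()
        self.backbone = backbone                      # Q: GSR sequence -> features
        self.head = nn.Sequential(                    # M: features -> scalar score
            nn.Linear(feat_dim, 128), nn.GELU(), nn.Linear(128, 1))

    def forward(self, gsr_seq):                       # gsr_seq: (B, T, 3, 224, 224)
        feats = self.backbone(gsr_seq)                # assumed to return (B, feat_dim)
        return self.head(feats).squeeze(-1)           # predicted quality q_hat
```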
## V Experiments
In this section, we first offer a detailed description of the implementation of our models and the evaluation procedures.
Fig. 2: (a) An illustration of viewport representation for 360\({}^{\circ}\) images. (b) The proposed GSR representation. The viewports of different users are highly likely to overlap. Instead, our GSR representation highlights the small gaze-focused areas and effectively captures the spatio-temporal experiences of multi-hypothesis users. We show a specific example of how we convert a 360\({}^{\circ}\) image into a GSR sequence using scanpaths. Note that the patches are extracted in the spherical tangent domain, which we omit for brevity.
We then compare the proposed model with state-of-the-art quality models. Finally, we present comprehensive ablation studies to analyze the effectiveness of our design elements.
### _Implementation Details and Evaluation Protocols_
The details of the implementation are outlined in Table II, and the comparison databases are summarized in Table III.
**Default Settings**. Our framework requires the specification of viewing conditions (_i.e._, the starting point \(\mathcal{P}_{1}\) and the exploration time \(T\)) to convert static 360\({}^{\circ}\) images into GSR sequences. When such information is not available (_e.g._, with the CVIQD [1] and OIQA [2] databases), the conversion is done using default settings. More specifically, the starting point is set to \((0.5,0.5)\), _i.e._, the center of the 360\({}^{\circ}\) images, and the exploration time \(T\) is set to \(20\) seconds.
**GSR Configurations**. We set the mini-patch size to \(32\times 32\) and the number of mini-patches per GSR to \(49\). This creates a GSR with a size of \(224\times 224\), which is a common size for many computer vision networks. The length of a GSR sequence is the same as the exploration time \(T\), for example, \(20\) in the OIQA database.
**Specific Modules**. We implement the scanpath generator using the original codes provided by the authors [19] and build the quality evaluator using an open source toolbox [60]. The ScanDMM is retrained on the JUFE [17] database to learn viewing behaviors in the OIQA task. Note that the scanpath generator and quality evaluator are consistently trained on the
Fig. 3: Illustrations of the fundamental components for our computational framework. (a) The computational framework in a nutshell. (b) GSR converter. (c) Quality evaluator \(\mathcal{M}(\mathcal{Q}(V_{1:T}))\). (d) Symbol description.
same training set to prevent data leakage.
**Computation Devices**. All experiments are carried out on a server equipped with an AMD Ryzen \(9\)\(5950\)X \(16\)-Core CPU, a \(128\) GB RAM, and an NVIDIA GeForce RTX 3090 GPU.
**Benchmarking Databases**. We utilize three OIQA databases for model evaluation: CVIQD [1], OIQA [2], and JUFE [17]. The CVIQD database consists of \(528\) distorted 360\({}^{\circ}\) images, all with a resolution of \(4\)K, generated from \(16\) distortion-free images with \(3\) types of compression distortions at \(11\) distortion levels. The OIQA database is a collection of \(320\) distorted 360\({}^{\circ}\) images with high resolutions (\(\approx 11\)K). These images were created from \(16\) reference 360\({}^{\circ}\) images, each of which was distorted using \(4\) different types of distortion at \(5\) different levels. The JUFE database is made up of \(1032\) non-uniformly distorted 360\({}^{\circ}\) images, derived from \(258\) reference images, all with a resolution of \(8\)K. To assess the influence of viewing conditions on the perceptual quality of the 360\({}^{\circ}\) images, the subjects were divided into two groups to view a 360\({}^{\circ}\) image from two different starting points. They were asked to rate the quality of the 360\({}^{\circ}\) image after viewing it for \(5\) seconds (while watching) and \(15\) seconds (when watching is finished), respectively. This results in a total of \(4\) different viewing conditions (\(2\) starting points \(\times\)\(2\) exploration times) in the JUFE database, with each distorted image having \(4\) quality labels corresponding to the four viewing conditions. The head and gaze movement data of the subjects were also recorded to analyze their viewing behaviors.
**Evaluation Protocols**. We use two standard evaluation criteria to quantify quality prediction performance: Spearman's rank-order correlation coefficient (SRCC) and Pearson linear correlation coefficient (PLCC). The higher the SRCC and PLCC values, the better the performance of the model. We randomly divide each database into three sets: training (\(70\%\)), validation (\(10\%\)), and test (\(20\%\)) sets, based on the reference images. We repeat this process five times and report the mean and standard deviation of the SRCC and PLCC results.
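Both criteria can be computed directly with SciPy; the helper below is a minimal sketch with names of our choosing.

```python
from scipy import stats

def srcc_plcc(pred, mos):
    """Spearman rank-order (SRCC) and Pearson linear (PLCC) correlation
    between predicted quality scores and ground-truth opinion scores."""
    srcc = stats.spearmanr(pred, mos)[0]
    plcc = stats.pearsonr(pred, mos)[0]
    return srcc, plcc
```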
### _Performance Comparisons_
In addition to GSR-X model, we have created two other models: GSR-S and GSR-C. These models use Video Swin-T [61] and ConvNeXts-T [62] backbones as the quality evaluator within our proposed framework, respectively (referring to Table II for implementation details). We compare the performance of the proposed models against seven full-reference IQA models, including PSNR, SSIM [21], DISTS [26], CPP-PSNR [13], WS-PSNR [14], S-PSNR [12], and S-SSIM [15], as well as eight no-reference IQA models, including NIQE [59], DBCNN [36], TreS [40], MC360IQA [3], VGCN [4], MFILGN [6], Fang22 [17], and Assessor360 [11]. When comparing models on the JUFE database, we do not retrain the data-driven models (_e.g._, VGCN) that overlook the viewing conditions (the results are indicated by "N/A" in Table IV) as they suffer from convergence problems when a single 360deg image has four quality labels. The results of SRCC and PLCC are summarized in Table IV, from which we make several interesting observations.
First, even classical 2D-IQA models (_e.g._, PSNR and SSIM) perform well on the CVIQD [1] database, indicating that the distortion artifacts of this database might be similar to those of traditional 2D-IQA databases. However, MFILGN [6], a model based on natural scene statistics, exhibits lower accuracy and higher variance, possibly due to statistical bias resulting from the limited number of scenes in the training set (_i.e._, \(12\) scenes). In contrast, the performance of these models drops significantly on the OIQA database [2]. This may result from the different distortion-injection procedure: the distortion artifacts are first injected into each fish-eye image, which is then stitched and projected onto the 2D plane. As a result, these distortion artifacts appear differently from 2D distortions.
Second, the results on the two databases with global distortions (_i.e._, CVIQD and OIQA databases) imply that recent advancements in the field of 2D-IQA are effective in addressing global distortions in 360\({}^{\circ}\) images. For example, DISTS [26], DBCNN [36], and TreS [40] show competitive performances compared to the models tailored for 360\({}^{\circ}\) images, and outperform all full-reference OIQA metrics (_e.g._, S-PSNR). This conclusion is further evidenced by the superior results of VGCN [4], which is built on top of DBCNN and thus benefits from the knowledge regarding 2D distortions. Such observations suggest that, despite the considerable effort put into addressing global distortions in 360\({}^{\circ}\) images, they might essentially stand on the same page as 2D-IQA.
Third, the results on the database with local distortions (_i.e._, JUFE database) reveal that current OIQA methods are not able to effectively evaluate the perceptual quality of locally distorted 360\({}^{\circ}\) images when viewing conditions are varied. This is evidenced by the very low values of SRCC and PLCC (\(\approx 0\)) of the models. On the contrary, the proposed models achieve superior performance compared to competing models, regardless of the backbone of the quality evaluator, thanks to the flexibility of the scanpath generator and the proposed GSR representation. Furthermore, the proposed OIQA models achieve competitive performance on the CVIQD and OIQA databases, while being much faster than the advanced models (referring to Fig. 4).
### _Effectiveness of GSR_
The GSR is an essential component of the proposed computational framework. Here, we demonstrate the effectiveness of the GSR through quantitative and qualitative experiments.
**Improvement for Full-Reference Metrics.** We present the performance gains achieved by applying full-reference 2D-IQA methods to GSR sequences, compared to applying them to 2D equirectangular projections (as baselines) and viewport sequences [16]. To ensure a fair comparison, we generate \(N=49\) scanpaths to extract viewports. To calculate a quality score for a 360\({}^{\circ}\) image using a full-reference metric \(Q\), the following process is used:
\[\hat{q}=\frac{1}{N}\sum_{n=1}^{N}\frac{\sum_{t=1}^{T}\omega_{t}Q(a_{n,t})}{ \sum_{t=1}^{T}\omega_{t}}, \tag{4}\]
where \(a_{n,t}\) denotes a pair of reference and distorted viewport/GSR images, and \(\omega_{t}\) denotes the weight allocated by the specific temporal pooling method. We add "V-" and "G-" to the 2D-IQA method as prefixes to name viewport-based models [16] and our GSR-based models. In addition, we add suffixes to distinguish different temporal pooling strategies, such as the arithmetic mean (AM) and the ascending half of Gaussian weighting (GW) [16]. The final model names follow the format of "_prefix-Model-suffix_" (_e.g._, G-PSNR-GW). The experimental results on the JUFE database are shown in Table V. We observe that our methods achieve significant performance improvements compared to baselines and are competitive with the methods proposed in [16]. Moreover, the proposed GSR-based models run significantly faster than viewport-based methods.
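Eq. (4) translates directly into code. In the sketch below, `Q` is any full-reference metric applied to a reference/distorted pair, and the width of the ascending half-Gaussian used for the GW weights is an illustrative choice rather than the exact parameterization of [16].

```python
import numpy as np

def pooled_score(Q, ref_seqs, dist_seqs, pooling="GW"):
    """Eq. (4): average a full-reference metric Q over N aligned
    reference/distorted GSR (or viewport) sequences of length T."""
    N, T = len(ref_seqs), len(ref_seqs[0])
    if pooling == "AM":
        w = np.ones(T)                                  # arithmetic mean
    else:                                               # ascending half-Gaussian
        w = np.exp(-0.5 * ((np.arange(T) - (T - 1)) / (T / 3.0)) ** 2)
    scores = [
        sum(w[t] * Q(ref_seqs[n][t], dist_seqs[n][t]) for t in range(T)) / w.sum()
        for n in range(N)
    ]
    return float(np.mean(scores))
```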
**Running Time Comparison.** We speed up quality inference by "downscaling" a 360\({}^{\circ}\) image to a GSR sequence. To show the high computational efficiency of our method, we compare the preprocessing1 time, inference time, and total running time of different OIQA models, as depicted in Fig. 4. To facilitate visualization, we plot the logarithm of the time cost \(\hat{t}\) (in seconds). The figures show that the proposed GSR-X model has a stable inference time (\(\approx 0.750\) seconds) for a 360\({}^{\circ}\) image, regardless of resolution, due to the fixed size of the GSR. This is significantly faster than MC360IQA [3], which takes about \(1.5\) hours for an \(11\)K 360\({}^{\circ}\) image. Although VGCN and Assessor360 can maintain stable inference times by resorting to image downsampling, they remain slower than the GSR-X model (_i.e._, \(\approx 7.944\) seconds for VGCN and \(\approx 4.522\) seconds for Assessor360). In conclusion, our method is \(6\)-\(7000\) times faster than other models. Notably, all the experiments are carried out on the same computation devices, with configurations detailed in Sec. V-A and Table II.
Footnote 1: Preprocessing refers to the process of extracting viewports. Note that since the Assessor360 model was constructed in an end-to-end manner, we only count its inference time.
Fig. 4: Efficiency comparison in the case of predicting the quality of the 360\({}^{\circ}\) image with a resolution of 4K, 8K, and \(11\)K, respectively. We present the logarithm of the time cost \(\hat{t}\) (in seconds) for better visualization. Our framework runs in \(\approx 0.750\) seconds, regardless of resolution, which is 6-7000 times faster than competing models.
**Qualitative Comparison.** We illustrate that GSR sequences can capture essential spatiotemporal information of hypothetical users' experiences to infer quality by presenting a predicted spatiotemporal quality distribution of the GSR sequences in Fig. 5. This figure shows that when the 360\({}^{\circ}\) image is locally distorted, the predicted quality changes gradually in the time domain. For example, when starting to view from point \(1\) (referring to the central red circle in Fig. 5), the quality of the initial GSR containing blur distortions is predicted to be low. As the exploration covers more distortion-free regions, the subsequent GSR containing fewer distortions gradually improves in quality. Conversely, for global distortions, the predicted quality is relatively uniform in both the spatial and temporal domains, regardless of the starting points of viewing and exploration time. This observation supports the claim that, although the GSR carries less semantic content than the raw image, it captures the essential content for extracting quality-aware features. The resulting quality scores are highly consistent with human judgment, indicating that the proposed models are capable of modeling the dynamic perceptual experiences of humans in the 360\({}^{\circ}\) scene.
### _Generalizability Study_
We compare the generalizability of the proposed GSR-X model with competitive OIQA models in a cross-database setting. We use the CVIQD [1] and OIQA [2] databases to perform the generalizability comparison. As observed in Table VI, the models trained on the OIQA database commonly exhibit better generalizability compared to those trained on the CVIQD database. Surprisingly, VGCN [4], MC360IQA [3] and Assessor360 [11] show a significant performance discrepancy when tested in different cross-database settings. This implies that these models may not be able to effectively extract general quality-aware features and suffer from the overfitting problem. On the contrary, although the proposed GSR-X model is based on pre-trained parameters from the video classification task [58], which has a larger domain gap with respect to the quality assessment task, it demonstrates better generalizability in the OIQA task compared with competing models, and is able to adapt to more challenging real-world applications, such as OIQA for locally distorted 360\({}^{\circ}\) images under various viewing conditions.
### _Ablation Study_
In this subsection, we analyze the influence of viewing conditions, the scanpath generator, and the spherical tangent representation on the proposed framework by performing ablation experiments. We use the GSR-X as the test model and carry out experiments on the CVIQD [1] and JUFE [17] databases. The parameters of the comparison models in the same database are maintained to be the same.
**Impact of Accessibility of Viewing Conditions.** We investigate the effect of accessibility of viewing conditions on our method by creating a version of GSR-X that assumes unknown viewing conditions, called "GSR-X w/o \(\mathbf{\Omega}\)". The results shown in Table VII demonstrate that when the 360\({}^{\circ}\) images contain local distortions, the accessibility of viewing conditions has a significant impact on the performance of the GSR-X model. This is evidenced by the significant decrease in the performance
Fig. 5: The predicted spatiotemporal quality distributions of the GSR sequence when inferring from different starting points of viewing. Top: a locally blurred 360\({}^{\circ}\) image (with blur appearing near the point \(1\)). Bottom: a 360\({}^{\circ}\) image affected by global Gaussian noise.
of "GSR-X w/o \(\mathbf{\Omega}\)" on the JUFE database (_i.e._, \(\approx 0.150\) for the PLCC and SRCC values). However, this effect is not observed when the 360\({}^{\circ}\) images have global distortions, as evidenced by the satisfactory performance of "GSR-X w/o \(\mathbf{\Omega}\)" on the CVIQD database. These findings are consistent with the study [16], which found that viewing conditions are essential to evaluate the quality of locally distorted 360\({}^{\circ}\) images.
**Impact of Scanpath Generator.** We investigate the ability of the scanpath generator to provide diverse viewing behaviors in the 360\({}^{\circ}\) scene by asking three questions:
1. Does the temporal information obtained from the scanpath play a critical role in assessing the quality of 360\({}^{\circ}\) image?
2. Can the scanpath generator be used for data augmentation?
3. What is the impact of the number of generated scanpaths on the precision of the predictions?
To answer the first two questions, we have created two baseline models: "GSR-X w Random" and "GSR-X w Human". The former randomly extracts mini-patches from the 360\({}^{\circ}\) images to create GSR sequences, thus disregarding the temporal information. The latter uses human scanpaths to create GSR sequences instead of the generative model, meaning that the training samples in each iteration remain the same. The results of the experiment are presented in Table VII, from which two main conclusions can be drawn. First, by comparing the performance of "GSR-X w Random" and GSR-X on the two databases, it is evident that temporal information is more significant in evaluating the quality of 360\({}^{\circ}\) images that are distorted locally than those that are distorted globally. Second, the proposed GSR-X outperforms "GSR-X w Human", indicating that the scanpath generator contributes to data augmentation. This is because new training samples can be generated in each iteration, thus improving the model's performance. Lastly, to investigate the effect of the number of generated scanpaths on prediction accuracy, the GSR-X model was trained using GSR created with \(4\), \(16\), \(49\) and \(64\) scanpaths. As the number of scanpaths increases, the mini-patches become smaller with a fixed GSR size (_i.e._, \(224\times 224\)). As shown in Table VIII, the best performance of the GSR-X model can be achieved when using \(49\) scanpaths to create the GSR.
**Impact of Spherical Tangent Representation.** In the GSR conversion, we extract mini-patches from 360\({}^{\circ}\) images by utilizing spherical convolution [56], which is executed in the spherical tangent domain. Here, we aim to answer the question:
* How much performance gains can be achieved by extracting mini-patches in the spherical tangent domain as opposed to cropping mini-patches in the 2D plane?
To address this, we construct a baseline model called "GSR-X w ERP", which directly crops the mini-patches from a 2D equirectangular projection plane. The results are listed in Table VII, from which we observe that the spherical tangent representation is more effective. This may be attributed to the fact that the spherical tangent representation reduces the geometric distortions of the 2D plane while preserving the spherical properties of 360\({}^{\circ}\) images that are not accessible in the 2D plane. For example, the left and right edges of the images with equirectangular projection are discontinuous, which is not the case in the spherical tangent domain. However, the performance gain is less significant on CVIQD database, which may be due to the distortion artifacts being similar to 2D distortion artifacts, as discussed previously.
## VI Conclusion and Discussion
This paper introduces a novel GSR representation to assess the perceptual quality of 360\({}^{\circ}\) images. Our approach involves transforming a static 360\({}^{\circ}\) image into a dynamic GSR sequence
using a set of scanpaths produced under a specified viewing condition. This representation provides a flexible and effective way to assess the perceptual quality of 360deg images under various viewing conditions. The results of our experiments show that the predictions of our framework are in line with human perception in the challenging task of assessing the perceptual quality of locally distorted 360deg images under varied viewing conditions.
Our current study focuses on the comprehensive perceptual quality of 360deg images under a predefined viewing condition by using a set of generated scanpaths. However, due to the inherent randomness of scanpaths, adaptively selecting those that could improve prediction performance for a given scenario remains an intriguing and challenging problem, yet to be explored. Additionally, our framework exhibits potential in _personalized_ OIQA, which may be more suitable in VR applications, as user viewing behaviors tend to vary based on individual experiences and preferences. This can be achieved directly by pooling the quality scores of a single patch sequence rather than combining all sequences.
|
2309.05765 | Contrarian Majority rule model with external oscillating propaganda and
individual inertias | We study the Galam majority rule dynamics with contrarian behavior and an
oscillating external propaganda, in a population of agents that can adopt one
of two possible opinions. In an iteration step, a random agent interacts with
other three random agents and takes the majority opinion among the agents with
probability $p(t)$ (majority behavior) or the opposite opinion with probability
$1-p(t)$ (contrarian behavior). The probability of following the majority rule
$p(t)$ varies with the temperature $T$ and is coupled to a time-dependent
oscillating field that mimics a mass media propaganda, in a way that agents are
more likely to adopt the majority opinion when it is aligned with the sign of
the field. We investigate the dynamics of this model on a complete graph and
find various regimes as $T$ is varied. A transition temperature $T_c$ separates
a bimodal oscillatory regime for $T<T_c$ where the population's mean opinion
$m$ oscillates around a positive or a negative value, from a unimodal
oscillatory regime for $T>T_c$ in which $m$ oscillates around zero. These
regimes are characterized by the distribution of residence times that exhibits
a unique peak for a resonance temperature $T^*$, where the response of the
system is maximum. An insight into these results is given by a mean-field
approach, which also shows that $T^*$ and $T_c$ are closely related. | M. Cecilia Gimenez, Luis Reinaudi, Serge Galam, Federico Vazquez | 2023-09-11T18:47:29Z | http://arxiv.org/abs/2309.05765v1 | # Contrarian Majority rule model with external oscillating propaganda and individual inertias
###### Abstract
We study the Galam majority rule dynamics with contrarian behavior and an oscillating external propaganda, in a population of agents that can adopt one of two possible opinions. In an iteration step, a random agent interacts with other three random agents and takes the majority opinion among the agents with probability \(p(t)\) (majority behavior) or the opposite opinion with probability \(1-p(t)\) (contrarian behavior). The probability of following the majority rule \(p(t)\) varies with the temperature \(T\) and is coupled to a time-dependent oscillating field that mimics a mass media propaganda, in a way that agents are more likely to adopt the majority opinion when it is aligned with the sign of the field. We investigate the dynamics of this model on a complete graph and find various regimes as \(T\) is varied. A transition temperature \(T_{c}\) separates a bimodal oscillatory regime for \(T<T_{c}\) where the population's mean opinion \(m\) oscillates around a positive or a negative value, from a unimodal oscillatory regime for \(T>T_{c}\) in which \(m\) oscillates around zero. These regimes are characterized by the distribution of residence times that exhibits a unique peak for a resonance temperature \(T^{*}\), where the response of the system is maximum. An insight into these results is given by a mean-field approach, which also shows that \(T^{*}\) and \(T_{c}\) are closely related.
## I Introduction
In the last decades, statistical physics has expanded its scope to venture into the field of sociology, giving rise to a discipline called _sociophysics_[1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. A commonly studied phenomenon is the dynamics of opinion formation, by means of simple mathematical models. In these models, individuals are called agents, and each of them is characterized by the value of a variable that represents its opinion on a particular topic -such as the intention to vote for a candidate in a ballot- which, for simplicity, can take one of two possible values (\(+1\) or \(-1\)). The opinion of each agent can change after interacting with other agents following simple rules. One of the most widely implemented interaction rules is the one introduced in a model by Galam [11] and extensively studied later on [6; 12; 13; 14], to which we refer as the Galam Majority Model (GMM), in which all agents of a group chosen at random adopt the opinion of the majority in that group. This local dynamics drives a steady increase of the initial global majority opinion (provided the system's symmetry is not broken at ties for even-size groups) which eventually ends at a consensus, i.e., an absorbing state where all agents share the same opinion. Multiple extensions of the GMM have been studied in the literature, including the possibility of a contrarian behavior, that is, all members of a chosen group taking the minority opinion [7]. This work studied the effects of introducing a fixed fraction \(a\) of contrarian agents on the original GMM, where it was found that, instead of a frozen consensus as in the model with no contrarians, the system reaches an ordered stationary state for \(a<a_{c}\) and a disordered stationary state for \(a>a_{c}\). The transition value \(a_{c}\) separates an ordered phase where a large majority of agents hold the same opinion, from a disordered phase in which both opinions are equally represented in the population.
Many other opinion formation models with contrarians were also studied in [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34]. In particular, the effects of contrarian behavior were also investigated in the voter model (VM) for opinion formation [24], where agents interact by pairs and one adopts the opinion of the other with probability \(1-p\) (imitation) or the opposite opinion with probability \(p\) (contrarian). It was shown that the model displays a transition from order to disorder when the probability of having a contrarian behavior overcomes the threshold \(p_{c}=(N+1)^{-1}\) in a system of \(N\) agents. The contrarian voter model [24] was recently studied under the presence of a mass media propaganda that influences agents' decisions [34]. The propaganda was implemented in the form of an external oscillating field that tends to align agents' opinions in the
direction of the field. It was found a stochastic resonance (SR) phenomena within an oscillatory regime, that is, there is an optimal level of noise for which the population effectively responds to the modulation induced by the external field [35; 36].
In order to expand our knowledge on the combined effects of contrarians and propaganda on opinion models, we study in this article the GMM with contrarian behavior under the presence of an external field. Each agent in the population can either follow a majority rule that increases similarity with its neighbors or behave as a contrarian by adopting the opposite opinion, with respective probabilities \(p(t)\) and \(1-p(t)\). The majority probability \(p(t)\) varies in time according to an external field, based on a mathematical form introduced in [21; 22] for the Sznajd model and implemented in [34; 37] for the VM, so that agents tend to follow the majority when it is aligned with the field. By exploring the dynamics of the GMM model under the influence of an oscillating external field and the presence of contrarians, we aim to gain deeper insights into the manifestation of the SR phenomenon in opinion dynamics models. We show that this model exhibits unimodal and bimodal oscillatory regimes, as well as a SR that is observed close to the transition between the two regimes.
It is worth mentioning that, while GMM belongs to the class of "non-linear" models whose mean-field dynamics is associated to a double-well Ginzburg-Landau potential, the VM with contrarians described above belongs to a completely different class characterized by an associated zero potential that leads to a dynamics driven purely by noise [38]. A main consequence of this difference is that the average magnetization is conserved in the VM, while it is not in the GMM. Another consequence is that, in the version of these models with contrarians, the order-disorder transition in the thermodynamic limit (\(N\to\infty\)) takes place at a finite fraction of contrarian agents \(a_{c}>0\) in the GMM, while in the VM the transition happens at a vanishing contrarian probability (\(p_{c}\to 0\)). We also need to mention that the SR effect has also been observed in other opinion models. For instance in [21; 22] the authors found SR in a variation of the Sznajd model with stochastic driving and a periodic signal. The work in [14] analyzed a majority rule dynamics under the action of noise and an external modulation, and found a SR that depends on the randomness of the small-world network. There are also other works [39; 40; 41; 42; 43] that explored the combined effects of a stochastic driving and an external signal on a majority rule dynamics. However, none of these works have incorporated a contrarian behavior in the dynamics.
The rest of the article is organized as follows. We introduce the model in section II. In section III we present numerical simulation results for the evolution of the system and the behavior of different magnitudes that characterize the SR phenomena. In section IV we develop a mean-field (MF) approach that gives an insight into the system's evolution and the relation between the SR and the transition between different regimes. Finally, in section V we summarize our findings and discuss the results.
## II The model
We consider a population of \(N\) interacting agents where a given agent \(i\) (\(i=1,..,N\)) can hold one of two possible opinion states \(s_{i}=+1,-1\). We denote by \(\sigma_{+}(t)\) and \(\sigma_{-}(t)\) the fraction of nodes with respective states \(+1\) and \(-1\) at time \(t\), such that \(\sigma_{+}(t)+\sigma_{-}(t)=1\) for all \(t\geq 0\). In a time step \(\Delta t=1/N\) of the dynamics, we follow the basic GMM using groups of size three to update individual opinions. However, here for our purpose of investigating the effects of propaganda on individuals, we implement the rule in a different setting, which does not modify the outcome. Instead of selecting three agents randomly to update all of them at once, we pick one agent \(i\) with state \(s_{i}\) and a group of three other different agents \(j,k,l\) (\(i\neq j\neq k\neq l\)), all randomly chosen. In the \(N\gg 1\) limit, their respective states are \((s_{j},s_{k},s_{l})\) with probability \(\sigma_{s_{j}}\sigma_{s_{k}}\sigma_{s_{l}}\). A majority of \(+\) choices is thus obtained for the configurations \((+,+,+)\), \((+,+,-)\), \((+,-,+)\) and \((-,+,+)\), yielding an overall probability
\[P_{+}\equiv\sigma_{+}^{3}+3\sigma_{+}^{2}\sigma_{-}. \tag{1}\]
Similarly, a majority of \(-\) occurs for \((-,-,-)\), \((+,-,-)\), \((-,+,-)\) and \((-,-,+)\), with the overall probability
\[P_{-}\equiv\sigma_{-}^{3}+3\sigma_{-}^{2}\sigma_{+}. \tag{2}\]
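As a quick consistency check, \(P_{+}\) and \(P_{-}\) sum to one, since \(\sigma_{+}+\sigma_{-}=1\):

\[P_{+}+P_{-}=\sigma_{+}^{3}+3\sigma_{+}^{2}\sigma_{-}+3\sigma_{-}^{2}\sigma_{+}+\sigma_{-}^{3}=(\sigma_{+}+\sigma_{-})^{3}=1.\]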
Then, agent \(i\) updates its state in two steps. i) First, the update follows the basic GMM, where agent \(i\) simply adopts the majority state of the group of the three agents \(j,k,l\). We thus have \(s_{i}\to s_{i}=+1\) with probability \(P_{+}\), or \(s_{i}\to s_{i}=-1\) with probability \(P_{-}=1-P_{+}\). ii) Second, agent \(i\) can either preserve this majority state (\(s_{i}\to s_{i}\)) with probability \(p_{s_{i}}\), or change to the opposite (minority) state (\(s_{i}\to-s_{i}\)) with the complementary probability \(1-p_{s_{i}}\), where \(p_{s_{i}}\) is defined below. The implication of this second step is that each agent can behave as a "contrarian" by adopting the state opposed to the majority (minority state) with probability \(1-p_{s_{i}}\), or as a "majority follower" with probability \(p_{s_{i}}\). Thus, there is no fixed fraction of contrarian agents as in [7].
At this point, we introduce the effect of an external field \(H\) on agent \(i\) in state \(s_{i}\) within a Boltzmann scheme, by assuming that the probability \(p_{s_{i}}\) to preserve the majority state is larger when \(s_{i}\) is aligned with \(H\) [i.e. \(\text{sign}(s_{i})=\text{sign}(H)\)],
\[p_{s_{i},H}=\frac{e^{[s_{i}H]/T}}{e^{[s_{i}H]/T}+e^{-[s_{i}H]/T}}\;, \tag{3}\]
where \(T\geq 0\) is a parameter that plays the role of a _social temperature_ analogous to the contrarian feature of the GMM. The related probability to oppose the field is \(1-p_{s_{i},H}\). We assume that \(H\) is an oscillating periodic field \(H(t)=H_{0}\sin(\omega t)\) with amplitude \(H_{0}\) (\(0\leq H_{0}\leq 1\)), frequency \(\omega=2\pi/\tau\) and period \(\tau\), which represents an external propaganda. Thus, according to Eq. (3), agents are more likely to keep the opinion that is aligned with the propaganda. In addition to the external field, we introduce an individual "inertia" parameter \(I\), which provides an agent with a weight to preserve its current state against a field favoring the opposite state. It is a self-interaction \(-Is_{i}s_{i}\) that modifies Eq. (3) as
\[p_{s_{i},I,H}=\frac{e^{[Is_{i}+H]s_{i}/T}}{e^{[Is_{i}+H]s_{i}/T}+e^{-[Is_{i}+H]s_{i}/T}}\;, \tag{4}\]
which can be rewritten as
\[p_{s_{i},1,H}=\frac{e^{[1+s_{i}H]/T}}{e^{[1+s_{i}H]/T}+e^{-[1+s_{i}H]/T}}\;, \tag{5}\]
where \(I,H,T\) have been rescaled as \(1\), \(\frac{H}{I}\), \(\frac{T}{I}\) using \(s_{i}^{2}=1\).
At this stage we combine the GMM with the inertia and field effects by taking
\[p_{s_{i}}(t)=\frac{e^{[1+s_{i}H(t)]/T}}{e^{[1+s_{i}H(t)]/T}+e^{-[1+s_{i}H(t)]/T }} \tag{6}\]
for the probability of agent \(i\) to keep the majority state \(s_{i}\), and \(1-p_{s_{i}}(t)\) for the probability to adopt the opposite (minority) state \(-s_{i}\), which can be interpreted as a noise. Finally, combining Eqs. (1), (2) and (6), the probability \(\mathcal{P}_{+}\) for a randomly selected agent \(i\) to adopt the state \(+\) in a single time step \(\Delta t\) is given by
\[\mathcal{P}_{+}=(\sigma_{+}^{3}+3\sigma_{+}^{2}\sigma_{-})\frac{e^{[1+H(t)]/T }}{e^{[1+H(t)]/T}+e^{-[1+H(t)]/T}}+(\sigma_{-}^{3}+3\sigma_{-}^{2}\sigma_{+}) \frac{e^{-[1-H(t)]/T}}{e^{[1-H(t)]/T}+e^{-[1-H(t)]/T}}\;, \tag{7}\]
where the first term comes from following a local majority \(+\) among the three selected agents, which happens with probability \(P_{+}p_{+}(t)\), while the second term corresponds to opposing the state \(-\) in case of a majority of \(-\) among the three selected agents, which happens with probability \(P_{-}[1-p_{-}(t)]\). Analogously, the state \(-\) is selected with probability \(\mathcal{P}_{-}\equiv 1-\mathcal{P}_{+}\).
As noted above, only the "focal agent" \(i\) updates its state, unlike in the original GMM where all agents in the chosen group update their states. Equation (6) shows that individuals are more prone to adopt the opinion of the majority when it is aligned with the propaganda. In addition, \(p_{+}\) and \(p_{-}\) approach the value \(1\) as \(T\to 0\), which makes this case equivalent to the original GMM, with neither contrarians nor external field. In the opposite limit \(T\rightarrow\infty\), \(p_{+}\) and \(p_{-}\) approach the value \(1/2\), which corresponds to the pure noise case where agents take one of the two opinions at random, independent of the field.
## III Numerical results
### Evolution of the magnetization
We start by studying the time evolution of the mean opinion of the population or magnetization defined as \(m(t)\equiv\frac{1}{N}\sum_{i=1}^{N}s_{i}(t)\), for the simplest case of zero field \(H=0\), which corresponds to the contrarian GMM with symmetric majority probabilities \(p_{+}=p_{-}=p=(1+e^{-2/T})^{-1}\). We run several independent realizations of the dynamics where, initially, each agent adopts state \(+1\) or \(-1\) with respective probabilities \(\sigma_{+}(0)\) and \(\sigma_{-}(0)\), leading to an initial average magnetization \(m(0)=\sigma_{+}(0)-\sigma_{-}(0)\). Due to the symmetry of the system, the evolution of the average value of \(m\) over many realizations starting from \(m(0)=0\) gives \(\langle m\rangle(t)\simeq 0\) for all \(t\geq 0\), which does not describe the correct behavior of the system. Instead, we looked at the evolution of the absolute value of the
magnetization, \(|m|\), as we show in Fig. 1(a), for various values of \(p\). In Fig. 1(b) we show in circles the stationary value of \(\langle|m|\rangle\) as a function of \(T\) for \(H=0\). We observe that, as \(T\) increases, the system displays a transition from an ordered state (\(|m|>0\)) for \(T<T_{c}^{0}\), to a disordered state (\(|m|\simeq 0\)) for \(T>T_{c}^{0}\), where \(T_{c}^{0}\) is a transition temperature. This order-disorder transition, reminiscent of the GMM with a fixed fraction of contrarian agents [7], is induced by the presence of a contrarian behavior that acts as a source of external noise, preventing the system from reaching full consensus. When the noise amplitude, controlled by \(T\), overcomes a threshold value \(T_{c}^{0}\) the system reaches complete disorder. In section IV we develop a mean-field approach that allows us to estimate the transition temperature as \(T_{c}^{0}\simeq 1.24\). When the field is turned on, these results change completely. If the field remains constant in time (constant propaganda \(H=\text{const}\)), the symmetry of the system is broken in the direction of \(H\), increasing the stationary value of \(\langle|m|\rangle\) as compared to the \(H=0\) case. This effect can be seen in Fig. 1(b), where we see that \(\langle|m|\rangle^{*}\) increases monotonically with \(H\). Moreover, the order-disorder transition disappears for \(H>0\) (see \(H=0.1\) and \(H=0.5\) curves).
If we now let the field oscillate in time, several distinct regimes emerge. In Fig. 2 we show the evolution of \(m\) in a single realization under the effects of an oscillating field, for three different amplitudes \(H_{0}\), period \(\tau=256\) and various temperatures. For the \(H_{0}=0.1\) and \(H_{0}=0.5\) cases [panels (a) and (b)], we can see that for low temperatures \(m\) oscillates around a positive or negative value, and that oscillations vanish for small enough \(T\), where \(m\) stays at a value close to \(1.0\) (consensus), as we can see for \(T=0.2\) and \(T=0.1\) in panels (a) and (b), respectively. The center of oscillations can jump from positive to negative values and vice versa (bimodal regime), as we can see in panel (b) for \(T=0.5\). Above a given temperature threshold, \(T_{c}\simeq 1.0\) for \(H_{0}=0.1\) [panel (a)] and \(T_{c}\simeq 0.5\) for \(H_{0}=0.5\) [panel (b)], the magnetization oscillates around \(m=0\) (unimodal regime). This behavior is reminiscent of the ordered and disordered phases in the model without field [Fig. 1(b)], although the transition temperature \(T_{c}^{0}\simeq 1.24\) for \(H=0\) is quite different from that of the model with oscillating field. An insight into this behavior shall be given in section IV. For \(H_{0}=1.0\) [panel (c)] oscillations are centered at \(m=0\) even for small \(T\), and thus the bimodal regime is not observed. Finally, at very large temperatures the high levels of noise lead to purely stochastic dynamics where agents adopt an opinion at random, and thus \(m\) fluctuates around zero.
### Residence times
In order to characterize the different regimes described in the last section, we study here the residence time \(t_{r}\), defined as the time interval between two consecutive changes of the sign of \(m\), i.e., when \(m\) crosses the center value \(m=0\). In a single realization, \(m\) can change sign multiple times depending on the parameter values, leading to a distribution of the residence time that is particular to each regime. Results are shown in Fig. 3 for \(N=1025\), \(H_{0}=0.1\), \(\tau=256\) [panel (a)] and \(\tau=1024\) [panel (b)]. In the unimodal regime \(m\) follows the oscillations of \(H(t)\) around zero, and thus \(m\) tends to change sign when \(H\) does, every time interval \(\tau/2\). Therefore, the residence time distribution (\(RTD\)) is peaked at \(t_{r}\simeq\tau/2\), as shown in panel (a) for temperatures \(T=1.04\) and \(T=1.3\), and in panel (b) for \(T=0.98\) and \(T=1.3\). In the bimodal regime, the RTD exhibits multiple peaks at \(t_{r}=(n+1/2)\tau\) (\(n=0,1,2,..\)) (see panels for \(T=0.95\)). Here \(m\) tends to perform oscillations around a positive (negative) value
Figure 1: (a) Time evolution of the average value of the absolute magnetization \(|m|\) in a population of \(N=10^{3}\) agents, zero field \(H=0\) and various values of majority probability \(p=(1+e^{-2/T})^{-1}\), as indicated in the legend. (b) Stationary value of \(\langle|m|\rangle\) vs \(T\) for constant fields \(H=0.0\) (circles), \(H=0.1\) (squares) and \(H=0.5\) (diamonds). The solid line is the analytical expression from Eq. (11), while the dashed lines are the numerical integration of Eq. (8). The averages were done over \(10^{3}\) independent realizations starting from a symmetric condition \(m_{0}=0\).
until it changes to negative (positive) oscillations, and back to positive (negative) oscillations again, as we observe in Fig. 2(b) for \(T=0.5\). These changes are more likely to happen when \(H\) changes sign, in the first attempt at time \(t=\tau/2\), or in the second attempt one period later (at \(t=3\tau/2\)), or in the third attempt at \(t=5\tau/2\) and so on, leading to the different peaks in the \(RTD\). Finally, for very large \(T\) the \(RTD\) shows an exponential decay due to the stochastic fluctuations of \(m\) around zero (panels for \(T=10\)).
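As an illustration, the residence times can be extracted from a uniformly sampled magnetization trace by locating its sign changes; a minimal sketch of our own:

```python
import numpy as np

def residence_times(m):
    """Intervals between consecutive sign changes of a sampled m(t) trace."""
    sign = np.sign(m)
    sign[sign == 0] = 1                        # break exact zeros arbitrarily
    crossings = np.where(np.diff(sign) != 0)[0]
    return np.diff(crossings)                  # residence times t_r

# normalized histogram, as in Fig. 3:
# t_r = residence_times(m_trace)
# hist, edges = np.histogram(t_r, bins=50, density=True)
```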
### Stochastic resonance
The patterns of the RTD shown in section III.2 can be employed to quantify the phenomenon of stochastic resonance, as was done in related systems [14; 35]. The sensitivity or response of the system to the external oscillating field can be measured by the area \(\mathcal{A}\) under the first peak around \(\tau/2\) in the RTD histogram. It is expected that \(\mathcal{A}\) reaches a maximum at the resonance temperature \(T^{*}\), when \(m\) resonates with the field \(H\). This method to quantify the resonance is an alternative to the study of the signal-to-noise ratio [21; 22; 34]. Figure 4(a) shows the response \(\mathcal{A}\) vs \(T\) for a field of amplitude \(H_{0}=0.1\). Each curve corresponds to a different period \(\tau\). We observe that \(\mathcal{A}\) reaches a maximum value at a temperature \(T^{*}\) that depends on \(\tau\). The RTD for the resonance temperatures \(T^{*}=1.04\) and \(T^{*}=0.98\) for periods \(\tau=256\) and \(\tau=1024\), respectively, are shown in the top-right panels of Figs. 3(a) and 3(b), where we see the existence of a well-defined peak centered at \(t_{r}=\tau/2\). For larger temperatures (see \(T=1.3\)) there is also a peak at \(\tau/2\), although lower than that for \(T^{*}\), and the RTD exhibits another pronounced peak near \(t_{r}=0\), corresponding to the short crossings of \(m(t)\) that become more frequent as \(T\) increases (larger fluctuations in \(m\)).
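The response \(\mathcal{A}\) can then be estimated as the fraction of residence times falling under the first peak at \(t_{r}\simeq\tau/2\); in the sketch below, the half-width of the window around \(\tau/2\) is an illustrative choice of ours:

```python
import numpy as np

def response_A(t_r, tau, half_width=0.25):
    """Fraction of residence times under the first RTD peak at tau/2.

    The window half-width is expressed as a fraction of tau/2
    (an illustrative choice, not a value from the paper).
    """
    t_r = np.asarray(t_r)
    lo = (1 - half_width) * tau / 2
    hi = (1 + half_width) * tau / 2
    return np.mean((t_r >= lo) & (t_r <= hi))
```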
## IV Mean-field approach
In this section we analyze the behavior of the model within an MF approach, by deriving a rate equation for the evolution of \(m\) that corresponds to the dynamics introduced in section II. Let us write the fractions of \(+\) and \(-\)
Figure 3: Normalized histograms of the residence time \(t_{r}\) in a system of \(N=1025\) agents under a field of amplitude \(H_{0}=0.1\), period \(\tau=256\) (a) and \(\tau=1024\) (b), and the temperatures indicated in the legends. The bottom-right panels are on a linear-log scale.
Figure 2: Evolution of \(m\) in a single realization for a population of \(N=1024\) agents under an oscillating field with period \(\tau=256\) and amplitudes \(H_{0}=0.1\), \(0.5\) and \(1.0\), panels (a), (b) and (c), respectively, and the temperatures indicated in the legends. Solid lines correspond to MC simulations, while dashed lines in panel (a) represent the numerical integration of Eq. (8).
agents in terms of the magnetization \(m\), \(\sigma_{+}=(1+m)/2\) and \(\sigma_{-}=(1-m)/2\). As we described in section II, in a time step \(\Delta t=1/N\) a random agent \(i\) with state \(s_{i}=-1\) is chosen with probability \(\sigma_{-}\), and then adopts the state \(+\) (\(s_{i}=-1\to s_{i}=+1\) flip) with probability \({\cal P}_{+}=P_{+}p_{+}+P_{-}(1-p_{-})\), which corresponds to adopting either the majority state \(+\) of a selected \(+\) majority, or the minority state \(+\) of a selected \(-\) majority, where \(P_{+}\) and \(P_{-}\) are given by Eqs. (1) and (2), respectively. This flip \(-1\to+1\) leads to an overall change \(\Delta m=2/N\) in \(m\). Conversely, with probability \(\sigma_{+}\) the chosen agent \(i\) has state \(+1\), and flips to \(-1\) (\(s_{i}=+1\to s_{i}=-1\) flip) with probability \({\cal P}_{-}=P_{-}p_{-}+P_{+}(1-p_{+})\), leading to a change \(\Delta m=-2/N\). Assembling these factors, the mean change of \(m\) in a time step can be written as
\[\frac{dm}{dt}=\frac{1}{1/N}\left[\sigma_{-}{\cal P}_{+}\left(\frac{2}{N} \right)-\sigma_{+}{\cal P}_{-}\left(\frac{2}{N}\right)\right],\]
which becomes, in the \(N\to\infty\) limit, the rate equation
\[\frac{dm}{dt}=\frac{1}{2}m(m^{2}-5)+\frac{1}{2}p_{+}(1+m)^{2}(2-m)-\frac{1}{2 }p_{-}(1-m)^{2}(2+m), \tag{8}\]
after replacing the expressions for \({\cal P}_{+}\) and \({\cal P}_{-}\) and doing some algebra. Here
\[p_{+}(t)=\frac{e^{[1+H(t)]/T}}{e^{[1+H(t)]/T}+e^{-[1+H(t)]/T}}\quad\mbox{and} \quad p_{-}(t)=\frac{e^{[1-H(t)]/T}}{e^{[1-H(t)]/T}+e^{-[1-H(t)]/T}} \tag{9}\]
are the probabilities of adopting the state \(+1\) and \(-1\) of a majority, respectively, as defined in Eq. (6).
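Equation (8) can be integrated numerically for any field protocol; a minimal SciPy sketch of ours, again assuming a sinusoidal field \(H(t)=H_{0}\sin(2\pi t/\tau)\):

```python
import numpy as np
from scipy.integrate import solve_ivp

def dm_dt(t, m, T, H0, tau):
    """Right-hand side of the rate equation, Eq. (8)."""
    H = H0 * np.sin(2 * np.pi * t / tau)                # assumed field protocol
    p_plus = 1.0 / (1.0 + np.exp(-2 * (1 + H) / T))     # Eq. (9)
    p_minus = 1.0 / (1.0 + np.exp(-2 * (1 - H) / T))
    return (0.5 * m * (m**2 - 5)
            + 0.5 * p_plus * (1 + m)**2 * (2 - m)
            - 0.5 * p_minus * (1 - m)**2 * (2 + m))

T, H0, tau = 0.5, 0.1, 256
sol = solve_ivp(dm_dt, (0, 10 * tau), [0.9], args=(T, H0, tau), max_step=1.0)
print("m at t = 10*tau:", sol.y[0, -1])
```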
For the zero-field case (\(H_{0}=0\)) we have \(p_{+}=p_{-}=p=(1+e^{-2/T})^{-1}\), and thus Eq. (8) reduces to the simpler equation
\[\frac{dm}{dt}=\frac{1}{2}m\left[6p-5-(2p-1)m^{2}\right]. \tag{10}\]
Equation (10) has three fixed points corresponding to the possible stationary states of the agent-based model. The fixed point \(m_{0}^{*}=0\) is stable for \(p<5/6\) and corresponds to a disordered active state with equal fractions of \(+\) and \(-\) agents (\(\sigma_{+}=\sigma_{-}=1/2\)), whereas the two fixed points
\[m_{\pm}^{*}=\pm\sqrt{\frac{6p-5}{2p-1}} \tag{11}\]
are stable for \(p>5/6\), and they represent asymmetric active states of coexistence of \(+\) and \(-\) agents, with stationary fractions \(\sigma_{+}^{*}=(1+m_{+}^{*})/2>\sigma_{-}^{*}=(1-m_{+}^{*})/2\) and \(\sigma_{+}^{*}=(1+m_{-}^{*})/2<\sigma_{-}^{*}=(1-m_{-}^{*})/2\). The stable fixed points are plotted as a solid line in Fig. 1(b), where we observe a good agreement with MC simulation results (solid circles). Equation (11) shows the existence of a transition from order to disorder as \(T\) exceeds the value \(T_{c}^{0}=2/\ln(5)\simeq 1.24\) (\(p_{c}^{0}=5/6\)), as we already mentioned in section III.1. Notice that the probability of behaving as a contrarian \(1-p_{c}^{0}=1/6\) is identical to the critical proportion of contrarians \(a_{c}=1/6\) obtained in the GMM for groups of size 3 [7]. Given that Eq. (10) can be rewritten as a Ginzburg-Landau equation with an associated
Figure 4: (a) Response \({\cal A}\) as a function of the temperature \(T\) for a field of amplitude \(H_{0}=0.1\) and periods \(\tau\) indicated in the legend. (b) Resonance temperature \(T^{*}\) [maximum of \({\cal A}\) vs \(T\) curves from (a)] and transition temperature \(T_{c}\) vs period \(\tau\).
double-well potential with two minima at \(m^{*}_{\pm}\), we expect a bistable regime for \(T<T_{c}\), where in a single realization \(m\) jumps between \(m^{*}_{+}\) and \(m^{*}_{-}\).
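The stationary solutions for \(H=0\) and the transition temperature follow directly from Eq. (11); a quick numerical check (our sketch):

```python
import numpy as np

def m_star(T):
    """Stable fixed point of Eq. (10) for H = 0, from Eq. (11)."""
    p = 1.0 / (1.0 + np.exp(-2.0 / T))
    if p <= 5.0 / 6.0:                     # disordered phase, m* = 0
        return 0.0
    return np.sqrt((6 * p - 5) / (2 * p - 1))

T_c0 = 2.0 / np.log(5.0)                   # p_c = 5/6  =>  T_c^0 ~ 1.2427
print("T_c^0 =", T_c0)
for T in (0.5, 1.0, 1.2, 1.3):
    print(f"T = {T}: |m*| = {m_star(T):.3f}")
```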
For a field that is constant in time (\(H=\text{const}\neq 0\)) the fixed points of Eq. (8) are given by the roots of a cubic polynomial, and \(m=0\) is no longer a root. Only one root is real, and corresponds to the stationary state of the agent-based model. As the analytical expression for the real root is lengthy and not very useful, we integrated Eq. (8) numerically to find the stationary value \(m^{*}\), which we plot as a dashed line in Fig. 1(b) for \(H=0.1\) and \(0.5\). We observe a good agreement with MC simulations (symbols). A positive field \(H>0\) breaks the symmetry in favor of the \(+\) state, given that \(p_{+}>p_{-}\), leading to a positive stationary value \(m^{*}>0\) that increases monotonically with \(H\).
For an oscillating field \(H(t)\), we have that \(p_{+}(t)\) and \(p_{-}(t)\) oscillate in time according to \(H(t)\), which in turn leads to oscillations in \(m(t)\). In order to explore, within the MF approach, the behavior of \(m\) in the different regimes described in section III.1, we plot in Fig. 5(a) the evolution of \(m\) obtained from the numerical integration of Eq. (8) for \(H_{0}=0.1\), \(\tau=256\), and various temperatures. For low temperatures we see that \(m\) oscillates around a positive value (it could also be a negative value for other initial conditions), but when the temperature is increased beyond a threshold value oscillations become centered around \(m=0\). At first sight, this transition that happens in the oscillatory regime of \(m\), already reported in section III.1 from MC simulations, appears to be quite sharp, where the center of oscillations seems to jump from a large value to zero after a small increment of \(T\). To better characterize the transition we plot in Fig. 5(b) the temporal average of \(m\) from \(t=0\) to \(t=1000\tau\), called \(\overline{m}\), as a function of \(T\) and for several periods \(\tau\). The value of \(\overline{m}\) can be seen as an order parameter, which takes a positive or negative value in the bimodal regime and a value close to zero in the unimodal regime. We can see that \(\overline{m}\) decreases continuously with \(T\) for low \(\tau\) (see curve for \(\tau=64\)), and that the transition becomes more abrupt as \(\tau\) increases (see curves for \(\tau\geq 256\)). The inset shows a more detailed view of the transition in the value of \(\overline{m}\).
In Fig. 2(a) we compare the evolution of \(m\) obtained from the MF approach (dashed lines) with that from MC simulations, for \(H_{0}=0.1\), \(\tau=256\), and various temperatures. We observe a good agreement with single realizations of the dynamics, except for the temperature \(T=1.0\) that is close to the transition value \(T_{c}\simeq 0.981\), estimated from Fig. 5(b) as the point where \(\overline{m}\) becomes zero. This discrepancy is due to the fact that the MF approach cannot reproduce the random jumps of \(\overline{m}\) from the value \(\overline{m}\simeq 0.564\) in the bimodal regime to \(\overline{m}\simeq 0\) in the unimodal regime. These jumps are induced by finite-size fluctuations, and are more frequent when the control parameter \(T\) is close to the transition point \(T_{c}\).
An insight into the behavior of the resonance temperature \(T^{*}\) with the period \(\tau\) can be obtained from the MF approach assuming that the response \(\mathcal{A}\) reaches a maximum value at a temperature similar to the transition point \(T_{c}\), that is, we expect \(T^{*}\simeq T_{c}\). This is because in the bimodal regime \(T<T_{c}\) the magnetization \(m\) oscillates around a positive or a negative value and eventually crosses \(m=0\) around times \(t=\tau/2\), \(3\tau/2\), etc., by finite-size fluctuations, leading to multiple peaks in the residence time distribution. Then, at \(T=T_{c}\), oscillations start to be centered at \(\overline{m}=0\), and thus we expect that the \(RTD\) shows a single peak at \(\tau/2\). By increasing \(T\) beyond \(T_{c}\) we expect that the height of the peak for \(T=T_{c}\) is reduced by the presence of a higher noise that induces another maximum of the \(RTD\) at \(t=0\), as explained in section III.2, leading to a smaller \(\mathcal{A}\). Therefore, we expect that \(\mathcal{A}\) is maximum at \(T\simeq T_{c}\). Figure 4(b) shows in diamonds the value of \(T_{c}\) obtained from Fig. 5(b) for various periods \(\tau\). We see that
Figure 5: (a) Time evolution of the magnetization \(m\) from Eq. (8) for a field of amplitude \(H_{0}=0.1\), period \(\tau=256\), and the temperatures indicated in the legend. Horizontal dashed lines represent the time average value of \(m\), \(\overline{m}\), in the interval \(t\in(0,1000\tau)\). (b) Time average of the magnetization, \(\overline{m}\), vs temperature \(T\) for the field’s periods indicated in the legend. The inset shows a closer look around the transition values \(T_{c}\).
\(T_{c}\) decreases with \(\tau\), as happens with \(T^{*}\) (circles), although discrepancies between \(T_{c}\) and \(T^{*}\) increase as \(\tau\) decreases.
## V Summary and discussion
In this article we studied the dynamics of the binary-state majority rule model introduced by Galam for opinion formation, in the presence of external propaganda and contrarian behavior. When an agent has to update its opinion, it can either follow the majority opinion among three random neighbors, similarly to the original GMM, or adopt the opposite (contrary) opinion, i.e., the minority opinion. The probability of adopting the majority opinion \(p_{\pm}(t)\) is coupled to an external field that oscillates periodically in time (propaganda), in a way that agents are more likely to adopt the majority opinion when it is aligned with the propaganda. This rule tries to reproduce a reinforcing mechanism by which individuals have a tendency to follow the majority opinion when it is in line with mass media propaganda. Besides, the majority probability \(p_{\pm}\) depends on a parameter \(T\) (temperature) that acts as an external source of noise, in such a way that by increasing \(T\) from zero the system goes from following the majority opinion only (\(p_{\pm}=1\) for \(T=0\)) to adopting a random opinion for large temperatures (\(p_{\pm}=0.5\) for \(T\gg 1\)).
We explored the model on a complete graph (all-to-all interactions) and found different phenomena associated with different regimes as \(T\) is varied. For \(T\) below a threshold value \(T_{c}\) the system is in a bimodal regime, where the mean opinion \(m\) oscillates in time around a positive or negative value, \(\overline{m}_{\pm}\), and performs jumps between \(\overline{m}_{+}\) and \(\overline{m}_{-}\) due to finite-size fluctuations, similarly to what happens in a bistable system. As the temperature is increased beyond \(T_{c}\) there is a transition to a unimodal regime in which \(m\) oscillates around zero, where the amplitude of oscillations decreases with \(T\) and eventually vanishes in the \(T\gg 1\) limit that corresponds to pure noise. The transition at \(T_{c}\) becomes more abrupt as the period \(\tau\) of the field increases. We also studied the response of the system to the external field, by means of the distribution of residence times, i.e., the time interval between two consecutive changes of the sign of \(m\). We found that there is an optimal temperature \(T^{*}\) for which the response is maximum, that is, a stochastic resonance phenomenon induced by the external noise controlled by \(T\). Also, we developed a mean-field approach that led to a non-linear rate equation for the time evolution of \(m\) in the thermodynamic limit, whose numerical solution agrees very well with MC simulations of the model. We used this equation to give a numerical estimate of \(T_{c}\), and found that the behavior of \(T_{c}\) with the period \(\tau\) is qualitatively similar to that of \(T^{*}\). Although the transition temperature \(T_{c}\) is similar to the resonance temperature \(T^{*}\) only for large \(\tau\), this analysis shows that they are related.
A possible interpretation of these results in a social context is the following. Reacting with a contrarian attitude occasionally (small \(T\), i.e., low noise) on a given issue, that is, adopting an opposite position to that of the majority of our acquaintances, leads to a state of collective agreement in a population, which can be reversed completely after some time by means of a collective decision, independently of the external propaganda. This alternating behavior between opposite opinions might be seen as more "socially healthy" than a frozen full consensus in one of the two alternatives, which happens in populations with a total absence of contrarian attitudes (\(T=0\)). However, adopting a contrarian attitude more often could induce a collective state where the mean opinion oscillates in time following the external propaganda, which can be interpreted as a society whose opinions are manipulated optimally by the mass media, in opposition to collective freedom. Finally, in the extreme case of a very frequent contrarian attitude (\(T\gg 1\)) the population falls into a state of opinion bipolarization, where there are two groups of similar size with opposite opinions.
The results presented in this article correspond to a fully connected network. Although we expect that the conclusions remain valid qualitatively for other interaction topologies, it might be worthwhile to study the model in complex networks like scale-free or Erdős–Rényi networks, which better represent social interactions. It might also be interesting to explore how the stochastic resonance effect depends on the topology of the network.
###### Acknowledgements.
The authors are grateful to CONICET (Argentina) for continued support.
|
2309.12219 | Generating robotic elliptical excisions with human-like tool-tissue
interactions | In surgery, the application of appropriate force levels is critical for the
success and safety of a given procedure. While many studies are focused on
measuring in situ forces, little attention has been devoted to relating these
observed forces to surgical techniques. Answering questions like "Can certain
changes to a surgical technique result in lower forces and increased safety
margins?" could lead to improved surgical practice, and importantly, patient
outcomes. However, such studies would require a large number of trials and
professional surgeons, which is generally impractical to arrange. Instead, we
show how robots can learn several variations of a surgical technique from a
smaller number of surgical demonstrations and interpolate learnt behaviour via
a parameterised skill model. This enables a large number of trials to be
performed by a robotic system and the analysis of surgical techniques and their
downstream effects on tissue. Here, we introduce a parameterised model of the
elliptical excision skill and apply a Bayesian optimisation scheme to optimise
the excision behaviour with respect to expert ratings, as well as individual
characteristics of excision forces. Results show that the proposed framework
can successfully align the generated robot behaviour with subjects across
varying levels of proficiency in terms of excision forces. | Arturas Straizys, Michael Burke, Subramanian Ramamoorthy | 2023-09-21T16:18:33Z | http://arxiv.org/abs/2309.12219v1 | # Generating robotic elliptical excisions with human-like tool-tissue interactions
###### Abstract
In surgery, the application of appropriate force levels is critical for the success and safety of a given procedure. While many studies are focused on measuring in situ forces, little attention has been devoted to relating these observed forces to surgical techniques. Answering questions like "Can certain changes to a surgical technique result in lower forces and increased safety margins?" could lead to improved surgical practice, and importantly, patient outcomes. However, such studies would require a large number of trials and professional surgeons, which is generally impractical to arrange. Instead, we show how robots can learn several variations of a surgical technique from a smaller number of surgical demonstrations and interpolate learnt behaviour via a parameterised skill model. This enables a large number of trials to be performed by a robotic system and the analysis of surgical techniques and their downstream effects on tissue. Here, we introduce a parameterised model of the elliptical excision skill and apply a Bayesian optimisation scheme to optimise the excision behaviour with respect to expert ratings, as well as individual characteristics of excision forces. Results show that the proposed framework can successfully align the generated robot behaviour with subjects across varying levels of proficiency in terms of excision forces.
## I Introduction
Surgical excision implies the application of physical forces necessary for tissue separation [1]. A successful procedure requires appropriate levels of excision forces - sufficient for cutting, yet conservative enough to avoid damaging tissue (excessive force can account for more than half of the medical errors committed by surgical trainees [2]). Cutting forces, on the other hand, strongly depend on the configuration of the blade [3, 4, 5], and therefore on the excision technique. In order to derive optimal surgical techniques, in the context of surgical training or autonomous surgery, tool-tissue interaction forces and their downstream effect on the tissues must be studied in a controlled manner. A comprehensive analysis of a wide range of behaviours across different levels of expertise is needed to identify good and bad practices and to measure their benefit or harm.
Unfortunately, such studies require a large number of participants at different stages of their professional development, which is extremely time-consuming and challenging in terms of logistics. As an alternative, we can use a considerably smaller number of subjects to teach a robot to perform excisions at various levels of proficiency and apply machine learning techniques to interpolate the behaviour between the demonstrations. A suitably parameterised behaviour model would let us generate a large number of trials with tight control over the process parameters, such as excision velocity, blade insertion angle, etc. Such a robotic setup would facilitate the exploration of the downstream effects of the cutting technique on the tissues, and therefore provide a deeper insight into its efficacy and safety. Importantly, this gives us the ability to align robotic cutting behaviour with the desired characteristics of excision forces to analyse various techniques and their influence on tissue outcome.
A standard approach to accomplish this relies on imitation learning techniques [6], such as the Dynamic Movement Primitive (DMP) [7], to learn excision behaviours directly from demonstrations. In the case of the classical DMP formulation, the encoded policy can be generalised via hyper-parameters, such as the goal state and temporal scaling, as well as the coupling terms [8]. Unfortunately, although this allows exploration around the demonstrated trajectories, the parameterised policies are restricted to individual demonstrations with no relation to one another. Alternatively, one could apply DMP formulations that leverage multiple demonstrations to capture behaviour variability as a separate task parameter [9, 10], which can be interpolated to synthesise unseen behaviours [11]. In the context of learning surgical excisions, these methods allow explicit capture of different "styles" of cutting tissues, which is particularly relevant for our task. However, these methods are unsuitable for encoding the end-effector's pose trajectory in Cartesian space, as the orientation component requires special treatment of the SO(3) structure [12]. This presents challenges in applying the above methods to learn cutting skills, as the position and orientation trajectories of a blade are inherently coupled due to the nonholonomic nature of the cutting motion [13, 14].
As an alternative, in this paper we introduce a simple parametric model of elliptical excision that decomposes the skill into nominal and behaviour-driven components, each of which can be learned from demonstrations. Here, we focus on modelling a sawing movement as the most dominant characteristic of the elliptical excision technique [15] observed in human trials [16]. Our model encodes the behaviour with a single real-valued parameter \(\rho\) that determines the amount of sawing movement applied during excision. We then show how this model can generate a variety of human-like excision trajectories, and apply Bayesian optimisation to align the generated behaviour with expert ratings.
Finally, as the core contribution of this paper, we propose a framework for aligning human-like robotic elliptical excisions with the desired characteristics of excision forces and demonstrate its applicability for analysing excision skills. In this framework (Fig. 1), we generate an elliptical excision behaviour using the parametric model described above, which generates the pose trajectory of the blade, specified by a behaviour parameter \(\rho\) and a nominal cutting trajectory. The robot executes the generated trajectory in a real-world experiment. A suitable model characterising the excision behaviour from the excision force measurements [16, 17, 18] allows us to 1) calibrate the robotic behaviours to match the desired characteristics of tool-tissue interaction, and 2) use the aligned robotic behaviour to analyse the excision techniques in a well-controlled and repeatable manner. Here, we propose a Bayesian Optimisation scheme to minimise the number of phantom excisions required for behaviour tuning.
## II Parametric generation of human-like excision trajectories
The primary objective of our excision trajectory generator is to produce realistic pose trajectories of the blade, mirroring those typically observed during an elliptical excision procedure. In particular, we are interested in learning the auxiliary sawing movement of the blade, which can play a role in assisting the excision by lowering the cutting forces required for the task [3]. In this section, we introduce a generative model of blade trajectories with a single parameter \(\rho\) that specifies the amount of sawing movement applied to any desired elliptical excision motion.
Our central hypothesis is that elliptical excision comprises two movement components: a nominal smooth cutting motion along the ellipse and an adaptive sawing motion that assists the excision. To better understand the interplay between these two components, we recorded ten elliptical excisions with varying amounts of sawing behaviour, from a highly pronounced sawing movement to an extremely smooth excision. Fig. 2 (top and middle rows) shows the measured position and orientation trajectories of the blade in each trial. Notice the back-and-forth oscillation of the blade on the \(XY\) plane, a distinct feature of the sawing movement. It should be noted that sawing is executed at a noticeably different pitch when compared to smoother executions. Unsurprisingly, smoother excisions result in a much lower spread of the pose trajectories, which highlights the challenge of maintaining consistent motion when sawing. As expected, the sawing motion is most dominant along the \(z\) axis, i.e. the cutting depth. However, the sawing is also reflected in the blade's \(x\), \(y\) and pitch trajectories.
Fig. 2 (bottom row) shows the trajectories of differenced \(x\), \(y\) and pitch motion components, compared to a differenced \(z\) trace, all from one of the observed sawing behaviours. The entire trace of \(x\), and the first half of the parabola in the \(y\) trajectory, are noticeably in-phase with the \(z\). In contrast, the whole trace of the pitch, and the second half of the parabola in the \(y\) trajectory, are noticeably out-of-phase with the \(z\) trajectory. The relationship between \(z\) and \(xy\) trajectories indicates that sawing movement is achieved by propagating the blade forward on the ascent, with residual backward motion on the blade's descent. The relation between \(z\) and pitch trajectories suggests that consistent modulation of the insertion angle is also a part of the slicing motion.
### _Modeling blade trajectories for elliptical excisions_
Given the above insights, we model the elliptical excision behaviour as follows. First, we decompose the cutting motion into a _nominal_ movement component that follows the desired smooth cutting contour, and a _behaviour_ component that characterises the manner of task execution (e.g. sawing vs smooth excisions). For the \(z\) trajectory, this is written as
\[z_{m}=z_{n}+z_{b} \tag{1}\]
where \(z_{m}\), \(z_{n}\) and \(z_{b}\) are the measured, nominal and behaviour trajectories along the \(z\) axis, respectively.
Next, we assume that a nominal movement component can be approximated by low-pass filtering of the measurements, and therefore the behaviour component can be computed as \(z_{b}=z_{m}-z_{n}\). The obtained behaviour component \(z_{b}\) can be used to model the observed \(x\), \(y\) and pitch trajectories of the
Fig. 1: The proposed framework to generate human-like excision behaviours operates as follows. At a high level, an objective function takes in the sawing parameter \(\rho\) and outputs an observation similarity score \(y\). The goal of the optimiser is to find the value for \(\rho\) that maximises the observation variable \(y\). The inner structure of the proposed objective function consists of the proposed trajectory generator (\(\mathcal{T}\)) that given the scalar \(\rho\in[1,10]\) and a nominal trajectory \(\tau_{\text{nom}}\), generates the pose trajectory of the blade \(\tau\). Next, the robot executes the trajectory \(\tau\), and collected forces \(\psi\) are converted to a set of performance features by model \(\mathcal{F}\). The obtained features are then used to define the final objective function, e.g. the similarity score between the robot-executed task and the performance of an actual surgeon, which updates the optimiser and generates a new excision behaviour.
blade, as follows:
\[\begin{split} x_{m}&=x_{n}+c_{x}z_{b}\\ y_{m}&=y_{n}+c_{y}z_{b}\\ \beta_{m}&=\beta_{n}+c_{\beta}z_{b}\end{split} \tag{2}\]
Here \(x_{m}\), \(y_{m}\) and \(\beta_{m}\) are the measured \(x\), \(y\) and pitch trajectories, respectively, and \(x_{n}\), \(y_{n}\) and \(\beta_{n}\) are their corresponding nominal trajectories. \(c_{x}\), \(c_{y}\) and \(c_{\beta}\) are the scaling coefficients (We invert the sign of \(c_{y}\) in the second half of the \(y_{m}\) parabola to correctly represent the modulation of parabolic nominal trajectories on the \(XY\) plane).
As a result, we model the \(x\), \(y\) and pitch trajectories as a function of \(z_{b}\). According to this model, any modulation along the cutting depth is reflected in the \(x\), \(y\) and pitch trajectories. Fig. 3 (top row) compares the actual trajectories with those predicted by our model for one of the observed sawing behaviours. Here, we used a first-order Butterworth filter with a cutoff frequency of 0.6 Hz to obtain the nominal trajectories from the raw measurements and set \(c_{x}\), \(c_{y}\) and \(c_{\beta}\) parameters to 0.2, 0.13 and -1.2 values, respectively.
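A minimal sketch of this decomposition and reconstruction (our illustration; the sampling rate is an assumption, while the filter settings and coefficients are those quoted above):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 100.0                                  # assumed sampling rate, Hz
C_X, C_Y, C_BETA = 0.2, 0.13, -1.2          # scaling coefficients from the text

def decompose(z_m, fs=FS, cutoff=0.6):
    """Split a measured z trajectory into nominal and behaviour parts, Eq. (1)."""
    b, a = butter(1, cutoff, btype="low", fs=fs)  # first-order Butterworth
    z_n = filtfilt(b, a, z_m)                     # nominal (smooth) component
    return z_n, z_m - z_n                         # (z_n, z_b)

def reconstruct(x_n, y_n, beta_n, z_b):
    """Modulate the nominal x, y and pitch trajectories with z_b, Eq. (2)."""
    half = len(z_b) // 2
    c_y = np.where(np.arange(len(z_b)) < half, C_Y, -C_Y)  # sign flip, 2nd half
    return x_n + C_X * z_b, y_n + c_y * z_b, beta_n + C_BETA * z_b
```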
### _Learning cutting behaviour_
The movement component \(z_{b}\), which defines the excision behaviour, can be learned using one of many supervised learning techniques. In this work, we first apply the following function approximation to encode the behaviour component \(z_{b}\):
\[z_{b}(t)\approx\frac{\sum_{i=1}^{N}\psi_{i}(t)\theta_{i}}{\sum_{i=1}^{N}\psi_{ i}(t)}, \tag{3}\]
where \(\psi_{i}(t)=\text{exp}(-h_{i}(t-c_{i})^{2})\) are Gaussian basis functions, \(N\) is the number of basis functions, \(t\) is the timestep, \(c_{i}\) and \(h_{i}\) are the centres and widths of the basis functions, and \(\theta_{i}\) are the weights of the basis functions.
The vector of learned weights \(\boldsymbol{\theta}=[\theta_{1},...,\theta_{N}]\) can be viewed as a compressed representation of the \(z_{b}\) time series.
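The weights \(\theta_{i}\) of Eq. (3) can be fitted by ordinary least squares on the normalised basis; a self-contained sketch, where the number, placement and widths of the basis functions are illustrative choices of ours:

```python
import numpy as np

def rbf_features(t, n_basis=30):
    """Normalised Gaussian basis of Eq. (3), for t scaled to [0, 1]."""
    c = np.linspace(0, 1, n_basis)                 # centres c_i
    h = 2.0 * n_basis**2                           # widths h_i (illustrative)
    psi = np.exp(-h * (t[:, None] - c[None, :])**2)
    return psi / psi.sum(axis=1, keepdims=True)

def fit_weights(z_b, n_basis=30):
    """Least-squares fit of the weight vector theta to one z_b trace."""
    t = np.linspace(0, 1, len(z_b))
    Phi = rbf_features(t, n_basis)
    theta, *_ = np.linalg.lstsq(Phi, z_b, rcond=None)
    return theta

# reconstruction: z_b_hat = rbf_features(np.linspace(0, 1, len(z_b))) @ theta
```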
Fig. 3: (Top row) Comparison of measured \(x\), \(y\) and pitch trajectories (blue lines) with model predictions (orange lines). The nominal trajectories obtained by low-pass filtering raw measurements are denoted with green. (Bottom row) Comparison of measured \(z\) trajectories for three different sawing behaviours (black lines) and corresponding synthetic trajectories generated by the model (coloured semi-transparent lines).
Fig. 2: Measured individual position (top row) and orientation (middle row) trajectories of the blade for each of the demonstrated behaviour (the darker green lines correspond to smoother excisions). (Bottom row) Differenced measurements of \(x\), \(y\) and pitch versus differenced measurements of \(z\). Note: the original measured trajectories are shown as black dashed lines in the above plots.
We fit a second-order autoregression model to vector \(\mathbf{\theta}\):
\[\theta_{i}=c+a_{1}\theta_{i-1}+a_{2}\theta_{i-2}+\epsilon_{i} \tag{4}\]
where \(a_{k}\) and \(c\) are the autoregression coefficients and bias constant, respectively; \(\epsilon_{i}\) is the white noise term.
Finally, given the excision behaviour label (1 to 10, where 1 represents the distinct sawing cuts, and 10 denotes smooth excisions), we fit a linear model to predict the \(a_{1}\) and \(a_{2}\) coefficients. Thus, given a real number \(\rho\in[1,10]\), our model generates the behaviour component \(z_{b}\), which along with a nominal cutting trajectory can be used to produce a human-like pose trajectory of the blade with desired sawing behaviour to be executed by a robot. Fig. 3 (bottom row) compares the \(z\) trajectories sampled from our model with the actual measurements \(z_{m}\) for different sawing behaviours.
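Putting the pieces together, generating a behaviour for a target \(\rho\) amounts to predicting the AR(2) coefficients, rolling out Eq. (4), and decoding through the basis of Eq. (3). A schematic sketch reusing `rbf_features` from the previous snippet; the linear maps from \(\rho\) to the AR coefficients and the noise scale below are placeholders standing in for the quantities fitted to the labelled demonstrations:

```python
import numpy as np

def generate_z_b(rho, n_basis=30, length=500, rng=None):
    """Sample a behaviour component z_b for a given sawing parameter rho."""
    rng = rng or np.random.default_rng()
    # Placeholder maps rho -> (a1, a2) and noise scale; in practice these are
    # fitted to the AR(2) coefficients of the labelled demonstrations.
    a1, a2 = 0.5 + 0.04 * rho, 0.3 - 0.02 * rho
    c, sigma = 0.0, 0.02 / rho
    theta = np.zeros(n_basis)
    for i in range(2, n_basis):                    # AR(2) rollout, Eq. (4)
        theta[i] = (c + a1 * theta[i - 1] + a2 * theta[i - 2]
                    + rng.normal(scale=sigma))
    t = np.linspace(0, 1, length)
    return rbf_features(t, n_basis) @ theta        # decode via Eq. (3)

# z_b = generate_z_b(rho=3.0); the full pose trajectory then follows from the
# nominal trajectory and Eqs. (1)-(2), as in `reconstruct` above.
```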
## III Optimisation of elliptical excision technique
Optimisation of the robotic behaviour with respect to the excision force characteristics can offer interesting prospects for studies on the effects of surgical procedures. For example, the alignment of robot behaviour with excision forces that match those of expert surgeons is particularly promising for robotic surgery applications. Equally important is the ability to faithfully recreate sub-standard excision techniques and study the common mistakes observed in less experienced surgeons or trainees. In addition, technique optimisation concerning force characteristics critical to the procedure's safety could positively contribute to the existing surgical training and practice.
In this work, we use an excision force model described in [16], characterising the excision performance from force measurements; however, other force-based characterisation approaches can be readily applied in the proposed framework. Below, we provide a brief overview of the force-based characterisation of elliptical excision used in this study, followed by descriptions of the objective function and the proposed method for technique optimisation.
### _Performance characterisation from force measurements_
A model of excision forces introduced in [16] allows us to characterise the performance of the generated excision behaviours directly from the measurements of the excision forces. Alongside descriptive statistics of excision forces, this model parameterises task-related characteristics correlated with elliptical excision performance, such as abruptness of task execution flow. This approach models the elliptical excision process as a hybrid system, with underlying continuous dynamics of viscoelastic interaction between tissues and the blade, as well as discrete event dynamics, typically associated with tissue re-tensioning or blade re-orientation.
First, the model approximates the process of cutting a viscoelastic object as a continuous blade's movement through Maxwell material using the following constitutive law:
\[\frac{\eta}{E}\dot{f}+f=\eta\dot{x} \tag{5}\]
where \(f\) is the excision force, \(\dot{f}\) its time derivative, \(\dot{x}\) the blade's velocity, and \(E\) and \(\eta\) the Maxwell model's spring and damper coefficients, respectively.
The model assumes that an excision is executed using \(K\) cutting regimes, where each regime \(k\) corresponds to a constant velocity of the blade \(v_{k}\). System uncertainty is modelled using white Gaussian noise with variance \(\sigma_{k}^{2}\), \(\tilde{v}_{k}\sim\mathcal{N}(v_{k},\sigma_{k}^{2})\). Cutting regimes are switched according to a \(K\times K\) transition matrix \(\mathbf{Q}\), which, along with \(v_{k}\) and \(\sigma_{k}^{2}\), can be learned by fitting a Hidden Markov Model (HMM) to \(\dot{x}\). Under the assumption of the Maxwell model, \(v_{k}\) can be obtained directly from force measurements \(f\), captured using a suitably instrumented scalpel.
The learned model parameters encode the amplitude and temporal features of the excision forces that characterise the manner of task execution. For instance, parameters \(\{v_{1}...v_{K}\}\) describe the dominant force levels of the excision forces, and thus the overall magnitude and spread of forces applied during the excision. Along with the transition probability matrix \(\mathbf{Q}\), which captures the temporal characteristics of the excision forces, these parameters can encode meaningful features, such as Energy (where increased energy reflects higher cutting forces applied for a longer duration) or Smoothness (where increased smoothness reflects a lower probability of sudden rises and falls of the excision forces).
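A sketch of this characterisation pipeline: the blade velocity is recovered from the force trace by inverting the Maxwell law of Eq. (5), and the cutting regimes are fitted with an off-the-shelf Gaussian HMM (here via the `hmmlearn` package; the material parameters and the number of regimes \(K\) below are illustrative assumptions):

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def blade_velocity(f, dt, E=1.0, eta=1.0):
    """Invert the Maxwell law, Eq. (5): x_dot = (f + (eta/E) * f_dot) / eta."""
    f_dot = np.gradient(f, dt)
    return (f + (eta / E) * f_dot) / eta

def fit_regimes(x_dot, K=3):
    """Fit K constant-velocity cutting regimes to x_dot with a Gaussian HMM."""
    model = GaussianHMM(n_components=K, covariance_type="diag", n_iter=100)
    model.fit(x_dot.reshape(-1, 1))
    v = model.means_.ravel()          # regime velocities v_1 ... v_K
    Q = model.transmat_               # K x K transition matrix Q
    return v, Q, model
```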
### _Objective function_
Fig. 1 shows a diagram of the proposed framework for aligning robotic cutting behaviour with desired characteristics of excision forces, typically observed in human trials. At every \(n\)-th iteration, the robot executes the excision trajectory \(\tau_{n}\) generated by the proposed trajectory model (\(\mathcal{T}\)) using candidate behaviour parameter \(\rho_{n}\) and a fixed nominal trajectory \(\tau_{\text{nom}}\). After the execution, the recorded excision force profile \(\psi_{n}\) is provided to the force-based characterisation model (\(\mathcal{F}\)), which encodes the characteristics of the excision forces, as described in the previous section. The features extracted by model \(\mathcal{F}\) are then used to evaluate the objective function \(g(\rho_{n})\). Two scoring methods are used to evaluate the proposed objective function:
#### III-B1 A single characteristic of excision forces
is used to align the generated robot behaviour with a certain characteristic of the excision forces, e.g. the amplitude or smoothness evaluated by model \(\mathcal{F}\).
#### III-B2 Expert rating
This objective function is used to optimise the robot behaviour with respect to a predicted expert score. Ref. [16] evaluated the magnitude-based characteristics of the excision forces (the Amplitude and Consistency features, captured by model \(\mathcal{F}\)) using 15 expert-labelled excision trials. Here, we apply k-nearest neighbour regression (with \(k=1\)) to predict the expert rating given the excision features.
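With scikit-learn, this interpolation reduces to a one-line regressor; a sketch, where the hypothetical arrays `X_expert` and `y_expert` stand for the Amplitude/Consistency features and scores of the labelled trials:

```python
from sklearn.neighbors import KNeighborsRegressor

# X_expert: hypothetical (15, 2) array of [Amplitude, Consistency] features of
# the expert-labelled trials; y_expert: the corresponding expert scores.
def make_score_predictor(X_expert, y_expert):
    """1-NN regression: a new trial inherits its nearest neighbour's score."""
    return KNeighborsRegressor(n_neighbors=1).fit(X_expert, y_expert)

# predicted = make_score_predictor(X_expert, y_expert).predict(new_features)
```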
### _Bayesian optimisation of excision behaviour_
We apply Bayesian Optimisation (BO) [19] to reduce the number of cutting experiments required to generate a given human-like excision behaviour. At each iteration, BO optimises an acquisition function \(\alpha\) to choose the next candidate
sawing parameter \(\rho\) for trajectory generation in order to evaluate the objective function \(g\) (as described above). We model the mapping between the trajectory parameter \(\rho\) and the objective function using a Gaussian process (GP) [20]:
\[g(\rho)\sim\mathcal{GP}\left(m(\rho),\kappa(\rho,\rho^{\prime})\right) \tag{6}\]
where \(m(\rho)\) and \(\kappa(\rho,\rho^{\prime})\) are the mean and kernel functions.
For \(\alpha\), we use the Expected Improvement acquisition function with the following closed-form expression [21]:
\[\alpha(\rho)=\left(m(\rho)-g(\rho^{+})-\epsilon\right)\Phi(Z)+\sigma(\rho) \phi(Z), \tag{7}\]
where
\[Z=\begin{cases}\frac{m(\rho)-g(\rho^{+})-\epsilon}{\sigma(\rho)},&\text{if } \sigma(\rho)>0\\ 0,&\text{if }\sigma(\rho)=0\end{cases}\]
Above, \(m(\rho)\) and \(\sigma(\rho)\) are the mean and the standard deviation of the GP posterior, and \(\Phi\) and \(\phi\) are the cumulative probability function and the probability density function, respectively. \(\rho^{+}\) is the current optimal choice of sawing parameter, and \(\epsilon\) is a scalar that sets the tradeoff between exploration and exploitation during optimisation.
We use a Matern kernel [22] function \(\kappa(\rho,\rho^{\prime})\) with the following analytical expression:
\[\kappa(\rho_{i},\rho_{j})=\frac{1}{\Gamma(\nu)2^{\nu-1}}\Big{(}\frac{\sqrt{2 \nu}}{l}r\Big{)}^{\nu}K_{\nu}\Big{(}\frac{\sqrt{2\nu}}{l}r\Big{)} \tag{8}\]
where \(r=|\rho_{i}-\rho_{j}|\), \(\Gamma(\cdot)\) is the gamma function, \(K_{\nu}\) the modified Bessel function of the second kind, and \(\nu\) and \(l\) kernel hyperparameters.
In our experiments, we set \(\epsilon=2\), \(\nu=2.5\) and \(l=1\). These parameters were chosen manually during setup calibration.
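A compact sketch of the resulting optimisation loop (our illustration), pairing a scikit-learn GP having the Matern kernel above with the Expected Improvement rule of Eq. (7); the callable `objective` stands in for one robot excision followed by feature extraction and scoring:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(gp, rho_grid, y_best, eps=2.0):
    """Expected Improvement, Eq. (7), on a grid of candidate rho values."""
    mu, sigma = gp.predict(rho_grid.reshape(-1, 1), return_std=True)
    imp = mu - y_best - eps
    Z = np.where(sigma > 0, imp / sigma, 0.0)
    return imp * norm.cdf(Z) + sigma * norm.pdf(Z)

def optimise(objective, n_iter=6, rho0=1.0):
    rho_grid = np.linspace(1, 10, 200)
    X, y = [rho0], [objective(rho0)]               # initial sample {rho_1, y_1}
    gp = GaussianProcessRegressor(kernel=Matern(length_scale=1.0, nu=2.5))
    for _ in range(n_iter - 1):
        gp.fit(np.array(X).reshape(-1, 1), np.array(y))
        acq = expected_improvement(gp, rho_grid, max(y))
        rho_next = float(rho_grid[np.argmax(acq)]) # next candidate behaviour
        X.append(rho_next)
        y.append(objective(rho_next))              # robot executes and scores
    return X[int(np.argmax(y))]                    # best sawing parameter found
```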
## IV Experiments and Results
We performed two sets of experiments with the following objectives: 1) to find the excision technique that maximises the smoothness feature of the applied cutting forces, and 2) to find the excision technique whose excision forces predict the highest expert ratings.
Fig. 4 shows the optimisation results for six iterations of smoothness feature optimisation. The first trial was initialised with a sample \(\{\rho_{1}=1,y_{1}\}\), where \(y_{1}\) is the force smoothness feature evaluated by model \(\mathcal{F}\). Optimisation results confirmed our expectations that smoother excision trajectories (i.e. those generated by model \(\mathcal{T}\) using larger values of the \(\rho\) parameter) result in smoother excision forces.
Fig. 5 shows the results for six iterations of expert score optimisation, for each of the four experts. As before, we initialised the optimisation with the \(\{\rho_{1}=1,y_{1}\}\) pair (this time, \(y_{1}\) is the predicted expert score, as described in section III-B2). The obtained mean of the posterior GP predicts higher expert scores for the excision behaviours generated using larger \(\rho\) values (i.e. smoother trajectories). In other words, the results suggest that sawing movement is more likely to be penalised by experts. In addition, the experiment demonstrates that the sawing behaviour parameterised by \(\rho\) can achieve different modulations of excision forces. For example, the score from Expert A highlights the rater's preference towards excisions with a more pronounced force modulation, as reflected by the Consistency feature. Notice that the posterior GP successfully captures this preference with the optimal sawing parameter \(\rho\approx 6\).
### _Excision force characteristics versus \(\rho\) parameter_
We analysed the \(\{\rho,y\}\) datapoints collected in the above experiments to explore the relationships between the sawing parameter \(\rho\) and characteristics of the excision forces. Fig. 6 (left) shows a scatter plot of the \(\rho\) parameter values vs the smoothness feature of the excision forces evaluated by model \(\mathcal{F}\). As in the first experiment, the sawing parameter \(\rho\) shows a strong positive correlation with the Smoothness feature (Pearson's \(r=0.74\), \(p<0.05\)). This relationship has an intuitive interpretation: smoother excision trajectories must result in a more uniform application of excision forces. The smoothness of the excision forces achieved by two medical students (dotted green lines) and two practising surgeons (dashed red lines) suggests that more experienced surgeons are likely to apply more uniform blade trajectories.
Fig. 6 (middle) shows the contour plots of the Amplitude and Consistency features of excision forces against the sawing parameter \(\rho\). The results show that \(\rho\) has no significant correlation with the excision force amplitude - both smooth and sawing excision trajectories can yield equally low or high cutting forces. On the other hand, the consistency of excision forces (a feature that reflects the inverse of the spread of force levels during excision), highlights a strong correlation with sawing parameter \(\rho\) (Pearson's \(r=0.82\), \(p<0.05\)). This relationship is explained by Fig. 3 (bottom row), where larger \(\rho\) values correspond to the noticeably lower variations of \(z_{b}\), and as the result, to lower variations along \(x\), \(y\) and pitch components of the excision trajectories. This observation agrees with a general intuition that it is more difficult to apply excision forces consistently when sawing.
The Confidence feature (Fig. 6 right), which characterises both the uniformity and consistency of the excision forces, shows a significant alignment with sawing parameter \(\rho\) (Pearson's \(r=0.78\), \(p<0.05\)). Similar to the Amplitude feature, the experiment results show a weak relationship (Pearson's \(r=0.58\), \(p<0.05\)) between \(\rho\) and the Energy feature.
Fig. 4: Gaussian process model fit to six datapoints (black dots) collected by optimising \(\rho\) with respect to the Smoothness feature. (The black line is the posterior mean, and the shaded region is 95\(\%\) confidence interval).
Finally, Fig. 6 shows the model \(\mathcal{F}\) characterisation of excision forces from two professional surgeons (denoted as **S**) and two medical trainees with no experience with the elliptical excision task (denoted as **T**). Notice that the model characterisation of surgeon excisions aligns with higher values of the sawing parameter \(\rho\), whereas the performance of trainees matches the region of lower \(\rho\) values. Again, this indicates that surgeons are likely to exhibit smoother excision trajectories when compared to less experienced medical students.
## V Discussion and conclusions
In this study, we used a model of excision forces to optimise the pose trajectories of a blade in an elliptical excision task. We proposed an excision trajectory generator capable of producing human-like elliptical excision behaviours parameterised by a single parameter \(\rho\) that defines the amount of sawing motion applied during excision. These models can be used to efficiently optimise the excision technique with desired excision characteristics using Bayesian optimisation. More specifically, we show how to align robotic excision behaviours with features that satisfy the criteria of surgical experts and behaviours that resemble performance characteristics of surgeons at various experience levels. Experimental results indicate that the proposed \(\rho\)-parameterisation can successfully produce cutting force modulation that maximises the performance assessment of experts.
Our analysis suggests that professional surgeons are more likely to apply smoother excision trajectories, whereas less experienced medical trainees exhibit greater sawing behaviour. It is well known that sawing movement assists the cutting process by lowering the forces required to separate the material [3]. We hypothesise that inexperienced trainees employ this strategy to overcome the need to apply excessive excision forces to perform a controlled cut. Although this study is limited to scalpel trajectories only, the cutting forces are strongly affected by the tissue tensioning controlled by the non-dominant hand. Hence, we hypothesise that experienced surgeons, while applying the smooth excision trajectories, actively tension the tissues to assist the excision. This combination of excision and tissue tensioning strategies is an interesting line of future work to better understand the dexterous manipulation skills underlying the elliptical excision task. Finally, we want to emphasise the promising prospect of using the proposed framework for studying surgical techniques at various levels of expertise, analysing common mistakes and their effects on the tissues.
Fig. 5: Results for behaviour (\(\rho\)) optimisation with respect to the interpolated performance score from four experts. (Top row) Gaussian process models fit to six observations (black dots) obtained during optimisation. The black lines show the posterior, the shaded regions illustrate the \(95\%\) confidence intervals, and the red dashed lines highlight the optimal \(\rho\) parameters for each of the experiments. (Bottom row) Contour plot of the interpolated expert scores over the feature space of the excision force model \(\mathcal{F}\). The orange dots are the individual sample points \(\rho\) used during optimisation.
Fig. 6: (Left) \(\rho\) parameter vs evaluated smoothness feature. (Middle and right) Contour plots of the \(\rho\) parameter values obtained during experiments. The orange dots are the individual datapoints with shown values for \(\rho\) parameter. Note: **T** and **S** denote medical trainee and professional surgeon, respectively. |
2303.18009 | Could compact stars in globular clusters constrain dark matter? | The dark matter content of globular clusters, highly compact gravity-bound
stellar systems, is unknown. It is also generally unknow*able*, due to their
mass-to-light ratios typically ranging between 1$-$3 in solar units,
accommodating a dynamical mass of dark matter at best comparable to the stellar
mass. That said, recent claims in the literature assume densities of dark
matter around 1000 GeV/cm$^3$ to set constraints on its capture and
annihilation in white dwarfs residing in the globular cluster M4, and to study
a number of other effects of dark matter on compact stars. Motivated by these
studies, we use measurements of stellar kinematics and luminosities in M4 to
look for a dark matter component via a spherical Jeans analysis; we find no
evidence for it, and set the first empirical limits on M4's dark matter
distribution. Our density upper limits, a few $\times \ 10^4$ GeV/cm$^3$ at 1
parsec from the center of M4, do not negate the claims (nor confirm them), but
do preclude the use of M4 for setting limits on non-annihilating dark matter
kinetically heating white dwarfs, which require at least $10^5$ GeV/cm$^3$
densities. The non-robust nature of globular clusters as dynamical systems,
combined with evidence showing that they may originate from molecular gas
clouds in the absence of dark matter, make them unsuitable as laboratories to
unveil dark matter's microscopic nature in current or planned observations. | Raghuveer Garani, Nirmal Raj, Javier Reynoso-Cordova | 2023-03-31T12:30:36Z | http://arxiv.org/abs/2303.18009v1 | # Could compact stars in globular clusters constrain dark matter?
###### Abstract
The dark matter content of globular clusters, highly compact gravity-bound stellar systems, is unknown. It is also generally unknow_able, due to their mass-to-light ratios typically ranging between 1\(-\)3 in solar units, accommodating a dynamical mass of dark matter at best comparable to the stellar mass. That said, recent claims in the literature assume densities of dark matter around 1000 GeV/cm\({}^{3}\) to set constraints on its capture and annihilation in white dwarfs residing in the globular cluster M4, and to study a number of other effects of dark matter on compact stars. Motivated by these studies, we use measurements of stellar kinematics and luminosities in M4 to look for a dark matter component via a spherical Jeans analysis; we find no evidence for it, and set the first empirical limits on M4's dark matter distribution. Our density upper limits, a few \(\times\) 10\({}^{4}\) GeV/cm\({}^{3}\) at 1 parsec from the center of M4, do not negate the claims (nor confirm them), but do preclude the use of M4 for setting limits on non-annihilating dark matter kinetically heating white dwarfs, which require at least 10\({}^{5}\) GeV/cm\({}^{3}\) densities. The non-robust nature of globular clusters as dynamical systems, combined with evidence showing that they may originate from molecular gas clouds in the absence of dark matter, make them unsuitable as laboratories to unveil dark matter's microscopic nature in current or planned observations.
## I Introduction
Globular clusters, a.k.a. globulars, appear to surround all galaxies. Weighing \(\mathcal{O}(10^{5})M_{\odot}\) and spanning \(\mathcal{O}(1)\) pc, they are extremely dense spherical collections of stars bound by gravity. Unlike dwarf spheroidal galaxies (of similar masses but much greater spatial extent), no non-baryonic mass content is required to account for the stellar dynamics of globular clusters; this is in fact just what differentiates star clusters from galaxies [1; 2]. This point is further corroborated by studies that fail to find compelling empirical evidence for a dark matter (DM) component in several globulars, which instead are only able to set upper bounds on the DM density; see Table 1.
There is no widely accepted theory of the origin of globular clusters. One set of models [3] assumes that they are formed in DM subhalos approximately at the same time as the host galaxy. It then suggests that after their formation they merged with the Galactic halo, followed by severe tidal stripping, leaving the clusters with small \(\mathcal{O}(1)\) mass-to-light ratios; this picture is also borne out by N-body simulations [4]. Other models suggest that globular clusters may have formed in DM-poor environments, i.e., from giant star-forming molecular gas clouds that either collapse [5] or get compressed by shock waves from galaxy mergers, as observed in the Antennae Galaxies [6]. In any case, an important hint on globular cluster formation comes from the observation of a linear relation between the total mass of globulars and the mass of their parent halo [7], suggesting that the globular formation rate is proportional to the available initial gas mass, which in turn must be proportional to the initial halo mass. It also suggests that globular clusters form early in the galaxy's history before star formation is suppressed by feedback mechanisms.
In scenarios of globular cluster formation in DM-rich environments, it is estimated and suggested by simulations [8; 4] that, today, in the cores of globular clusters only an \(\mathcal{O}(10^{-4})-\mathcal{O}(10^{-3})\) fraction of the DM from the original subhalo is left over from tidal stripping. (The dispersal of stellar material, on the other hand, may be hindered by high pressures generated by gas collisions [9].) Therefore, as such subhalos typically weigh \(10^{6}-10^{8}\ M_{\odot}\), the DM content of globulars could weigh anywhere between \(10^{2}M_{\odot}\) and \(10^{5}M_{\odot}\). For a scale radius \(r_{s}\) of 5 pc, the scale density \(\sim\) (cluster mass)/\(r_{s}^{3}\) could thus range from a few GeV/cm\({}^{3}\) to upwards of \(10^{3}\) GeV/cm\({}^{3}\). This, indeed, is the range spanned by estimates quoted in the literature of the core density of one of the most studied nearby globulars, M4 (NGC 6121); e.g., Ref. [10] near the lower end, and Ref. [11] near the upper end\({}^{1}\).
Footnote 1: Both estimates account for a modest enhancement in DM density as DM orbits contract in response to baryons collecting closely.
If the estimate of Ref. [11] were true, it would have dramatic implications for models of particle DM. Firstly, high DM densities in globular clusters would benefit searches that look for DM annihilation products [12; 13; 14; 15; 16; 17; 18; 19]. More pertinently, as pointed out in Ref. [11], DM particles in M4/NGC6121 could capture in its white dwarf (WD) population via scattering on nuclei, self-annihilate
to Standard Model (SM) states in their interior, and overheat the stars. The observed luminosities of the WDs in M4 would then place wide-ranging constraints on DM-nucleon cross sections, often outdoing underground direct detection searches. Motivated by this, a number of papers have since appeared exploring far-reaching particle physics implications of globular cluster DM capturing in celestial bodies [20, 21, 22, 23, 24, 25, 26, 27], all of them assuming a DM density around 1000 GeV/cm\({}^{3}\) as quoted in Ref. [11].
In this work we quantify the DM content in M4/NGC6121 from stellar data. By performing a spherical Jeans analysis using measurements of stellar line-of-sight velocities and surface luminosities via a Markov chain Monte Carlo (MCMC) approach, we find no evidence for DM in M4/NGC6121, and set upper bounds on its DM distribution. These limits are relatively weak, i.e., comparable to the visible stellar mass, mainly as a result of the M4/NGC6121 stellar kinematic data accommodating mass-to-light ratios (in solar units) \(\Upsilon\sim 1-2\). This is consistent with Ref. [28], which found \(\Upsilon=1.7\pm 0.1\ M_{\odot}/L_{\odot}\) using N-body simulation-based fits without accounting for DM. (Similarly, the derived V-band mass-to-light ratios of the closest (\(<5\) kpc from the Sun) globulars are typically about 2 M\({}_{\odot}\)/L\({}_{\odot}\)[29], and consistent with theoretical expectations for stellar systems that have evolved without DM.) Such weak bounds on DM densities in globular clusters are not an exception but the rule. As argued in Ref. [1], in smallish, dense stellar systems such as globular clusters and ultra-compact dwarf galaxies, the typical dynamical mass-to-light ratio of \(\sim 1-5\ M_{\odot}/L_{\odot}\) makes it very difficult to determine the presence of DM from stellar kinematics even if these systems do reside in a DM halo. This is in contrast to, say, dwarf spheroidals, which exhibit \(\Upsilon\sim 10-100M_{\odot}/L_{\odot}\), and are definitely known to contain DM. Crucially for us, as a result of this inevitable uncertainty in the DM content of globular clusters, it is well-nigh impossible to make empirical statements about whether DM can impact compact stars in them. That is the main message of our study.
Given this limitation, we then ask another question: could observed WDs in M4/NGC6121 possibly constrain dark _kinetic_ heating of WDs through DM capture? This is a possibility for DM with self-annihilation cross sections that are negligible or, as in the case of asymmetric DM, perhaps zero; the WD heating comes entirely from the transfer of DM kinetic energy during capture. As the kinetic energy of DM falling into WDs is at most \(\sim 10^{-2}\times\) the mass energy, much higher DM densities are required for this heating process to be interesting. We find that the upper bounds we have obtained on the DM densities in M4 _are_ tight enough to disfavour the usefulness of this mechanism over a wide range of parameters. Moreover, our results also impact other scenarios in the literature where DM densities much higher than in the solar neighborhood were assumed in globular clusters, involving capture in neutron stars (NSs), triggering thermonuclear explosions in WDs, and DM in the form of primordial black holes (PBHs). Our paper comments on them.
It is organized as follows. In Section II we derive limits on the DM content of M4/NGC6121 using spherical Jeans and MCMC analyses. In Section III we discuss the implications of our limits for DM heating of WDs and other scenarios of DM confronting compact stars in globular clusters. In Section IV we provide a summary and the scope of our work. Appendices A and B provide technical details, and Appendix C surveys efforts to look for DM in globular clusters and luminosity measurements of WDs in them.
## II Limits on dark matter density in M4/NGC6121
The top left panel of Figure 1 shows our 95% C.L. upper limits on the dark matter distribution in M4/NGC6121. We display these in the plane of the scale density \(\rho_{s}\) vs the scale radius \(r_{s}\) of a Navarro-Frenk-White (NFW) [66] profile of DM density,
\[\rho_{\rm NFW}(r)=\frac{\rho_{s}}{\left[\frac{r}{r_{s}}\right]\!\left[1+ \frac{r}{r_{s}}\right]^{2}}\, \tag{1}\]
as well as a Burkert profile that is more cored in the inner halo regions,
\[\rho_{\rm Bur}(r)=\frac{\rho_{s}}{\left[1+\frac{r}{r_{s}}\right]\!\left[1+ \left(\frac{r}{r_{s}}\right)^{2}\right]}. \tag{2}\]
As we have not found evidence for DM, and as this null result does not depend crucially on the choice of DM profile for reasons argued in the Introduction, we do not consider other profiles.
Our limits were obtained by performing a Jeans analysis via a Markov chain Monte Carlo (MCMC) technique. We briefly outline the method below and in detail in Appendix A, but first one can gain a rough understanding of our limits as follows.
The total mass of M4/NGC6121 is \(\simeq 10^{5}M_{\odot}\), and its mass-to-light ratio (which we independently determine in our fit) in solar units is about unity [67]. Thus we can expect the maximum allowed DM mass to vary between \(\sim 10^{4}-10^{6}\ M_{\odot}\). The mass of NFW and Burkert halos obtained by integrating Eqs. (1) and (2) is:
\[M_{\rm NFW} = 4\pi\rho_{s}r_{s}^{3}\bigg{[}\log(\kappa+1)-\frac{\kappa}{\kappa +1}\bigg{]}\,\] \[M_{\rm Burk} = \pi\rho_{s}r_{s}^{3}[\log((\kappa^{2}+1)(\kappa+1)^{2})-2\tan^{-1 }\kappa], \tag{3}\]
where \(\kappa\equiv r_{\rm max}/r_{s}\) determines the radius \(r_{\rm max}\) at which the halo is truncated; the \(\kappa\)-dependent term in Eq. (3) is an \({\cal O}(1)\) number. It may now be seen that the excluded \(\{r_{s},\rho_{s}\}\) values in Fig. 1 do indeed give \(M_{\rm NFW}\) and \(M_{\rm Burk}\) in the maximum allowed range.
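For orientation, Eqs. (1)-(3) are simple to evaluate numerically. In the minimal sketch below the GeV/cm\({}^{3}\)-to-\(M_{\odot}\)/pc\({}^{3}\) conversion is standard, while the parameter values are purely illustrative, chosen near our upper limits:

```python
import numpy as np

GEV_CM3 = 0.0263   # 1 GeV/cm^3 ~ 0.0263 M_sun/pc^3

def rho_nfw(r, rho_s, r_s):
    """NFW density, Eq. (1)."""
    x = r / r_s
    return rho_s / (x * (1 + x) ** 2)

def rho_burkert(r, rho_s, r_s):
    """Burkert density, Eq. (2)."""
    x = r / r_s
    return rho_s / ((1 + x) * (1 + x ** 2))

def m_nfw(r_max, rho_s, r_s):
    """NFW mass within r_max, Eq. (3); rho_s in M_sun/pc^3, radii in pc."""
    k = r_max / r_s
    return 4 * np.pi * rho_s * r_s ** 3 * (np.log(k + 1) - k / (k + 1))

def m_burkert(r_max, rho_s, r_s):
    """Burkert mass within r_max, Eq. (3)."""
    k = r_max / r_s
    return np.pi * rho_s * r_s ** 3 * (np.log((k ** 2 + 1) * (k + 1) ** 2) - 2 * np.arctan(k))

# rho_s = 1e4 GeV/cm^3 and r_s = 1 pc, truncated at 20 pc:
print(m_nfw(20.0, 1e4 * GEV_CM3, 1.0))   # ~7e3 M_sun, comparable to the stellar mass
```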
Of course, it is not the total mass, but the mass _profiles_ (both dark and stellar) of a structure that determine its velocity dispersion profile. To estimate these profiles we perform a spherical Jeans analysis [68] assuming that M4/NGC6121 is virialized. This is a justified assumption since its half-mass relaxation time, about 0.87 Gyr [67], is much shorter than its age inferred from the cooling sequence of its constituent WDs, about 12 Gyr [30]. Assuming equal polar and azimuthal dispersion speeds, \(\sigma_{\theta}^{2}=\sigma_{\phi}^{2}\), the stellar population can be described by the non-collisional Jeans equation:
\[\rho_{\star}^{-1}(r)\frac{\partial}{\partial r}\left(\rho_{\star}(r)\sigma_{ r}^{2}\right)+\frac{2\beta(r)\sigma_{r}^{2}}{r}=-\frac{GM_{\rm enc}(r)}{r^{2}}, \tag{4}\]
where \(\sigma_{r}\) is the radial dispersion velocity, \(\beta\equiv 1-\sigma_{\theta}^{2}/\sigma_{r}^{2}\) is the anisotropy, \(\rho_{\star}(r)\) is the stellar density profile, and \(M_{\rm enc}(r)=M_{\star}(r)+M_{\rm DM}(r)\) is the total mass enclosed within a radius \(r\), with \(M_{\star}(M_{\rm DM})\) the stellar (DM) mass.
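To illustrate the machinery, note that for a _constant_ anisotropy Eq. (4) admits the standard integrating-factor solution \(\rho_{\star}\sigma_{r}^{2}(r)=r^{-2\beta}\int_{r}^{\infty}ds\,s^{2\beta}\rho_{\star}(s)\,GM_{\rm enc}(s)/s^{2}\). The sketch below evaluates it for a toy single-Plummer cluster with no DM; the actual fit instead uses the radially varying \(\beta(r)\) of Appendix A:

```python
import numpy as np
from scipy.integrate import quad

G = 4.30091e-3   # pc (km/s)^2 / M_sun

def sigma_r2(r, rho_star, m_enc, beta=0.0, r_out=100.0):
    """sigma_r^2(r) from Eq. (4) for constant anisotropy beta, using
    rho* sigma_r^2 = r^(-2 beta) * Int_r^inf s^(2 beta) rho*(s) G M_enc(s)/s^2 ds."""
    integrand = lambda s: s ** (2 * beta) * rho_star(s) * G * m_enc(s) / s ** 2
    val, _ = quad(integrand, r, r_out, limit=200)
    return r ** (-2 * beta) * val / rho_star(r)

# Toy inputs (not the fitted M4 model): one Plummer sphere of 1e5 M_sun, a = 1.2 pc.
M, a = 1e5, 1.2
rho = lambda r: 3 * M / (4 * np.pi * a ** 3) * (1 + (r / a) ** 2) ** -2.5
m_enc = lambda r: M * r ** 3 / (r ** 2 + a ** 2) ** 1.5   # Plummer enclosed mass

print(np.sqrt(sigma_r2(1.0, rho, m_enc)))   # ~7 km/s at 1 pc for this toy cluster
```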
For our fit we use stellar line-of-sight (LOS) velocity data from the Very Large Telescope and Keck Observatory, as collected in Ref. [69].
\begin{table}
\begin{tabular}{|l|c|l|l|l|} \hline globular & \(d\) (kpc) & \(M/L_{\rm V}\) (\(\odot\)) & DM hint? & WDs? \\ \hline \hline NGC 6121 (M4) & 2.2 & 1.7 \(\pm\) 0.1 & ✗ [this work] & [30] \\ \hline Kron 3 & 61.0 & 1.2 \(\pm\) 0.3 & ✗ [31] & \\ NGC 121 & 64.9 & 0.9 \(\pm\) 0.3 & ✗ [31] & \\ NGC 1851 & 12.0 & \(1.3\pm 0.2\) & ✗ [18] & \\ NGC 2808 & 10.0 & \(1.4\pm 0.1\) & ✗ [18; 32] & [33; 34] \\ NGC 3201 & 4.7 & \(2.6\pm 0.1\) & ✗ [18] & \\ NGC 4590 (M68) & 10.4 & \(1.9\pm 0.1\) & ✗ [35] & \\ NGC 5024 (M53) & 18.5 & \(2.0\pm 0.1\) & ✗ [35] & \\ NGC 6093 (M80) & 10.3 & \(2.0\pm 0.1\) & ✗ [18; 32] & [34] \\ NGC 6656 (M22) & 3.3 & \(2.0\pm 0.1\) & ✗ [35; 36; 37] & [38; 39] \\ NGC 6752 & 4.1 & \(2.2\pm 0.1\) & ✗ [32; 37] & [40; 41] \\ NGC 6397 & 2.3 & 2.4 \(\pm\) 0.5 & ✗ [42; 43; 44] & [45; 46; 47]\({}^{\rm a}\) \\ NGC 6809 (M55) & 5.4 & 2.1 \(\pm\) 0.1 & ✗ [31] & \\ NGC 6838 (M71) & 4.0 & 1.0 \(\pm\) 0.05 & ✗ [48; 49] & [48] \\ NGC 7078 & 10.7 & \(1.3\pm 0.1\) & ✗ [32] & [34] \\ NGC 7089 (M2) & 11.7 & \(1.8\pm 0.1\) & ✗ [18] & \\ NGC 7099 (M30) & 8.5 & \(1.6\pm 0.1\) & ✗ [18; 35] & \\ \hline NGC 104 (47 Tuc) & 4.5 & \(1.9\pm 0.1\) & ✗ [31; 37; 49] & [50; 51; 52] \\ & & & ✓ [53] & \\ NGC 2419 & 88.5 & 1.6\(\pm 0.2\) & ✗ [32; 54; 55] & \\ & & & ✓ [55] & \\ NGC 3201 & 4.7 & \(2.6\pm 0.1\) & ✗ [56] & [4; 57] \\ & & & ✓ [58] & \\ NGC 5139 (\(\omega\) Cen) & 5.4 & \(2.8\pm 0.1\) & ✗ [37] & [59; 60] \\ & & & ✓ [16; 61] & \\ NGC 6544 & 2.5 & \(2.3\pm 0.5\) & ? [62] & [63; 64] \\ \hline NGC 5128 population & 3–5 Mpc & \(>6\) [65] & ? [65] & \\ \hline \end{tabular}
\end{table}
Table 1: A non-exhaustive list of globular clusters in which a dark matter component was searched for using stellar data. Except for the globular population in the elliptical galaxy NGC 5128, the distances from the Sun \(d\) and mass-to-light ratios in the V-band in units of \(M_{\odot}/L_{\odot}\) are taken from Ref. [29]. A ‘✗’ indicates DM was not found by the study cited, a ‘✓’ indicates statistical evidence, and ‘?’ indicates ambiguous conclusions. The last column lists references on observations of white dwarfs, which in principle may be used to set limits on DM-induced heating if unambiguous evidence for a DM component is found in the corresponding globular cluster.
Solving Eq. (4) for the radial velocity dispersion, we project it onto the LOS:
\[\sigma_{\rm LOS}^{2}(r)=\frac{2}{\Sigma_{*}(r)}\int_{r}^{\infty}dr^{\prime}\bigg{(}1-\beta(r^{\prime})\frac{r^{2}}{r^{\prime 2}}\bigg{)}\frac{r^{\prime}\rho_{\star}(r^{\prime})\sigma_{r}^{2}(r^{\prime})}{\sqrt{r^{\prime 2}-r^{2}}}, \tag{5}\]
where \(r\) is the 2D-projected radius, \(r^{\prime}\) the 3D radius integrated along the line of sight, and \(\Sigma_{\star}\) the stellar tracer surface mass density. Data on the latter is obtained from Ref. [70], which compiles Gaia DR2 and Hubble Space Telescope surface brightness measurements for a large sample of globular clusters.
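The projection in Eq. (5) is an Abel-type integral with an integrable square-root singularity at \(r^{\prime}=r\). A minimal numerical version, which softens the endpoint with a small offset rather than transforming variables (so a sketch, not production code), is:

```python
import numpy as np
from scipy.integrate import quad

def sigma_los(R, rho_star, sig_r2, beta, surf, r_out=100.0, eps=1e-6):
    """LOS dispersion from Eq. (5) at projected radius R. The integrand has an
    integrable 1/sqrt(r^2 - R^2) singularity at r = R, softened here by a small
    offset; rho_star, sig_r2, beta and surf are callables of the 3D/2D radius."""
    f = lambda r: (1 - beta(r) * R ** 2 / r ** 2) * \
        r * rho_star(r) * sig_r2(r) / np.sqrt(r ** 2 - R ** 2)
    val, _ = quad(f, R + eps, r_out, limit=200)
    return np.sqrt(2.0 * val / surf(R))
```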
Using the aforementioned data, and following the procedure of Ref. [37] to select members and remove possible binaries, we perform our MCMC analysis with 13 free parameters:
\[{\rm DM~{}distribution:} \ \{\rho_{s},r_{s}\} \tag{6}\] \[{\rm stellar~{}distribution:} \ \{\rho_{\star}~{}{\rm parameters~{}(Eq.~{}(10))},M_{\star}\}\] \[{\rm velocity~{}anisotropy:} \ \{\beta~{}{\rm parameters~{}(Eq.~{}(12))}\}~{}.\]
The stellar distribution is modeled as a sum of three Plummer spheres [71], and the velocity anisotropy profile is modeled as a smoothly varying two-part function; we describe these in detail in Appendix A. Due to tidal stripping, the stellar profile is in principle truncated at some radius \(r_{t}\), which is not accommodated by a Plummer sphere decomposition. However, in practice, the data points available for surface luminosity and velocity dispersion are at radii far below \(r_{t}\simeq 50\) pc as reported in Ref. [67] or \(r_{t}\simeq 20\) pc as estimated in Ref. [11], and thus the effect is negligible. It is also reasonable to assume that the DM profile is truncated at \(r=r_{t}\), and again the effect of not including this truncation in practice is negligible.
In the top right panel of Fig. 1 we plot the 95% C.L. upper limits on the DM mass enclosed within a radius \(r\). We see that the slope of these limits is steeper at large \(r\) and gentler at small \(r\). As argued in Ref. [18] for bounds on other globulars, this is because of an observational bias in the stellar kinematic data, which is only available for \(r\gtrsim\mathcal{O}(1)\) pc. For large \(r\) that includes much of the kinematic data, the NFW and Burkert profiles that maximize the enclosed \(M_{\rm DM}\) are those with \(r_{s}\gtrsim r\). It may be seen from Eq. (3) that for small \(\kappa\) (where now \(r_{\rm max}\to r\)) the enclosed \(M_{\rm DM}\propto r_{s}r^{2}\) for NFW and \(M_{\rm DM}\propto r^{3}\) for Burkert profiles, hence the steeper Burkert bound in the plot. On the other hand, for \(r\lesssim\mathcal{O}(1)\) pc, the profiles that maximize the enclosed \(M_{\rm DM}\) are those with \(r_{s}\lesssim\mathcal{O}(1)\) pc: now the first term in Eq. (3) dominates and the enclosed \(M_{\rm DM}\propto r_{s}^{3}\log(r/r_{s})\) for both profiles.
In the next section we will discuss the implications of these results for DM interactions with compact stars.
## III Dark matter, compact stars, and globular clusters
We now turn to the question of whether the allowed DM densities rule out the use of compact stars in globular clusters as detectors of DM. We discuss first white dwarfs as thermal detectors, then other signatures of dark matter encountering compact stars in general.
### Dark matter capture and heating of white dwarfs
DM particles intercepted by compact objects can be efficiently captured in their deep gravitational potential wells by losing energy via scattering on stellar constituents; see Refs. [72; 73] and references therein. Assuming a Maxwell-Boltzmann distribution of DM velocities with dispersion \(v_{d}\), the rate of DM capture in a WD of mass \(M_{\rm WD}\) and radius \(R_{\rm WD}\) is given by [74]
\[C_{\chi}=\frac{\rho_{\chi}}{m_{\chi}}\pi R_{\rm WD}^{2}\,\frac{\gamma^{2}-1} {v_{\star}}\,{\rm erf}\left(\sqrt{\frac{3}{2}}\frac{v_{\star}}{v_{d}}\right) \times p_{\sigma}~{}, \tag{7}\]
in the limit where the WD surface escape speed \(v_{\rm esc}=\sqrt{2GM_{\rm WD}/R_{\rm WD}}\gg v_{d},v_{\star}\), with \(v_{\star}\) the WD speed. Here \(\gamma=(1-v_{\rm esc}^{2})^{-1/2}\) and \(p_{\sigma}=1-e^{-\tau}\) is the probability for incident DM to scatter given an optical depth \(\tau\). In the optically thin limit, \(p_{\sigma}\simeq\tau=\sigma_{\chi T}/\sigma_{\rm geo}\), where \(\sigma_{\chi T}\) is the DM cross section for scattering on target \(T\) (nucleus or electron), and \(\sigma_{\rm geo}\) is the WD geometric cross section, \(\sigma_{\rm geo}=\pi R_{\rm WD}^{2}/N_{T}\), where \(N_{T}\) is the number of targets in the WD. For simplicity we assume below that WDs are composed dominantly of \({}^{12}\)C.
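A direct numerical transcription of Eq. (7), with \(p_{\sigma}\) supplied by the caller, reads as follows; the default arguments match the benchmark WD used in the normalizations below:

```python
import numpy as np
from scipy.special import erf

C_KMS = 2.998e5    # speed of light, km/s
GM_SUN = 1.327e11  # G x M_sun, km^3/s^2

def capture_rate(rho_chi, m_chi, p_sigma, M_wd=1.2, R_wd=4000.0, v_star=20.0, v_d=10.0):
    """Eq. (7) in s^-1: rho_chi in GeV/cm^3, m_chi in GeV, M_wd in M_sun,
    R_wd in km, speeds in km/s; p_sigma = 1 - exp(-tau)."""
    v_esc = np.sqrt(2.0 * GM_SUN * M_wd / R_wd)           # surface escape speed, km/s
    gamma = 1.0 / np.sqrt(1.0 - (v_esc / C_KMS) ** 2)
    n_chi = rho_chi / m_chi                               # DM number density, cm^-3
    area = np.pi * (R_wd * 1e5) ** 2                      # geometric cross section, cm^2
    focusing = (gamma ** 2 - 1.0) / (v_star / C_KMS)      # gravitational focusing factor
    return n_chi * area * (C_KMS * 1e5) * focusing \
        * erf(np.sqrt(1.5) * v_star / v_d) * p_sigma

print(capture_rate(1e3, 1e3, 1.0))   # ~2e29 captures/s at this benchmark point
```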
DM capture adds energy to the WD medium by transfer of kinetic energy at a rate \(\dot{Q}_{\rm kin}=(\gamma-1)m_{\chi}C_{\chi}\). In some DM scenarios, captured DM possibly self-annihilates to SM states and heats the WD further at a rate \(\dot{Q}_{\rm kin+ann}=\gamma m_{\chi}C_{\chi}\). As \(\gamma\simeq 1\) for WDs it is the latter mechanism, if available, that dominates WD heating. Under thermal equilibrium the WD luminosity equals the DM heating rate, and for DM-nucleus scattering cross section equal to or below the geometric value and DM mass above the WD evaporation mass \(\sim\mathcal{O}({\rm MeV})\)[75; 25], we obtain a blackbody temperature (as seen by a distant observer) of
\[T_{\rm kin}^{\infty}\approx 1100\,{\rm K}\Bigg{[}\frac{\alpha_{\rm kin }}{3\times 10^{-7}}\left(\frac{\rho_{\chi}}{10^{3}\,{\rm GeV/cm^{3}}}\right) \left(\frac{\sigma_{\chi{\rm N}}}{\sigma_{\rm geo}}\right)\] \[\times\left(\frac{20\,{\rm km/s}}{v_{\star}}\right){\rm erf} \left(\sqrt{\frac{3}{2}}\frac{10\,{\rm km/s}}{v_{d}}\frac{v_{\star}}{20\,{\rm km /s}}\right)\Bigg{]}^{1/4}, \tag{8}\]
\[T_{\rm kin+ann}^{\infty}\approx 7700~{}{\rm K}\Bigg{[}\frac{\alpha_{ \rm kin+ann}}{8\times 10^{-4}}\left(\frac{\rho_{\chi}}{10^{3}\,{\rm GeV/cm^{3}}} \right)\left(\frac{\sigma_{\chi{\rm N}}}{\sigma_{\rm geo}}\right)\] \[\times\left(\frac{20\,{\rm km/s}}{v_{\star}}\right){\rm erf} \left(\sqrt{\frac{3}{2}}\frac{10\,{\rm km/s}}{v_{d}}\frac{v_{\star}}{20\,{\rm km /s}}\right)\Bigg{]}^{1/4}, \tag{9}\]
where
\[\alpha_{\rm kin}=\frac{(\gamma-1)(\gamma^{2}-1)}{\gamma^{4}}\qquad{\rm and} \qquad\alpha_{\rm kin+ann}=\frac{\gamma(\gamma^{2}-1)}{\gamma^{4}}~{}.\]
In the above equations we have normalized quantities to values corresponding to \(M_{\rm WD}=1.2\,M_{\odot}\) and \(R_{\rm WD}=4000\) km, and to average dispersion speeds in globular clusters [29; 76]. DM models can in principle be constrained by observing WDs colder than Eqs. (8) and (9). In Fig. 2 we show a WD population observed in M4/NGC6121 in the plane of luminosities and masses; we derived this data from HST-ACS observations [30] using a procedure described in detail in Appendix B. We also show here a contour corresponding to the maximal WD heating via DM annihilations (i.e. \(p_{\sigma}=1\) in Eq. (7)) assuming \(\rho_{\chi}=1000\) GeV/cm\({}^{3}\). Clearly, for this value of DM density several WDs are fainter than they would be in the presence of DM annihilations and may be used to set limits on DM capture.
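In code, the scaling relations of Eqs. (8) and (9) for this benchmark WD become the following (here sigma_ratio denotes \(\sigma_{\chi{\rm N}}/\sigma_{\rm geo}\), capped at unity):

```python
import numpy as np
from scipy.special import erf

def t_heat_inf(rho_chi, sigma_ratio, v_star=20.0, v_d=10.0, annihilation=True):
    """Eqs. (8)-(9): far-observer blackbody temperature (K) of the benchmark WD
    (1.2 M_sun, 4000 km); sigma_ratio = sigma_chiN / sigma_geo, capped at 1."""
    t0 = 7700.0 if annihilation else 1100.0
    x = (rho_chi / 1e3) * min(sigma_ratio, 1.0) * (20.0 / v_star) \
        * erf(np.sqrt(1.5) * (10.0 / v_d) * (v_star / 20.0))
    return t0 * x ** 0.25

print(t_heat_inf(1e3, 1.0))                      # ~7.5e3 K (kinetic + annihilation)
print(t_heat_inf(1e3, 1.0, annihilation=False))  # ~1.1e3 K (kinetic only)
```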
Information on a WD's mass, radius and luminosity \(L_{\rm WD}\) can be used to determine the minimum ambient DM density \(\rho_{\chi,{\rm min}}^{\rm WD}\) required for its DM heating to be constrained, by requiring \(\dot{Q}\geq L_{\rm WD}\). For the WD population in Fig. 2 the span of \(\rho_{\chi,{\rm min}}^{\rm WD}\) is shown in the bottom left panel of Fig. 1 with the magenta (cyan) region corresponding to heating from DM annihilations (kinetic energy transfer). The horizontal span of these regions denotes the uncertainty in the cluster-centric distance \(r\) of the WDs. Specifically, the cutoff at \(r=r_{\rm max}\) = 2.3 pc corresponds to the maximum angular distance of 250\({}^{\prime\prime}\) at which WDs were observed at HST/ACS, and the cutoff at \(r=r_{\rm min}\) = 0.1 pc corresponds to an estimate of the minimum distance at which WDs could be resolved. In more detail, the angular resolution of HST/ACS is about 0.1\({}^{\prime\prime}\)[77], corresponding to \(r\simeq 10^{-3}\) pc, but for crowded stellar fields in the inner regions of globular clusters it is still challenging to resolve individual stars; on the other hand, the first point at which stellar line-of-sight velocity information is available is at \(r\simeq 1\) pc. We have chosen \(r_{\rm min}\) = 0.1 pc as a compromise between these two.
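Continuing the sketches above (and reusing capture_rate() and its constants), the threshold density follows from the linearity of \(\dot{Q}\) in \(\rho_{\chi}\); the example luminosity is an arbitrary illustrative value:

```python
import numpy as np

def rho_chi_min(L_wd, m_chi=1e3, annihilation=True, M_wd=1.2, R_wd=4000.0):
    """Minimum ambient DM density (GeV/cm^3) for dark heating to match an
    observed WD luminosity L_wd (in GeV/s), with p_sigma = 1 (maximal capture)."""
    v_esc = np.sqrt(2.0 * GM_SUN * M_wd / R_wd)
    gamma = 1.0 / np.sqrt(1.0 - (v_esc / C_KMS) ** 2)
    eps = gamma if annihilation else gamma - 1.0   # energy released per captured DM, / m_chi
    qdot_at_unit_rho = eps * m_chi * capture_rate(1.0, m_chi, 1.0, M_wd, R_wd)
    return L_wd / qdot_at_unit_rho

L_wd = 2.4e32                                   # ~1e-4 L_sun expressed in GeV/s
print(rho_chi_min(L_wd))                        # ~1e3 GeV/cm^3 (annihilation heating)
print(rho_chi_min(L_wd, annihilation=False))    # ~3e6 GeV/cm^3 (kinetic heating only)
```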
These values of \(\rho_{\chi,{\rm min}}^{\rm WD}\) may be compared with the green curves, which depict a span of DM NFW profiles corresponding to the upper limits on \(\{r_{s},\rho_{s}\}\) in the top left panel. For annihilation heating, values of \(\rho_{\chi,{\rm min}}^{\rm WD}\lesssim\) 800 GeV/cm\({}^{3}\) always remain below our DM profile limits; thus previous limits on DM capture may still be valid. In particular, the point marked with a star, depicting \(\rho_{\chi}\) = 798 GeV/cm\({}^{3}\), is the estimate of the DM density at
Figure 1: **_Top left_**: 95% C.L. upper limits on the scale density vs scale radius of an assumed NFW or Burkert profile of dark matter in the M4/NGC6121 globular cluster. **_Top right_**: 95% C.L. upper limits on the enclosed halo mass. **_Bottom left_**: A collection of NFW DM density profiles corresponding to \(\{r_{s},\rho_{s}\}\) in the top left panel. The magenta lines enclose regions with observed WD luminosities (and inferred radii & masses) converted to an equivalent DM density using the DM capture rate, assuming WDs are heated by DM self-annihilations within. The cyan lines are the same, but assuming WDs are heated by transfer of DM kinetic energy alone. The horizontal span of these lines reflect the uncertainty in WD positions, and their vertical span reflect the range of WD luminosities. The brown curve depicts the NFW profile for parameters estimated using a spherical collapse model in Ref. [11]; the asterisk denotes the DM density at \(r=2.3\) pc in this model after accounting for adiabatic contraction. **_Bottom right_**: Same as bottom left, but for the Burkert DM density profile. See text for further details.
\(r=2.3\) pc after numerically accounting for adiabatic contraction in the subhalo collapse model of Ref. [11]. This estimate was used to rule out DM-induced WD heating. Our DM density upper bounds are not strong enough to invalidate this claim. For reference, we also plot the DM density profile (in brown) for the NFW parameters estimated by Ref. [11] for the uncontracted halo. As explained before, it is far from obvious that the DM density upper limits would improve in the future by orders of magnitude, even with more precise data on stellar motion, as \(\mathcal{O}(1)\,M_{\odot}/L_{\odot}\) mass-to-light ratios imply that the limit on the total DM mass cannot be much smaller than the total stellar mass. Our conclusion here is that, due to this lack of robustness, M4/NGC6121 is an unsuitable system to constrain DM annihilation heating of WDs.
We also see that for purely kinetic heating to be relevant much higher DM densities are required to compensate for the smaller fraction of energy transferred than in DM annihilations (cf. Eqs. (8) and (9)). In fact, there are regions where all values of \(\rho_{\chi,\rm min}^{\rm WD}\) for kinetic heating lie above our DM density limits. There are other regions where our limits overlap with, or exceed, \(\rho_{\chi,\rm min}^{\rm WD}\). As before, our conclusion here is that our limits make M4/NGC6121 an unsuitable system to constrain DM kinetic heating of WDs.
These conclusions are qualitatively unchanged for a more cored DM profile, such as a Burkert profile [78] that is shallower than NFW in the halo's inner regions (Eq. (2)). As seen in the top panels of Fig. 1, our results are very similar to the NFW case. This is because the fit receives quantitatively similar support in the inner regions of M4/NGC6121, where stellar kinematic data is poor.
The similarity of our conclusions can also be seen in the bottom right panel of Fig. 1, where we have drawn, similarly to the bottom left panel for NFW, the Burkert DM density profiles corresponding to the \(\{r_{s},\rho_{s}\}\) upper limits.
### Other signatures of dark matter in compact stars
The presence of high densities of DM in globular clusters has implications not only for overheating of white dwarfs, but also for a number of other signatures involving compact stars. Non-annihilating DM can capture in massive stars, self-gravitate when they turn into compact stars, and collapse into black holes that then accrete the stellar material and destroy the star [20]. DM captured in WDs and NSs may annihilate to long-lived mediators that escape the star and decay to SM states that can be detected [21, 24].
In certain models dark matter can trigger Type Ia-like supernovae. This could occur if DM deposited energy in a small pocket of WD material at a rate higher than the energy diffusion rate, as that would trigger runaway nuclear fusion that unbinds the WD. Observations of the survival of WDs to this date can then be used to place constraints on this mechanism. This idea has been investigated in the context of DM in primordial black holes depositing energy in WDs via dynamical friction [79], non-annihilating particle DM captured by the WD depositing gravitational potential energy via nucleon scattering as it collapses in the WD [80, 79] (though see Ref. [81]), heavier-than-\(10^{16}\) GeV DM depositing energy via annihilations, decays or nuclear scattering [82], and energy deposition from rapid Hawking radiation emitted by black holes formed in the interior of WDs via DM collapse [83, 84].
In all these cases, a sufficiently high DM density is required to ensure sufficiently high WD capture/encounter rates. Just as we urge future authors to refrain from using globular clusters to study WD heating, we urge the same of them in studies of other effects of DM on compact stars.
Figure 2: The white dwarf population of M4/NGC6121 observed in HST/ACS in the luminosity-mass plane obtained from the color-magnitude diagram in Ref. [30] using the routine described in Appendix B. The WD radius ticks in the top x-axis are obtained from a WD mass-radius relation via a Feynman-Metropolis-Teller equation of state. Also shown is a curve of the WD luminosity imparted by dark matter heating via annihilations within the WDs, assuming a DM density of 1000 GeV/cm\({}^{3}\). The WD points below this curve would lead to constraints on DM capture. An analogous curve, ranging around \(L_{\rm WD}=10^{29}\) GeV/s, exists for WD heating through DM kinetic energy transfer alone, but for the sake of visual clarity we haven’t displayed it.
## IV Discussion and prospects
We have investigated whether recent claims in the literature on constraining dark matter capture and annihilations in white dwarfs in the globular cluster M4/NGC6121 are compatible with a first empirical estimate of the DM content in the system. We have also commented on other mechanisms of probing DM using compact objects residing in globular clusters. Using line-of-sight stellar velocity and surface luminosity data, we performed an MCMC likelihood analysis and found no evidence for a DM component in M4/NGC6121. This sets only an upper bound on the DM content, which still does not negate the validity of WD heating constraints. However, due to irreducibly large uncertainties in the problem and necessarily weak bounds on the DM density in M4, our broad conclusion is that the WD heating constraints from globular clusters are unreliable\({}^{2}\). Perhaps our stance is clarified by comparing to the state of affairs in the Galactic Center. With current stellar kinematic data, the DM density in both globular clusters and the Galactic Center is "unknown" (but the important difference is that in globular clusters it is also consistent with zero). The uncertainty in the Galactic Center density propagates into the well-known inconclusivity about the 3.1 TeV thermal wino, the supersymmetric partner of the \(W\) boson. Due to astrophysical \(J\)-factors varying by a factor of \(>100\), gamma-ray line searches at the Galactic Center by H.E.S.S. would rule out the 3.1 TeV wino for an NFW profile, yet leave it safe for a largely cored Burkert profile [86].
Footnote 2: Ref. [85] refrains from constraining their DM model using M4/NGC6121 after explicitly stating this reason. We encourage other authors to adhere to this spirit.
The above inconclusivity is, of course, buried within another inconclusivity. As mentioned in the Introduction, one pathway to form globular clusters is the collapse of massive molecular gas clouds with highly efficient star formation, supported by observations of the Antennae Galaxy merger. This mechanism requires no DM for forming self-gravitating \(\sim 10^{5}M_{\odot}\)-heavy dense stellar structures. Even if DM is required, such as in the pathway initiated by a DM subhalo that is then tidally stripped, the final DM mass in the cluster could be as low as \(10^{2}\ M_{\odot}\), weakening the claims of large DM densities affecting compact objects [10]. Finally, even if statistical evidence of a "dark" component is found, it may not be distinguished from a population of cold stellar remnants [61].
One possible way to improve the DM limits in globular clusters, albeit marginally, is to obtain stellar kinematic data in their innermost regions, which is a question of telescope resolution of dense stellar fields. Observations of small dispersion speeds in these regions will lead to tighter bounds on the enclosed DM mass \(M_{\rm DM}(r)\) at small \(r\), which in turn is a tighter bound on \(\rho_{s}r_{s}^{3}\) in these regions (as argued in Section II). Of course, improved measurements of proper motions and velocity dispersions in the outer regions of globular clusters would also be helpful, especially to diagnose a flat rotation curve indicative of a DM halo. The above improvements are foreseen with the use of _Gaia_ Data Release 3 [29] in conjunction with soon-to-be available data from JWST [87]. It is beyond our scope to identify member WDs of M4/NGC6121 with the appropriate astrometric cuts, derive the corrected color-magnitude diagram with photometric data, etc.; however, we strongly urge the better-equipped astronomical community to perform this analysis.
While our study focused on M4/NGC6121 to address previous claims about DM heating WDs, this phenomenon may be investigated in other globular clusters as well. A number of findings have been reported on the presence or absence of DM in \(>20\) globular clusters, which we recount in Table 1 and Appendix C, and a number of globular clusters have been reported to contain WDs. Bringing these two classes of studies under one roof would make for important progress in the hunt for DM, a task we leave for future authors. We comment more on this in Appendix C.
An interesting direction of inquiry is DM capture in neutron stars belonging to globular clusters. So far a total of 280 pulsars have been discovered in 38 globular clusters\({}^{3}\), but simulations estimate \(\mathcal{O}(10^{2-3})\) NSs per globular cluster [88]. While the entire NS population may not be observable in the near future, faint NSs are expected to be either directly or indirectly discovered through surveys and deep field observations. For a DM density of 1000 GeV/cm\({}^{3}\), NSs could be typically heated to a maximum temperature \(\simeq 1.8\times 10^{4}\) K, with their spectral distribution peaking near the visible range. However, due to the NSs' small radii and large distances from Earth, the observable spectral flux density of \(\sim\) picoJansky is several orders of magnitude below the threshold of current instruments, e.g., nanoJansky at JWST. Searches for gamma-ray fluxes from the decay of long-lived mediators produced in the annihilation of DM within NSs in the globular cluster 47 Tuc have been used to set limits on certain DM models [24], assuming a DM density of 1000 GeV/cm\({}^{3}\) within the inner 4 pc and a population of 300-4000 NSs. The presence of DM in 47 Tuc is disputed (see Table 1), with an upper limit on its densities most recently set by Ref. [37]. Assuming an NFW profile, this limit at \(r=4\) pc ranges between 200\(-\)5000 GeV/cm\({}^{3}\), neither confirming nor denying the assumption in Ref. [24].
Footnote 3: [https://www3.mpifr-bonn.mpg.de/staff/pfreire/GCpsr.html](https://www3.mpifr-bonn.mpg.de/staff/pfreire/GCpsr.html)
There remains one other spectacular way to constrain DM densities in globular clusters. And that is a future discovery of DM in underground direct searches. The cross section and mass of particle DM would inform how
efficiently WDs capture it, and thus observations of sufficiently cold WDs in globulars can be used to estimate an upper limit on the ambient DM density. This method was applied to constrain the DM content of NGC 6397 to less than \(10^{-3}\) of the stellar mass [15] by taking at face value hints seen at the DM experiments CRESST, DAMA, CDMS-Si, and CoGeNT.
###### Acknowledgements.
For helpful conversations we thank Tom Abel, Susmita Adhikari, Joe Bramante, and David Morrissey. R.G. is supported by MIUR grant PRIN 2017FMJFMW and 2017L5W2PT. N.R. thanks the International Centre for Theoretical Sciences for hospitality, where part of this work was completed during the workshop "Less Travelled Path to the Dark Universe" (code: ICTS/ltpdu2023/3). J.R-C. is supported by MIUR grant PRIN 2017FMDE.
## Appendix A Method for setting dark matter density limits in M4/NGC6121
The free parameters in Eq. (6) that are determined by a likelihood analysis using Eq. (5) are defined as follows.
The stellar density profile is taken as a combination of three Plummer spheres:
\[\rho_{\star}(r)=\sum_{j=1}^{3}\frac{3M_{j}}{4\pi a_{j}^{3}}\left(1+\frac{r^{2} }{a_{j}^{2}}\right)^{-5/2}, \tag{10}\]
where \(M_{j}\) and \(a_{j}\) are free parameters. This description is analogous to a Gaussian decomposition assumed in, e.g., Ref. [61]. The projected surface density can be immediately obtained from the above as
\[\Sigma_{*}(r)=\sum_{j=1}^{3}\frac{M_{j}}{\pi a_{j}^{2}}\left(1+\frac{r^{2}}{a_ {j}^{2}}\right)^{-2}. \tag{11}\]
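In code, the decomposition and its analytic projection read as follows; the enclosed-mass helper follows from integrating Eq. (10), and the example component masses and radii are illustrative only:

```python
import numpy as np

def rho_star(r, M, a):
    """Three-Plummer stellar density, Eq. (10); M, a are length-3 arrays."""
    r = np.atleast_1d(r)[:, None]
    return np.sum(3 * M / (4 * np.pi * a ** 3) * (1 + r ** 2 / a ** 2) ** -2.5, axis=1)

def surface_density(R, M, a):
    """Projected surface density, Eq. (11)."""
    R = np.atleast_1d(R)[:, None]
    return np.sum(M / (np.pi * a ** 2) * (1 + R ** 2 / a ** 2) ** -2, axis=1)

def m_star(r, M, a):
    """Stellar mass enclosed within r, from integrating Eq. (10):
    each Plummer component contributes M_j r^3 / (r^2 + a_j^2)^(3/2)."""
    r = np.atleast_1d(r)[:, None]
    return np.sum(M * r ** 3 / (r ** 2 + a ** 2) ** 1.5, axis=1)

M = np.array([5e4, 4e4, 1e4])   # M_sun; illustrative values, not our best fit
a = np.array([0.5, 1.5, 4.0])   # pc
print(m_star(2.0, M, a))        # enclosed stellar mass at 2 pc
```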
The parametrisation we use for the velocity anisotropy is:
\[\beta(r)=\beta_{0}+\left(\beta_{\infty}-\beta_{0}\right)\frac{1}{1+(r_{a}/r)^{\eta}}, \tag{12}\]
where \(\beta_{0}\) describes an "inner" anisotropy, \(\beta_{\infty}\) an "outer" anisotropy, with \(r_{a}\) and \(\eta\) respectively the radius and the sharpness of the transition.
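In code, Eq. (12) is simply:

```python
def beta_profile(r, beta0, beta_inf, r_a, eta):
    """Velocity anisotropy of Eq. (12): beta -> beta0 for r << r_a and
    beta -> beta_inf for r >> r_a, with transition sharpness eta."""
    return beta0 + (beta_inf - beta0) / (1.0 + (r_a / r) ** eta)
```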
We perform our MCMC analysis using the non-parametric code gravsphere [89] with the Python wrapper pygravsphere [90]. Although \(M_{j}\) and \(a_{j}\) are allowed to vary during the scan, pygravsphere first computes their values through an optimization technique and then allows them to vary within the MCMC by up to 50% of the best-fit value. In addition to using surface density and LOS velocity data, pygravsphere also uses the so-called virial shape parameters, which help break the mass-anisotropy degeneracy [90; 89],
\[\begin{split} v_{s1}=\frac{2}{5}\int_{0}^{\infty}dr\ r\ GM\rho_{\star}(r)\left[5-2\beta(r)\right]\sigma_{r}^{2}\\ =\int_{0}^{\infty}dr\ r\ \Sigma_{\star}(r)\left\langle v_{\text{LOS}}^{4}\right\rangle,\end{split} \tag{13a}\] \[\begin{split} v_{s2}=\frac{4}{35}\int_{0}^{\infty}dr\ r^{3}GM\rho_{\star}(r)\left[7-6\beta(r)\right]\sigma_{r}^{2}\\ =\int_{0}^{\infty}dr\ r^{3}\Sigma_{\star}(r)\left\langle v_{\text{LOS}}^{4}\right\rangle.\end{split} \tag{13b}\]
Therefore our likelihood function is given by
\[-2\ln\mathcal{L}=\chi_{\text{LOS}}^{2}+\chi_{\Sigma_{\star}}^{2}+\chi_{\text{ VSP},1}^{2}+\chi_{\text{VSP},2}^{2}. \tag{14}\]
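Schematically, each likelihood evaluation assembles the four \(\chi^{2}\) terms of Eq. (14); in practice this is handled internally by pygravsphere, and the dictionary keys below are our own labels:

```python
import numpy as np

def chi2(model, data, err):
    """Gaussian chi^2 for one binned data vector."""
    return np.sum(((np.asarray(model) - np.asarray(data)) / np.asarray(err)) ** 2)

def neg2_log_like(pred, obs):
    """-2 ln L of Eq. (14). `pred` holds the model predictions (sigma_LOS from
    Eq. (5), Sigma_* from Eq. (11), and the two VSPs of Eqs. (13)); `obs`
    holds the binned data and their errors under matching keys."""
    keys = ('sig_los', 'surface', 'vsp1', 'vsp2')
    return sum(chi2(pred[k], obs[k], obs[k + '_err']) for k in keys)
```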
From the Bayesian approach we found no preference for a DM component, thus estimating credible intervals on the posterior distributions is a prior-dependent computation. To derive upper limits, we use a profile likelihood approach with \(\rho_{s}\) and \(r_{s}\) the parameters of interest, and the resulting 95% C.L. upper limits are displayed in the top left panel of Figure 1.
## Appendix B Obtaining white dwarf luminosities and temperatures
In this appendix we describe a prescription to convert color-magnitude diagrams (CMDs) of stellar data to temperatures and luminosities. Specifically, we do this for the WDs identified in the CMD in Fig. 11 of Ref. [30] depicting HST/ACS data on M4/NGC6121. This CMD is displayed in the \(m_{606}-m_{775}\) vs \(m_{606}\) plane after correcting for reddening and extinction due to dust. The color \(m_{606}-m_{775}\) can be used to derive the WD effective temperature, and the Vega magnitude \(m_{606}\) to derive the WD luminosity.
The zero-point used in the HST/ACS system, the "instrumental zero-point," is the magnitude of an object that produces one count per second. Each zero-point refers to a count rate measured in a specific aperture. For point source photometry, the measurement of counts in a large aperture is not possible for faint targets in a crowded field. Therefore, counts are measured in a small aperture, then an aperture correction is applied to transform the result to an "infinite" aperture.
As discussed in detail in Ref. [91], the raw data is in the form of the total number of photo-electrons \(I_{e}\) detected in the exposure time \(t_{\text{exp}}\). This must not be confused with the usual flux expressed in ergs/s/cm\({}^{2}\). Photometric observations are transformed to Vega-mag using [91]
\[m_{\text{filt}}=-2.5\log_{10}I_{e}+m_{0}^{\text{filt}}-\Delta m_{\text{PSF}-\text{AP}(r)}^{\text{filt}}-\Delta m_{\text{AP}(r)-\text{AP}(\infty)}^{\text{filt}}, \tag{15}\]
where \(m_{0}^{\rm filt}\) is the zero-point of the filter, and the last two terms are aperture corrections. Ref. [30] provides the left-hand side of the above equation with all the correction factors included.
The detector count rate in a given filter is
\[I_{e}^{\rm filt}=A_{\rm tel}\bigg{(}\frac{R_{\rm WD}}{d}\bigg{)}^{2}\times 10^{-0.4A_{\rm filt}}\int_{\nu_{1}}^{\nu_{2}}d\nu\,(h\nu)^{-1}B_{\nu}(\nu,T_{\rm WD})\,\epsilon_{\rm filt}(\nu), \tag{16}\]
where \(A_{\rm tel}\) is the effective collecting area of the telescope, \(R_{\rm WD}\) the radius of the WD, \(d\) its distance from Earth, \(B_{\nu}\) the blackbody spectral density, \(\epsilon_{\rm filt}\) the throughput of the filter, and \(A_{\rm filt}\) (in magnitude units) accounts for extinction and reddening by intervening dust. To obtain the temperature we will always take differences in magnitude, hence the factor \(A_{\rm tel}(R_{\rm WD}/d)^{2}\) will disappear in practice. We numerically obtain the color as
\[m_{606}-m_{775}=-2.5\log_{10}\left(\frac{I_{e}^{606}}{I_{e}^{775}}\right)+m_{0}^{606}-m_{0}^{775}+{\rm apr}\,, \tag{17}\]
where the last term is the difference in aperture corrections, which turns out to be at the sub-percent level.
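Given a filter throughput curve \(\epsilon_{\rm filt}(\nu)\) sampled on a frequency grid, the color integral can be evaluated directly; in the sketch below the zero-point difference is a placeholder, with the calibrated values coming from the HST/ACS tables (or handled by pysynphot, as noted next):

```python
import numpy as np
from scipy.integrate import trapezoid

H, KB, C = 6.626e-27, 1.381e-16, 2.998e10   # cgs: erg s, erg/K, cm/s

def count_rate(T, nu, eps_filt):
    """Relative photo-electron rate, proportional to Int dnu B_nu(nu,T) eps(nu)/(h nu);
    the common A_tel (R_WD/d)^2 prefactor cancels when taking colors."""
    b_nu = 2 * H * nu ** 3 / C ** 2 / np.expm1(H * nu / (KB * T))
    return trapezoid(b_nu * eps_filt / (H * nu), nu)

def color_606_775(T, nu6, eps6, nu7, eps7, dzp=1.14):
    """m_606 - m_775 for a blackbody of temperature T, cf. Eq. (17); dzp stands
    in for m_0^606 - m_0^775 plus the aperture terms (a placeholder value)."""
    return -2.5 * np.log10(count_rate(T, nu6, eps6) / count_rate(T, nu7, eps7)) + dzp
```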
Magnitudes in Ref. [30] are reported in the Vega magnitude system, hence the WD luminosities can be directly obtained from
\[m_{\rm WD-Vega}=-2.5\log_{10}\left(\frac{L_{\rm WD}/d_{\rm M4}^{2}}{L_{\rm Vega}/d_{\rm Vega}^{2}}\right)\rvert_{606}. \tag{18}\]
We take the distances \(\{d_{\rm M4},d_{\rm Vega}\}=\{1850,7.68\}\) pc. Vega's temperature \(T_{\rm Vega}=9550\) K and radius \(R_{\rm Vega}=2.52R_{\odot}\) gives us the blackbody bolometric luminosity \(L_{\rm Vega}^{\rm BB}\) via the Stefan-Boltzmann law. The actual luminosity \(L_{\rm Vega}=37L_{\odot}\) gives us the emissivity \(L_{\rm Vega}/L_{\rm Vega}^{\rm BB}\), which we use in the above equation.
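With the numbers quoted above, the inversion of Eq. (18) for \(L_{\rm WD}\) is a one-liner; the input magnitude below is an arbitrary example:

```python
L_SUN = 3.828e33                      # erg/s
D_M4, D_VEGA = 1850.0, 7.68           # pc, as quoted above
L_VEGA = 37.0 * L_SUN                 # Vega's luminosity, as quoted above

def wd_luminosity(m606):
    """Invert Eq. (18) for L_WD given the reddening-corrected Vega magnitude
    m_606 (a sketch; the emissivity correction described above is folded
    into L_VEGA)."""
    return L_VEGA * (D_M4 / D_VEGA) ** 2 * 10 ** (-0.4 * m606)

print(wd_luminosity(25.0) / L_SUN)    # ~2e-4 L_sun for a faint WD in M4
```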
In practice we use the package pysynphot[92] for the conversion from color vs magnitude to luminosity vs effective temperature. The result of our conversion is shown in Fig. 3, which is slightly different from that in Ref. [11] likely due to the routines in pysynphot.
Once \(L_{\rm WD}\) and \(T_{\rm WD}\) are obtained, the WD radius \(R_{\rm WD}\) is obtainable from the blackbody luminosity \(L_{\rm WD}=4\pi\sigma_{\rm SB}R_{\rm WD}^{2}T_{\rm eff}^{4}\), where \(\sigma_{\rm SB}\) is the Stefan-Boltzmann constant. We then obtain the WD mass \(M_{\rm WD}\) from a mass-radius relation derived by solving the Tolman-Oppenheimer-Volkoff equations for the relativistic Feynman-Metropolis-Teller equation of state (EoS) that models the WD as an isothermal relativistic Fermi gas including Coulomb interactions. We have assumed that the WD is composed entirely of \({}^{12}\)C. We obtain a maximum mass of 1.385 \(M_{\odot}\), corresponding to \(R_{\rm WD}=2\times 10^{-3}R_{\odot}\), in agreement with Ref. [93]. Other EoSs such as the Hamada-Salpeter [94] and Chandrasekhar-Emden EoS predict a critical maximum mass \(\approx 1.4\,M_{\odot}\) for the same composition.
## Appendix C Dark matter in other globular clusters
About 180 globular clusters have been discovered, while a DM content has been searched for in only about 20 of them; see Table 1. In this appendix we non-exhaustively review the literature on searches for dark matter in globular clusters and outline some directions for progress. Numerous techniques have been tried and wide-ranging results have been reported; the goal of this note is to urge the astrophysics community to unify their approaches so that clearer conclusions may be drawn on the important question of the presence of DM in globular clusters.
**NGC 2419.**
The initial population of dark matter in globular clusters can be depleted via dynamical friction of stars ejecting the DM and via tidal stripping by the host galaxy. For these reasons, the globular cluster NGC 2419 was selected by Ref. [32] to look for DM: its timescales for dynamical friction and relaxation exceed a Hubble time, and its remote location (with a Galactocentric distance \(\sim 90\) kpc) and large mass minimize tidal stripping. Using radial velocity data from Keck I and an \(N\)-body fit, these authors find no evidence for DM, supported by their finding that the mass-to-light ratio does not rise toward the outer regions of NGC 2419. Assuming an NFW profile for DM, they set a \(2\sigma\) limit on the DM mass of \(M_{\rm DM}<10^{7}M_{\odot}\) inside \(r=500\) pc, equivalently \(M_{\rm DM}(r<260\) pc) \(\lesssim 4\times 10^{6}M_{\odot}\), corresponding to a limit on the density of 0.7 GeV/cm\({}^{3}\). Ref. [54] also set upper limits on an NFW DM profile, obtaining the very tight \(M_{\rm DM}(r<1~{\rm kpc})<10^{6}M_{\odot}\), equivalently
Figure 3: The white dwarf population of M4/NGC6121 observed in HST/ACS in the luminosity-surface temperature plane obtained from the color-magnitude diagram in Ref. [30] via the routine described in Appendix B.
\(M_{\rm DM}(r<260~{}{\rm pc})<2.4\times 10^{4}M_{\odot}\).
In contrast to these studies, Ref. [55] came to an interesting conclusion. When the authors tried to fit stellar kinematic data with a Michie model of stellar distribution and a generalized NFW profile for DM, they found no evidence for DM and set a 99% C.L. limit of \(M_{\rm DM}(r<260~{\rm pc})<7.2\times 10^{5}M_{\odot}\). However, when they performed a spherical Jeans analysis similar to our work but assuming _no_ analytic form for the stellar and DM distributions, instead floating 389 free parameters in the solution, they _did_ find a DM component within 260 pc of mass \(\simeq 10^{6}M_{\odot}\), about twice the mass of the stellar component. This highlights the extreme sensitivity of studies looking for DM in globular clusters to priors and parameterizations, and suggests that almost any conclusion derived from these statistical fits must be taken with a grain of salt.
\(\omega\)**Centauri/ NGC 5139.**
It is thought that the largest globular cluster observed, the 4 \(\times 10^{6}M_{\odot}\)-heavy \(\omega\)Cen, is the tidally stripped relic not of a DM subhalo but of a dwarf galaxy captured by the Milky Way. Ref. [16] applied a spherical Jeans analysis to MUSE and Keck stellar LOS velocity data and Gaia and HST proper motion data, and found evidence for a \(\sim 6\times 10^{5-6}~{}M_{\odot}\) DM component within a 7 pc half-light radius when fitting to an NFW DM profile. This result was confirmed in Ref. [61] using updated data from the same sources4; it was also argued that this invisible mass component is consistent with a population of stellar remnants.
Footnote 4: In these studies the total DM mass was fitted simultaneously with the _stellar_ (as opposed to total) mass-to-light ratio, which were found to be unsurprisingly anti-correlated. It may be seen from the posterior distributions of Refs. [16; 61] that the total mass-to-light ratio is indeed roughly constant across the favored DM mass range.
On the other hand, Ref. [37] (with one of us as an author) found no evidence for an NFW component of DM in \(\omega\)Cen, setting instead an upper limit of \(M_{\rm DM}(r<7~{\rm pc})<\) few \(\times 10^{5}M_{\odot}\). The main difference between these studies is that the latter used LOS dispersion data from MUSE, whereas the former additionally used proper motion data. Other differences include the modelling of stellar distributions as a sum of Gaussians in the former (the so-called CJAM model) versus a sum of Plummer spheres in the latter, the simpler parametrization of stellar anisotropy in the former, and the log likelihood analysis performed with the Bayesian MultiNest sampling algorithm in the former versus MCMC in the latter.
_Other globular clusters._
The LOS-dispersion-spherical-Jeans-MCMC analysis of Ref. [37] set upper limits on an NFW DM component for a number of other globular clusters. These include:
(a) **M22/NGC6656** and **M30/NGC7099**, corroborated by Ref. [35], which looked for a flattening of dispersion profiles at large radii and for large mass-to-light ratios using radial velocity data from Anglo-Australian Telescope's AAOmega spectrograph. The latter also found no evidence for DM in **M53/NGC5024** and **M68/NGC4590**.
(b) **47 Tuc/NGC104**, corroborated by Ref. [31] using AAOmega. The latter also found no evidence for DM in **M55/NGC6809**, **NGC 121** and **Kron 3**. We mention that Ref. [53] explains the observed \(\gamma\)-ray flux from 47 Tuc/NGC104 using an annihilating DM component.
(c) **NGC 1851, NGC 2808, NGC 3201, M80/NGC6093, NGC 6752, M2/NGC7089**.
A number of other globular clusters have been studied for the presence of DM, and an array of conclusions drawn from astrophysical arguments. We refer the reader to Table 1 for a list of references.
_White dwarfs in them?_
Luminosity measurements of WDs in globular clusters would be greatly relevant to limiting DM-induced heating if clear evidence for DM content comes up in these systems. In Table 1 we list references on observations of WDs in various globulars. The possibility of observing DM-induced WD heating in \(\omega\)Cen is discussed in Refs. [95; 96]. As mentioned in the Discussion, WD heating can be used to _limit_ DM densities in NGC 6397 if an unambiguous DM signal is found in direct detection experiments [15]; this reasoning of course applies to any globular cluster including the focus of our study, M4/NGC6121.
|
2309.15576 | Learning Spatial-Temporal Regularized Tensor Sparse RPCA for Background
Subtraction | Video background subtraction is one of the fundamental problems in computer
vision that aims to segment all moving objects. Robust principal component
analysis has been identified as a promising unsupervised paradigm for
background subtraction tasks in the last decade thanks to its competitive
performance in a number of benchmark datasets. Tensor robust principal
component analysis variations have improved background subtraction performance
further. However, because moving object pixels in the sparse component are
treated independently and do not have to adhere to spatial-temporal
structured-sparsity constraints, performance is reduced for sequences with
dynamic backgrounds, camouflaged, and camera jitter problems. In this work, we
present a spatial-temporal regularized tensor sparse RPCA algorithm for precise
background subtraction. Within the sparse component, we impose spatial-temporal
regularizations in the form of normalized graph-Laplacian matrices. To do this,
we build two graphs, one across the input tensor spatial locations and the
other across its frontal slices in the time domain. While maximizing the
objective function, we compel the tensor sparse component to serve as the
spatiotemporal eigenvectors of the graph-Laplacian matrices. The disconnected
moving object pixels in the sparse component are preserved by the proposed
graph-based regularizations since they both comprise of spatiotemporal
subspace-based structure. Additionally, we propose a unique objective function
that employs batch and online-based optimization methods to jointly maximize
the background-foreground and spatial-temporal regularization components.
Experiments are performed on six publicly available background subtraction
datasets that demonstrate the superior performance of the proposed algorithm
compared to several existing methods. Our source code will be available very
soon. | Basit Alawode, Sajid Javed | 2023-09-27T11:21:31Z | http://arxiv.org/abs/2309.15576v1 | # Learning Spatial-Temporal Regularized Tensor Sparse RPCA for Background Subtraction
###### Abstract
Video background subtraction is one of the fundamental problems in computer vision that aims to segment all moving objects. Robust Principal Component Analysis (RPCA) has been identified as a promising unsupervised paradigm for background subtraction tasks in the last decade thanks to its competitive performance in a number of benchmark datasets. Tensor RPCA (TRPCA) variations have improved background subtraction performance further. However, because moving object pixels in the sparse component are treated independently and don't have to adhere to spatial-temporal structured-sparsity constraints, performance is reduced for sequences with dynamic backgrounds, camouflaged, and camera jitter problems. In this work, we present a spatial-temporal regularized tensor sparse RPCA algorithm for precise background subtraction. Within the sparse component, we impose spatial-temporal regularizations in the form of normalized graph-Laplacian matrices. To do this, we build two graphs, one across the input tensor's spatial locations and the other across its frontal slices in the time domain. While maximizing the objective function, we compel the tensor sparse component to serve as the spatiotemporal eigenvectors of the graph-Laplacian matrices. The disconnected moving object pixels in the sparse component are preserved by the proposed graph-based regularizations since they both comprise of spatiotemporal subspace-based structure. Additionally, we propose a unique objective function that employs batch and online-based optimization methods to jointly maximize the background-foreground and spatial-temporal regularization components. Experiments are performed on six publicly available background subtraction datasets that demonstrate the superior performance of the proposed algorithm compared to several existing methods. Our source code will be available very soon.
Background subtraction, Moving object segmentation, Background modeling, Robust principal component analysis, Structured sparsity.
## I Introduction
Video background subtraction also known as moving object segmentation from static camera is one of the long-standing problems in computer vision [5, 7, 24]. The primary goal of background subtraction is to separate moving objects from the background model, a static scene [24]. Background subtraction has numerous applications including video surveillance [11], semantic segmentation [9, 40], object detection and tracking [20, 27], autonomous driving [46], robotics manipulation [2], sports video analysis [18], and human activity recognition [1]. However, it becomes extremely difficult when there are dynamic backgrounds present, such as swaying bushes, rippling water, varying lighting, irregular object motion, bad weather, camouflaged foreground objects, pan-tilt-zoom camera sequences, and extreme nighttime scenes [5, 24, 59, 75, 76, 78]. Numerous approaches have been developed in the literature to solve the aforementioned problems, including statistical background modeling [4, 24], subspace learning models [7], and deep learning models [5]. Background subtraction is still a difficult challenge for scenes with varying backgrounds and shadows, though [35, 37, 58, 59, 76].
Robust Principal Component Analysis (RPCA) and its variant Tensor RPCA (TRPCA) are popular unsupervised paradigms and have been successfully used in many problems [6, 7, 62]. This has included background-foreground separation problems [7], salient object detection [48], image or video denoising [66, 69], data clustering [72], medical applications [36], and hyperspectral imaging [65, 74] in the past decade. Wright _et al._ posed the background subtraction problem as an RPCA-based matrix decomposition problem into the sum of its low-rank and sparse components [67]. The temporal background sequence is highly correlated; therefore, the background model is located in a low-dimensional
Fig. 1: Results of background subtraction using RPCA, TRPCA, and our proposed algorithms on a number of difficult sequences chosen from openly accessible datasets. From left to right, (a) displays sample input images, (b) displays ground-truth images, (c) displays background subtraction results using RPCA [13], (d) displays results using TRPCA [42], and (e)-(f) displays results estimated using our proposed O-STRPCA and STRPCA algorithms. Selected sequences from the CD14, I2R, SABS, Wallflower, and I2R datasets are displayed, going from top to bottom.
redundant subspace known as the low-rank background component. The grossly corrupted sparse component is made up of locally distinct regions corresponding to the foreground segmentation. Such a matrix decomposition may be obtained by employing the Alternating Direction Method of Multipliers (ADMM) to solve the convex optimization problem [8].
RPCA has shown good performance for background subtraction [7]. However, it can only process 2D matrices, whereas real video sequences are multi-dimensional and naturally form a 3D tensor. Therefore, before estimating the background-foreground components using RPCA, a reshaping step is typically required. Such a procedure can destroy the sequence's inherent spatial patterns and thus degrade performance. Several variations that enforce structural constraints have been proposed in the literature to enhance background subtraction performance, although all of them still require a reshaping step [15, 19, 28, 29, 30, 79]. As a result, in scenes containing dense or very small moving objects, the moving object regions also become over-smoothed (Fig. 1 (c)).
TRPCA approaches, which extend matrix-based RPCA and take advantage of the intrinsic multi-dimensional structure of the input data, have recently been developed to address this deficiency [41, 25, 42]. Lu _et al._ formulated the tensor-based decomposition framework as follows [42]:
\[\min_{\mathbf{\mathcal{B}},\mathbf{\mathcal{F}}}\|\mathbf{\mathcal{B}}\|_{*}+\lambda\|\mathbf{\mathcal{F}}\|_{1},\text{ such that }\mathbf{\mathcal{X}}=\mathbf{\mathcal{B}}+\mathbf{\mathcal{F}}, \tag{1}\]
where \(\mathbf{\mathcal{X}}\in\mathbb{R}^{w\times h\times n}\) is the input tensor and each \(i\)-th frame in this tensor is denoted by \(\textbf{X}_{i}\in\mathbb{R}^{w\times h}\) having width \(w\) and height \(h\), respectively. \(\lambda=1/\sqrt{n\max(w,h)}\) assigns relative importance while optimizing (1). Model (1) perfectly extracts the low-rank tensor \(\mathbf{\mathcal{B}}\) comprising the background model and the sparse tensor \(\mathbf{\mathcal{F}}\) constituting moving object segmentation under specific incoherence conditions. The segmentation of moving objects can be done more effectively with TRPCA and its variants [51, 26]. There are still two issues that must be resolved, though. The \(\ell_{1}\)-norm regularization on \(\mathbf{\mathcal{F}}\) handles each pixel independently and therefore ignores the spatially coherent structure. This results in a performance drop for scenes with dynamic backgrounds (Fig. 1 (c)-(d)). The \(\mathbf{\mathcal{F}}\) has a homogeneous spatial structure and may be handled coherently. Therefore, the importance of fostering organized sparsity inside the sparse tensor \(\mathbf{\mathcal{F}}\), as noted, remains an unresolved problem; sadly, relatively few attempts have been made in this regard [54]. Batch optimization of the model (1) requires that all video frames be present in memory before any processing. Real-time processing, which is necessary for surveillance videos, is therefore compromised.
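For concreteness, the following is a minimal sketch of the kind of ADMM solver typically used for model (1), with the t-SVD-based tensor nuclear norm proximal step of [42]; the parameter schedule is simplified, there is no stopping criterion, and the spatial-temporal graph terms proposed in this paper (which would enter the sparse update) are omitted:

```python
import numpy as np

def t_svt(X, tau):
    """Tensor singular value thresholding: SVT of each frontal slice in the
    Fourier domain along the third axis, i.e. the proximal operator of the
    t-SVD-based tensor nuclear norm used in model (1)."""
    Xf = np.fft.fft(X, axis=2)
    Yf = np.empty_like(Xf)
    for k in range(X.shape[2]):
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        Yf[:, :, k] = (U * np.maximum(s - tau, 0.0)) @ Vh
    return np.real(np.fft.ifft(Yf, axis=2))

def soft(X, tau):
    """Entrywise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def trpca_admm(X, lam=None, mu=1e-3, rho=1.1, n_iter=100):
    """Minimal ADMM loop for model (1): alternate low-rank and sparse
    proximal updates with dual ascent on the constraint X = B + F."""
    w, h, n = X.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(n * max(w, h))
    B, F, Y = (np.zeros_like(X) for _ in range(3))
    for _ in range(n_iter):
        B = t_svt(X - F + Y / mu, 1.0 / mu)   # low-rank (background) update
        F = soft(X - B + Y / mu, lam / mu)    # sparse (foreground) update
        Y = Y + mu * (X - B - F)              # dual ascent
        mu = rho * mu
    return B, F
```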
In the current work, we address the aforementioned challenges by proposing a novel algorithm known as the Spatial-temporal regularized TRPCA (STRPCA) for background subtraction. To lessen the effects of inaccurate pixels in the \(\mathbf{\mathcal{F}}\) component, we maintain the spatial and temporal structures of the moving object in the proposed algorithm. We build two graphs for this aim, one spatial and the other temporal. The spatial-temporal structure of the \(\mathbf{\mathcal{F}}\) component is influenced by both graphs. For each frontal slice of \(\mathbf{\mathcal{X}}\), a pixel-wise spatial graph is constructed using the nearest neighbor method. The spatial graph, in particular, constrains every pixel of the moving object to share a value with its linked neighbors. To temporally constrain each pixel of the moving object to have a comparable value, a temporal graph is built among the frontal slices of the tensor \(\mathbf{\mathcal{X}}\). To capture the notion of pairwise similarities inside the model (1), we estimate the normalized spatial and temporal graph-Laplacian matrices from both graphs [71]. By requiring the sparse component of model (1) to act as the eigenvectors of these matrices, one can be sure that the resulting model will capture the coherent and accurate structure of moving objects in \(\mathbf{\mathcal{F}}\). This is because the eigenvectors of the corresponding Laplacian matrices preserve the spatial-temporal structure. By enforcing the STRPCA model to be the eigenvectors of the spatial-temporal Laplacian matrices, we compel it to be aware of both spatial and temporal \(\mathbf{\mathcal{F}}\) structure. Our proposed algorithm is able to better discriminate the moving objects from their background even in the presence of camouflage, shadows, and dynamic backgrounds and thus improve the background subtraction performance by encoding these spectral clustering-based constraints into the TRPCA model (1). To the best of our knowledge, no such constraints are employed in the literature for background subtraction in the sparse tensor of the TRPCA framework.
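To make the graph construction concrete, the sketch below builds \(k\)NN-based spatial and temporal affinity graphs and their normalized Laplacians; the choice of pixel features (per-pixel temporal profiles) and the neighbor counts are illustrative assumptions, not the exact construction used by STRPCA:

```python
import numpy as np
from scipy import sparse
from sklearn.neighbors import kneighbors_graph

def normalized_laplacian(W):
    """Symmetric normalized graph Laplacian L = I - D^{-1/2} W D^{-1/2}."""
    d = np.asarray(W.sum(axis=1)).ravel()
    d_inv_sqrt = sparse.diags(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    return sparse.identity(W.shape[0]) - d_inv_sqrt @ W @ d_inv_sqrt

def spatial_temporal_laplacians(X, k_s=8, k_t=3):
    """For a (small) video tensor X of shape (w, h, n): a spatial kNN graph
    over pixels (each described by its temporal profile) and a temporal kNN
    graph over frontal slices (frames)."""
    w, h, n = X.shape
    pixels = X.reshape(w * h, n)        # one temporal profile per pixel
    frames = pixels.T                   # one vector per frame
    Ws = kneighbors_graph(pixels, k_s, mode='connectivity')
    Wt = kneighbors_graph(frames, k_t, mode='connectivity')
    Ws = 0.5 * (Ws + Ws.T)              # symmetrize the directed kNN graphs
    Wt = 0.5 * (Wt + Wt.T)
    return normalized_laplacian(Ws), normalized_laplacian(Wt)
```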
We utilize an ADMM batch-based optimization approach to solve the objective function of the proposed STRPCA model since it has improved convergence and accuracy [8]. In several SOTA approaches [42, 54, 65, 69], batch processing is effective, but not for applications that require real-time processing. As a result, in the current work, we also proposed an online optimization strategy to solve the objective function. One video frame at a time is processed by our proposed Online STRPCA optimization model, called O-STRPCA, which also concurrently encodes the spatial-temporal regularization.
On six publicly accessible background subtraction benchmark datasets, including Change Detection.Net 2014 (CD14) [63], Institute for Infocomm Research (I2R) [34], Background Model Challenges 2012 (BMC12) [61], Wallflower [60], Stuttgart Artificial Background Subtraction (SABS) [11], and SBM-RGBD [12], we assessed the performance of our proposed STRPCA and O-STRPCA algorithms. Our results show that the proposed algorithms outperform SOTA techniques for background subtraction. The significant contributions of this work are summarized as follows:
1. We propose a novel STRPCA algorithm for enhanced background subtraction. Our algorithm applies graph-Laplacian matrices to the sparse tensor \(\mathbf{\mathcal{F}}\) to enforce spatial and temporal regularizations. These regularizations promote a spatially-temporally coherent moving object structure and address the structured-sparsity issue.
2. We formulate a novel objective function that jointly incorporates the spatial-temporal regularizations and the \(\mathbf{\mathcal{B}}\)-\(\mathbf{\mathcal{F}}\) tensors, and solve it with both batch-based and online optimization techniques. Although the batch solution performs better, the online solution is better suited to real-time applications.
3. On six publicly accessible datasets, we conducted in-depth qualitative and quantitative analyses and compared the STRPCA performance with that of 15 SOTA approaches, together with a thorough discussion of the outcomes.
This work is organized as follows. The literature on RPCA and TRPCA approaches is summarized in Sec. II. The proposed algorithm is explained in Sec. III. Extensive experiments are presented in Sec. IV, and the conclusion and suggested future directions are presented in Sec. V.
## II Related Work
The last two decades have produced a wealth of work on background subtraction, which may be divided into conventional [4], subspace learning [7, 54], and deep learning [5] approaches. Below, we provide a concise summary of each background subtraction method category.
#### Ii-1 **Traditional Methods**
In this category, pixel-level approaches have received a lot of attention for dynamic background subtraction [4]. Stauffer and Grimson proposed a Gaussian Mixture Model (GMM) in which each pixel is modeled using a combination of Gaussian probability density functions [57]. Pixels whose values do not fit the background distribution are then categorized as foreground. Despite the method's optimistic background subtraction performance, it is vulnerable to rapid background fluctuations, such as sudden light switching, and to the choice of parameters such as the number of Gaussians. As a result, numerous improved GMM approaches have been put forth in the literature, including adaptive GMM [80], spanning-tree GMM [16, 17], bidirectional GMM [53], features-based GMM [49], and the structured GMM model [52].
SuBSENSE [55] and PAWCS [56] are among the most modern pixel-level background subtraction techniques. Pierre-Luc _et al._ proposed SuBSENSE, a universal change detection method that is robust against local variations in the background scene [55]. SuBSENSE uses spatiotemporal binary and color information to categorize each pixel as either background or foreground, and the number of parameters is dynamically updated based on local information. For long-term foreground segmentation, Pierre-Luc _et al._ also put forward the non-parametric PAWCS technique, which learns the static and dynamic background pixels online at a low memory cost [56].
#### Ii-2 **Subspace Learning Methods**
This group of techniques compels the background model to be linearly correlated and learns a low-dimensional subspace of the input sequence. Background subtraction techniques based on PCA, RPCA, and TRPCA have seen a lot of success in recent years [7]. Using PCA, Oliver _et al._ devised the eigen-background subtraction technique [47]. PCA results were optimistic, but the learned low-dimensional background subspace was particularly vulnerable to noise and severely distorted outliers in the background scene. To overcome this issue, Wright _et al._ introduced RPCA, which learns both low-rank and sparse subspaces using a convex optimization program [67]. Since rank minimization is neither continuous nor convex, a nuclear norm minimization was imposed as a convex relaxation in the RPCA model. It was used for background subtraction by Candes _et al._ [13]. Classical RPCA methods have proven to be potential solutions for background subtraction; however, they are computationally unattractive due to batch optimization and cannot handle dynamic background scenes. Therefore, many RPCA variants, such as DECOLOR [79], TVRPCA [15], LSD [39], 2PRPCA [23], GOSUS [68], MSCL [30], and DSPSS [19], have been proposed to address the structured-sparsity problem. Online RPCA variants such as ORPCA [31] and COROLLA [50] have also been published to handle the real-time processing difficulties of RPCA. Donald _et al._ developed a tensor variant of the robust RPCA formulations [25]. Recently, Lu _et al._ suggested a TRPCA employing tensor nuclear norm regularization to handle multi-dimensional data [42]. For effective foreground segmentation, Wenrui _et al._ used TRPCA with a total variation penalty [26]. A time-lapsed sequence handling model with an invariant tensor sparse decomposition was proposed by Shakeri _et al._ [51]. Although TRPCA and its variants improve SOTA performance over RPCA, their key drawbacks remain computational complexity and a lack of sparsity structure. As a result, online TRPCA versions have also recently been described in the literature [35, 37].
Our proposed STRPCA algorithm is likewise founded on a tensor-based subspace learning model for background subtraction. In contrast to the aforementioned TRPCA approaches, we impose graph-based spatial and temporal continuity on the sparse component, thereby enhancing background subtraction performance.
#### Ii-3 **Deep Learning Methods**
Many computer vision applications, including object detection [77], object tracking [27], and background subtraction [5], have been transformed by deep Convolutional Neural Networks (CNNs). In a fully supervised CNN model, convolutional features are trained end-to-end before being used for either classification or regression tasks. One of the earliest deep learning techniques for background subtraction was proposed by Braham _et al._ [10]: using a fully convolutional model, each input frame is divided into blocks, and each block is then categorized as either foreground or background. In the same vein, Wang _et al._ suggested an interactive block-based deep neural network for moving object segmentation using AlexNet as the backbone architecture [64]. Although the performance of these two block-based deep networks was impressive, both models degraded in the presence of unknown classes. As a result, Tezcan _et al._ proposed a comprehensive CNN model for pixel-wise background subtraction of unseen sequences [59]. Interested readers may explore further deep learning techniques for background subtraction in the survey [5]. Despite the exceptional progress made by the background subtraction community, one major problem of fully supervised CNN models is that they strongly rely on well-annotated, highly diversified, and abundant data, which is not always accessible. Our proposed algorithm, on the other hand, is based on TRPCA, which decomposes input video sequences into background-foreground components in a completely unsupervised manner.
## III Proposed Methodology
The system diagram of the proposed Spatial-temporal regularized TRPCA (STRPCA) algorithm for background subtraction is illustrated in Fig. 2. Our proposed algorithm consists of several steps including spatial graph construction, temporal graph construction, objective function formulation, and solution to the proposed model using batch and online optimization techniques. We initially present mathematical notations and preliminaries before going into detail about each stage of the proposed method.
### _Mathematical Notations and Preliminaries_
In this work, we use the same notations as described in [25, 33, 41, 42]. We denote a multi-dimensional tensor comprising an input video sequence (3-way tensor) by boldface calligraphic letters, e.g., \(\mathbf{\mathcal{X}}\in\mathbb{R}^{w\times h\times n}\), where \(w\), \(h\), and \(n\) represent the width, height, and number of frames. Matrices are denoted by boldface capital letters, e.g., **X**, vectors by boldface lowercase letters, e.g., **x**, and scalars by lowercase letters, e.g., \(a\). The \((i,j,k)\)-th entry of a tensor \(\mathbf{\mathcal{X}}\) is denoted by \(\mathbf{\mathcal{X}}(i,j,k)\) or \(x_{ijk}\).
#### Iii-A1 **Tensor Slices**
Each slice of a tensor is a 2D matrix. We use the Matlab notations \(\mathbf{\mathcal{X}}(i,:,:)\), \(\mathbf{\mathcal{X}}(:,j,:)\), and \(\mathbf{\mathcal{X}}(:,:,k)\), respectively, to denote the \(i\)-th horizontal, \(j\)-th lateral, and \(k\)-th frontal slice of tensor \(\mathbf{\mathcal{X}}\). \(\mathbf{\mathcal{X}}^{(k)}\) is also equivalent to \(\mathbf{\mathcal{X}}(:,:,k)\).
#### Iii-A2 **Tensor Unfolding and Folding Operations**
The unfolding operation \(\text{unfold}(\mathbf{\mathcal{X}})\) turns \(\mathbf{\mathcal{X}}\) into a 2D matrix and the folding operation is its inverse operator. For tensor \(\mathbf{\mathcal{X}}\in\mathbb{R}^{w\times h\times n}\), we define its mode-2 unfolding matrix \(\mathbf{\mathcal{X}}_{(2)}\) as: \(\text{unfold}_{2}(\mathbf{\mathcal{X}})=[\mathbf{\mathcal{X}}^{(1)},\mathbf{\mathcal{X}} ^{(2)},...,\mathbf{\mathcal{X}}^{(n)}]^{\top}\in\mathbb{R}^{wn\times h}\). Its folding matrix is defined as \(\text{fold}(\text{unfold}_{2}(\mathbf{\mathcal{X}}))=\mathbf{\mathcal{X}}\). Similarly, mode-1 and mode-3 unfolding matrices are \(\mathbf{\mathcal{X}}_{(1)}=\text{unfold}_{1}(\mathbf{\mathcal{X}})\in\mathbb{R}^{hn \times w}\) and \(\mathbf{\mathcal{X}}_{(3)}=\text{unfold}_{3}(\mathbf{\mathcal{X}})\in\mathbb{R}^{wh \times n}\).
#### Iii-A3 **DFT of Tensor**
We denote by \(\bar{\mathbf{\mathcal{X}}}\) the result of the Discrete Fourier Transform (DFT) of a tensor \(\mathbf{\mathcal{X}}\) along the third dimension, which can be computed using the Matlab command _fft_ as: \(\bar{\mathbf{\mathcal{X}}}=\text{\emph{fft}}(\mathbf{\mathcal{X}},[\ ],3)\). Similarly, \(\bar{\mathbf{\mathcal{X}}}\) can be transformed back to \(\mathbf{\mathcal{X}}\) using the inverse DFT as: \(\mathbf{\mathcal{X}}=\text{ifft}(\bar{\mathbf{\mathcal{X}}},[\ ],3)\).
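For concreteness, a minimal numpy sketch of this round trip; the axis convention and test sizes are illustrative:

```python
import numpy as np

# A minimal sketch of the third-dimension DFT and its inverse,
# mirroring MATLAB's fft(X, [], 3) / ifft(X, [], 3).
X = np.random.rand(4, 5, 6)
X_bar = np.fft.fft(X, axis=2)          # forward transform along frames
X_rec = np.fft.ifft(X_bar, axis=2)     # inverse transform
assert np.allclose(X, X_rec.real)      # round trip recovers X
```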
#### Iii-A4 **Tensor Norms**
We employ three important tensor norms: the \(\ell_{1}\)-norm \(||\mathbf{\mathcal{X}}||_{1}=\sum_{i,j,k}\lvert x_{ijk}\rvert\), the Frobenius norm \(||\mathbf{\mathcal{X}}||_{F}=\sqrt{\sum_{i,j,k}\lvert x_{ijk}\rvert^{2}}\), and the nuclear norm of a matrix \(||\mathbf{X}||_{*}=\sum_{i}\sigma_{i}(\mathbf{X})\), where \(\sigma_{i}(\mathbf{X})\) is the \(i\)-th singular value of \(\mathbf{X}\). The tensor nuclear norm is estimated using the frontal slices of the input tensor. Kilmer _et al._ defined the nuclear norm of a tensor \(||\mathbf{\mathcal{X}}||_{*}\) as the sum of the matrix nuclear norms of all the frontal slices of
Fig. 2: Schematic illustration of the proposed STRPCA algorithm for background subtraction. Step (a) shows an input tensor \(\mathbf{\mathcal{X}}\), step (b) shows the construction of the temporal graph \(\mathbf{G}_{1}\), step (c) shows the construction of the spatial graph \(\mathbf{G}_{2}^{j}\), step (d) shows the batch-based STRPCA model optimization where both graphs are incorporated, and steps (e)-(g) show the resulting low-rank tensor \(\mathbf{\mathcal{B}}\), sparse tensor \(\mathbf{\mathcal{F}}\), and the background subtraction results.
\(\bar{\mathbf{\mathcal{X}}}\) as [32]: \(||\mathbf{\mathcal{X}}||_{*}=\sum_{i=1}^{n}||\bar{\mathbf{\mathcal{X}}}(:,:,i)||_{*}\). Lu _et al._ proposed to take an average of all matrix nuclear norms as [41]: \(||\mathbf{\mathcal{X}}||_{*}=\frac{1}{n}\sum_{i=1}^{n}||\bar{\mathbf{\mathcal{X}}}(:,:,i )||_{*}\).
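Both definitions operate on the Fourier-domain frontal slices; the helper below is an illustrative sketch, not code from the paper:

```python
import numpy as np

def tensor_nuclear_norm(X, average=True):
    """A sketch of the two definitions above: the sum [32] or the
    average [41] of the matrix nuclear norms of the Fourier-domain
    frontal slices."""
    X_bar = np.fft.fft(X, axis=2)
    n = X.shape[2]
    total = sum(np.linalg.svd(X_bar[:, :, k], compute_uv=False).sum()
                for k in range(n))
    return total / n if average else total

print(tensor_nuclear_norm(np.random.rand(8, 8, 5)))
```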
#### Iii-A5 **Tensor-Tensor Product**
The tensor-tensor product (t-product) between any two tensors \(\mathbf{\mathcal{Y}}_{1}\in\mathbb{R}^{n_{1}\times n_{2}\times n_{3}}\) and \(\mathbf{\mathcal{Y}}_{2}\in\mathbb{R}^{n_{2}\times c\times n_{3}}\) is defined to be a tensor \(\mathbf{\mathcal{Z}}\in\mathbb{R}^{n_{1}\times c\times n_{3}}\) and is computed as [42]: \(\mathbf{\mathcal{Z}}=\text{fold}(\text{bcirc}(\mathbf{\mathcal{Y}}_{1})\cdot\text{unfold}(\mathbf{\mathcal{Y}}_{2}))\), where \(\text{bcirc}(\mathbf{\mathcal{Y}}_{1})\) is a block-circulant matrix of size \(n_{1}n_{3}\times n_{2}n_{3}\).
#### Iii-A6 **Tensor Singular Value Decomposition (T-SVD)**
The input tensor \(\mathbf{\mathcal{X}}\) can be factorized as: \(\mathbf{\mathcal{X}}=\mathbf{\mathcal{U}}*\mathbf{\mathcal{S}}*\mathbf{\mathcal{V}}^{*}\), where \(\mathbf{\mathcal{U}}\in\mathbb{R}^{w\times w\times n}\) and \(\mathbf{\mathcal{V}}\in\mathbb{R}^{h\times h\times n}\) are orthogonal tensors, and \(\mathbf{\mathcal{S}}\in\mathbb{R}^{w\times h\times n}\) is an f-diagonal tensor. \(\mathbf{\mathcal{V}}^{*}\) is the conjugate transpose of the tensor \(\mathbf{\mathcal{V}}\). These factor tensors contain the principal components of \(\mathbf{\mathcal{X}}\). The T-SVD of \(\mathbf{\mathcal{X}}\) can be computed using the following steps [42]:
1. Compute \(\bar{\mathbf{\mathcal{X}}}=\text{fft}(\mathbf{\mathcal{X}},\,[\,],3)\).
2. Compute \([\bar{\mathbf{\mathcal{U}}}^{(k)},\bar{\mathbf{\mathcal{S}}}^{(k)},\bar{\mathbf{\mathcal{ V}}}^{(k)}]=\text{SVD}(\bar{\mathbf{\mathcal{X}}}^{(k)})\).
3. Compute complex conjugates of \(\bar{\mathbf{\mathcal{U}}}^{(k)}\) and \(\bar{\mathbf{\mathcal{V}}}^{(k)}\).
4. Compute \(\mathbf{\mathcal{U}}=\text{ifft}(\bar{\mathbf{\mathcal{U}}},\,[\,],3)\), \(\mathbf{\mathcal{S}}=\text{ifft}(\bar{\mathbf{\mathcal{S}}},\,[\,],3)\), and \(\mathbf{\mathcal{V}}=\text{ifft}(\bar{\mathbf{\mathcal{V}}},\,[\,],3)\).
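A minimal numpy sketch of these four steps; the tensor sizes and the per-slice sanity check are illustrative:

```python
import numpy as np

def t_svd(X):
    """A sketch of the T-SVD steps listed above: FFT along the third
    dimension, per-slice matrix SVD, then inverse FFT of each factor
    (MATLAB's ifft(., [], 3))."""
    w, h, n = X.shape
    X_bar = np.fft.fft(X, axis=2)
    U_bar = np.zeros((w, w, n), dtype=complex)
    S_bar = np.zeros((w, h, n), dtype=complex)
    V_bar = np.zeros((h, h, n), dtype=complex)
    for k in range(n):
        u, s, vh = np.linalg.svd(X_bar[:, :, k])
        U_bar[:, :, k] = u
        np.fill_diagonal(S_bar[:, :, k], s)      # f-diagonal slice
        V_bar[:, :, k] = vh.conj().T
        # each Fourier-domain slice factorizes exactly
        assert np.allclose(u @ S_bar[:, :, k] @ vh, X_bar[:, :, k])
    U = np.fft.ifft(U_bar, axis=2)
    S = np.fft.ifft(S_bar, axis=2)
    V = np.fft.ifft(V_bar, axis=2)
    return U, S, V

U, S, V = t_svd(np.random.rand(4, 3, 5))
```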
The complete collection of symbols and notations used in this work is shown in Table I.
### _Mathematical Formulation of STRPCA_
TRPCA (1) aims to decompose an input tensor \(\mathbf{\mathcal{X}}\) into the sum of a low-rank component \(\mathbf{\mathcal{B}}\) representing the background model and a sparse component \(\mathbf{\mathcal{F}}\) representing the moving objects. However, the absence of structured-sparsity regularization in previous TRPCA techniques [42, 26, 35, 37, 41, 54] hinders the development of spatial-temporal structure within the sparse component \(\mathbf{\mathcal{F}}\), resulting in erroneous background subtraction. To correctly segment the moving object pixels, we propose adding spatial-temporal constraints to model (1). The objective function of the proposed STRPCA model is formulated as:
\[\begin{split}&\min_{\mathbf{\mathcal{B}},\mathbf{\mathcal{F}}}||\mathbf{ \mathcal{B}}||_{*}+\lambda||\mathbf{\mathcal{F}}||_{1}+\gamma_{1}\sum_{j=1}^{n} \text{Tr}(\mathbf{\mathcal{F}}^{(j)\top}\mathcal{L}_{s}^{(j)}\mathbf{\mathcal{F}}^{( j)})\\ &\quad+\gamma_{2}\text{Tr}(\mathbf{\mathcal{F}}_{(3)}\mathbf{L}_{t} \mathbf{\mathcal{F}}_{(3)}^{\top}),\ \text{such that}\ \mathbf{\mathcal{X}}=\mathbf{\mathcal{B}}+\mathbf{\mathcal{F}},\end{split} \tag{2}\]
where \(\text{Tr}(.)\) denotes the trace of a matrix. The third and fourth terms are the spatial and temporal graph-based regularizations enforced on \(\mathbf{\mathcal{F}}\). By computing pairwise similarities along the spatial and temporal dimensions, these regularizations act as a search for intact, structured moving objects. The spatial graph-based Laplacian tensor, denoted by \(\mathbf{\mathcal{L}}_{s}\), is calculated pixel-wise from the spatial graph using the frontal slices of \(\mathbf{\mathcal{X}}\). By incorporating the \(\sum_{j=1}^{n}\text{Tr}(\mathbf{\mathcal{F}}^{(j)\top}\mathbf{\mathcal{L}}_{s}^{(j)}\mathbf{\mathcal{F}}^{(j)})\) term, we compel the \(\mathbf{\mathcal{F}}\) component to act as the eigenvectors of \(\mathbf{\mathcal{L}}_{s}\), which guarantees that the moving object's spatially coherent structure is preserved. \(\mathbf{\mathcal{L}}_{t}\) denotes the temporal graph-based Laplacian matrix computed frame-wise in the temporal domain using the mode-3 unfolded matrix of \(\mathbf{\mathcal{X}}\). Similarly, by including the \(\text{Tr}(\mathbf{\mathcal{F}}_{(3)}\mathbf{L}_{t}\mathbf{\mathcal{F}}_{(3)}^{\top})\) term, we maintain the moving object's temporally coherent structure. While optimizing STRPCA (2), the non-negative tradeoff parameters \(\gamma_{1}\) and \(\gamma_{2}\) determine the degree of moving object sparsity and give relative relevance to each term. To make (2) more separable, we introduce the spatial \(\mathbf{\mathcal{H}}\) and temporal \(\mathbf{\mathcal{T}}\) moving object tensors as follows:
\[\begin{split}\min_{\mathbf{\mathcal{B}},\mathbf{\mathcal{F}}}||\mathbf{ \mathcal{B}}||_{*}+\lambda||\mathbf{\mathcal{F}}||_{1}+\gamma_{1}\sum_{j=1}^{n} \text{Tr}(\mathbf{\mathcal{H}}^{(j)\top}\mathbf{\mathcal{L}}_{s}^{(j)}\mathbf{\mathcal{H}} ^{(j)})\\ +\gamma_{2}\text{Tr}(\mathbf{\mathcal{T}}_{(3)}\mathbf{L}_{t}\mathbf{ \mathcal{T}}_{(3)}^{\top}),\ \text{such that}\ \mathbf{\mathcal{X}}=\mathbf{\mathcal{B}}+\mathbf{\mathcal{F}},\\ \mathbf{\mathcal{H}}=\mathbf{\mathcal{F}},\ \text{and}\ \mathbf{\mathcal{T}}=\mathbf{ \mathcal{F}}.\end{split} \tag{3}\]
To solve (3), we first compute the spatial-temporal regularizations and then optimize the model using the proposed batch-based and online optimization methods.
### _Spatial-Temporal Graph-based Sparse Regularizations_
Model (1) fails to take into account the spatial-temporal organization of the sparse component, which leaves gaps in the segmentation of moving objects. To address this problem, we propose spatial-temporal graph-based regularizations, for which we construct spatial and temporal graphs.
#### Iii-C1 Temporal Graph-based Laplacian Regularization
To construct a temporal graph, we initially transform \(\mathbf{\mathcal{X}}\) into its mode-3 unfolded 2-D matrix using an unfolding operation as: \(\mathbf{\mathcal{X}}_{(3)}=\text{unfold}_{3}(\mathbf{\mathcal{X}})=\left[\mathbf{\mathcal{X}}(:,1,:)^{\top},\mathbf{\mathcal{X}}(:,2,:)^{\top},\cdots,\mathbf{\mathcal{X}}(:,h,:)^{\top}\right]^{\top}\in\mathbb{R}^{wh\times n}\) in order to capture the temporal continuity.
An undirected temporal graph \(\mathbf{G}_{1}=(\mathbf{V}_{1},\mathbf{A}_{1})\) is then constructed, where the vertices \(\mathbf{V}_{1}\) correspond to the columns of the unfolded matrix \(\mathbf{\mathcal{X}}_{(3)}\) and \(\mathbf{A}_{1}\) is the adjacency matrix that encodes the pair-wise similarities between the vertices of the graph. The key idea is that similar columns of \(\mathbf{\mathcal{X}}_{(3)}\) that are connected on the graph \(\mathbf{G}_{1}\) are probably background components \(\mathbf{\mathcal{B}}\), whereas columns that are disconnected or far apart on \(\mathbf{G}_{1}\) are distinct, resulting in a temporally consistent segmentation of moving objects [71]. A graph can be constructed using a variety of methods; due to its ease of use and simplicity, we generate the graph using the \(k\)-Nearest Neighbors (kNN) approach, and for large-scale datasets larger graphs are built using the FLANN package [45]. The first step is to find the nearest neighbors of all vertices using the Euclidean distance, where each vertex is connected to its \(k\) closest neighbors. The adjacency matrix \(\mathbf{A}_{1}\) of \(\mathbf{G}_{1}\), holding the pair-wise vertex similarity \(a_{1}(p,q)\) between two vertices \(p\) and \(q\), is then estimated as:
\[a_{1}(p,q)=\exp\Big{(}-\frac{||\mathbf{\mathcal{X}}_{(3)}(:,p)-\mathbf{\mathcal{X}}_{(3)}(:,q)||_{2}^{2}}{2\sigma_{1}^{2}}\Big{)}, \tag{4}\]
where \(\sigma_{1}\) is the temporal smoothing parameter, estimated using the average distance among the vertices. The similarity \(a_{1}(p,q)\) is nonzero only if the two vertices are joined by an edge on \(\mathbf{G}_{1}\); otherwise \(a_{1}(p,q)=0\). The temporal graph-based regularization on \(\mathbf{\mathcal{F}}_{(3)}\) is given by:
\[\begin{split}&\min_{\mathbf{\mathcal{F}}_{(3)}}\frac{1}{2}\sum_{p,q=1}^{n} ||\mathbf{\mathcal{F}}_{(3)}(:,p)-\mathbf{\mathcal{F}}_{(3)}(:,q)||_{F}^{2}a_{1}(p,q)\\ &=\min_{\mathbf{\mathcal{F}}_{(3)}}\sum_{p=1}^{n}\mathbf{\mathcal{F}}_{(3 )}(:,p)^{\top}\mathbf{\mathcal{F}}_{(3)}(:,p)d(p,p)\\ &-\sum_{p,q=1}^{n}\mathbf{\mathcal{F}}_{(3)}(:,p)^{\top}\mathbf{ \mathcal{F}}_{(3)}(:,q)a_{1}(p,q)\\ &=\min_{\mathbf{\mathcal{F}}_{(3)}}\text{Tr}(\mathbf{\mathcal{F}}_{(3)}^ {\top}\mathbf{D}\mathbf{\mathcal{F}}_{(3)})-\text{Tr}(\mathbf{\mathcal{F}}_{(3)}^{ \top}\mathbf{A}_{1}\mathbf{\mathcal{F}}_{(3)})\\ &=\min_{\mathbf{\mathcal{F}}_{(3)}}\text{Tr}(\mathbf{\mathcal{F}}_{(3)}^ {\top}\mathbf{L}_{t}\mathbf{\mathcal{F}}_{(3)})\end{split} \tag{5}\]
where \(\mathbf{D}\) is a degree matrix whose diagonal entry is defined as \(d(p,p)=\sum_{q}a_{1}(p,q)\) and \(\mathbf{L}_{t}\) is the normalized graph-based temporal Laplacian matrix which is estimated as:
\[\mathbf{L}_{t}=\mathbf{I}-\mathbf{D}^{-\frac{1}{2}}\mathbf{A}_{1}\mathbf{D}^{ -\frac{1}{2}}. \tag{6}\]
where \(\mathbf{I}\) is an identity matrix. The formulations (5)-(6) encode the temporal structure of the moving object within the sparse tensor \(\mathbf{\mathcal{F}}\); they can be interpreted as constraining the \(\mathbf{\mathcal{F}}_{(3)}\) component to act as the eigenvectors of the normalized graph-based temporal Laplacian matrix.
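Putting Eqs. (4)-(6) together, the temporal graph construction can be sketched as follows; the symmetrization step and the numerical floor on the degrees are assumptions of this sketch, and the heat kernel carries the usual minus sign:

```python
import numpy as np
from scipy.spatial.distance import cdist

def temporal_laplacian(X3, k=10):
    """A sketch of Eqs. (4)-(6): kNN graph over the columns of the
    mode-3 unfolding (one column per frame), Gaussian edge weights,
    and L_t = I - D^{-1/2} A_1 D^{-1/2}."""
    n = X3.shape[1]
    D2 = cdist(X3.T, X3.T, 'sqeuclidean')    # pairwise squared distances
    sigma = np.sqrt(D2).mean()                # smoothing from average distance
    A = np.zeros((n, n))
    for p in range(n):
        nbrs = np.argsort(D2[p])[1:k + 1]     # k nearest neighbours, skip self
        A[p, nbrs] = np.exp(-D2[p, nbrs] / (2.0 * sigma ** 2))
    A = np.maximum(A, A.T)                    # undirected graph: symmetrize
    d = A.sum(axis=1)                         # degrees d(p, p)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    return np.eye(n) - (d_inv_sqrt[:, None] * A) * d_inv_sqrt[None, :]

L_t = temporal_laplacian(np.random.rand(32 * 32, 50), k=10)
```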
#### Iii-C2 Spatial Graph-based Laplacian Regularization
We construct a pixel-wise spatial graph using the frontal slices of \(\mathbf{\mathcal{X}}\) to maintain the spatial structure of the moving object. In contrast to the temporal graph regularization, the spatial graph enforces pixel-level smoothness within each frontal slice of \(\mathbf{\mathcal{F}}\), leading to a spatially coherent segmentation of moving objects.
We initially divide the \(j\)-th frontal slice \(\mathbf{\mathcal{X}}^{(j)}\) of \(\mathbf{\mathcal{X}}\) into \(a\times a\) patches before constructing a spatial graph \(\mathbf{G}_{2}^{j}=(\mathbf{V}_{2}^{j},\mathbf{A}_{2}^{j})\) for the \(j\)-th slice. As seen in Fig. 2 (c), we take a patch for each \(i\)-th pixel in the \(j\)-th frontal slice, keeping the \(i\)-th pixel in the middle. With \(u\) being the total number of pixels in the \(j\)-th frontal slice, i.e., \(u=wh\), the spatial data matrix \(\mathbf{P}_{s}^{j}\) of size \(a^{2}\times u\) for the \(j\)-th frontal slice is produced in this fashion. All local patches are represented by the vertices \(\mathbf{V}_{2}^{j}\) in \(\mathbf{G}_{2}^{j}\), and \(\mathbf{A}_{2}^{j}\) is the adjacency matrix of the \(j\)-th frontal slice, containing all pair-wise similarities between its local patches.
Here, \(\mathbf{G}_{2}^{j}\) refines the spatial organization of the moving objects and complements the information gathered by \(\mathbf{G}_{1}\). In particular, if two local patches are connected to one another in \(\mathbf{G}_{2}^{j}\), their structures are presumably comparable and they most likely belong to the background component. We create \(\mathbf{G}_{2}^{j}\) by measuring the Euclidean distance between neighboring patches and identifying their nearest neighbors. The adjacency matrix \(\mathbf{A}_{2}^{j}\) of \(\mathbf{G}_{2}^{j}\) is estimated as:
\[a_{2}^{j}(p,q)=\begin{cases}\exp\big{(}-\frac{||\mathbf{P}_{s}^{j}(:,p)-\mathbf{P}_{s}^{j}(:,q)||_{2}^{2}}{2\sigma_{2}^{2}}\big{)},\text{ if both are connected}\\ 0,\text{otherwise},\end{cases} \tag{7}\]
where \(\sigma_{2}\) is the smoothing factor for \(\mathbf{G}_{2}^{j}\), calculated using the average distance between vertices. If two local patches \(\mathbf{P}_{s}^{j}(:,p)\) and \(\mathbf{P}_{s}^{j}(:,q)\) are connected to each other, their similarity is positive; otherwise \(a_{2}^{j}(p,q)=0\). Analogously to Eq. (6), we compute the normalized spatial Laplacian \(\mathbf{\mathcal{L}}_{s}\) and encode it in \(\mathbf{\mathcal{F}}\).
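A sketch of assembling the spatial data matrix \(\mathbf{P}_{s}^{j}\) for one frontal slice is given below; the reflected border used for edge pixels is an assumption of this sketch:

```python
import numpy as np

def patch_matrix(frame, a=8):
    """A sketch of building P_s for one frontal slice: an a x a patch
    around every pixel, vectorized into one column, giving the stated
    a^2 x (w h) matrix."""
    w, h = frame.shape
    pad = a // 2
    padded = np.pad(frame, pad, mode='reflect')
    P = np.empty((a * a, w * h))
    col = 0
    for i in range(w):
        for j in range(h):
            P[:, col] = padded[i:i + a, j:j + a].ravel()
            col += 1
    return P

P_s = patch_matrix(np.random.rand(32, 32), a=8)   # shape (64, 1024)
```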
### _Batch Optimization of STRPCA Model_
We employ an ADMM method to solve model (3) in a batch fashion. ADMM decomposes the problem into subproblems and solves each one individually [8]. By eliminating the linear equality constraints, the Lagrangian formulation of (3) may be derived as follows:
\[\begin{split}&\mathcal{L}(\mathbf{\mathcal{B}},\mathbf{\mathcal{F}}, \mathbf{\mathcal{H}},\mathbf{\mathcal{T}},\mathbf{\mathcal{Y}}_{1},\mathbf{\mathcal{Y}}_{2}, \mathbf{\mathcal{Y}}_{3},\mu)=||\mathbf{\mathcal{B}}||_{*}+\lambda||\mathbf{\mathcal{F}}|| _{1}+\\ &\gamma_{1}\sum_{j=1}^{n}\text{Tr}(\mathbf{\mathcal{H}}^{(j)\top} \mathbf{\mathcal{L}}_{s}^{(j)}\mathbf{\mathcal{H}}^{(j)})+\gamma_{2}\text{Tr}(\mathbf{ \mathcal{T}}_{(3)}\mathbf{L}_{t}\mathbf{\mathcal{T}}_{(3)}^{\top})+\\ &\qquad\qquad\langle\mathbf{\mathcal{Y}}_{1},\mathbf{\mathcal{X}}-\mathbf{ \mathcal{B}}-\mathbf{\mathcal{F}}\rangle+\frac{\mu}{2}||\mathbf{\mathcal{X}}-\mathbf{ \mathcal{B}}-\mathbf{\mathcal{F}}||_{F}^{2}+\\ &\qquad\qquad\qquad\langle\mathbf{\mathcal{Y}}_{2},\mathbf{\mathcal{H}}- \mathbf{\mathcal{F}}\rangle+\frac{\mu}{2}||\mathbf{\mathcal{H}}-\mathbf{\mathcal{F}}||_{F }^{2}+\\ &\qquad\qquad\qquad\langle\mathbf{\mathcal{Y}}_{3},\mathbf{\mathcal{T}}- \mathbf{\mathcal{F}}\rangle+\frac{\mu}{2}||\mathbf{\mathcal{T}}-\mathbf{\mathcal{F}}||_{F }^{2},\end{split} \tag{8}\]
where \(\mathbf{\mathcal{Y}}_{1},\mathbf{\mathcal{Y}}_{2}\), and \(\mathbf{\mathcal{Y}}_{3}\) are tensors of Lagrangian multipliers and \(\mu>0\) is the penalty parameter. While solving (8), each iteration updates each of these terms. The above formulation can also be written as:
\[\begin{split}&\mathcal{L}(\mathbf{\mathcal{B}},\mathbf{\mathcal{F}},\mathbf{\mathcal{H}},\mathbf{\mathcal{T}},\mathbf{\mathcal{Y}}_{1},\mathbf{\mathcal{Y}}_{2},\mathbf{\mathcal{Y}}_{3},\mu)=||\mathbf{\mathcal{B}}||_{*}+\lambda||\mathbf{\mathcal{F}}||_{1}+\\ &\gamma_{1}\sum_{j=1}^{n}\text{Tr}(\mathbf{\mathcal{H}}^{(j)\top}\mathbf{\mathcal{L}}_{s}^{(j)}\mathbf{\mathcal{H}}^{(j)})+\gamma_{2}\text{Tr}(\mathbf{\mathcal{T}}_{(3)}\mathbf{L}_{t}\mathbf{\mathcal{T}}_{(3)}^{\top})+\\ &\frac{\mu}{2}\Big{|}\Big{|}\mathbf{\mathcal{B}}+\mathbf{\mathcal{F}}-\mathbf{\mathcal{X}}-\frac{\mathbf{\mathcal{Y}}_{1}}{\mu}\Big{|}\Big{|}_{F}^{2}+\frac{\mu}{2}\Big{|}\Big{|}\mathbf{\mathcal{F}}-\mathbf{\mathcal{H}}-\frac{\mathbf{\mathcal{Y}}_{2}}{\mu}\Big{|}\Big{|}_{F}^{2}\\ &+\frac{\mu}{2}\Big{|}\Big{|}\mathbf{\mathcal{F}}-\mathbf{\mathcal{T}}-\frac{\mathbf{\mathcal{Y}}_{3}}{\mu}\Big{|}\Big{|}_{F}^{2}.\end{split} \tag{9}\]
The solutions for the tensors \(\mathbf{\mathcal{B}},\mathbf{\mathcal{F}},\mathbf{\mathcal{H}},\mathbf{\mathcal{T}},\mathbf{\mathcal{Y}}_{1},\mathbf{\mathcal{Y}}_{2}\), and \(\mathbf{\mathcal{Y}}_{3}\) are then formulated by fixing all but one tensor and solving for the remaining one in turn.
#### Iii-B1 **Solution for \(\mathbf{\mathcal{B}}\)**
Fixing the other tensors, the sub-problem for \(\mathbf{\mathcal{B}}\) is updated as follows:
\[\begin{split}&\mathbf{\mathcal{B}}=\operatorname*{argmin}_{\mathbf{ \mathcal{B}}}\mathcal{L}(\mathbf{\mathcal{B}})=\operatorname*{argmin}_{\mathbf{ \mathcal{B}}}\lVert\mathbf{\mathcal{B}}\rVert_{*}+\frac{\mu}{2}\bigg{|}\Big{|} \mathbf{\mathcal{B}}+\mathbf{\mathcal{F}}-\mathbf{\mathcal{X}}-\frac{\mathbf{\mathcal{Y}}_{1} }{\mu}\Big{|}\Big{|}_{F}^{2}\\ &=\operatorname*{argmin}_{\mathbf{\mathcal{B}}}\tau\lVert\mathbf{ \mathcal{B}}\rVert_{*}+\frac{1}{2}\bigg{|}\Big{|}\mathbf{\mathcal{B}}-\mathbf{ \mathcal{Z}}\Big{|}\Big{|}_{F}^{2},\end{split} \tag{10}\]
where \(\tau=1/\mu\) and \(\mathbf{\mathcal{Z}}=\mathbf{\mathcal{X}}-\mathbf{\mathcal{F}}+\mathbf{\mathcal{Y}}_{1}/\mu\). The closed-form solution to sub-problem \(\mathbf{\mathcal{B}}\) is obtained using the tensor Singular Value Thresholding (t-SVT) operation [42] of \(\mathbf{\mathcal{Z}}\) as:
\[\mathbf{\mathcal{B}}=\mathbf{\mathcal{U}}*\mathbf{\mathcal{S}}_{\tau}*\mathbf{\mathcal{V}}^{*},\quad\mathbf{\mathcal{S}}_{\tau}=\text{ifft}((\bar{\mathbf{\mathcal{S}}}-\tau)_{+},[\ ],3), \tag{11}\]
where \((\bar{\mathbf{\mathcal{S}}}-\tau)_{+}\) represents the positive part of \((\bar{\mathbf{\mathcal{S}}}-\tau)\), and \(\mathbf{\mathcal{U}}\), \(\mathbf{\mathcal{S}}\), and \(\mathbf{\mathcal{V}}^{*}\) are given by the T-SVD defined in Sec. III-A6. For the \(k+1\) iteration, \(\mathbf{\mathcal{B}}^{k+1}\) is estimated as:
\[\mathbf{\mathcal{B}}^{k+1}=\operatorname*{argmin}_{\mathbf{\mathcal{B}}}\tau||\mathbf{ \mathcal{B}}^{k}||_{*}+\frac{1}{2}\bigg{|}\Big{|}\mathbf{\mathcal{B}}^{k}-\mathbf{ \mathcal{Z}}^{k}\Big{|}\Big{|}_{F}^{2}. \tag{12}\]
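A sketch of the resulting t-SVT update, with the thresholding applied slice-wise in the Fourier domain:

```python
import numpy as np

def t_svt(Z, tau):
    """A sketch of the t-SVT operator of Eq. (11): soft-threshold the
    singular values of every Fourier-domain frontal slice of Z, then
    transform back to obtain the B-update of Eq. (12)."""
    Z_bar = np.fft.fft(Z, axis=2)
    B_bar = np.zeros_like(Z_bar)
    for k in range(Z.shape[2]):
        u, s, vh = np.linalg.svd(Z_bar[:, :, k], full_matrices=False)
        s = np.maximum(s - tau, 0.0)          # (s - tau)_+
        B_bar[:, :, k] = (u * s) @ vh
    return np.fft.ifft(B_bar, axis=2).real

# one iteration of Eq. (12): B = t_svt(X - F + Y1 / mu, 1.0 / mu)
B = t_svt(np.random.rand(8, 8, 4), tau=0.5)
```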
#### Iii-B2 **Solution for \(\mathbf{\mathcal{T}}\)**
Fixing the other tensor variables, the solution to the sub-problem \(\mathbf{\mathcal{T}}\) is then formulated as:
\[\mathbf{\mathcal{T}}=\operatorname*{argmin}_{\mathbf{\mathcal{T}}}\gamma_{2}\text{Tr} (\mathbf{\mathcal{T}}_{(3)}\mathbf{L}_{t}\mathbf{\mathcal{T}}_{(3)}^{\top})+\frac{\mu} {2}\bigg{|}\Big{|}\mathbf{\mathcal{F}}-\mathbf{\mathcal{T}}-\frac{\mathbf{\mathcal{Y}}_{3 }}{\mu}\Big{|}\Big{|}_{F}^{2}. \tag{13}\]
Since the computation of \(\mathbf{\mathcal{T}}_{(3)}\) is based on the mode-3 unfolded 2-D matrix, we convert all other tensors in (13) accordingly:
\[\mathbf{\mathcal{T}}=\operatorname*{argmin}_{\mathbf{\mathcal{T}}}\gamma_{2}\text{Tr} (\mathbf{\mathcal{T}}_{(3)}\mathbf{L}_{t}\mathbf{\mathcal{T}}_{(3)}^{\top})+\frac{\mu }{2}\bigg{|}\Big{|}\mathbf{\mathcal{T}}_{(3)}+\frac{\mathbf{\mathcal{Y}}_{3(3)}}{\mu} -\mathbf{\mathcal{F}}_{(3)}\Big{|}\Big{|}_{F}^{2}. \tag{14}\]
Taking the derivative with respect to \(\mathbf{\mathcal{T}}_{(3)}\) and setting the gradient to zero, Eq. (14) becomes
\[\gamma_{2}\mathbf{\mathcal{T}}_{(3)}\mathbf{L}_{t}+\gamma_{2}\mathbf{\mathcal{T}}_{(3)}\mathbf{L}_{t}^{\top}+\mu\mathbf{\mathcal{T}}_{(3)}+\mathbf{\mathcal{Y}}_{3(3)}-\mu\mathbf{\mathcal{F}}_{(3)}=0. \tag{15}\]
Finally, the solution for \(\mathbf{\mathcal{T}}_{(3)}\) is given by:
\[\mathbf{\mathcal{T}}_{(3)}=\frac{\mu\mathbf{\mathcal{F}}_{(3)}-\mathbf{\mathcal{Y}}_{3(3)} }{\gamma_{2}\mathbf{L}_{t}+\gamma_{2}\mathbf{L}_{t}^{\top}+\mu\mathbf{I}}. \tag{16}\]
For \(k+1\) iteration, \(\mathbf{\mathcal{T}}_{(3)}^{k+1}\) is estimated as:
\[\mathbf{\mathcal{T}}_{(3)}^{k+1}=\frac{\mu\mathbf{\mathcal{F}}_{(3)}^{k+1}-\mathbf{ \mathcal{Y}}_{3(3)}^{k}}{\gamma_{2}\mathbf{L}_{t}+\gamma_{2}\mathbf{L}_{t}^{ \top}+\mu^{k}\mathbf{I}}. \tag{17}\]
\(\mathbf{\mathcal{T}}_{(3)}^{k+1}\) is converted back to tensor using \(\mathbf{\mathcal{T}}=\text{fold}(\mathbf{\mathcal{T}}_{(3)}^{k+1})\) as defined in Sec. (III-A2).
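Reading the fraction in Eq. (16) as right-multiplication by the matrix inverse (the only dimensionally consistent interpretation, since \(\mathbf{\mathcal{F}}_{(3)}\) has \(n\) columns), the update can be sketched as:

```python
import numpy as np

def update_T3(F3, Y33, L_t, gamma2, mu):
    """A sketch of Eq. (16): solve
    T3 (gamma2 (L_t + L_t^T) + mu I) = mu F3 - Y33 for T3."""
    n = L_t.shape[0]
    M = gamma2 * (L_t + L_t.T) + mu * np.eye(n)
    # right division: T3 = (mu F3 - Y33) M^{-1}
    return np.linalg.solve(M.T, (mu * F3 - Y33).T).T

T3 = update_T3(np.random.rand(1024, 50), np.zeros((1024, 50)),
               np.eye(50), gamma2=1.5, mu=0.01)
```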
#### Iii-B3 **Solution for \(\mathbf{\mathcal{H}}\)**
Fixing the other tensors that do not depend on \(\mathbf{\mathcal{H}}\), the solution is then formulated as follows:
\[\mathbf{\mathcal{H}}=\operatorname*{argmin}_{\mathbf{\mathcal{H}}}\gamma_{1}\sum_{j=1}^ {n}\text{Tr}(\mathbf{\mathcal{H}}^{(j)\top}\mathbf{\mathcal{L}}_{s}^{(j)}\mathbf{ \mathcal{H}}^{(j)})+\frac{\mu}{2}\Big{|}\Big{|}\mathbf{\mathcal{F}}-\mathbf{\mathcal{ H}}-\frac{\mathbf{\mathcal{Y}}_{2}}{\mu}\Big{|}\Big{|}_{F}^{2}. \tag{18}\]
Since the computation of \(\mathbf{\mathcal{H}}\) is based on each frontal slice, we also compute other tensors based on frontal slices in (18) as:
\[\begin{split}&\mathbf{\mathcal{H}}=\operatorname*{argmin}_{\mathbf{\mathcal{ H}}}\gamma_{1}\sum_{j=1}^{n}\text{Tr}(\mathbf{\mathcal{H}}^{(j)\top}\mathbf{\mathcal{L}}_{s}^{(j)} \mathbf{\mathcal{H}}^{(j)})\\ &+\frac{\mu}{2}\Big{|}\Big{|}\sum_{j=1}^{n}\Big{(}\mathbf{\mathcal{H}} ^{(j)}+\frac{\mathbf{\mathcal{Y}}_{2}^{(j)}}{\mu}-\mathbf{\mathcal{F}}^{(j)}\Big{)} \Big{|}\Big{|}_{F}^{2}.\end{split} \tag{19}\]
by taking the derivative and setting it to zero, (19) becomes
\[\gamma_{1}\sum_{j=1}^{n}\Big{(}\mathbf{\mathcal{H}}^{(j)}\mathbf{\mathcal{L}}_{s}^{(j)}+\mathbf{\mathcal{H}}^{(j)}\mathbf{\mathcal{L}}_{s}^{\top(j)}\Big{)}+\sum_{j=1}^{n}\Big{(}\mu\mathbf{\mathcal{H}}^{(j)}+\mathbf{\mathcal{Y}}_{2}^{(j)}-\mu\mathbf{\mathcal{F}}^{(j)}\Big{)}=0. \tag{20}\]
\[\mathbf{\mathcal{H}}^{(j)}=\frac{\mu\sum_{j=1}^{n}\Big{(}\mathbf{\mathcal{F}}^{(j)}-\mathbf{ \mathcal{Y}}_{2}^{(j)}/\mu\Big{)}}{2\gamma_{1}\sum_{j=1}^{n}\mathbf{\mathcal{L}}_{s}^{(j) }+\mu\mathbf{I}}. \tag{21}\]
For \(k+1\) iteration, \(\mathbf{\mathcal{H}}^{(j)k+1}\) is estimated as:
\[\mathbf{\mathcal{H}}^{(j)k+1}=\frac{\mu\sum_{j=1}^{n}\Big{(}\mathbf{\mathcal{F}}^{(j)k+1}- \mathbf{\mathcal{Y}}_{2}^{(j)k}/\mu\Big{)}}{2\gamma_{1}\sum_{j=1}^{n}\mathbf{\mathcal{L}}_{s}^ {(j)}+\mu^{k}\mathbf{I}}. \tag{22}\]
#### Iii-B4 **Solution for \(\mathbf{\mathcal{F}}\)**
Fixing the other tensors, the solution to the sub-problem \(\mathbf{\mathcal{F}}\) is then formulated as follows:
\[\mathbf{\mathcal{F}}=\operatorname*{argmin}_{\mathbf{\mathcal{F}}}\lambda||\mathbf{ \mathcal{F}}||_{1}+\frac{1}{2}||\mathbf{\mathcal{F}}-\mathbf{\mathcal{Z}}||_{F}^{2}, \tag{23}\]
where \(\mathbf{\mathcal{Z}}=\mathbf{\mathcal{X}}-\mathbf{\mathcal{B}}+\mathbf{\mathcal{H}}+\mathbf{ \mathcal{T}}+\Big{(}\mathbf{\mathcal{Y}}_{1}+\mathbf{\mathcal{Y}}_{2}+\mathbf{\mathcal{Y}}_{3 }\Big{)}/\mu\). For \(k+1\) iteration, \(\mathbf{\mathcal{F}}^{k+1}\) can be updated as:
\[\mathbf{\mathcal{F}}^{k+1}=\operatorname*{argmin}_{\mathbf{\mathcal{F}}}\lambda||\mathbf{\mathcal{F}}||_{1}+\frac{1}{2}||\mathbf{\mathcal{F}}-\mathbf{\mathcal{Z}}^{k}||_{F}^{2}=\mathrm{T}_{\lambda}(\mathbf{\mathcal{Z}}^{k}), \tag{24}\]
where \(\mathrm{T}_{\lambda}(\cdot)\) denotes the element-wise soft-thresholding operator.
#### Iii-B5 **Updating the Multipliers and \(\mu\)**
Finally, the Lagrangian multiplier tensors and the penalty parameter are updated in each iteration following the constraints in (3):
\[\begin{split}&\mathbf{\mathcal{Y}}_{1}^{k+1}=\mathbf{\mathcal{Y}}_{1}^{k}+\mu^{k}(\mathbf{\mathcal{X}}-\mathbf{\mathcal{B}}^{k+1}-\mathbf{\mathcal{F}}^{k+1}),\\ &\mathbf{\mathcal{Y}}_{2}^{k+1}=\mathbf{\mathcal{Y}}_{2}^{k}+\mu^{k}(\mathbf{\mathcal{H}}^{k+1}-\mathbf{\mathcal{F}}^{k+1}),\\ &\mathbf{\mathcal{Y}}_{3}^{k+1}=\mathbf{\mathcal{Y}}_{3}^{k}+\mu^{k}(\mathbf{\mathcal{T}}^{k+1}-\mathbf{\mathcal{F}}^{k+1}),\ \mu^{k+1}=\min(\rho\mu^{k},\mu_{max}).\end{split} \tag{25}\]
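A sketch of the soft-thresholding operator used in Eq. (24):

```python
import numpy as np

def soft_threshold(Z, lam):
    """The element-wise soft-thresholding operator T_lam of Eq. (24)."""
    return np.sign(Z) * np.maximum(np.abs(Z) - lam, 0.0)

# F-update: F = soft_threshold(Z, lam) with Z assembled as in Eq. (23)
F = soft_threshold(np.random.randn(4, 4, 4), lam=0.1)
```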
#### Iii-B6 **Convergence Conditions**
The following convergence criterion is defined according to the KKT conditions [8, 42]: \(\rho_{1}\leq\zeta\ \&\ \rho_{2}\leq\zeta\ \&\ \rho_{3}\leq\zeta\ \&\ \rho_{4}\leq\zeta\ \&\ \rho_{5}\leq\zeta\), where \(\rho_{1}\leftarrow||\boldsymbol{\mathcal{X}}-\boldsymbol{\mathcal{B}}^{k+1}-\boldsymbol{\mathcal{F}}^{k+1}||_{F}^{2}\), \(\rho_{2}\leftarrow||\boldsymbol{\mathcal{B}}^{k}-\boldsymbol{\mathcal{B}}^{k+1}||_{F}^{2}\), \(\rho_{3}\leftarrow||\boldsymbol{\mathcal{F}}^{k}-\boldsymbol{\mathcal{F}}^{k+1}||_{F}^{2}\), \(\rho_{4}\leftarrow||\boldsymbol{\mathcal{F}}^{k}-\boldsymbol{\mathcal{H}}^{k}||_{F}^{2}\), and \(\rho_{5}\leftarrow||\boldsymbol{\mathcal{F}}^{k}-\boldsymbol{\mathcal{T}}^{k}||_{F}^{2}\). \(\zeta\) is the tolerance factor that controls the convergence criterion. Algorithm 1 summarizes the main steps.
```
Input: \(\boldsymbol{\mathcal{X}}\), \(\gamma_{1}>0\), \(\gamma_{2}>0\), \(\lambda\), \(\boldsymbol{\mathcal{L}}_{s}\) & \(\mathbf{L}_{t}\) using (5)-(7).
Initialization: \(\mu^{0}=0.01,\mu_{max}=10,\rho=1.2,\zeta=0.001,\)
\(\{\boldsymbol{\mathcal{B}}^{0},\boldsymbol{\mathcal{F}}^{0},\boldsymbol{\mathcal{T}}^{0},\boldsymbol{\mathcal{H}}^{0},\boldsymbol{\mathcal{Y}}_{1}^{0},\boldsymbol{\mathcal{Y}}_{2}^{0},\boldsymbol{\mathcal{Y}}_{3}^{0}\}=0.\)
while not converged (\(k=0,1,..\)) do
  1. Update \(\boldsymbol{\mathcal{B}}^{k+1}\) using (10)-(12).
  2. Update \(\boldsymbol{\mathcal{T}}_{(3)}^{k+1}\) using (17) and \(\boldsymbol{\mathcal{T}}=\text{fold}(\boldsymbol{\mathcal{T}}_{(3)}^{k+1})\).
  3. Update \(\boldsymbol{\mathcal{H}}^{(j)k+1}\) using (22).
  4. Update \(\boldsymbol{\mathcal{F}}^{k+1}\) using (24).
  5. Update \(\mu^{k+1}\) and \(\{\boldsymbol{\mathcal{Y}}_{j}^{k+1}\}_{j=1}^{3}\) using (25).
  6. Check convergence.
end
Output: \(\boldsymbol{\mathcal{B}}^{k+1},\boldsymbol{\mathcal{F}}^{k+1}\)
```
**Algorithm 1** Pseudo-code of the STRPCA model.
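The stopping test in step 6 can be sketched as follows; the tensors are numpy arrays and the residuals follow the five \(\rho\) terms above:

```python
import numpy as np

def converged(X, B, F, B_prev, F_prev, H, T, zeta=1e-3):
    """A sketch of the KKT-style stopping test of Algorithm 1: all five
    squared Frobenius residuals must fall below the tolerance zeta."""
    residuals = [
        np.linalg.norm(X - B - F) ** 2,    # rho_1: primal feasibility
        np.linalg.norm(B_prev - B) ** 2,   # rho_2: change in B
        np.linalg.norm(F_prev - F) ** 2,   # rho_3: change in F
        np.linalg.norm(F - H) ** 2,        # rho_4: F/H agreement
        np.linalg.norm(F - T) ** 2,        # rho_5: F/T agreement
    ]
    return all(r <= zeta for r in residuals)

ok = converged(*(np.zeros((4, 4, 4)) for _ in range(7)))   # trivially True
```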
### _Online Optimization of STRPCA (O-STRPCA) Model_
Although batch optimization is effective in terms of accuracy, it is computationally demanding: model (3) requires all video frames to be stored in memory, which is not always possible, especially for real-time processing. To fill this gap, Feng _et al._ suggested an online stochastic optimization approach that processes one frame per time instance [21], resolving the computational complexity issues; however, it lacks structural constraints and applies only to matrix-based RPCA problems. We solve model (3) using an online optimization technique in which one sample from the input tensor is processed per time instance. In contrast to [21], the proposed O-STRPCA model includes tensor-based spatial and temporal graph-based Laplacian regularizations.
To achieve this, we first unfold the model (3) before utilizing an online optimization to solve each unfolded matrix. Model (3) unfolded may be represented as follows:
\[\min_{\begin{subarray}{c}\boldsymbol{\mathcal{B}}_{(m)}, \boldsymbol{\mathcal{F}}_{(m)}\\ m=1,2,\cdots,M\end{subarray}}\sum_{m=1}^{M}\Big{(}||\boldsymbol{\mathcal{B}}_{( m)}||_{*}+\lambda||\boldsymbol{\mathcal{F}}_{(m)}||_{1}+\gamma_{1}\text{Tr}( \boldsymbol{\mathcal{H}}_{(m)}^{\top}\boldsymbol{\mathcal{L}}_{s(m)}\] \[\boldsymbol{\mathcal{H}}_{(m)})+\gamma_{2}\text{Tr}(\boldsymbol{ \mathcal{T}}_{(m)}\mathbf{L}_{t}\boldsymbol{\mathcal{T}}_{(m)}^{\top})\Big{)}, \text{ such that }\boldsymbol{\mathcal{X}}_{(m)}=\boldsymbol{\mathcal{B}}_{(m)}+\boldsymbol{ \mathcal{F}}_{(m)},\] \[\boldsymbol{\mathcal{H}}_{(m)}=\boldsymbol{\mathcal{F}}_{(m)},\ \ \boldsymbol{ \mathcal{T}}_{(m)}=\boldsymbol{\mathcal{F}}_{(m)},\ m=1,2,\cdots,M,\ \text{and}\] \[\{\boldsymbol{\mathcal{X}},\boldsymbol{\mathcal{B}}, \boldsymbol{\mathcal{F}}\}=\{\text{fold}(\boldsymbol{\mathcal{X}}_{(m)}), \text{fold}(\boldsymbol{\mathcal{B}}_{(m)}),\text{fold}(\boldsymbol{ \mathcal{F}}_{(m)})\}, \tag{26}\]
where \(M\) is the total number of modes in a tensor. To solve (26), we first convert it to an unconstrained problem as:
\[\min_{\begin{subarray}{c}\boldsymbol{\mathcal{B}}_{(m)}, \boldsymbol{\mathcal{F}}_{(m)}\\ m=1,2,\cdots,M\end{subarray}}\sum_{m=1}^{M}\Big{(}||\boldsymbol{\mathcal{B}}_{( m)}||_{*}+\lambda||\boldsymbol{\mathcal{F}}_{(m)}||_{1}+\gamma_{1}\text{Tr}( \boldsymbol{\mathcal{H}}_{(m)}^{\top}\boldsymbol{\mathcal{L}}_{s(m)}\] \[\boldsymbol{\mathcal{H}}_{(m)})+\gamma_{2}\text{Tr}(\boldsymbol{ \mathcal{T}}_{(m)}\mathbf{L}_{t}\boldsymbol{\mathcal{T}}_{(m)}^{\top})+|| \boldsymbol{\mathcal{X}}_{(m)}-\boldsymbol{\mathcal{B}}_{(m)}-\boldsymbol{ \mathcal{F}}_{(m)}||_{F}^{2}+\] \[||\boldsymbol{\mathcal{H}}_{(m)}-\boldsymbol{\mathcal{F}}_{(m)}|| _{F}^{2}+||\boldsymbol{\mathcal{T}}_{(m)}-\boldsymbol{\mathcal{F}}_{(m)}||_{F }^{2}\Big{)}. \tag{27}\]
A significant obstacle to solving model (27) in an online manner is the nuclear norm minimization, for which an SVD must be estimated in each iteration and which strongly couples all principal components. To fill this gap, we approximate the nuclear norm via a matrix factorization [21]: the nuclear norm of \(\mathbf{\mathcal{B}}_{(m)}\) equals half the sum of the squared Frobenius norms of a basis \(\mathbf{\mathcal{U}}\) and its coefficients \(\mathbf{\mathcal{V}}\), and it can be expressed as:
\[||\boldsymbol{\mathcal{B}}_{(m)}||_{*}=\min_{\boldsymbol{\mathcal{U}}_{(m)} \in\mathbb{R}^{\gamma\epsilon\times},\boldsymbol{\mathcal{V}}_{(m)}\in \mathbb{R}^{\gamma\epsilon\times}}\frac{1}{2}\{||\boldsymbol{\mathcal{U}}_{(m)}|| _{F}^{2}+||\boldsymbol{\mathcal{V}}_{(m)}||_{F}^{2}\}, \tag{28}\] \[\text{such that }\boldsymbol{\mathcal{B}}_{(m)}=\boldsymbol{\mathcal{U}}_{(m)} \boldsymbol{\mathcal{V}}_{(m)},\]
where \(p\) denotes the dimension of each sample in \(\mathbf{\mathcal{U}}_{(m)}\), \(r\) is the rank, and \(q\) is the number of samples in \(\mathbf{\mathcal{V}}_{(m)}\). Note that the dimensions of the unfolded matrices \(\mathbf{\mathcal{U}}_{(m)}\) and \(\mathbf{\mathcal{V}}_{(m)}\) vary according to the size of \(\mathbf{\mathcal{B}}_{(m)}\). By substituting Eq. (28) into Eq. (27), we get
\[\min_{\begin{subarray}{c}\boldsymbol{\mathcal{U}}_{(m)}, \boldsymbol{\mathcal{V}}_{(m)},\boldsymbol{\mathcal{F}}_{(m)}\\ m=1,2,\cdots,M\end{subarray}}\sum_{m=1}^{M}\Big{(}||\boldsymbol{\mathcal{X}}_{(m)}- \boldsymbol{\mathcal{U}}_{(m)}\boldsymbol{\mathcal{V}}_{(m)}-\boldsymbol{ \mathcal{F}}_{(m)}||_{F}^{2}+||\boldsymbol{\mathcal{H}}_{(m)}-\] \[\boldsymbol{\mathcal{F}}_{(m)}||_{F}^{2}+||\boldsymbol{\mathcal{T}}_ {(m)}-\boldsymbol{\mathcal{F}}_{(m)}||_{F}^{2}+\frac{1}{2}\{||\boldsymbol{ \mathcal{U}}_{(m)}||_{F}^{2}+||\boldsymbol{\mathcal{V}}_{(m)}||_{F}^{2}\}+\] \[\lambda||\boldsymbol{\mathcal{F}}_{(m)}||_{1}+\gamma_{1}\text{Tr}( \boldsymbol{\mathcal{H}}_{(m)}^{\top}\boldsymbol{\mathcal{L}}_{s(m)} \boldsymbol{\mathcal{H}}_{(m)})+\gamma_{2}\text{Tr}(\boldsymbol{\mathcal{T}}_{(m )}\mathbf{L}_{t}\boldsymbol{\mathcal{T}}_{(m)}^{\top})\Big{)}. \tag{29}\]
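As a quick numerical illustration of the factorization identity in Eq. (28), splitting the SVD factors of \(\mathbf{\mathcal{B}}_{(m)}\) evenly between the two factors attains the bound with equality; this check is illustrative only and not part of the solver:

```python
import numpy as np

# Check of Eq. (28): with U = U_s sqrt(S) and V = sqrt(S) V_s^T,
# the bound 1/2(||U||_F^2 + ||V||_F^2) equals the nuclear norm.
B = np.random.rand(20, 15)
U_s, s, Vh = np.linalg.svd(B, full_matrices=False)
U = U_s * np.sqrt(s)            # scale columns by sqrt(singular values)
V = np.sqrt(s)[:, None] * Vh    # scale rows likewise
assert np.allclose(U @ V, B)
nuclear = s.sum()
bound = 0.5 * (np.linalg.norm(U) ** 2 + np.linalg.norm(V) ** 2)
assert np.isclose(nuclear, bound)
```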
The above formulation still operates batch-wise. To convert it to an online formulation, we introduce vectorized samples of each tensor and process one frame per time instance:
\[\begin{split}&\min_{\mathbf{\mathcal{U}}_{(m)},\mathbf{v}_{m}^{i},\mathbf{f}_{m}^{i}}\sum_{i=1}^{N}\Bigg{(}\sum_{m=1}^{M}||\mathbf{x}_{m}^{i}-\mathbf{\mathcal{U}}_{(m)}\mathbf{v}_{m}^{i}-\mathbf{f}_{m}^{i}||_{F}^{2}+||\mathbf{h}_{m}^{i}-\mathbf{f}_{m}^{i}||_{F}^{2}\\ &+\lambda||\mathbf{f}_{m}^{i}||_{1}+||\mathbf{t}_{m}^{i}-\mathbf{f}_{m}^{i}||_{F}^{2}+\frac{1}{2}||\mathbf{v}_{m}^{i}||_{F}^{2}+\gamma_{1}\text{Tr}(\mathbf{h}_{m}^{i\top}\mathbf{\mathcal{L}}_{s(m)}^{i}\mathbf{h}_{m}^{i})\\ &+\gamma_{2}\text{Tr}(\mathbf{t}_{m}^{i\top}\mathbf{L}_{t}^{i}\mathbf{t}_{m}^{i})\Bigg{)}+\frac{1}{2}\sum_{m=1}^{M}||\mathbf{\mathcal{U}}_{(m)}||_{F}^{2},\end{split} \tag{30}\]
where \(N\) is the total number of vectorized samples and \(\mathbf{x}_{m}^{i}\), \(\mathbf{v}_{m}^{i}\), \(\mathbf{f}_{m}^{i}\), \(\mathbf{h}_{m}^{i}\), and \(\mathbf{t}_{m}^{i}\) denote the \(i\)-th columns of the corresponding unfolded matrices. A single incoming column carries little spatial context; for this reason, we use the final few columns of \(\mathbf{\mathcal{X}}_{(m)}\). We divide each \(\mathbf{x}_{m}^{i}\) into non-overlapping patches, much like in the batch optimization, and then compute \(\mathbf{\mathcal{L}}_{s(m)}^{i}\). Every time a new column \(\mathbf{x}_{m}^{i+1}\) is received, the estimated \(\mathbf{\mathcal{L}}_{s(m)}^{i}\) is updated by discarding the information of the first column \(\mathbf{x}_{m}^{i}\) and adding the new column information to obtain \(\mathbf{\mathcal{L}}_{s(m)}^{i+1}\) using Eq. (7). Similarly, a temporal graph-based Laplacian matrix \(\mathbf{L}_{t}^{i+1}\) is estimated by updating the matrix \(\mathbf{L}_{t}^{i}\) for each new incoming column \(\mathbf{x}_{m}^{i+1}\).
#### Iii-B1 Initialization of \(\mathbf{\mathcal{U}}_{(m)}\)
We initialize the basis matrix \(\mathbf{\mathcal{U}}_{(m)}\) by taking the first \(r\) columns of the matrix \(\mathbf{\mathcal{X}}_{(m)}\) and encoding the spatial and temporal information of the matrices \(\mathbf{\mathcal{L}}_{s(m)}^{i}\) and \(\mathbf{L}_{t}^{i}\) as: \(\mathbf{\mathcal{U}}_{(m)}=[\mathbf{\tilde{L}}_{t}^{i}(\mathbf{x}_{m}^{1},\mathbf{x}_{m}^{2},\cdots,\mathbf{x}_{m}^{r})\mathbf{\mathcal{L}}_{s(m)}^{i}]\), where \(\mathbf{\tilde{L}}_{t}^{i}\) is a block of the matrix \(\mathbf{L}_{t}^{i}\) with dimensions \(r\times r\). This step yields a small basis matrix \(\mathbf{\mathcal{U}}_{(m)}\) with a low memory footprint.
#### Iii-B2 Solution for \(\mathbf{v}_{m}^{i}\)
By fixing other variables in Eq. (30), a solution to the problem \(\mathbf{v}_{m}^{i}\) is then formulated as follows:
\[\min_{\mathbf{v}_{m}^{i}}\lvert\lvert\mathbf{x}_{m}^{i}-\mathbf{ \mathcal{U}}_{(m)}\mathbf{v}_{m}^{i}-\mathbf{f}_{m}^{i}\rvert\lvert^{2}_{F}+ \frac{1}{2}\lvert\lvert\mathbf{v}_{m}^{i}\rvert\lvert^{2}_{F}. \tag{31}\]
which is solved via least-squares estimation by setting its derivative to zero. A closed-form solution is then obtained as:
\[\mathbf{v}_{m}^{i}=(\mathbf{\mathcal{U}}_{(m)}^{\top}\mathbf{\mathcal{U}}_{(m)}+ \lambda_{2}\mathbf{I})^{-1}\mathbf{\mathcal{U}}_{(m)}^{\top}(\mathbf{x}_{m}^{i}- \mathbf{f}_{m}^{i}). \tag{32}\]
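Eq. (32) is a standard ridge-regression solve and can be sketched as:

```python
import numpy as np

def update_v(U, x, f, lam2):
    """A sketch of the ridge-regression update in Eq. (32)."""
    r = U.shape[1]
    return np.linalg.solve(U.T @ U + lam2 * np.eye(r), U.T @ (x - f))

v = update_v(np.random.rand(1024, 10), np.random.rand(1024),
             np.zeros(1024), lam2=1.0)
```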
#### Iii-B3 Solution for \(\mathbf{h}_{m}^{i}\)
Keeping other variables fixed in (30), a solution to the sub-problem \(\mathbf{h}_{m}^{i}\) is formulated as follows:
\[\min_{\mathbf{h}_{m}^{i}}\lvert\lvert\mathbf{h}_{m}^{i}-\mathbf{f }_{m}^{i}\rvert\lvert^{2}_{F}+\gamma_{1}\text{Tr}(\mathbf{h}_{m}^{i\top}\mathbf{ \mathcal{L}}_{s(m)}^{i}\mathbf{h}_{m}^{i}), \tag{33}\]
Taking the derivative, we get the closed-form solution \(\mathbf{h}_{m}^{i}=(\gamma_{1}\mathbf{\mathcal{L}}_{s(m)}^{i}+\mathbf{I})^{-1}\mathbf{f}_{m}^{i}\), where \(\mathbf{\mathcal{L}}_{s(m)}^{i\top}=\mathbf{\mathcal{L}}_{s(m)}^{i}\) since the Laplacian is symmetric.
#### Iii-B4 Solution for \(\mathbf{t}_{m}^{i}\)
Similarly to \(\mathbf{h}_{m}^{i}\), the closed-form solution for \(\mathbf{t}_{m}^{i}\) is obtained as: \(\mathbf{t}_{m}^{i}=(\gamma_{2}\mathbf{L}_{t}^{i}+\mathbf{I})^{-1}\mathbf{f}_{m}^{i}\).
#### Iii-B5 Solution for \(\mathbf{f}_{m}^{i}\)
A solution to this sub-problem is formulated from Eq. (30) as:
\[\min_{\mathbf{f}_{m}}\lvert\lvert\mathbf{x}_{m}^{i}-\mathbf{ \mathcal{U}}_{(m)}\mathbf{v}_{m}^{i}-\mathbf{f}_{m}^{i}\rvert\lvert^{2}_{F}+ \lvert\lvert\mathbf{h}_{m}^{i}-\mathbf{f}_{m}^{i}\rvert\lvert^{2}_{F}+ \lambda\lvert\mathbf{f}_{m}^{i}\rvert\lvert_{1}\] \[+\lvert\lvert\mathbf{t}_{m}^{i}-\mathbf{f}_{m}^{i}\rvert\lvert^{2}_{F }=\min_{\mathbf{f}_{m}^{i}}\lambda\lvert\lvert\lvert\mathbf{f}_{m}^{i}\rvert \lvert_{1}+\lvert\lvert\mathbf{f}_{m}^{i}-\mathbf{q}_{m}^{i}\rvert\lvert^{2}_{F },\text{where} \tag{34}\] \[\mathbf{q}_{m}^{i}=(\mathbf{x}_{m}^{i}-\mathbf{\mathcal{U}}_{(m)} \mathbf{v}_{m}^{i}+\mathbf{h}_{m}^{i}+\mathbf{t}_{m}^{i})/2.\]
Then, a closed-form solution can be obtained using the soft-thresholding operation \(\mathbf{f}_{m}^{i}=\mathrm{T}_{\lambda}(\mathbf{q}_{m}^{i})\).
#### Iii-B6 Basis \(\mathbf{\mathcal{U}}_{(m)}\) Update
The basis matrix can be updated in two different ways: by directly obtaining the closed-form solution or by adopting the stochastic gradient descent method. For the closed-form solution, we first define two auxiliary matrices, \(\mathbf{V}_{m}^{i\top}=[\mathbf{v}_{m}^{1},\mathbf{v}_{m}^{2},\cdots,\mathbf{v}_{m}^{i}]\in\mathbb{R}^{i\times r}\) and \(\mathbf{R}_{m}^{i}=[\mathbf{r}_{m}^{1},\mathbf{r}_{m}^{2},\cdots,\mathbf{r}_{m}^{i}]\in\mathbb{R}^{p\times i}\), where each \(\mathbf{r}_{m}^{i}=\mathbf{x}_{m}^{i}-\mathbf{f}_{m}^{i}\). \(\mathbf{\mathcal{U}}_{(m)}\) is then updated as: \(\mathbf{\mathcal{U}}_{(m)}^{i}=\Theta_{(m)}^{i}(\theta_{(m)}^{i}+\lambda\mathbf{I})^{-1}\), where \(\Theta_{(m)}^{i}=\Theta_{(m)}^{i-1}+\mathbf{r}_{m}^{i}\mathbf{v}_{m}^{i\top}\) and \(\theta_{(m)}^{i}=\theta_{(m)}^{i-1}+\mathbf{v}_{m}^{i}\mathbf{v}_{m}^{i\top}\), with \(\Theta_{(m)}^{i}\in\mathbb{R}^{p\times r}\) and \(\theta_{(m)}^{i}\in\mathbb{R}^{r\times r}\). Using stochastic gradient descent, the update of \(\mathbf{\mathcal{U}}_{(m)}^{i}\) is given by:
\[\nabla_{\mathbf{\mathcal{U}}_{(m)}^{i}}f(\mathbf{\mathcal{U}}_{(m)})=\mathbf{ \mathcal{U}}_{(m)}\mathbf{v}_{m}^{i}\mathbf{v}_{m}^{i\top}-\mathbf{r}_{m}^{i} \mathbf{v}_{m}^{i\top}+\lambda\mathbf{\mathcal{U}}_{(m)} \tag{35}\] \[\mathbf{\mathcal{U}}_{(m)}^{i}\leftarrow\mathbf{\mathcal{U}}_{(m)}^{i-1}- \eta\nabla_{\mathbf{\mathcal{U}}_{(m)}^{i}}f(\mathbf{\mathcal{U}}_{(m)}),\]
where \(\eta>0\) is the learning rate. Then, using average pooling on mode-m foldings, the low-rank \(\mathbf{\mathcal{B}}\) and sparse \(\mathbf{\mathcal{F}}\) tensors are produced as follows:
\[\mathbf{\mathcal{B}}=\frac{1}{M}\sum_{m=1}^{M}fold(\mathbf{\mathcal{B}}_{m}),\text{ and }\mathbf{\mathcal{F}}=\frac{1}{M}\sum_{m=1}^{M}fold(\mathbf{\mathcal{F}}_{m}). \tag{36}\]
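Both variants of the basis update can be sketched in a few lines; the `eta=None` switch selecting the closed form is a convention of this sketch:

```python
import numpy as np

def update_basis(U, Theta, theta, r_i, v_i, lam, eta=None):
    """A sketch of the two basis updates above. After accumulating the
    new sample, eta=None selects the closed form
    U = Theta (theta + lam I)^{-1}; otherwise one stochastic gradient
    step of Eq. (35) is taken."""
    Theta = Theta + np.outer(r_i, v_i)   # Theta^i = Theta^{i-1} + r v^T
    theta = theta + np.outer(v_i, v_i)   # theta^i = theta^{i-1} + v v^T
    if eta is None:
        r = theta.shape[0]
        U = np.linalg.solve((theta + lam * np.eye(r)).T, Theta.T).T
    else:
        grad = U @ np.outer(v_i, v_i) - np.outer(r_i, v_i) + lam * U
        U = U - eta * grad
    return U, Theta, theta

p, r = 1024, 10
U, Theta, theta = np.random.rand(p, r), np.zeros((p, r)), np.zeros((r, r))
U, Theta, theta = update_basis(U, Theta, theta,
                               np.random.rand(p), np.random.rand(r), lam=1.0)
```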
Model (30) converges to the optimal solution per-time instance if \(\frac{\max(||\mathbf{f}_{m}^{i}||_{2},\,||\mathbf{v}_{m}^{i}||_{2})}{p}<\omega\), where \(\omega\) is a tolerance parameter for the convergence criterion, following [21, 44]. Algorithm 2 summarizes the O-STRPCA model.
```
Input: \(\mathbf{\mathcal{X}}\), \(\gamma_{1}>0\), \(\gamma_{2}>0\), \(\lambda\); set entries of \(\mathbf{\mathcal{B}}_{m}\), \(\mathbf{\mathcal{F}}_{m}\), \(\mathbf{\mathcal{L}}_{s(m)}^{i}\) & \(\mathbf{L}_{t}^{i}\) per-time instance.
Initialize: \(r>0\), \(\omega>0\), \(\eta>0\), \(\mathbf{\mathcal{U}}_{(m)}=0\).
while not converged (\(k=0,1,..\)) do
  1. Access each column \(\mathbf{x}_{m}^{i}\) from \(\mathbf{\mathcal{X}}_{(m)}\).
  2. Estimate \(\mathbf{\mathcal{L}}_{s(m)}^{i}\) & \(\mathbf{L}_{t}^{i}\).
  3. Update \(\mathbf{v}_{m}^{i}\) using (32).
  4. Update \(\mathbf{h}_{m}^{i}\) and \(\mathbf{t}_{m}^{i}\) using their closed-form solutions.
  5. Update \(\mathbf{f}_{m}^{i}\) using the soft-thresholding of (34).
  6. Update the basis \(\mathbf{\mathcal{U}}_{(m)}^{i}\) using (35).
  7. Check convergence.
end
Output: \(\mathbf{\mathcal{B}}\), \(\mathbf{\mathcal{F}}\) using (36).
```
**Algorithm 2** Pseudo-code of the O-STRPCA model.

## IV Experiments

### _Experimental Setup_
All experiments are carried out on a standard desktop workstation with 128 GB RAM and an Intel Xeon E5-2698 v4 2.2 GHz (20-core) CPU. We implemented both optimization models (3) and (30) using MATLAB 2023a and the LRS Library 1. We also used the FLANN library [45] for the construction of spatial and temporal graphs. Furthermore, for comparison, all results are obtained directly from the authors' publications, and some of the existing RPCA- and TRPCA-based background subtraction methods are implemented using the official codes supplied by their authors.
Footnote 1: [https://github.com/andrewsobral/nslibrary](https://github.com/andrewsobral/nslibrary)
### _Datasets_
#### Iv-B1 Change Detection 2014 Dataset
One of the largest background subtraction benchmark datasets, Change Detection 2014 (CD14), contains 53 challenging video sequences recorded in both indoor and outdoor situations using PTZ, IP, and infrared cameras. The sequences are categorized into 11 different categories based on the background scene: Low Frame Rate (LFR), turbulence, Dynamic Background (DB), Intermittent Object Motion (IOM), Camera Jitter (CJ), shadow, thermal, baseline, Night Videos (NVs), Bad Weather (BW), and PTZ. Each category contains four to six video sequences. There are 159,278 video frames in total, with frame sizes ranging from \(320\times 240\) to \(720\times 576\) pixels. To assess the effectiveness of the background subtraction techniques, ground-truth images of the foreground mask are also given for each sequence.
#### Iv-B2 I2R Dataset
The Institute for Infocomm Research (I2R) dataset contains nine video sequences of varying sizes and resolutions captured from both indoor and outdoor scenes using a static camera [34]. The sequences experience intensely complicated background alterations, including crowded foregrounds and both gradual and sudden background variations, such as flickering water surfaces and fountains. There are a total of 21,901 video frames, with sequence sizes ranging from \(120\times 160\) to \(256\times 320\) pixels. Additionally, each sequence includes 20 foreground-mask ground-truth images to assess how well the background subtraction techniques perform.
#### Iv-B3 BMC 2012 Dataset
The Background Models Challenge 2012 (BMC12) dataset contains both synthetic and natural real-time surveillance videos [61]. The real videos comprise nine long-term sequences captured in outdoor scenes, mainly containing background variation challenges such as the presence of vegetation, cast shadows, continuous car flow, changing weather conditions, sudden lighting changes, and the presence of big objects. There are a total of 265,715 video frames, with each sequence sized \(240\times 320\) pixels. To assess the effectiveness of the background subtraction techniques, several ground-truth images of the foreground mask are also given for each video.
#### Iv-B4 Wallflower Dataset
The Wallflower dataset contains eight complex video sequences, two of which overlap with the I2R dataset. The videos are captured in outdoor and indoor scenes using a static camera and contain dynamic backgrounds, such as waving trees, as well as foreground aperture challenges. There are a total of 11,748 frames, with each sequence sized \(120\times 160\) pixels. To assess the background subtraction methods, one ground-truth image of the foreground mask is provided for each sequence.
#### Iv-B5 SABS Dataset
The Stuttgart Artificial Background Subtraction (SABS) dataset contains nine synthetic videos for pixel-wise evaluation of moving object segmentation [11]. The dataset introduces common surveillance challenges in the background scene, including bootstrap, dynamic background, light switch, darkening, shadow, camouflage, and noisy night. There are a total of 7,000 frames, with each sequence sized \(600\times 800\) pixels, along with several ground-truth images. For a fair comparison of background subtraction techniques, the sequences in this dataset are separated into training and testing splits; we solely employ the testing sequences to report the performance of the proposed algorithms.
#### Iv-B6 SBM-RGBD Dataset
The Scene Background Modeling RGB-Depth (SBM-RGBD) dataset is designed to evaluate moving object segmentation approaches on both RGB and depth channels [12]. The dataset contains 33 video sequences captured in indoor scenes, divided into seven distinct attributes or challenges: illumination changes (strong and mild variations), color camouflage, depth camouflage, intermittent motion, out-of-sensor range, shadows, and bootstrapping. There are 15,041 frames in total, each with a spatial resolution of \(640\times 480\) pixels, along with a number of ground-truth images. The depth channel of each sequence is recorded at either 16 or 8 bits. To analyze the background subtraction performance, we employ both RGB and depth channels.
### _Performance Evaluation Metrics_
The \(F\)-measure score, precision, and recall assessment metrics are used to report the background subtraction performance. The precision and recall measures serve as the foundation for the \(F\)-measure score, which may be calculated using:
\[F=\frac{2*Precision*Recall}{Precision+Recall}, \tag{37}\] \[Recall=\frac{TP}{TP+FN},\text{ and }Precision=\frac{TP}{TP+FP},\]
where TP, FN, and FP denote true positives, false negatives, and false positives, respectively. In the predicted background subtraction masks, TP is the number of moving-object pixels correctly identified as foreground, FP is the number of background pixels erroneously classified as foreground, and FN is the number of moving-object pixels erroneously classified as background. High values of these metrics reflect better background subtraction performance.
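A sketch of computing these metrics from binary masks:

```python
import numpy as np

def f_measure(pred, gt):
    """A sketch of Eq. (37) on binary masks, where True marks a
    moving-object (foreground) pixel."""
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    return 2 * precision * recall / max(precision + recall, 1e-12)

score = f_measure(np.random.rand(240, 320) > 0.5,
                  np.random.rand(240, 320) > 0.5)
```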
### _Ablation Studies_
#### Iv-D1 **Graph-based Hyper-parameters \(\gamma_{1}\) and \(\gamma_{2}\)**
The ideal graph-based parameters are selected in this ablation study. The spatial and temporal graph-based regularization parameters determine the relative importance of the corresponding terms in model (3). We established these parameters by running background subtraction experiments on the nine sequences of the I2R dataset. For each parameter, we defined a range of values,
\(\Pi=\{0.1,0.3,0.6,0.9,1.2,1.5,1.8\}\). We test these values by fixing one regularization parameter \(\gamma_{1}\) and varying \(\gamma_{2}\) over \(\Pi\), which generates \(7\times 7\) combinations of \(F\) scores for a single sequence. Applying this procedure to the nine sequences produced a \(7\times 7\times 9\) combination; we averaged the \(F\) scores over the sequences and present the resulting \(7\times 7\) grid in Fig. 3. We empirically found that the best score, \(F=0.960\), is obtained with \(\gamma_{1}=0.9\) and \(\gamma_{2}=1.5\). The same parameter values are then applied to the other datasets without any adjustment.
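The grid search itself is straightforward to sketch; `run_strpca` and the sequence identifiers below are hypothetical stand-ins for the actual solver and data:

```python
import numpy as np
from itertools import product

def run_strpca(sequence, gamma1, gamma2):
    """Placeholder: the real call would run STRPCA on one I2R sequence
    and return its F score (hypothetical stand-in for this sketch)."""
    rng = np.random.default_rng(hash((sequence, gamma1, gamma2)) % 2**32)
    return float(rng.random())

Pi = [0.1, 0.3, 0.6, 0.9, 1.2, 1.5, 1.8]
sequences = [f"i2r_{k}" for k in range(9)]     # placeholder sequence ids
scores = np.zeros((len(Pi), len(Pi)))
for (i, g1), (j, g2) in product(enumerate(Pi), enumerate(Pi)):
    scores[i, j] = np.mean([run_strpca(s, g1, g2) for s in sequences])
i_best, j_best = np.unravel_index(scores.argmax(), scores.shape)
print(Pi[i_best], Pi[j_best])   # reported best: gamma1 = 0.9, gamma2 = 1.5
```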
#### V-B2 Variants of the Proposed Algorithm
We test different variants of the proposed STRPCA algorithm in this experiment. Setting \(\gamma_{1}=0\) and \(\gamma_{2}>0\) in model (3) yields a Temporally-regularized TRPCA (T-TRPCA) model, while setting \(\gamma_{2}=0\) and \(\gamma_{1}>0\) yields a Spatially-regularized TRPCA (S-TRPCA) model. In addition, setting \(\gamma_{1}=\gamma_{2}=0\) in (3) recovers the classical TRPCA model [42]. To assess the benefit of integrating the spatial-temporal constraints on tensors, we also implemented a matrix-based form of STRPCA, denoted SRPCA: the RPCA model [67] is modified to include the graph-based regularizations presented in Sec. III-C and is solved using the batch-based ADMM optimization technique.
Table II displays the performance of all these variants on the Wallflower, SABS, and BMC12 datasets. Overall, the proposed STRPCA model exhibited the best performance, while the traditional TRPCA model degraded significantly. The proposed method outperformed these alternatives by a significant margin, highlighting the advantages of adding both spatial and temporal regularizations to the traditional TRPCA model. The S-TRPCA model outperformed the T-TRPCA model on the SABS dataset, where many sequences exhibit local variations such as dark scenes in the background model; these local changes interfere with temporally similar pixels in the temporal graph and degrade the temporal regularization. Numerous sequences in the BMC12 and Wallflower datasets experience global dynamic background fluctuations. There, the T-TRPCA model performed better than the S-TRPCA model because the temporal regularization captures background changes in the temporal domain more effectively, whereas the spatial regularization leaves many spatially adjacent pixels unconnected in the spatial graph. The STRPCA model also outperformed the matrix-based SRPCA variant on all datasets, highlighting the advantage of applying graph-based constraints to multi-dimensional tensor data.
#### IV-D3 **Nearest Neighbor and Patch Size in Spatial Graph**
We also carried out an ablation study on the nearest neighbor (\(k\)) and patch size (\(a\times a\)) values used to build the spatial and temporal graphs. Table III reports the \(F\) score on the I2R dataset for varying values of \(k\) and \(a\times a\). On this dataset, choosing \(k=10\) and a patch size of \(8\times 8\) yields the best results.
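As one illustration of how such a patch-wise spatial graph could be assembled, the sketch below connects each \(a\times a\) patch of a frame to its \(k\) most similar patches; this is our own simplified construction, not the authors' exact implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def spatial_knn_graph(frame, a=8, k=10):
    """Connect each a x a patch of a frame to its k most similar patches.

    frame: (h, w) grayscale image. Patches are graph nodes; similarity is the
    Euclidean distance between vectorized patches.
    """
    h, w = frame.shape
    crop = frame[:h - h % a, :w - w % a].astype(float)
    patches = crop.reshape(crop.shape[0] // a, a, crop.shape[1] // a, a)
    patches = patches.transpose(0, 2, 1, 3).reshape(-1, a * a)  # one row per patch
    tree = cKDTree(patches)
    _, idx = tree.query(patches, k=k + 1)  # k+1: the nearest match is the patch itself
    n = patches.shape[0]
    adj = np.zeros((n, n), dtype=bool)
    for i, nbrs in enumerate(idx):
        adj[i, nbrs[1:]] = True            # skip the self-match at position 0
    return adj
```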
### _Comparison with SOTA Methods_
We compared the results of our proposed algorithms with 15 SOTA methods on six publicly available benchmark datasets. We selected three categories of SOTA methods: RPCA-based, TRPCA-based, and deep learning-based background subtraction.
The five RPCA-based methods included classical RPCA [67], MSCL [30], TS-RPCA [19], LSD [39], and OMoGMF\(+\)TV [73]. These approaches enhance the structured sparsity of the classical RPCA model. MSCL regularizes subspace clustering constraints, TS-RPCA encodes dynamic tree-structured constraints, LSD incorporates a group sparsity structure in the form of overlapped pixels within the sparse component, and OMoGMF\(+\)TV is based on a mixture of Gaussian distributions updated frame by frame in an online manner. The five TRPCA methods included classical TRPCA [42], ORLTM [35],
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Variants & SABS & BMC12 & Wallflower \\ \hline STRPCA & **0.912** & **0.892** & **0.951** \\ \hline SRPCA & 0.855 & 0.816 & 0.877 \\ \hline TRPCA & 0.802 & 0.746 & 0.792 \\ \hline S-TRPCA & **0.860** & 0.827 & 0.837 \\ \hline T-TRPCA & 0.844 & **0.856** & **0.905** \\ \hline \end{tabular}
\end{table} TABLE II: Performance comparison of different variants of the proposed algorithm, including TRPCA, T-TRPCA, and S-TRPCA. The \(F\)-measure score for the BMC12, Wallflower, and SABS datasets is presented. The two top-performing results per dataset are highlighted in bold.
Fig. 3: Ablation study on setting the hyper-parameters \(\gamma_{1}\) and \(\gamma_{2}\). Averaged \(F\)-measure score is reported on the nine sequences of the I2R dataset by fixing \(\gamma_{1}\) and altering \(\gamma_{2}\). We empirically discovered that the optimum \(F\)-measure score of 0.960 is attained by setting \(\gamma_{1}=0.9\) and \(\gamma_{2}=1.5\).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \(k\) & \(k=2\) & \(k=4\) & \(k=6\) & \(k=8\) & \(k=10\) \\ \hline STRPCA & 0.871 & 0.896 & 0.902 & 0.941 & **0.960** \\ \hline \(a\times a\) & \(4\times 4\) & \(8\times 8\) & \(12\times 12\) & \(16\times 16\) & \(20\times 20\) \\ \hline STRPCA & 0.935 & **0.960** & 0.907 & 0.891 & 0.875 \\ \hline \end{tabular}
\end{table} TABLE III: Performance comparison of the proposed STRPCA algorithm on the I2R dataset for various \(k\) and patch size \(a\times a\) values. The best performance of 0.960 is observed using \(k=10\) and \(a\times a=8\times 8\).
NIOTenRPCA [37], TV-TRPCA [14], and ETRPCA [22]. ORLTM is an online tensor-based method that incorporates a background dictionary. NIOTenRPCA is based on online compressive video reconstruction and background subtraction that explicitly models the background disturbance. TV-TRPCA addresses the structured sparsity of the tensor by explicitly modeling the total variation norm. ETRPCA explicitly considers the salient difference information of the input pixels between singular values of tensor data via weighted tensor Schatten \(p\)-norm minimization.
The five deep learning-based methods included ZBS [3], 3DCD [43], CrossNet [38], CascadeCNN [64], and STPNet [70]. ZBS is an unsupervised deep learning technique that relies on zero-shot object detection. 3DCD is a fully supervised method that exploits scene-independent and scene-dependent evaluations to test supervised methods on completely unseen videos for generalized background subtraction tasks. CrossNet employs an end-to-end cross-scene network via 3D optical flow information to address scene-specific challenges. CascadeCNN is a fully supervised block-based method. STPNet is an end-to-end propagation network that captures spatial and temporal features simultaneously. All these existing deep learning methods train their loss functions on the training sequences and perform evaluations on the testing sequences. Therefore, these methods are data-hungry and rely entirely on manual labeling and supervised training to differentiate background and foreground pixels. In contrast, subspace learning methods such as RPCA and TRPCA are completely unsupervised and do not employ any labels for background subtraction.
### _Qualitative Evaluations_
The visual results of the proposed STRPCA algorithm on 12 difficult sequences chosen from the aforementioned six datasets are shown in Fig. 4, along with a comparison to published approaches. STRPCA outperformed the SOTA approaches on every sequence depicted in Fig. 4, from top to bottom. This is because the included graph-based spatiotemporal constraints produce foregrounds with complete spatial structures.
In particular, RPCA (Fig. 4 (c)) showed a considerable performance loss because it was unable to manage the local fluctuations and swaying bushes in the CD12 sequences (_office_ and _fall_). The majority of the compared approaches handled background-foreground separation better in these sequences. The BMC12 sequences, particularly _Vid003_ and _Vid005_, posed challenges of static backgrounds and bad weather conditions. On _Vid003_, all of the compared approaches, aside from RPCA, displayed over-smoothed foreground segmentation (Figs. 4 (d)-(i)); however, on _Vid005_, certain methods, such as TRPCA, LSD, ORLTM, and ZBS, delivered better outcomes than STRPCA. The I2R dataset's _curtain_ and _escalator_ sequences demonstrate the dynamic background and bootstrapping difficulties. Only STRPCA produced superior results; the other approaches could not handle these problems, suffering from ghost artifacts and over-smoothed foreground segmentation in the background scene. The compared approaches also failed to give valid results for the remaining sequences of the SBM-RGBD, Wallflower, and SABS datasets because these sequences involve camouflage, light changes, foreground aperture, and dynamic background modeling challenges.
Overall, STRPCA showed superior qualitative results in comparison to the current subspace learning techniques, highlighting the advantages of taking into account graph-based spatiotemporal constraints inside the sparse component.
### _Quantitative Evaluations_
#### IV-G1 **Evaluations on CD14 Dataset**
We compared the performance of the proposed algorithms with two distinct paradigms, including deep learning-based techniques and subspace learning, on this dataset. While deep learning-based approaches are fully supervised methods that depend on training data, subspace learning methods like RPCA and TRPCA are fully unsupervised methods.
**Comparisons with RPCA and TRPCA-based Methods:** On the CD14 dataset, Table IV compares the performance in terms of average \(F\) scores of the proposed algorithms overall and by category with 10 other RPCA- and TRPCA-based SOTA approaches.
Overall, the \(F\) scores of STRPCA and O-STRPCA were 89.80\(\%\) and 86.20\(\%\), respectively, which is much better than the compared approaches. The compared approaches could not process the CD14 dataset's difficult sequences well, which led to a performance reduction. STRPCA achieved 8.10\(\%\) and 5.10\(\%\) greater accuracy than the MSCL and TV-TRPCA approaches, whereas O-STRPCA produced 4.50\(\%\) and 1.50\(\%\) higher performance. Our proposed regularizations inside the batch-based and online optimization models contributed to these improved results.
All of the tested approaches were able to achieve an \(F\) score of more than 75.00\(\%\) for the baseline sequences (4 videos), showing that these sequences did not significantly challenge the compared methods. STRPCA achieved the highest \(F\) score of 98.10\(\%\). The six sequences in the dynamic background category include difficult scenes with flowing fountains, swaying shrubs, and rippling water surfaces. Most of the compared approaches had significant problems with these sequences (\(F\) score less than 80.00\(\%\)). The STRPCA and O-STRPCA algorithms achieved the best accuracies of 95.50\(\%\) and 91.10\(\%\), respectively, in this category. Among the compared methods, TS-RPCA performed best owing to its tree-structured sparsity constraints, and TV-TRPCA owing to its total variation norm.
The intermittent object motion category (6 videos) is known for producing ghosting artifacts in the detected motion, i.e., abandoned or removed foreground objects. With the exception of MSCL and TV-TRPCA, the bulk of the examined approaches were unable to handle these sequences, whereas our proposed algorithms greatly outperformed them. MSCL removes motionless frames using optical flow, and TV-TRPCA models structured foreground areas. In this category, our proposed algorithms produced \(F\) scores of 85.00\(\%\) and 83.60\(\%\), lower than their scores on the baseline and dynamic background sequences. The STRPCA and TV-TRPCA
methods achieved the best and second-best performances in the turbulence category (4 videos), whereas MSCL showed favorable performance.
Similarly, only the STRPCA and O-STRPCA algorithms were able to achieve \(F\) scores greater than 80.00\(\%\) or 90.00\(\%\) in other categories such as low frame rate (4 videos), camera jitter (4 videos), thermal (5 videos), night videos (6 videos), and bad weather (4 videos), which contain more difficult background modeling challenges. The six videos in the shadow category include scenes with a mix of soft and harsh shadows with sporadic tints. The TS-RPCA technique achieved the greatest performance of 91.70\(\%\), whereas our proposed algorithms' performance degraded by 6.30\(\%\) and 2.50\(\%\), respectively, due to their inability to manage shadows in background scenes.
**Comparisons with Deep Learning-based Methods:** These approaches first learn deep feature representations in an end-to-end fashion on the training videos of the CD14 dataset and then evaluate those representations on either seen or unseen testing sequences. For a fair comparison with the SOTA approaches, we used the same testing split established by [64] to assess our proposed STRPCA algorithm.
Table V compares the quantitative performance of the proposed STRPCA to that of the available deep learning techniques. Overall, the unsupervised approaches, including ZBS and our STRPCA, remain competitive with the fully supervised deep learning methods.
Fig. 4: Comparison of 12 sequences chosen from the six datasets for background subtraction using published techniques. Input images, ground truth images, and background subtraction estimates from RPCA [67], TRPCA [42], LSD [39], OMoGMF\(+\)TV [73], ORLTM [35], NIOTenRPCA [37], ZBS [3], and the proposed STRPCA method are shown from left to right. When compared to the SOTA approaches, the STRPCA algorithm produces superior visual results.
Since STRPCA does not incorporate learning from the training sequences, it showed a performance deterioration of 2.60\(\%\) compared to the CascadeCNN technique, which achieved the best results. Nevertheless, STRPCA achieved 4.70\(\%\) and 1.30\(\%\) greater \(F\) scores than the fully supervised STPNet and the unsupervised ZBS approaches, respectively, demonstrating its strength without end-to-end training.
#### IV-G2 **Evaluations on BMC12 Dataset**
The performance comparison of the proposed algorithms with the SOTA RPCA- and TRPCA-based techniques is shown in Table VI. Overall, both STRPCA and O-STRPCA perform better on average on the exceedingly difficult natural sequences of the BMC12 dataset. This is because our algorithms can deal with complex and dynamic backgrounds such as continuously moving cars and changing weather. The built-in spatial and temporal graph-based constraints enable them to successfully segregate real, well-defined foreground pixels.
#### IV-G3 **Evaluations on Wallflower Dataset**
The performance comparison between the proposed algorithms and the published approaches on the Wallflower dataset is shown in Table VI in terms of the average \(F\)-measure score. Overall, TS-RPCA, with its tree-structured induced norm, received the second-highest score (93.30\(\%\)), whereas the STRPCA method performed best (96.10\(\%\)). Our online alternative, O-STRPCA, showed a performance of 91.10\(\%\), comparable to the TS-RPCA technique. The proposed algorithms successfully handled the dataset's sudden illumination change sequences, which the low-rank tensor alone could only partially compensate for.
#### IV-G4 **Evaluations on I2R Dataset**
The performance comparison on the I2R dataset is also shown in Table VI. Overall, STRPCA outperforms on the nine sequences, and its \(F\) score is around 2.20\(\%\) higher than that of the second-best TS-RPCA (batch technique). More precisely, both STRPCA and O-STRPCA exhibit more encouraging performance on this dataset than the published approaches. Two factors make our proposed algorithms superior: first, the graph-based constraints model the sparse component better even in the presence of complex background scenes; second, they fully utilize the underlying information of video sequences by learning the sparsity structure of the data across all tensor modes.
#### IV-G5 **Evaluations on SBM-RGBD Dataset**
Table VI shows a performance comparison of the proposed algorithms on this dataset. The outcomes show that our proposed algorithms outperformed all unsupervised RPCA and TRPCA-based SOTA methods. Particularly, the SOTA MSCL technique was outperformed by STRPCA and O-STRPCA by about 10.00\(\%\) and 4.00\(\%\), respectively, demonstrating the benefits of the proposed regularization.
#### IV-G6 **Evaluations on SABS Dataset**
The STRPCA algorithm was superior in six out of nine video sequences, including Basic, Dynamic Background, Darkening, and Camouflage, as shown by the findings in Table VII. The H.264 Compression, Light Switch, and Bootstrap sequences, on the other hand, remained challenging for our algorithms. This is due to the STRPCA method's difficulty in adapting to background perturbations, such as the irregular background distributions in these settings. Overall, STRPCA outperformed the SOTA approaches with an average \(F\) score of 87.40\(\%\), while O-STRPCA came in second place behind the TS-RPCA method.
### _Computational Cost and Running Time Analysis_
The computational complexity and execution time of the proposed algorithms were also evaluated. For the batch-based technique (STRPCA), the two computationally expensive operations are the construction of the spatiotemporal graphs and the updates of the \(\mathbf{\mathcal{B}}\) and \(\mathbf{\mathcal{F}}\) tensors. The temporal graph is built on the columns of \(\mathbf{\mathcal{X}}_{3}\), hence its computation costs \(O(sn\log(n))\), where \(s=w\times h\) is the number of pixels in each column and \(n\) is the total number of columns in \(\mathbf{\mathcal{X}}_{3}\). Constructing the spatial graph \(\mathbf{G}_{2}^{3}\), built patch-wise among the frontal slices \(\mathbf{\mathcal{X}}^{(j)}\) of the input tensor, costs \(O(sn\log(s))\). Similarly, the update of \(\mathbf{\mathcal{B}}^{k+1}\) in the \(k\)-th iteration dominates the cost of optimizing the model (3). Updating the \(\mathbf{\mathcal{B}}^{k+1}\) tensor requires an FFT and \(n\) SVDs of \(w\times h\) matrices, so its overall cost is \(O\left(Tsn\log(n)+T\max(w,h)\min^{2}(w,h)n\right)\), where \(T\) is the total number of iterations. Altogether, the overall computational complexity of solving (3) with graph-based regularization is \(O\left(n\left[s(\log(n)+\log(s))+T\max(w,h)\min^{2}(w,h)\right]\right)\).
For the online model (O-STRPCA), the computational cost is determined by estimating the column vectors \(\{\mathbf{v}_{m},\mathbf{f}_{m}\}\), the basis matrix \(\mathbf{\mathcal{U}}_{m}\), and the accumulation matrices \(\mathbf{V}_{m}\), \(\Theta_{m}\), and \(\theta_{m}\). Computing the vectors \(\mathbf{v}_{m}\) and \(\mathbf{f}_{m}\) requires a linear operation of \(O(s\log(s))\). Updating \(\mathbf{\mathcal{U}}_{m}\) costs \(O(sr^{2})\), whereas updating the \(\mathbf{V}_{m}\), \(\Theta_{m}\), and \(\theta_{m}\) matrices costs \(O(s\log(r))\), where \(r\) is the rank used in online processing. As a result, the overall complexity of solving the model (30) is \(O\left(s[r^{2}+\log(s)+\log(r)]\right)\), which does not depend on the number of frames and scales with \(r\). This appealing cost overcomes the real-time processing limitations of TRPCA [42].
The running times of the proposed algorithms are also reported and contrasted with those of published techniques, including RPCA, TRPCA, and ORLTM. We used the frames-per-second (fps) metric for this purpose and recorded the computing time on the _office_ sequence from the CD12 dataset, which has 2,050 frames at \(320\times 240\) spatial resolution. The ORLTM technique ran at 5.97 fps, while RPCA and TRPCA ran at 3.90 fps and 2.40 fps, respectively. Our proposed algorithms, STRPCA and O-STRPCA, ran at 1.60 and 4.30 fps. Compared to TRPCA, the STRPCA method requires more time due to the incorporation of graph-based constraints. Tables IV-VII demonstrate that both methods perform well on background subtraction; moreover, our O-STRPCA is faster than its batch equivalent.
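These fps figures can be reproduced with a simple wall-clock measurement of the following form, where `method` is a hypothetical callable wrapping any of the compared algorithms:

```python
import time

def measure_fps(method, frames):
    """Wall-clock throughput of a background subtraction method in frames/s."""
    start = time.perf_counter()
    method(frames)               # hypothetical callable, e.g., STRPCA or O-STRPCA
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

# e.g., fps = measure_fps(run_ostrpca, office_frames)  # 'office': 2050 frames
```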
## V Conclusion and Future Work
We developed novel TRPCA-based algorithms in this study to learn structured sparse tensors for background subtraction tasks. We introduced graph-based Laplacian regularizations in both space and time into the traditional TRPCA technique. To this end, we constructed two graphs, one in the temporal domain and the other in the spatial domain. The Laplacian matrices generated from these graphs enforce structured sparsity within the sparse component for more reliable background subtraction. We solved the proposed model with both batch and online optimization techniques. The proposed STRPCA requires all video frames to be loaded into memory, whereas the online optimization algorithm O-STRPCA handles video sequences sequentially using a reformulated form of the nuclear norm constraints, making it more suitable for large datasets. Extensive experimental results show that our proposed algorithms compare favorably with various SOTA approaches and show promising outcomes on six publicly accessible background subtraction benchmark datasets. In contrast to SOTA approaches, our algorithms handle complicated background scenes in the presence of varying lighting conditions, local and global background fluctuations, and camouflage. Training a TRPCA-based deep neural network for background-foreground separation will be the foundation of our future work.
|
2309.14668 | Depolarized Holography with Polarization-multiplexing Metasurface | The evolution of computer-generated holography (CGH) algorithms has prompted
significant improvements in the performances of holographic displays.
Nonetheless, they start to encounter a limited degree of freedom in CGH
optimization and physical constraints stemming from the coherent nature of
holograms. To surpass the physical limitations, we consider polarization as a
new degree of freedom by utilizing a novel optical platform called metasurface.
Polarization-multiplexing metasurfaces enable incoherent-like behavior in
holographic displays due to the mutual incoherence of orthogonal polarization
states. We leverage this unique characteristic of a metasurface by integrating
it into a holographic display and exploiting polarization diversity to bring an
additional degree of freedom for CGH algorithms. To minimize the speckle noise
while maximizing the image quality, we devise a fully differentiable
optimization pipeline by taking into account the metasurface proxy model,
thereby jointly optimizing spatial light modulator phase patterns and geometric
parameters of metasurface nanostructures. We evaluate the metasurface-enabled
depolarized holography through simulations and experiments, demonstrating its
ability to reduce speckle noise and enhance image quality. | Seung-Woo Nam, Youngjin Kim, Dongyeon Kim, Yoonchan Jeong | 2023-09-26T04:47:04Z | http://arxiv.org/abs/2309.14668v1 | # Depolarized Holography with Polarization-multiplexing Metasurface
###### Abstract.
The evolution of computer-generated holography (CGH) algorithms has prompted significant improvements in the performances of holographic displays. Nonetheless, they start to encounter a limited degree of freedom in CGH optimization and physical constraints stemming from the coherent nature of holograms. To surpass the physical limitations, we consider polarization as a new degree of freedom by utilizing a novel optical platform called metasurface. Polarization-multiplexing metasurfaces enable incoherent-like behavior in holographic displays due to the mutual incoherence of orthogonal polarization states. We leverage this unique characteristic of a metasurface by integrating it into a holographic display and exploiting polarization diversity to bring an additional degree of freedom for CGH algorithms. To minimize the speckle noise while maximizing the image quality, we devise a fully differentiable optimization pipeline by taking into account the metasurface proxy model, thereby jointly optimizing spatial light modulator phase patterns and geometric parameters of metasurface nanostructures. We evaluate the metasurface-enabled depolarized holography through simulations and experiments, demonstrating its ability to reduce speckle noise and enhance image quality.
+
Footnote †: Both authors contributed equally to this research.
The utilization of a single static optical element for reducing speckle noise is highly advantageous, considering that conventional speckle reduction methods often involve sacrificing image resolution (Peng et al., 2021) or relying on time-averaged speckle intensity obtained from multiple frames using high-speed spatial light modulators (SLMs) (Lee et al., 2022). In pursuit of this goal, we explore polarization as a novel optical channel that introduces incoherence and offers an additional degree of freedom to holographic displays.
Among the unique characteristics of light, polarization has been widely utilized in various imaging and display applications using two orthogonal polarization channels (Baek and Heide, 2021; Hwang et al., 2022). Particularly, the mutual incoherence of orthogonal polarization states is beneficial in holographic displays. However, it has been largely overlooked due to the lack of an appropriate optical platform for polarization-dependent modulation of light. Fortunately, a recently introduced optical element called metasurface, which modulates the optical response of light at the subwavelength regime (Khorasaninejad et al., 2016; Lin et al., 2014; Yu et al., 2011), provides a solution for this problem. Metasurfaces can offer uncorrelated phase profiles along the two orthogonal polarization states of the incident light, which is a unique characteristic hardly achieved with conventional optical devices (Arbabi et al., 2015; Mueller et al., 2017). Moreover, per-pixel modulation of optical response enables optimization-based design and makes them an ideal optical platform for holographic displays.
In this work, we propose a novel concept of holographic displays enabled by a polarization-multiplexing metasurface jointly designed with spatial light modulator phase patterns. Specifically, we employ a metasurface to exploit the polarization channel of the holographic display, generating two holograms with orthogonal polarization states simultaneously. We build a fully differentiable optimization pipeline to maximize the polarization-multiplexing functionality while considering the physical constraint of the metasurface. To this end, we model the electromagnetic response of the metasurface nanostructures to a differentiable proxy function and integrate it into a CGH optimization algorithm. The performance of our method is evaluated through simulations and experiments, demonstrating its competence in the overall image quality and speckle reduction compared to the conventional holographic displays. The quality improvement of holographic display, achieved through the joint engineering of static optical elements and SLM phase patterns, will undoubtedly open up a vast and exciting research field for the display community.
In summary, the major contributions of our work are as follows:
* We present a novel concept of holographic display which exploits orthogonal linear polarization states simultaneously. By incorporating a polarization-multiplexing metasurface, our approach expands the degree of freedom in holographic displays, making room for CGH optimization algorithms.
* We devise an optimization pipeline that jointly optimizes the polarization-multiplexing metasurface and the SLM phase patterns. To the best of our knowledge, this is the first approach that uses a joint optimization method for co-designing a metasurface and a holographic display.
* We fabricate the optimized metasurface with electron beam lithography, and validate the proposed method through a benchtop prototype, verifying that experimental results are consistent with the simulations.
## 2. Related Work
_Holography._ Holographic displays utilize interference of coherent light to reconstruct the object wavefront (Goodman, 2005). For their advantages in providing continuous depth cues, high-resolution images, and aberration correction (Chang et al., 2020; Kim et al., 2021; Nam et al., 2022; Park, 2017), they have been adopted in near-eye displays in combination with various optical elements such as holographic optical elements (Li et al., 2016; Maimone et al., 2017; Yeom et al., 2015), geometric phase lenses (Kim et al., 2022; Nam et al., 2020; Rous, 2008), and waveguides (Jang et al., 2022). In parallel with developments in display systems, CGH algorithms to design SLM phase patterns for desired images have also been developed. Numerous CGH algorithms that support various data types (Blinder et al., 2021; Chakravarthula et al., 2022; Padmanaban et al., 2019; Shi et al., 2017) and optimization methods for high-quality images (Chakravarthula et al., 2019; Fienup, 1982; Gerchberg, 1972; Zhang et al., 2017) have been proposed. Notably, computational methods that optimize parameterized real-world propagation models have achieved state-of-the-art results in experiments (Chakravarthula et al., 2020; Choi et al., 2021; Peng et al., 2020).
One of the major challenges to achieve high-quality images in holographic displays is the presence of speckle noise. The random phase distribution of the hologram introduces noise into the image, resulting in speckle intensity patterns. These speckle patterns, appearing as grainy textures, significantly reduce the quality of the image (Goodman, 2007), particularly in the mid-high frequency range. The deterioration of the specific frequency region hinders the accommodation response induced by holographic stimuli, further limiting the realization of truly immersive 3D images (Kim et al., 2022). Efforts have been made to achieve speckle-free holographic displays through various approaches. Some methods involve using a partially-coherent light source (Deng and Chu, 2017; Kozacki and Chlipala, 2016; Lee et al., 2020; Peng et al., 2021), while others utilize high-speed SLMs to time-average multiple independent speckle patterns (Choi et al., 2022; Lee et al., 2022). However, these approaches often face trade-offs between factors such as resolution, speckle contrast, depth of field and the number of time-multiplexed frames.
It is worth noting a recent work that has overcome the physical limitation of holographic displays, called étendue, by incorporating a random binary mask as an étendue expander (Kuo et al., 2020). Though only validated in simulation, Baek et al. (2021) extends this work and presents joint optimization of SLM phase patterns and a complex-valued étendue expander with a large dataset. These works demonstrate the potential of breaking the physical constraints of holographic displays through optimized optical elements. While previous works have focused on utilizing small pixel pitches of additional optical elements for étendue expansion, our method takes advantage of the polarization-multiplexing characteristic of the metasurface to expand the degree of freedom in the polarization channel.
_Metasurface._ Metasurfaces are two-dimensional arrays of artificially designed nanostructures that modulate the diffraction of light at subwavelength regimes [11, 12, 13]. They attract much attention as the next generation of optical devices because they can realize optical functions that conventional refractive and diffractive optical devices cannot, such as wavelength-multiplexing [1, 12, 11], angle-multiplexing [13, 14], and complex (amplitude and phase) modulation [11, 12]. The lithography-based nanofabrication process shows the potential for mass production utilizing the current foundry legacy. Utilizing the advantages of these metasurfaces, many applications have been reported such as flat lenses [1, 13, 14, 15, 16, 17, 18, 19, 20, 21], beam shaping [20], holography [21, 22, 23], optical filters [24, 25, 26], and biosensing [14, 27, 28]. Additional information about the basic principles, fabrication, and applications are located in the review articles [11, 12, 13].
Among the benefits of metasurfaces, the ability to control electromagnetic waves independently for two orthogonal polarization states is a powerful feature unique to metasurfaces. Arbabi et al. [20] and Mueller et al. [25] theoretically and experimentally demonstrated that by adjusting the geometric dimensions and rotation angle of the nanostructures comprising the metasurface, it is possible to independently control the phases of light for two arbitrary orthogonal elliptical polarization states [20]. Rubin et al. [21, 22] generalizes the metasurface design strategy using Jones matrix calculation.
Unlike conventional diffractive optical elements, metasurfaces should be designed considering the electromagnetic response at the nanoscale, as the nanostructures are arranged with a period smaller than the wavelength of the incident wave. The most common process in designing a metasurface utilizes a pre-simulated optical response library of nanostructures that can be used as a look-up table to specify the geometric dimensions [11]. However, this method does not reflect the electromagnetic effect during the optimization step and thereby cannot consider the interactions among adjacent nanostructures or the limited degree of freedom under physical constraints. The ideal solution to handle this problem is a full-field simulation [20, 21, 22], but these are computationally expensive and impractical for designing aperiodic metasurfaces even several hundred micrometers in size. In contrast, some works introduce a proxy model that approximates the solutions of Maxwell's equations, thereby building a differentiable pipeline that accounts for the nanoscale optical response [20]. Research on a large-scale metalens designed with a metasurface proxy model shows the possibility of practical use in virtual-reality systems [12].
_Computational optics design._ In the field of computational photography, the joint optimization of the optical component and the post-processing algorithm in an end-to-end manner has been extensively explored and has demonstrated its effectiveness in domain-specific imaging systems. These end-to-end frameworks enable the differentiable optimization of optical components, such as binary masks [10, 13], diamond-turned refractive optics [20], compound optics [21], and diffractive optical elements [11], in conjunction with post-processing algorithms using large datasets. They have achieved state-of-the-art results in various applications, including extended depth of field and super-resolution imaging [22], super-resolution SPAD imaging [23], hyperspectral imaging [13, 14, 15], depth sensing [16, 17, 18, 19], and high dynamic range imaging [24, 25].
Most recently, researchers have applied metasurfaces to the end-to-end optimization framework described above to achieve unprecedented functionalities. Tseng et al. [20] proposed a joint optimization of a single metalens and a decoding neural network that achieves high-quality imaging performance within an extremely small form factor. Hazineh et al. [20] suggested single-shot depth sensing and spatial frequency filtering metalenses based on a TensorFlow framework. Trained metasurface resonator encoders for real-time hyperspectral imaging have also been reported [15]. Similar to the aforementioned works, we extend the joint optimization approach to holographic displays, aiming to expand the degree of freedom in CGH optimization and improve the image quality. We present a differentiable optimization algorithm that can jointly optimize the metasurface geometric parameters and the SLM phase pattern.
Fig. 2: Conceptual schematics and the field evolution along the optical system of the conventional and the proposed holographic display. In a traditional system, the (typically phase-only) SLM imparts the phase profile (\(\phi_{\text{slm}}\)) on the linearly polarized incident light. The coherence of the light source results in interference among wavefronts. In contrast, our scheme utilizes the orthogonal polarization states using a metasurface. After passing through the SLM, the HWP rotates the horizontally polarized light by 45 degrees, producing a diagonal linear polarization state. Then, the polarization-multiplexing metasurface provides different phase modulation for each linear polarization state. The resulting reconstructed fields along the two orthogonal polarization channels do not interfere with each other, thereby leading to a weighted intensity sum.
## 3. Methods
### Preliminaries
We first briefly define the notation used to describe the wave propagation model that includes polarization. Throughout this paper, we describe the polarization of light using the Jones calculus, where polarization states are denoted with 2\(\times\)1 Jones vectors and optical elements are described with 2\(\times\)2 Jones matrices.
\[\vec{v}=\begin{bmatrix}v_{x}\\ v_{y}\end{bmatrix},\quad\mathrm{J}=\begin{bmatrix}J_{xx}&J_{xy}\\ J_{yx}&J_{yy}\end{bmatrix}. \tag{1}\]
Based on this notation, we define horizontal linear polarization as \(\vec{x}=[1,0]^{\top}\) and vertical linear polarization as \(\vec{y}=[0,1]^{\top}\). Therefore, elements \(v_{x},v_{y}\) of Jones vector \(\vec{v}\) are complex-valued amplitude of horizontal and vertical linear polarization components. The polarization operation of an optical element is calculated by matrix multiplication between the Jones matrix and the Jones vector. We use \((\cdot)\) to represent matrix multiplication.
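A minimal NumPy illustration of these conventions, with a hypothetical retarder as the optical element, reads:

```python
import numpy as np

x_pol = np.array([1, 0], dtype=complex)    # horizontal linear polarization
y_pol = np.array([0, 1], dtype=complex)    # vertical linear polarization

# A generic element acts on a state by matrix multiplication; this example
# retarder (hypothetical) delays the vertical component by a quarter wave.
J = np.array([[1, 0],
              [0, 1j]], dtype=complex)

v_out = J @ x_pol                # output Jones vector
intensity = np.abs(v_out) ** 2   # per-component intensity of the output state
```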
### Depolarized holography
We exploit polarization in holographic displays by depolarizing the light using the metasurface. Depolarization itself is not a novel idea and has already been used in digital holography and holographic projection for speckle suppression (Bianco et al., 2018; Goodman, 2007; Rong et al., 2010). However, this method relies on the random behavior of a diffusive screen that is difficult to be delicately designed. Instead, we use a polarization-multiplexed metasurface that allows per-pixel control of phase modulation. This design flexibility of the metasurface offers potential for further improvements in performance. Additionally, the metasurface with known amplitude and phase enables CGH optimization incorporating the metasurface.
Figure 2 illustrates the comparison between conventional holographic displays and the proposed method. Conceptually, a holographic display can be abstracted to a simple optical system consisting of a laser and an SLM. Free-space wave propagation in holography is described in the angular spectrum method (ASM) (Matsubima and Shimobaba, 2009) expressed as
\[\begin{split} f_{\mathrm{ASM}}\left(u,z\right)&= \mathcal{F}^{-1}\left\{\mathcal{F}\left\{u\right\}\mathcal{H}(v_{x},v_{y}, \lambda,z)\right\}\\ \mathcal{H}(v_{x},v_{y},\lambda,z)&=\begin{cases}e^{ i2\pi z\sqrt{1/\lambda^{2}-v_{x}^{2}-v_{y}^{2}}},&\text{if }\sqrt{v_{x}^{2}+v_{y}^{2}}<\frac{1}{ \lambda}\\ 0,&\text{otherwise}\end{cases}\end{split} \tag{2}\]
where \(u\) is a complex-valued wavefront, \(z\) is a propagation distance, \(\lambda\) is a wavelength, and \(v_{x},v_{y}\) are spatial frequencies. For the sake of simplicity, we omit \(\lambda\) for the rest of the equations. Generally, CGH optimization algorithms calculate the propagated field through the ASM and iteratively optimize the amplitude of the propagated field to be the target amplitude.
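A compact NumPy implementation of Equation 2 might look as follows (the function name and sampling arguments are our own):

```python
import numpy as np

def asm_propagate(u, z, wavelength, dx):
    """Angular spectrum propagation of a complex field u over distance z.

    u: (H, W) complex field sampled with pixel pitch dx (meters).
    """
    H, W = u.shape
    vx = np.fft.fftfreq(W, d=dx)            # spatial frequencies along x
    vy = np.fft.fftfreq(H, d=dx)            # spatial frequencies along y
    VX, VY = np.meshgrid(vx, vy)
    arg = 1.0 / wavelength**2 - VX**2 - VY**2
    evanescent = arg < 0                     # frequencies beyond 1/lambda are cut off
    kernel = np.exp(1j * 2 * np.pi * z * np.sqrt(np.maximum(arg, 0.0)))
    kernel[evanescent] = 0.0
    return np.fft.ifft2(np.fft.fft2(u) * kernel)
```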
In our method, a half-wave plate (HWP) and a metasurface are placed after the SLM. The angle between the horizontal line and the fast axis of the HWP is set to 22.5 degrees, so the HWP rotates the incident horizontally polarized light from the SLM by 45 degrees, producing a diagonal polarization state. Since the diagonal polarization state can be separated into horizontal and vertical linear polarization components with identical amplitude, the metasurface after the HWP applies different phase modulations to these orthogonal linear polarization states. The Jones matrices of the HWP and the metasurface are described as
\[\begin{split}\mathrm{J}_{\mathrm{hwp}}=\frac{1}{\sqrt{2}} \begin{bmatrix}1&-1\\ 1&1\end{bmatrix},\quad\mathrm{J}_{\mathrm{meta}}=\begin{bmatrix}e^{ i\phi_{xx}}&0\\ 0&e^{i\phi_{yy}}\end{bmatrix},\end{split} \tag{3}\]
where \(\phi_{xx},\phi_{yy}\) are the phase shifts on the co-polarized component of transmitted light. We assume that the SLM, the HWP, and the metasurface are sufficiently close to be located on the same plane. The Jones vector of the complex-valued wavefront after the metasurface is expressed as a matrix multiplication of the Jones vector of the SLM field (\(e^{i\phi_{\mathrm{slm}}}\vec{x}\)), the Jones matrix of the HWP (\(\mathrm{J}_{\mathrm{hwp}}\)), and that of the metasurface (\(\mathrm{J}_{\mathrm{meta}}\)). Therefore, the complex-valued wavefront at distance \(z\) of the proposed system and the intensity of the corresponding field are expressed as
\[\begin{split} f_{\mathrm{depol}}\left(\phi_{\mathrm{slm}},z\right)&=f_{\mathrm{ASM}}\left(\mathrm{J}_{\mathrm{meta}}\cdot\mathrm{J}_{\mathrm{hwp}}\cdot e^{i\phi_{\mathrm{slm}}}\vec{x},z\right),\\ \left|f_{\mathrm{depol}}\left(\phi_{\mathrm{slm}},z\right)\right|^{2}&=\frac{1}{2}\sum_{p\in\{x,y\}}\left|f_{\mathrm{ASM}}\left(e^{i\phi_{pp}}e^{i\phi_{\mathrm{slm}}},z\right)\right|^{2}.\end{split} \tag{4}\]
Here, the intensity of the propagated field is expressed as the intensity sum of the fields evolved with two orthogonal polarization states, owing to their mutual incoherence. Therefore, the intensity of the propagated field resembles that of partially-coherent holographic displays, where polarization diversity replaces previously exploited diversities: angle diversity (Lee et al., 2020), wavelength diversity (Deng and Chu, 2017; Kozacki and Chlipala, 2016; Peng et al., 2021), and time-multiplexed frames (Choi et al., 2022; Lee et al., 2022).
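Building on the `asm_propagate` sketch above, Equation 4 can be simulated as an incoherent sum of the two co-polarized channels; this is a simplified sketch assuming the SLM, HWP, and metasurface lie in the same plane:

```python
import numpy as np

def depolarized_intensity(phi_slm, phi_xx, phi_yy, z, wavelength, dx):
    """Intensity of the depolarized hologram at distance z (Eq. 4).

    The diagonally polarized field after the HWP splits evenly into the two
    linear polarization channels, each modulated by one metasurface phase map;
    their intensities add because the channels are mutually incoherent.
    """
    u_slm = np.exp(1j * phi_slm)
    u_x = asm_propagate(np.exp(1j * phi_xx) * u_slm, z, wavelength, dx)
    u_y = asm_propagate(np.exp(1j * phi_yy) * u_slm, z, wavelength, dx)
    return 0.5 * (np.abs(u_x) ** 2 + np.abs(u_y) ** 2)
```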
When the hologram generated from the SLM phase pattern is depolarized into two linear orthogonal polarization channels as it passes through the metasurface, the holographic images of each polarization should be complementary to each other to improve the image quality. Therefore, the optimization of the phase distribution \(\phi_{xx},\phi_{yy}\) of the metasurface for the two orthogonal polarization states is the core of this work, which determines the performance of the depolarized holography. This necessitates a deliberate metasurface design through optimization.
### Joint optimization pipeline for polarization-multiplexing metasurface design
_Metasurface proxy model._ We use a linear polarization basis for the metasurface design since it rarely introduces undesirable cross-polarization leakage in the multi-wavelength regime (Arbabi et al., 2015; Mueller et al., 2017). However, it is difficult for silicon nitride metasurfaces to fully cover the \(2\pi\) radian range of phase modulation for both orthogonal polarization states under practical fabrication conditions, due to the low refractive index (see section S1.1 in the Supplementary Material for more details). Additionally, the dispersion characteristic of the dielectric material results in varying phase shifts for different wavelengths. To account for these issues, we adopt a differentiable metasurface proxy model to solve the physically constrained problem stemming from the phase modulation range and material dispersion.
The establishment of the proxy model is divided into three major steps. First, the electromagnetic response of the nanostructures is simulated by rigorous coupled-wave analysis (RCWA) (Kim and
Lee 2023) under local periodic approximation (LPA) (Li et al., 2022; Pestourie et al., 2018; Tseng et al., 2021). Given that the pixel pitch of the metasurface and the height of the nanorod are fixed, we obtain the modulated phase as a function of the length and width (\(l,w\)) of the nanorod. The combination of two polarization states (\(\phi_{xx},\phi_{yy}\)) and three wavelengths (638, 520, and 450 nm) results in a total of six libraries. Next, for each wavelength, the libraries of \(\phi_{xx},\phi_{yy}\) are fitted with quadratic polynomials and used to represent the Jones matrix, whose general formulation can be written as follows:
\[\mathrm{J}_{\mathrm{proxy}}\left(l,w\right)=\begin{bmatrix}e^{i\sum_{n+m\leq 2}c_{nm}^{xx}l^{n}w^{m}}&0\\ 0&e^{i\sum_{n+m\leq 2}c_{nm}^{yy}l^{n}w^{m}}\end{bmatrix}, \tag{5}\]
where \(l\), \(w\) are normalized by the pixel pitch of the metasurface, \(c_{nm}^{xx}\) and \(c_{nm}^{yy}\) are the polynomial coefficients fitted for the two co-polarized phase responses, and \(\mathrm{J}_{\mathrm{proxy}}\) is an approximated Jones matrix of the metasurface. More details about the libraries and fitted polynomials can be found in the Supplementary Material.
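A sketch of the quadratic phase proxy in Equation 5 is given below; the coefficient array `c` stands in for the fitted polynomial coefficients, which in practice come from the RCWA library:

```python
import numpy as np

def proxy_phase(l, w, c):
    """Quadratic polynomial proxy for one co-polarized phase shift (Eq. 5).

    l, w: nanorod length/width maps normalized by the metasurface pixel pitch.
    c: coefficients c[n, m] for terms l**n * w**m with n + m <= 2,
       fitted per wavelength and polarization from the simulated library.
    """
    phase = np.zeros_like(l, dtype=float)
    for n in range(3):
        for m in range(3 - n):            # enforce n + m <= 2 (quadratic fit)
            phase += c[n, m] * l**n * w**m
    return phase

# The proxy Jones matrix is diagonal: diag(exp(1j*phi_xx), exp(1j*phi_yy)),
# with phi_xx = proxy_phase(l, w, c_xx) and phi_yy = proxy_phase(l, w, c_yy).
```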
_Joint optimization pipeline._ While the metasurface can be engineered for our depolarized holography, the SLM phase patterns can also be optimized for the metasurface. Therefore, we jointly optimize the metasurface and SLM phase patterns. Figure 3 illustrates our joint optimization pipeline. The proposed pipeline is based on the CGH optimization algorithm with focal stack supervision (Choi et al., 2022; Lee et al., 2022). We choose a focal stack as an optimization target since it is a challenging, over-constrained problem for a single SLM phase pattern in conventional holographic displays. We evaluate the degree of freedom brought by the metasurface and the joint optimization through focal stack holograms.
In our pipeline, we jointly optimize two parameters: the geometry-maps of the metasurface and the SLM phase patterns. First, the complex-valued amplitude of the metasurface is calculated from the geometry-maps using the pre-calibrated metasurface proxy model in Equation 5. In addition, we implement a noise function \(f_{\mathrm{noise}}\) to simulate the alignment and fabrication errors that may occur during real-world experiments, thereby making the optimized metasurface robust against these imperfections.
\[f_{\mathrm{noise}}\left(l\left(x\right)\right)=l\left(x\right)*\delta\left(x- x_{e}\right)+l_{e}. \tag{6}\]
In the equation, \(*\) denotes convolution, and \(\delta(\cdot)\) represents the Dirac delta function. The metasurface is shifted by the misalignment noise \(x_{e}\), drawn from the uniform distribution \(\mathcal{U}\left(-\sigma_{x},\sigma_{x}\right)\), and the absolute value of Gaussian noise, \(l_{e}\sim|\sigma_{l}^{2}N(0,1)|\), is added for the fabrication error. Though we only express the dependency on \(l\) for dimension \(x\) in the equation, the same applies to parameter \(w\) and dimension \(y\).
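A sketch of this noise function in PyTorch is given below; the random pixel shift realizes the delta-function convolution, and the \(\sigma\) values shown are placeholders, not the ones used in our experiments.

```python
import torch

def f_noise(geometry, sigma_x=2, sigma_l=2.5e-3):
    """Equation 6 as a sketch: a random integer pixel shift (misalignment)
    plus the absolute value of Gaussian noise (fabrication error)."""
    # Misalignment: shift by x_e ~ U(-sigma_x, sigma_x) pixels per axis.
    shifts = torch.randint(-sigma_x, sigma_x + 1, (2,))
    shifted = torch.roll(geometry, shifts=(int(shifts[0]), int(shifts[1])),
                         dims=(-2, -1))
    # Fabrication error: l_e ~ |sigma_l^2 * N(0, 1)|.
    return shifted + (sigma_l**2 * torch.randn_like(shifted)).abs()
```

Resampling the noise at every iteration forces the optimized geometry-maps to be coarse enough to tolerate small shifts and size errors.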
With the noise function in Equation 6, we can express the noise-reflected Jones matrix of the metasurface \(\mathrm{J}_{\mathrm{proxy}}\) by substituting it into Equation 5. The amplitude of the propagated field is then obtained with Equation 4, and we compare it with the target
Fig. 3: Illustration of the joint optimization pipeline. Nanostructure geometry-maps of the metasurface are jointly optimized with the SLM phase patterns to realize the focal stack holograms over the target image dataset. The SLM field evolves into two distinct holograms through the polarization-multiplexing metasurface. In this process, the noise-reflected Jones matrix models the optical operation of the metasurface under experimental conditions. The two holograms for each polarization state propagate to all target planes and are then combined by intensity summation at each depth. Backpropagating gradients of the loss calculated between the reconstructed and target focal stacks updates the metasurface and SLM phase patterns. Source image credits to Alex Trevino.
amplitude as

\[\underset{l,w,\phi}{\mathrm{minimize}}\;\sum_{d=1}^{D}\mathcal{L}\left(\left|f_{\mathrm{depol}}\left(\phi,z^{\{d\}}\right)\right|,a_{\mathrm{target}}^{\{d\}}\right), \tag{7}\]

where \(\mathcal{L}\) is a loss function, \(\phi\) is an SLM phase pattern, and \(\{d\},d=1\ldots D\) indexes the propagation distances. We optimize the geometry-maps \((l,w)\) of the metasurface over a large dataset in which an SLM phase pattern \(\phi\) is optimized for each target image.
Algorithm 1 details the metasurface optimization procedure. We alternately update the metasurface geometry-maps and the SLM phase patterns using stochastic gradient descent. During the metasurface optimization, alignment and fabrication errors are simulated with the noise function \(f_{\text{noise}}\) and applied to the metasurface, and the geometry-maps are updated to minimize the loss defined in Equation 7. However, the SLM phase patterns cannot converge to a certain solution if the position and values of the metasurface geometry-maps change every iteration. Therefore, we leave out the noise function \(f_{\text{noise}}\) during the phase pattern optimization and assume the ideal metasurface profile without fabrication and alignment errors. To the best of our knowledge, a high-resolution RGB-D dataset of natural images does not exist, so we use the DIV2K dataset [1] and generate focal stacks from a single 2D image as target data. For each training sample, a 2D image is placed at a randomly selected plane, and the focal stack of incoherent propagation is calculated from the image [11].
We implement our algorithm in PyTorch and utilize the automatic differentiation tools for joint optimization. The metasurface is trained for 2000 epochs with 100 data samples in the DIV2K train set. We set the learning rate for the SLM phase patterns to \(1e^{-1}\), and for the metasurface to \(5e^{-3}\). The SLM phase patterns are initialized as a uniform random phase from the range of \([-\pi,\pi]\), while the metasurface geometry-maps are initialized with a uniform random distribution from the range of \([-1e^{-3},1e^{-3}]\). The joint optimization takes approximately 37 hours to converge on an NVIDIA RTX A6000. Our source code is available on the project website.
```
E : number of epochs
N : number of training data
L : loss function
α_meta, α_slm : learning rates
for e in 1…E do
    for n in 1…N do
        // Metasurface optimization
        J_meta ← J_proxy(f_noise(l, w))
        u_meta,n ← J_meta · J_hwp · e^{i φ_{n,e}} x⃗
        a_recon,n^{d} ← | f_ASM(u_meta,n, z^{d}) |
        (l, w) ← (l, w) − α_meta · ∇_{(l,w)} L(a_recon,n^{d}, a_target,n^{d})
        // SLM phase pattern optimization
        J_meta ← J_proxy(l, w)
        u_meta,n ← J_meta · J_hwp · e^{i φ_{n,e}} x⃗
        a_recon,n^{d} ← | f_ASM(u_meta,n, z^{d}) |
        φ_{n,e} ← φ_{n,e} − α_slm · ∇_{φ} L(a_recon,n^{d}, a_target,n^{d})
        save updated φ_{n,e}
    end for
end for
return (l, w)
```
**Algorithm 1** Joint optimization pipeline
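In PyTorch terms, each alternating update in Algorithm 1 reduces to a standard autograd step. The sketch below mirrors the initialization ranges and learning rates reported above; `render_focal_stack` is a hypothetical closure wrapping the proxy model, the HWP, and ASM propagation to all \(D\) planes, and the MSE loss stands in for our actual loss function.

```python
import torch

H, W = 1080, 1920  # hypothetical SLM resolution

# Geometry-maps from U(-1e-3, 1e-3); SLM phase from U(-pi, pi).
l = (2e-3 * torch.rand(H, W) - 1e-3).requires_grad_()
w = (2e-3 * torch.rand(H, W) - 1e-3).requires_grad_()
phi = (2 * torch.pi * torch.rand(H, W) - torch.pi).requires_grad_()

opt_meta = torch.optim.SGD([l, w], lr=5e-3)
opt_slm = torch.optim.SGD([phi], lr=1e-1)

def update(render_focal_stack, target, optimizer):
    """One gradient step on whichever parameter group `optimizer` holds."""
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(render_focal_stack(), target)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The metasurface step would render through `f_noise(l, w)`, while the SLM step renders through the noiseless geometry, matching the asymmetry in Algorithm 1.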
## 4. Simulation Results
Throughout the paper, as the target for the metasurface optimization is a focal stack, we evaluate our method with focal stack holograms generated from either 2D images or RGB-D data. To evaluate the image quality, we primarily utilize two metrics: peak signal-to-noise ratio (PSNR) and speckle contrast (SC). PSNR is computed from the mean squared error between reconstructed and target images across all 7 depth planes, encompassing both the focused and defocused images, and provides an assessment of the overall image quality of the focal stack. SC quantifies the presence of speckle noise in the image, representing the extent of intensity fluctuations of a speckle pattern relative to the average intensity. SC is defined by
\[SC=\frac{\sigma_{I}}{\bar{I}}, \tag{8}\]
where \(\sigma_{I}\) and \(\bar{I}\) represent the standard deviation and average of the intensity, respectively. SC ranges from 0 to 1, where a value of 1 indicates fully developed speckles, while lower SC values correspond to reduced speckle noise. In practice, the maximum value of SC is not 1 even with the fully developed speckles due to the influence of the optical system and the image sensor, which determine the number of independent phasor arrays [1]. We provide the speckle contrast estimated in simulation and measured with the experimental setup using identical settings. The speckle contrast is measured in the selected area whose intensity distribution is uniform, which is indicated by the green box in the figures. By utilizing these two image quality metrics, we analyze the advantages of the polarization-multiplexing metasurface in two aspects: providing an additional degree of freedom for focal stack hologram optimization and speckle reduction.
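Both metrics are straightforward to compute; a minimal sketch, assuming intensities normalized to \([0,1]\) and SC evaluated over a pre-selected uniform region:

```python
import torch

def psnr(recon, target, max_val=1.0):
    """PSNR from the MSE over all depth planes of a focal stack."""
    mse = torch.mean((recon - target) ** 2)
    return 10.0 * torch.log10(max_val**2 / mse)

def speckle_contrast(intensity):
    """Equation 8: ratio of the standard deviation to the mean intensity,
    evaluated over a nominally uniform region of the image."""
    return intensity.std() / intensity.mean()
```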
_Tolerance to imperfections._ Figure 4 illustrates the impact of employing the noise function during the metasurface optimization. We optimize the metasurface with and without the noise function and compare the effect of misalignment and fabrication errors on these two metasurfaces. The upper-left sub-images represent the reconstructed images assuming flawless fabrication and perfect alignment of the system. The lower-right sub-images show the reconstructed images under mismatched conditions; the metasurface is shifted 10 pixels (31 \(\mu\)m) horizontally and vertically from the SLM, and a fabrication error is introduced in the geometry-maps, following a Gaussian distribution with a deviation of 5 nm. It is evident that the presence of the noise function in the optimization pipeline effectively mitigates the impact of imperfections, making the designed metasurface more practical for real-world applications. Notably, even the PSNR of the case without mismatch increases with the utilization of the
noise function. We presume that the noise function also prevents the overfitting of the metasurface profile to the training dataset.
_Focal stack holograms._ We compare simulation results for a total of four scenarios: a conventional holographic display without a metasurface (_conventional_), depolarized holography with a metasurface fabricated from geometry-maps of a random distribution (_random depol_), and the optimized metasurface utilizing either only a single polarization state (_optimized single-pol_) or depolarized with a diagonal polarization (_optimized depol_). We include the _random depol_ case in our simulation as a baseline of a polarization-multiplexing metasurface without optimization. By comparing the random metasurface and the optimized metasurface, we can distinguish the effect of the depolarized holography from that of the joint optimization pipeline.
The simulated results in Figure 5(a) show that both holograms optimized to focal stacks generated from 2D (first row) and RGB-D (second row) data exhibit a reduction in speckle noise when the metasurface is inserted and depolarized, regardless of whether it is optimized or not. When comparing the two depolarized metasurfaces, _optimized depol_ outperforms _random depol_ in terms of PSNR and speckle contrast, demonstrating the effectiveness of metasurface optimization. The case of _optimized single-pol_ provides insight into how our polarization-multiplexing metasurface works. Even with the optimized metasurface, the image quality is worse than _conventional_ if only a single polarization state is available. The polarization-multiplexing metasurface reconstructs two slightly different 'worse' holograms, and the incoherent summation of these two holograms by depolarization results in the best image quality, which is exactly the _optimized depol_ case. It is worth noting that even though the metasurface is optimized with focal stacks generated from 2D images, focal stacks from RGB-D data also show improvements. This suggests that our joint optimization pipeline generalizes to other types of holograms that are not specifically used during the optimization process.
Figure 5(b) presents the quantitative analysis of the four scenarios. We draw a histogram from the region specified by the green box in the first row of Figure 5(a). The black dashed line on the histogram indicates the peak of the intensity distribution of the ground truth image. Since the selected region has nearly uniform intensity, a sharp peak centered around the black dashed line implies an intensity distribution close to the ground truth. It is clear that _optimized depol_ has the sharpest intensity distribution among all cases, indicating reduced speckle noise without compromising image contrast. While the peaks of _conventional_ and _optimized single-pol_ are close to the ground truth, their histograms show broader distributions due to severe speckle noise. However, _random depol_ exhibits a slight shift in the peak, failing to accurately reproduce the intensity of the ground truth image. This results in a low-contrast image, as observed in Figure 5(a). Additionally, we include a graph of the average PSNR and speckle contrast of focal stack holograms obtained from 30 natural images in the DIV2K validation set (Agustsson and Timofte, 2017), which are not used during the metasurface optimization. The graph confirms that our previous observations in Figure 5(a) apply to general images; our depolarized holography outperforms the conventional method by 4.36 dB through the incoherent superposition of two noisy holograms.
_Understanding the optimized metasurface._ Though the optimization of the metasurface enhances the image contrast and reduces speckle noise, there is a trade-off due to the limited degree of freedom that a single metasurface can provide. Figure 6 demonstrates simulation results highlighting the disadvantage of the optimized metasurface compared to the random metasurface when generating independent images for two orthogonal polarization states. In this simulation, we optimize a single SLM phase pattern to generate different images for the vertical and horizontal polarization states, thereby indirectly assessing the ability of the metasurface to control these polarization states independently. The random metasurface outperforms the optimized metasurface in generating polarization-dependent images, contrary to the case of focal stack generation. Hence, we conclude that our joint optimization pipeline tailors the randomness of the metasurface to maintain image contrast while ensuring that the two holograms in orthogonal polarization states are sufficiently distinct to benefit from incoherent superposition.
## 5. Experiment
### Implementation
_Metasurface fabrication._ The metasurface is fabricated utilizing electron beam lithography according to the flowchart sequence shown in Figure 7(a). A 0.5 mm thick glass wafer is cleaned with a sulfuric acid peroxide mixture (SPM), followed by 800 nm deposition of silicon nitride (SiN) utilizing plasma-enhanced chemical vapor deposition equipment (P5000, AMAT). Two layers of electron beam resist are then spin-coated onto the SiN layer. First, a PMMA 495A4 solution is spin-coated at 500 rpm for 5 seconds and 2000 rpm for 40 seconds, followed by soft-baking at 180 °C for 3 minutes. Then a PMMA 950A2 solution is spin-coated at 500 rpm for 5 seconds and 3000 rpm for 40 seconds, with the same soft-bake at 180 °C for 3 minutes. To prevent charge accumulation during electron beam lithography, a conducting polymer (ESPACER 300Z, SHOWA DENKO) is spin-coated at 500 rpm for 5 seconds and 2000 rpm for 30 seconds. The designed nanopatterns are produced using electron beam lithography (JBX-6300FS,
Figure 4. Simulation results for validating the effect of the noise function. The upper left sub-images depict the image obtained when the metasurface is precisely aligned and fabricated without error, resulting in the reconstruction of phase patterns in the identical setting as CGH optimization. In contrast, lower right sub-images show the reconstructed results when the metasurface is misaligned by 10 pixels horizontally and vertically, accompanied by a fabrication error. Even with these slight errors, the image quality is severely degraded in the absence of the noise function during the metasurface optimization. Source image credits to eMirage.
JEOL), which takes about 20 hours to fabricate the two metasurfaces for experimental demonstration. After exposure, the water-soluble conducting polymer is removed with DI water, and the resist layers are developed by soaking the sample in the developer solution (MIBK:IPA = 1:3, MICROCHEM) for 3 minutes. Chromium (Cr) with a thickness of 40 nm is then deposited using an electron beam evaporator, and acetone is used to lift off the PMMA resist layers to complete the hardmask patterning. After the SiN etching process (ICP 380, OXFORD SYSTEM100), the remaining Cr hardmask is removed with Cr etchant (CE-905N, TRANSENE), and the desired metasurface is finally fabricated, as shown in Figure 7(b).
_Display system._ We evaluate our method using a benchtop holographic display prototype. A collimated laser (FISBA READYBeam) is incident on the SLM (HOLOEYE LETO-3) and passes through the 4\(f\) system, which filters out the high-order diffraction terms and relays the wavefront of the SLM. We note here that our 4\(f\) system demagnifies the SLM with a magnification factor of approximately 0.5 to match the size of the SLM and the metasurface, which is fabricated to dimensions of 3.4\(\times\)6.0 mm\({}^{2}\). Following the 4\(f\) system, an HWP (Thorlabs AHWP10M-600) is placed to rotate the direction of linear polarization. The fabricated metasurface is carefully placed after the HWP, aligned with the relayed SLM. An additional 4\(f\) system is employed after the metasurface to image the SLM plane for the alignment of the metasurface and the SLM. Once alignment is achieved, this second 4\(f\) system is no longer required. Reconstructed images are captured from multiple planes using a CCD (FLIR GS3-U3-51S5M-C) mounted on a motorized stage (Newport FCL100). The
Fig. 5: Evaluation results in simulation. (a) The four columns correspond to the reconstructed images in the following cases: without a metasurface (_conventional_), depolarized with a metasurface of randomized geometry-maps (_random depol_), with an optimized metasurface utilizing a single polarization (_optimized single-pol_), and depolarized with an optimized metasurface (_optimized depol_), from left to right. The first and second rows display focal stack images derived from 2D and RGB-D data, respectively. In the second row, the upper left sub-images display the image focused at the far plane, while the lower right sub-images show the image focused at the near plane. The PSNR and speckle contrast are provided in the lower right of each image. The green boxes indicate the specified area where the speckle contrast is calculated. The orca image credits to Wirestock Creators and the Junk Shop image credits to Alex Trevino. (b) (upper) Image histogram of the region indicated by the green box in the figures of the first row in (a), with the most frequent intensity of the ground truth image marked by a dashed black line. The average PSNR (lower left) and speckle contrast (lower right) of the focal stack holograms are calculated from 30 natural images from the DIV2K validation dataset [1], with error bars indicating the standard error.
Fig. 6: Simulation results demonstrating polarization-dependent image generation. A single SLM phase pattern is optimized to generate distinct 2D images for each orthogonal polarization state with the metasurface specified above the figure. Compared to the random metasurface, the reconstructed images of each polarization state utilizing the optimized metasurface show significant noise and are more challenging to differentiate. Source images credit to Flavio Della Tommasa and Blender Animation Studios.
schematic diagram of the benchtop prototype is illustrated in Figure 13(a).
_Metasurface alignment._ Precise alignment is crucial for our system since we optimize the metasurface under the assumption that the SLM and the metasurface are in the same position. Our alignment procedure can be divided into two main steps. First, we align the metasurface in 3 axes using two motorized stages and a manual stage. This is done by imaging the SLM plane through the second 4\(f\) system and observing the boundary lines of the SLM and the metasurface. Second, we perform camera-in-the-loop (CITL) model calibration and learn the actual phase and amplitude of the fabricated metasurface. This step is similar to a fine-tuning step of post-processing algorithms performed in many end-to-end cameras [26, 27]. Through CITL model calibration, both misalignment and fabrication error of the metasurface are measured and included in the CGH optimization process.
We note here that our calibration process is relatively simpler compared to that of Kuo et al. [20], which aligned the SLM and the random binary mask using optimized SLM phase patterns that generate a single focal spot. Since we optimize the metasurface with the noise function \(f_{\text{noise}}\) for alignment robustness, the metasurface phase pattern is intentionally designed to be coarse enough to be less sensitive to alignment errors. The effectiveness of the noise function in achieving alignment robustness is already discussed in Section 4. Furthermore, these coarse phase patterns allow us to calibrate the metasurface phase and amplitude through CITL optimization. Additional information regarding the alignment of the metasurface and the SLM can be found in the Supplementary Material.
_Model calibration with camera-in-the-loop training._ We use a CITL-calibrated wave propagation model during the experimental validation [2, 21, 22, 23]. The CITL-calibrated model helps to reduce the discrepancy between the simulation and the real-world, thereby providing a clearer evaluation of the proposed method. In order to accurately capture the physical phenomenon of polarization-multiplexing, we combine the deep neural network-based propagation model developed by Choi et al. [20] and the all-physically interpretable model introduced by Jang et al. [20], with slight modifications to incorporate the Jones matrices of the HWP and the metasurface.
The schematic diagram of the proposed model is depicted in Figure 8. A multilayer perceptron (MLP) and a 3\(\times\)3 kernel \(k\) model the spatially-varying phase response and crosstalk between adjacent SLM pixels. Source intensity \(a_{\text{src}}\), phase \(\phi_{\text{src}}\), and the complex field of the Fourier plane \(a_{\mathcal{F}},\phi_{\mathcal{F}}\) are incorporated into the ASM to account for contents-independent propagation terms. Different from other models, we introduce the metasurface and the HWP to characterize our depolarized holography. The amplitude and phase of the two polarization states of the metasurface are learned to reflect the actual fabrication and alignment results. The rotation angle \(\theta_{\text{tilt}}\) of the HWP is parameterized to account for the mismatch between the polarization direction of the light source and the fast axis of the HWP. The reconstructed amplitudes then pass through a convolutional neural network \(\text{CNN}_{\text{target}}\) that incorporates contents-dependent terms. To conclude, our propagation model is expressed as
\[f_{\mathrm{model}}\left(\phi\right)=\mathrm{CNN}_{\mathrm{target}}\left(f_{\mathrm{ASM}}\left(\mathrm{J}_{\mathrm{proxy}}\left(l,w\right)\cdot\mathrm{J}_{\mathrm{hwp}}\left(\theta_{\mathrm{tilt}}\right)\cdot a_{\mathrm{src}}e^{i\phi_{\mathrm{src}}}e^{i\left(k*\mathrm{MLP}\left(\phi\right)\right)};a_{\mathcal{F}},\phi_{\mathcal{F}}\right)\right). \tag{9}\]
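Equation 9 composes several learned components. The following is a compressed, illustrative PyTorch sketch rather than our actual implementation: the propagator `asm`, the identity-initialized crosstalk kernel, the cos²/sin² power split standing in for the full HWP Jones matrix, the omission of the Fourier-plane field, and all layer sizes are assumptions made for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CITLModel(nn.Module):
    """A compressed sketch of Equation 9. `asm(field, z)` is assumed to be a
    differentiable ASM propagator; CNN_target is reduced to a small stack."""

    def __init__(self, shape, asm):
        super().__init__()
        self.asm = asm
        self.mlp = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
        k = torch.zeros(1, 1, 3, 3)
        k[0, 0, 1, 1] = 1.0  # identity crosstalk kernel at initialization
        self.kernel = nn.Parameter(k)
        self.a_src = nn.Parameter(torch.ones(shape))
        self.phi_src = nn.Parameter(torch.zeros(shape))
        self.phi_meta = nn.Parameter(torch.zeros(2, *shape))  # (phi_xx, phi_yy)
        self.theta_tilt = nn.Parameter(torch.tensor(0.0))
        self.cnn = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 1, 3, padding=1))

    def forward(self, phi, z):
        # Per-pixel phase response (MLP lookup), then neighbor crosstalk.
        lut = self.mlp(phi.reshape(-1, 1)).reshape(1, 1, *phi.shape)
        phi_slm = F.conv2d(lut, self.kernel, padding=1)[0, 0]
        u = self.a_src * torch.exp(1j * (self.phi_src + phi_slm))
        # Incoherent sum of the two polarization channels after the HWP.
        weights = (torch.cos(2 * self.theta_tilt) ** 2,
                   torch.sin(2 * self.theta_tilt) ** 2)
        intensity = 0.0
        for wgt, phi_m in zip(weights, self.phi_meta):
            intensity = intensity + wgt * self.asm(torch.exp(1j * phi_m) * u, z).abs() ** 2
        return self.cnn(intensity.sqrt()[None, None])[0, 0]
```

All parameters are fit against captured amplitudes, so both misalignment and fabrication errors of the metasurface end up absorbed into the learned `phi_meta`.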
As our depolarized holography generates different images based on the polarization state of light, we obtain a polarization-dependent dataset of captured amplitudes for model training. The dataset consists of 1,600 phase patterns generated using stochastic gradient descent and an additional 400 phase patterns obtained through the alternating direction method of multipliers (ADMM) [20]. Among the phase patterns generated using stochastic gradient descent, 800 were optimized with 2D images as targets, while the remaining 800 were optimized using incoherent focal stacks derived from 2D images. In total, 2,000 phase patterns are used for training each channel. During the dataset generation, we randomized learning rates, propagation distances, and ranges of the initial random phase distribution. We capture the intensity of holograms in 7 depth planes, encompassing 4 cases for a single phase pattern: without a metasurface, and vertical, horizontal, and diagonal polarization with the metasurface. This enables the model to learn the polarization-dependent phase and amplitude of
Fig. 8: Schematic illustration of our wave propagation model CITL calibration framework. Our model includes the light source, SLM, Fourier plane, HWP, and the metasurface, which are parameterized to account for the contents-independent terms. Additionally, a convolutional neural network is incorporated for the contents-dependent terms. The propagation model is trained with a dataset of captured amplitudes. Source image credits to Alex Trevino.
Fig. 7: (a) Metasurface fabrication flowchart. (b) Metasurface fabrication results. The fabricated metasurface measures approximately 3.4\(\times\)6.0 mm\({}^{2}\). Photograph of the metasurface and its SEM images in top view and tilted view, respectively.
the metasurface through CITL training, along with the other parameters of the propagation model. The model is trained for 10 epochs with a learning rate of \(5e^{-4}\) for each channel. The model training takes approximately 6 hours per channel on an NVIDIA RTX A6000. Additional details and analysis regarding the CITL model calibration and the results of the trained parameters are provided in the Supplementary Material.
### Experimental results
Figure 9 shows the experimentally captured images of the focal stack holograms in our benchtop prototype setup. The SLM phase patterns are optimized with the CITL-calibrated model using the incoherent focal stacks derived from 2D images or RGB-D data. The _random depol_ case is excluded from the experiment due to its obvious disadvantages over _optimized depol_ observed in the simulation and its vulnerability to alignment and fabrication errors. The comparison between _conventional_ and _optimized depol_ demonstrates that the inclusion of the metasurface results in improved image quality of focal stack holograms. The grainy speckle pattern in the _conventional_ case becomes smoother in the _optimized depol_ case, resulting in enhanced visibility of image details. The enlargements of each image show that the _optimized depol_ case exhibits lower intensity fluctuation and reduced grainy patterns compared to the _conventional_ one. The speckle pattern is high-frequency noise and obscures the mid-to-high frequency areas of the image in the _conventional_ case. In contrast, _optimized depol_ makes the image details more visible by reducing the speckle noise. Also, the captured images with _optimized single-pol_ are even noisier than those without the metasurface, demonstrating severe speckle noise. As the polarization-multiplexing functionality enables the incoherent superposition of two polarization states, the integration of two complementary images results in the best image quality, observed in _optimized depol_. These experimental results align with the simulation, exhibiting the competence of our depolarized holography in image quality.
We note here that the image quality of our captured results falls below that of the state-of-the-art research, primarily due to the challenges in generating incoherent focal stacks using a single SLM
Figure 9. The captured images of focal stack holograms, where the target focal stacks are generated from 2D images for the first and second rows, and from RGB-D data for the third row. In the third row, the upper left sub-images display the image focused at the far plane, while the lower right sub-images show the image focused at the near plane. The three columns, from left to right, correspond to the holographic display without a metasurface (_conventional_), with the optimized metasurface using a single polarization channel (_optimized single-pol_), and with the optimized metasurface using the two orthogonal polarization channels (_optimized depol_). Among these three cases, _optimized depol_ exhibits the best image quality, with a smoother speckle intensity pattern and clearer details of the image. The PSNR and the speckle contrast values are provided in the bottom right of each image. In the actual experiments, the presence of unwanted DC noise decreases the variability in PSNR between different conditions when compared to the reconstructed results. The green boxes indicate the specified area at which the speckle contrast is calculated. Source images credit to Mila Drumeva (first row), Sean Pavone (second row), and Blender Animation Studio (third row).
phase pattern. It is well-known that achieving incoherent focal stack optimization with a coherent single frame phase pattern is challenging due to the presence of speckle noise and the problem being over-constrained. As a result, previous works have utilized time-multiplexed frames to generate incoherent focal stacks [10, 11] or introduced specific constraints to facilitate the optimization process [10, 11]. The performance of our method can be further improved by integrating it with other speckle reduction methods, as discussed in Section 6.
## 6. Discussion
In this work, we introduce a depolarized holography enabled by the polarization-multiplexing metasurface to leverage the polarization channel of the holographic display. This novel approach allows for exploiting the mutual incoherence between orthogonal polarization states as a new degree of freedom for CGH optimization and speckle suppression. To this end, we present a joint optimization pipeline for co-designing the metasurface and the SLM phase patterns. Simulation results demonstrate that our scheme is superior to the conventional case, while the metasurface optimization further improves the image quality. Furthermore, we fabricate the optimized polarization-multiplexing metasurface and validate the proposed method using a display prototype. The experimental results align with the simulation and outperform conventional holographic displays.
### Comparison with other speckle reduction methods
_Time-multiplexing scheme._ The proposed method stands independently of and is compatible with other speckle reduction methods, demonstrating superior performance under identical conditions when combined. We firmly believe that incorporating an additional degree of freedom, brought by the polarization-multiplexing metasurface, holds great advantages since existing methods have their inherent limitations. For example, recently introduced time-multiplexed holographic displays report state-of-the-art results with speckle-free, photorealistic images [10, 11]. However, these displays necessitate high-speed SLMs, such as ferroelectric LC-based or MEMS-based SLMs, capable of rendering multiple frames within the flicker threshold of the human eye (50 Hz). The limited bit depth of such SLMs results in a decline in contrast, and the computation time or memory capacity escalates proportionally with the number of frames.
Note that state-of-the-art LC technologies typically provide refresh rates up to approximately 400 Hz for 8-bit modulation [11], but these specific models may not be widely accessible on the commercial market. In this respect, adopting our method in time-multiplexed holographic displays helps improve the image quality when only a limited number of frames is available. Figure 10 shows simulation results of PSNR and speckle contrast estimated in time-multiplexed holographic displays integrated with the polarization-multiplexing metasurface. The image quality metrics are measured from focal stack holograms generated from 30 images of the DIV2K dataset [12]. Our method offers distinct advantages through its flexible integration with conventional time-multiplexing techniques and its ability to achieve competitive performance in terms of improved image metrics.
_Partially coherent light source._ A comparison between depolarized holography and a holographic display using a partially coherent light source provides valuable insights into their performance. Figure 11 shows the simulation results of 2D holograms realized with a partially coherent source and with our depolarized holography. To simulate the partially coherent light source, we assume a setup consisting of a collimating lens with a focal length of 200 mm and a light source with a square aperture of 100 \(\mu\)m width. The wavelength spectrum of the light source is modeled to follow a Gaussian distribution with a standard deviation of 1 nm. The depolarized holography produces a sharp image with a higher PSNR compared to the image reconstructed using the partially coherent source. This observation underscores the image quality improvement achieved through our depolarized holography approach.
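The partially coherent baseline can be approximated by incoherently averaging intensities over samples of the source spectrum. A minimal sketch follows; spatial (aperture) sampling is omitted for brevity, and `propagate(field, z, wavelength)` is assumed to be a wavelength-aware ASM propagator rather than a function from our codebase.

```python
import torch

def partially_coherent_intensity(phi_slm, z, propagate,
                                 wl0=520e-9, wl_sigma=1e-9, n_samples=32):
    """Average intensities over wavelengths drawn from a Gaussian spectrum
    with standard deviation wl_sigma around the center wavelength wl0."""
    intensity = torch.zeros_like(phi_slm)
    for _ in range(n_samples):
        wl = wl0 + wl_sigma * float(torch.randn(()))
        intensity = intensity + propagate(torch.exp(1j * phi_slm), z, wl).abs() ** 2
    return intensity / n_samples
```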
Figure 11. Image quality comparison between our depolarized holography with the optimized metasurface (left) and the holographic display with a partially coherent light source (right). The measured PSNR of the reconstructed 2D image is provided at the right bottom of each image and the enlargements are additionally provided for visibility. Note that the CGHs are supervised with a sole 2D intensity profile, not with a focal stack. Source image credits to Salomia Oana Irina.
Figure 10. Image quality results when our method is combined with a time-multiplexing scheme. The average PSNR of the time-multiplexed holographic display, combined with our depolarized holography enabled by metasurfaces. The PSNR values are obtained from 30 images of the DIV2K validation set, and the error bars indicate the standard error. The metasurfaces improve PSNR regardless of the number of time-multiplexed frames, demonstrating the advantage of our method for an identical number of frames.
### Challenges and future works
_Trade-offs between speckle reduction and image contrast._ In our work, the polarization-dependent phase modulation of the metasurface does not allow for completely independent modulation of the two orthogonal polarization states, introducing noise under certain conditions. Consider, for example, the point spread function. In conventional holographic displays, a lens phase function provides a straightforward solution that generates a single point in space. However, with our depolarized holography, the metasurface alters the phase profile away from the lens phase function, which results in leakage around the focal spot. Furthermore, since the image reconstruction relies on averaging the images of orthogonal polarization states, a closed-form solution to generate a single point is elusive. The insets in the top-left corner of Fig. 12 visualize how the point spread function becomes blurred when using our polarization-multiplexing metasurface.
_Scene-specific holographic realization._ The optimization of the metasurface involves a training procedure with a set of natural images, which can introduce scene-specific limitations to our work. In certain exceptional cases, noise may undesirably appear. This is particularly true when reconstructing images with binary intensity distribution. The first row in Fig. 12 demonstrates the reconstructed binary 2D images realized by the conventional holographic display without metasurface, and our depolarized holography with the optimized metasurface. Here, we provide the reconstructed images of a resolution target, which can represent the aforementioned exceptional case. The reconstructed image with our method suffers from a slight degradation in the image quality evaluated with PSNR. However, the second row of the figure showcases that our method is beneficial in 2D natural images, even though not included in the metasurface optimization process.
_Optimization algorithms._ There is room for further improvement in the optimization algorithms used for both metasurfaces and SLM phase patterns. While our current approach optimizes the metasurface in a per-pixel manner, previous research has reported that using a basis for designing optical elements can help avoid the pitfalls of local minima (Chang and Wetzstein, 2019; Sun et al., 2020). Additionally, the SLM phase patterns are generated with the iterative stochastic gradient method, which currently takes several tens of seconds to converge. To enable real-time applications, it may be beneficial to explore the use of a deep neural network for CGH optimization in combination with the optimization pipeline, as demonstrated in many end-to-end cameras (Shi et al., 2022; Sun et al., 2020). Lastly, while we optimize the metasurface with an ideal wave propagation model and use CITL-calibration as a fine-tuning step, there is potential for further improvement by directly optimizing the metasurface with a CITL-calibrated model. This approach holds promise for enhancing the performance of the metasurface in real-world applications.
_Fabrication cost of metasurface._ In this work, the metasurface is fabricated utilizing electron beam lithography equipment. This equipment is capable of producing nanopatterns with resolution down to tens of nanometers, but it typically has low throughput due to the time-consuming nature of scanning the electron beam across the substrate, and thus a high cost. To solve these issues, Lee et al. (2018) and Yoon et al. (2020) show the potential for low-cost mass production of large-area metasurfaces using a method called nano-imprinting, in which only the master mold is produced by electron beam lithography and replicas are then printed in large quantities. Utilizing stepper photolithography, which is widely used in semiconductor fabrication, efficiently produces large-area metasurfaces (Leitis et al., 2021; Park et al., 2019; She et al., 2018). Most recently, Kim et al. (2023) achieve extreme practicality of metasurface fabrication by combining photolithography with wafer-scale nanoimprinting. Using these approaches, large-area metasurfaces can be mass-produced at low cost, which has great potential for industrial applications, including holographic displays.
_System form factor._ In our benchtop prototype, we utilize a \(4f\) system to relay the SLM directly onto the metasurface so that the two devices are in the same plane for experimental convenience
Fig. 12. Simulation results with various 2D images realized using conventional holographic displays w/o metasurface (left) and our depolarized holography w/ optimized metasurface (right). The insets in the top left corner of the first row show point spread functions magnified 30 times. The first row shows the 1951 USAF resolution target as a representative of binary 2D images, while the second row demonstrates natural 2D images. A section is cropped and enlarged with the PSNR provided at the bottom right. Source image credits to Sheila Say.
Fig. 13. (left) Our benchtop prototype utilizes a \(4f\) system to relay the SLM and exactly matches the metasurface with it. However, the position of the metasurface can be adjusted freely, only if it can be incorporated in the joint optimization pipeline. (right) The potential compact display scheme utilizes waveguides, thus placing the SLM and metasurface on opposite sides of it. This design significantly reduces the form factor and realizes very lightweight holographic display devices.
(Figure 13(a)). However, it is not necessary for the metasurface to be precisely located in the relayed SLM plane. As long as the position of the metasurface in the optical path is known, it can be incorporated into the joint optimization pipeline, regardless of its location. Therefore, to further miniaturize the device in a practical manner, an alternative is to position the SLM and the metasurface on opposite sides of a waveguide (Kim et al., 2022; Maimone and Wang, 2020). This configuration eliminates the requirement for the 4\(f\) relay optics, which significantly contributes to the overall form factor of the current system. In the proposed design, the metasurface and SLM are separated by the thickness of the waveguide (Fig. 13(b)). The identical optimization pipeline used in this work can be applied to realize a thin and lightweight holographic display platform, resembling the form factor of sunglasses (Lee et al., 2018). The compact and lightweight nature of the metasurface makes it an optimal optical element for such wearable devices. While the system form factor issue is beyond the scope of this study, it presents an interesting and meaningful topic for future research. There are also several examples of combining metasurfaces with liquid crystals. By integrating the metasurface with the SLM in the fabrication process (Badloe et al., 2022; Li et al., 2019), not only the system form factor but also the alignment issue between the SLM and the metasurface can be resolved.
_Human factors._ In our work, we focus on speckle reduction to improve the image quality; as a result, the etendue of our system is identical to conventional holographic displays. As a narrow etendue limits the field of view and the eyebox of the display, etendue expansion is widely recognized as a core challenge in achieving practical applications for holographic displays. Efforts have been made to expand the etendue through various optical elements such as binary masks (Kuo et al., 2020), diffractive optical elements (Baek et al., 2021), and lens arrays (Chae et al., 2023). While we do not address the etendue expansion in this work, exploring the application of metasurfaces for the etendue expansion appears promising. The complex modulation and polarization-multiplexing capabilities of metasurfaces have the potential to further enhance the quality of holographic displays with etendue expansion.
In addition, throughout the paper, we assume that all the light from the SLM is observed. However, the eyebox is sampled by the ocular pupil in practical viewing scenarios, and this leads to changes in the perceived image and speckle pattern of the reconstructed image (Chakravarthula et al., 2022; Chakravarthula et al., 2021). Although we anticipate that our method may remain effective with pupil sampling, optimizing the metasurface incorporating the pupil sampling effect could be an interesting future work.
## 7. Conclusion
Prompted by state-of-the-art CGH algorithms, holographic displays have made significant strides in achieving photorealistic images. However, the physical aspects of holographic displays, which define fundamental limitations, have often been overlooked. In this study, we demonstrate the polarization-multiplexed holographic display using a novel optical platform called metasurface, and expand the degree of freedom in CGH optimization. We believe that our work serves as a milestone in a new approach to leverage the unprecedented optical functionality of nano-optics to address unsolved challenges in holographic displays as well as various conventional optical systems.
## References
* Agustsson and Timofte (2017) Eirikur Agustsson and Radu Timofte. 2017. NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study. In _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops_.
* Arbabi et al. (2015a) Amir Arbabi, Yu Horie, Mahmood Bagheri, and Andrei Faraon. 2015a. Dielectric metasurfaces for complete control of phase and polarization with subwavelength spatial resolution and high transmission. _Nature Nanotechnology_ 10, 11 (2015), 937-943. arXiv:1411.4494
* Arbabi et al. (2015b) Amir Arbabi, Yu Horie, Alexander J. Ball, Mahmood Bagheri, and Andrei Faraon. 2015b. Subwavelength-thick lenses with high numerical apertures and large efficiency based on high-contrast transmitarrays. _Nature Communications_ 6, 1 (2015), 7069.
* Arbabi et al. (2018) Ehsan Arbabi, Jiaqi Li, Romanus J. Hutchins, Seyedeh Mahsa Kamali, Amir Arbabi, Yu Horie, Pol Van Dorpe, Viviana Gradinaru, Daniel A. Wagenaar, and Andrei Faraon. 2018. Two-Photon Microscopy with a Double-Wavelength Metasurface Objective Lens. _Nano Letters_ 18, 8 (2018), 4943-4948.
* Badloe et al. (2022) Trevon Badloe, Joohoon Kim, Inki Kim, Won-Sik Kim, Young-Ki Kim, and Junsuk Rho. 2022. Liquid crystal-powered Mie resonators for electrically tunable photorealistic color gradients and dark blacks. _Light: Science & Applications_ 11, 1 (2022), 118.
* Baek and Heide (2021) Seung-Hwan Baek and Felix Heide. 2021. Polarimetric spatio-temporal light transport probing. _ACM Transactions on Graphics_ 40, 6 (2021), 1-18.
* Baek et al. (2021a) Seung-Hwan Baek, Hayato Ikoma, Daniel S. Jeon, Yuqi Li, Wolfgang Heidrich, Gordon Wetzstein, and Min H. Kim. 2021a. Single-Shot Hyperspectral-Depth Imaging With Learned Diffractive Optics. In _Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)_. 2651-2660.
* Baek et al. (2021b) Seung-Hwan Baek, Ethan Tseng, Andrew Maimone, Nathan Matsuda, Grace Kuo, Qiang Fu, Wolfgang Heidrich, Douglas Lanman, and Felix Heide. 2021b. Neural Etendue Expander for Ultra-Wide-Angle High-Fidelity Holographic Display. _arXiv_ (2021). arXiv:2109.08123
* Bianco et al. (2018) Vittorio Bianco, Pasquale Memmolo, Marco Leo, Silvio Montresor, Cosimo Distante, Melania Paturzo, Pascal Picart, Bahram Javidi, and Pietro Ferraro. 2018. Strategies for reducing speckle noise in digital holography. _Light: Science & Applications_ 7, 1 (2018), 48.
* Blinder et al. (2021) David Blinder, Maksymilian Chlipala, Tomasz Kozacki, and Peter Schelkens. 2021. Photorealistic computer generated holography with global illumination and path tracing. _Optics Letters_ 46, 9 (2021), 2188.
* Chae et al. (2023) Minseok Chae, Kiseung Bang, Dongheon Yoo, and Yoonchan Jeong. 2023. Etendue Expansion in Holographic Near Eye Displays through Sparse Eye-Box Generation Using Lens Array Eyepiece. _ACM Transactions on Graphics_ 42, 4 (2023).
* Chakravarthula et al. (2019) Praneeth Chakravarthula, Yifan Peng, Joel Kollin, Henry Fuchs, and Felix Heide. 2019. Wirtinger Holography for Near-Eye Displays. _ACM Trans. Graph._ 38, 6, Article 213 (2019), 13 pages.
* Chakravarthula et al. (2022) Praneeth Chakravarthula, Ethan Tseng, Henry Fuchs, and Felix Heide. 2022. Hogel-Free Holography. _ACM Trans. Graph._ 41, 5, Article 17 (2022), 16 pages.
* Chakravarthula et al. (2020) Praneeth Chakravarthula, Ethan Tseng, Tarun Srivastava, Henry Fuchs, and Felix Heide. 2020. Learned Hardware-in-the-Loop Phase Retrieval for Holographic Near-Eye Displays. _ACM Trans. Graph._ 39, 6, Article 18 (2020), 18 pages.
* Chakravarthula et al. (2021) Praneeth Chakravarthula, Zhan Zhang, Okan Tarhan Tursun, Piotr Didyk, Qi Sun, and Henry Fuchs. 2021. Gaze-Contingent Retinal Speckle Suppression for Perceptually-Matched Foveated Holographic Displays. _IEEE Transactions on Visualization and Computer Graphics_ 27, 11 (2021), 4194-4203. https://doi.org/10.1109/TVCG.2021.3106433
* Chang et al. (2020) Chenliang Chang, Kiseung Bang, Gordon Wetzstein, Byoungho Lee, and Liang Gao. 2020. Toward the next-generation VR/AR optics: a review of holographic near-eye displays from a human-centric perspective. _Optica_ 7, 11 (2020), 1563-1578.
* Chang and Wetzstein (2019) Julie Chang and Gordon Wetzstein. 2019. Deep Optics for Monocular Depth Estimation and 3D Object Detection. In _Proc. IEEE ICCV_.
* Chen et al. (2016) Hou-Tong Chen, Antoinette J. Taylor, and Nanfang Yu. 2016. A review of metasurfaces: physics and applications. _Reports on Progress in Physics_ 79, 7 (2016), 076401. arXiv:1605.07672
* Choi et al. (2022) Suyeon Choi, Manu Gopakumar, Yifan Peng, Jonghyun Kim, Matthew O'Toole, and Gordon Wetzstein. 2022. Time-Multiplexed Neural Holography: A Flexible Framework for Holographic Near-Eye Displays with Fast Heavily-Quantized Spatial Light Modulators. In _ACM SIGGRAPH 2022 Conference Proceedings_ (Vancouver, BC, Canada) _(SIGGRAPH '22)_. Association for Computing Machinery, New York, NY, USA, Article 32, 9 pages.
* Choi et al. (2021) Suyeon Choi, Manu Gopakumar, Yifan Peng, Jonghyun Kim, and Gordon Wetzstein. 2021. Neural 3D Holography: Learning Accurate Wave Propagation Models for 3D Holographic Virtual and Augmented Reality Displays. _ACM Trans. Graph._ 40, 6, Article 20 (2021), 12 pages.
* Chung and Miller (2020) Haejun Chung and Owen D. Miller. 2020. High-NA achromatic metalenses by inverse design. _Optics Express_ 28, 5 (2020), 6945. arXiv:1905.09213
* Decker et al. (2015) Manuel Decker, Isabelle Staude, Matthias Falkner, Jason Dominguez, Dragomir N. Neshev, Igal Brener, Thomas Pertsch, and Yuri S. Kivshar. 2015. High-Efficiency Dielectric Huygens' Surfaces. _Advanced Optical Materials_ 3, 6 (2015), 813-820. arXiv:1405.5038
* Deng and Chu (2017) Yuanbo Deng and Daping Chu. 2017. Coherence properties of different light sources and their effect on the image sharpness and speckle of holographic displays. _Scientific Reports_ 7, 1 (2017), 1-12.
* Dun et al. (2020) Xiong Dun, Hayato Ikoma, Gordon Wetzstein, Zhanshan Wang, Xinbin Cheng, and Yifan Peng. 2020. Learned rotationally symmetric diffractive achromat for full-resolution computational imaging. _Optica_ 7, 8 (2020), 913-922. https://doi.org/10.1364/OPTICA.394413
* Fienup (1982) James F Fienup. 1982. Phase retrieval algorithms: a comparison. _Applied Optics_ 21, 15 (1982), 2758-2760.
* Gerchberg (1972) Ralph W Gerchberg. 1972. A practical algorithm for the determination of phase from image and diffraction plane pictures. _Optik_ 35 (1972), 237-246.
* Goodman (2005) Joseph W Goodman. 2005. _Introduction to Fourier optics_. Roberts and Company Publishers.
* Goodman (2007) Joseph W Goodman. 2007. _Speckle phenomena in optics: theory and applications_. Roberts and Company Publishers.
* Guo et al. (2019) Jiaying Guo, Teng Wang, Baogang Quan, Huan Zhao, Changzhi Gu, Junjie Li, Xinke Wang, Guohai Situ, and Yan Zhang. 2019. Polarization multiplexing for double images display. _Opto-Electronic Advances_ 2, 7 (2019), 180029.
* Haim et al. (2018) Harel Haim, Shay Elmalem, Raja Giryes, Alex M. Bronstein, and Emanuel Marom. 2018. Depth Estimation From a Single Image Using Deep Learned Phase Coded Mask. _IEEE Transactions on Computational Imaging_ 4, 3 (2018), 298-310.
* Hazineh et al. (2022) Dean S. Hazineh, Soon Wei Daniel Lim, Zhujun Shi, Federico Capasso, Todd Zickler, and Qi Guo. 2022. D-Flat: A Differentiable Flat-Optics Framework for End-to-End Metasurface Visual Sensor Design. _arXiv_ (2022).
* Hsiao et al. (2017) Hui-Hsin Hsiao, Cheng Hung Chu, and Din Ping Tsai. 2017. Fundamentals and Applications of Metasurfaces. _Small Methods_ 1, 4 (2017), 1600064.
* Huang et al. (2013) Lingling Huang, Xianzhong Chen, Holger Mühlenbernd, Hao Zhang, Shumei Chen, Benfeng Bai, Qiaofeng Tan, Guofan Jin, Kok-Wai Cheah, Cheng-Wei Qiu, Jensen Li, Thomas Zentgraf, and Shuang Zhang. 2013. Three-dimensional optical holography using a plasmonic metasurface. _Nature Communications_ 4, 1 (2013), 2808.
* Hwang et al. (2022) Inseung Hwang, Daniel S. Jeon, Adolfo Muñoz, Diego Gutierrez, Xin Tong, and Min H. Kim. 2022. Sparse ellipsometry: portable acquisition of polarimetric SVBRDF and shape with unstructured flash photography. _ACM Transactions on Graphics_ 41, 4 (2022), 1-14. arXiv:2207.04236
* Iliadis et al. (2020) Michael Iliadis, Leonidas Spinoulas, and Aggelos K. Katsaggelos. 2020. DeepBinaryMask: Learning a binary mask for wide compressive sensing. _Digital Signal Processing_ 96 (2020), 102591.
* Jang et al. (2021) Changwon Jang, Kiseung Bang, Minseok Chae, Byoungho Lee, and Douglas Lanman. 2021. Waveguide Holography: Towards True 3D Holographic Glasses.
* Jang et al. (2021) Junhyeok Jang, Gun-Yeal Lee, Jangwoon Sung, and Byoungho Lee. 2021. Independent Multichannel Wavefront Modulation for Angle Multiplexed Meta-Holograms. _Advanced Optical Materials_ 9, 17 (2021), 2100678.
* Kamali et al. (2017) Seyedeh Mahsa Kamali, Ehsan Arbabi, Amir Arbabi, Yu Horie, MohammadSadegh Faraji-Dana, and Andrei Faraon. 2017. Angle-Multiplexed Metasurfaces: Encoding Independent Wavefronts in a Single Metasurface under Different Illumination Angles. _Physical Review X_ 7, 4 (2017), 041056.
* Khorasaninejad et al. (2016) Mohammadreza Khorasaninejad, Wei Ting Chen, Robert C. Devlin, Jaewon Oh, Alexander Y. Zhu, and Federico Capasso. 2016. Metalenses at visible wavelengths: Diffraction-limited focusing and subwavelength resolution imaging. _Science_ 352, 6290 (2016), 1190-1194.
* Kim et al. (2020) Changhyun Kim, Sun-Je Kim, and Byoungho Lee. 2020. Doublet metalens design for high numerical aperture and simultaneous correction of chromatic and monochromatic aberrations. _Optics Express_ 28, 12 (2020), 18059.
* Kim and Lee (2023) Changhyun Kim and Byoungho Lee. 2023. TORCWA: GPU-accelerated Fourier modal method and gradient-based optimization for metasurface design. _Computer Physics Communications_ 282 (2023), 108552.
* Kim et al. (2013) Changil Kim, Henning Zimmer, Yael Pritch, Alexander Sorkine-Hornung, and Markus H. Gross. 2013. Scene reconstruction from high spatio-angular resolution light fields. _ACM Trans. Graph._ 32, 4 (2013), Article 73.
* Kim et al. (2021b) Dongyeon Kim, Seung-Woo Nam, Kiseung Bang, Byounghyo Lee, Seungjae Lee, Youngmo Jeong, Jong-Mo Seo, and Byoungho Lee. 2021b. Vision-correcting holographic display: evaluation of aberration correcting hologram. _Biomedical Optics Express_ 12, 8 (2021), 5179-5195.
* Kim et al. (2022b) Dongyeon Kim, Seung-Woo Nam, Byounghyo Lee, Jong-Mo Seo, and Byoungho Lee. 2022b. Accommodative Holography: Improving Accommodation Response for Perceptually Realistic Holographic Displays. _ACM Trans. Graph._ 41, 4, Article 111 (2022), 15 pages.
* Kim et al. (2022a) Jonghyun Kim, Manu Gopakumar, Suyeon Choi, Yifan Peng, Ward Lopes, and Gordon Wetzstein. 2022a. Holographic Glasses for Virtual Reality. In _ACM SIGGRAPH 2022 Conference Proceedings_ (Vancouver, BC, Canada) _(SIGGRAPH '22)_. Association for Computing Machinery, New York, NY, USA, Article 33, 9 pages.
* Kim et al. (2023) Joohoon Kim, Junhwa Seong, Wonjoong Kim, Gun-Yeal Lee, Seokwoo Kim, Hongyoon Kim, et al. 2023. Scalable manufacturing of high-index atomic layer-polymer hybrid metasurfaces for metaphotonics in the visible. _Nature Materials_ 22 (2023), 474-481.
* Mansouree et al. (2020) Mahdad Mansouree, Hyounghan Kwon, Ehsan Arbabi, Andrew McClung, Andrei Faraon, and Amir Arbabi. 2020. Multifunctional 2.5D metastructures enabled by adjoint optimization. _Optica_ 7, 1 (2020), 77.
* Matsushima and Shimobaba (2009) Kyoji Matsushima and Tomoyoshi Shimobaba. 2009. Band-limited angular spectrum method for numerical simulation of free-space propagation in far and near fields. _Optics express_ 17, 22 (2009), 19662-19673.
* Metzler et al. (2020) Christopher A. Metzler, Hayato Ikoma, Yifan Peng, and Gordon Wetzstein. 2020. Deep Optics for Single-Shot High-Dynamic-Range Imaging. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_.
* Mueller et al. (2017) J. P. Balthasar Mueller, Noah A. Rubin, Robert C. Devlin, Benedikt Groever, and Federico Capasso. 2017. Metasurface Polarization Optics: Independent Phase Control of Arbitrary Orthogonal States of Polarization. _Physical Review Letters_ 118, 11 (2017), 113901.
* Nam et al. (2022) Seung-Woo Nam, Dongyeon Kim, and Byoungho Lee. 2022. Accelerating a spatially varying aberration correction of holographic displays with low-rank approximation. _Opt. Lett._ 47, 13 (2022), 3175-3178.
* Nam et al. (2020) Seung-Woo Nam, Seokil Moon, Byounghyo Lee, Dongyeon Kim, Seungjae Lee, Chang-Kun Lee, and Byoungho Lee. 2020. Aberration-corrected full-color holographic augmented reality near-eye display using a Pancharatnam-Berry phase lens. _Optics Express_ 28, 21 (2020), 30836.
* Overvig et al. (2019) Adam C. Overvig, Sajan Shrestha, Stephane C. Malek, Ming Lu, Aaron Stein, Changxi Zheng, and Nanfang Yu. 2019. Dielectric metasurfaces for complete and independent control of the optical amplitude and phase. _Light: Science & Applications_ 8, 1 (2019), 92.
* Padmanaban et al. (2019) Nitish Padmanaban, Yifan Peng, and Gordon Wetzstein. 2019. Holographic Near-Eye Displays Based on Overlap-Add Stereograms. _ACM Trans. Graph._ 38, 6, Article 214 (nov 2019), 13 pages.
* Park (2017) Jae-Hyeung Park. 2017. Recent progress in computer-generated holography for three-dimensional scenes. _Journal of Information Display_ 18, 1 (2017), 1-12.
* Park et al. (2019) Joon-Suh Park, Shuyan Zhang, Alan She, Wei Ting Chen, Peng Liu, Kerolos M. A. Yousef, Ji-Xin Cheng, and Federico Capasso. 2019. All-Glass, Large Metalens at Visible Wavelength Using Deep-Ultraviolet Projection Lithography. _Nano Letters_ 19, 12 (2019), 8673-8682.
* Peng et al. (2021) Yifan Peng, Suyeon Choi, Jonghyun Kim, and Gordon Wetzstein. 2021. Speckle-free holography with partially coherent light sources and camera-in-the-loop calibration. _Science Advances_ 7, 46 (2021), eabg5040.
* Peng et al. (2020) Yifan Peng, Suyeon Choi, Nitish Padmanaban, and Gordon Wetzstein. 2020. Neural Holography with Camera-in-the-Loop Training. _ACM Trans. Graph._ 39, 6, Article 135 (nov 2020), 14 pages.
* Peng et al. (2019) Yifan Peng, Qilin Sun, Xiong Dun, Gordon Wetzstein, Wolfgang Heidrich, and Felix Heide. 2019. Learned Large Field-of-View Imaging with Thin-Plate Optics. _ACM Trans. Graph._ 38, 6, Article 219 (nov 2019), 14 pages.
* Pestourie et al. (2018) Raphael Pestourie, Carlos Perez-Arancibia, Zin Lin, Wonseok Shin, Federico Capasso, and Steven G. Johnson. 2018. Inverse design of large-area metasurfaces. _Optics Express_ 26, 26 (2018), 33732. arXiv:1808.04215
* Rong et al. (2010) Lu Rong, Wen Xiao, Feng Jun, Shu Luo, and Hui Li. 2010. Speckle noise reduction in digital holography by use of multiple polarization holograms. _Chin. Opt. Lett._ 8, 7 (Jan 2010), 653-655.
* Rous (2008) Bernard Rous. 2008. The Enabling of Digital Libraries. _Digital Libraries_ 12, 3, Article 5 (July 2008). To appear.
* Rubin et al. (2019) Noah A. Rubin, Gabriele D'Aversa, Paul Chevalier, Zhujun Shi, Wei Ting Chen, and Federico Capasso. 2019. Matrix Fourier optics enables a compact full-Stokes polarization camera. _Science_ 365, 6448 (2019), eaax1839.
* Rubin et al. (2021) Noah A. Rubin, Aun Zaidi, Ahmed H. Dorrah, Zhujun Shi, and Federico Capasso. 2021. Jones matrix holography with metasurfaces. _Science Advances_ 7, 33 (2021), eabg7488.
* She et al. (2018) Alan She, Shuyan Zhang, Samuel Shian, David R Clarke, and Federico Capasso. 2018. Large area metasurfaces: design, characterization, and mass manufacturing. _Optics Express_ 26, 2 (2018), 1573.
* Shi et al. (2017) Liang Shi, Fu-Chung Huang, Ward Lopes, Wojciech Matusik, and David Luebke. 2017. Near-Eye Light Field Holographic Rendering with Spherical Waves for Wide Field of View Interactive 3D Computer Graphics. _ACM Trans. Graph._ 36, 6, Article 236 (nov 2017), 17 pages.
* Shi et al. (2021) Liang Shi, Beichen Li, Changil Kim, Petr Kellnhofer, and Wojciech Matusik. 2021. Towards real-time photorealistic 3D holography with deep neural networks. _Nature_ 591, 7849 (2021), 234-239.
* Shi et al. (2022) Zheng Shi, Yuval Bahat, Seung-Hwan Baek, Qiang Fu, Hadi Amata, Xiao Li, Praneeth Chakravarthula, Wolfgang Heidrich, and Felix Heide. 2022. Seeing through Obstructions with Diffractive Cloaking. _ACM Trans. Graph._ 41, 4, Article 37 (jul 2022), 15 pages.
* Shi et al. (2018) Zhujun Shi, Mohammadreza Khorasaninejad, Yao-Wei Huang, Charles Roques-Carmes, Alexander Y. Zhu, Wei Ting Chen, Vyshakh Sanjeev, Zhao-Wei Ding, Michele Tamagnone, Kundan Chaudhary, Robert C. Devlin, Cheng-Wei Qiu, and Federico Capasso. 2018. Single-Layer Metasurface with Controllable Multiwavelength Functions. _Nano Letters_ 18, 4 (2018), 2420-2427.
* Sitzmann et al. (2018) Vincent Sitzmann, Steven Diamond, Yifan Peng, Xiong Dun, Stephen Boyd, Wolfgang Heidrich, Felix Heide, and Gordon Wetzstein. 2018. End-to-End Optimization of Optics and Image Processing for Achromatic Extended Depth of Field and Super-Resolution Imaging. _ACM Trans. Graph._ 37, 4, Article 114 (jul 2018), 13 pages. [https://doi.org/10.1145/3197517.3201333](https://doi.org/10.1145/3197517.3201333)
* Sroor et al. (2020) Hend Sroor, Yao-Wei Huang, Bereneice Sephton, Darryl Naidoo, Adam Valles, Vincent Ginis, Cheng-Wei Qiu, Antonio Ambrosio, Federico Capasso, and Andrew Forbes. 2020. High-purity orbital angular momentum states from a visible metasurface laser. _Nature Photonics_ 14, 8 (2020), 498-503.
* Steinberg and Yan (2021) Shlomi Steinberg and Ling-Qi Yan. 2021. A generic framework for physical light transport. _ACM Transactions on Graphics_ 40, 4 (2021), 1-20.
* Sun et al. (2020a) Qilin Sun, Ethan Tseng, Qiang Fu, Wolfgang Heidrich, and Felix Heide. 2020a. Learning Rank-1 Diffractive Optics for Single-Shot High Dynamic Range Imaging. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_.
* Sun et al. (2020) Qilin Sun, Jian Zhang, Xiong Dun, Bernard Ghanem, Yifan Peng, and Wolfgang Heidrich. 2020b. End-to-End Learned, Optically Coded Super-Resolution SPAD Camera. _ACM Trans. Graph._ 39, 2, Article 9 (mar 2020), 14 pages.
* Tseng et al. (2021a) Ethan Tseng, Shane Colburn, James Whitehead, Luocheng Huang, Seung-Hwan Baek, Arka Majumdar, and Felix Heide. 2021a. Neural nano-optics for high-quality thin lens imaging. _Nature Communications_ 12, 1 (2021), 6493.
* Tseng et al. (2021b) Ethan Tseng, Ali Mosleh, Fahim Mannan, Karl St-Arnaud, Avinash Sharma, Yifan Peng, Alexander Braun, Derek Nowrouzezahrai, Jean-Francois Lalonde, and Felix Heide. 2021b. Differentiable Compound Optics and Processing Pipeline Optimization for End-to-End Camera Design. _ACM Trans. Graph._ 40, 2, Article 18 (jun 2021), 19 pages. [https://doi.org/10.1145/3446791](https://doi.org/10.1145/3446791)
* Wu et al. (2019) Yicheng Wu, Vivek Boominathan, Huaijin Chen, Aswin Sankaranarayanan, and Ashok Veeraraghavan. 2019. PhaseCam3D -- Learning Phase Masks for Passive Single View Depth Estimation. In _2019 IEEE International Conference on Computational Photography (ICCP)_. 1-12. [https://doi.org/10.1109/ICCPHOT.2019.874730](https://doi.org/10.1109/ICCPHOT.2019.874730)
* Yang et al. (2022) Daeho Yang, Wontaek Seo, Hyeonseung Yu, Sun Il Kim, Bongsu Shin, Chang-Kun Lee, Seokil Moon, Jungkwuen An, Jong-Young Hong, Geeyoung Sung, and Hong-Seok Lee. 2022. Diffraction-engineered holography: Beyond the depth representation limit of holographic displays. _Nature Communications_ 13, 1 (2022), 6012.
* Yavas et al. (2017) Ozlem Yavas, Mikael Svedendahl, Paulina Dobosz, Vanesa Sanz, and Romain Quidant. 2017. On-a-chip Biosensing Based on All-Dielectric Nanoresonators. _Nano Letters_ 17, 7 (2017), 4421-4426.
* Yeom et al. (2015) Han-Ju Yeom, Hee-Jae Kim, Seong-Bok Kim, Huijun Zhang, BoNi Li, Yeong-Min Ji, Sang-Hoo Kim, and Jae-Hyeung Park. 2015. 3D holographic head mounted display using holographic optical elements with astigmatism aberration compensation. _Optics Express_ 23, 25 (2015), 32025-32034.
* Yesilkoy et al. (2019) Filiz Yesilkoy, Eduardo R. Arvelo, Yasaman Jahani, Mingkai Liu, Andreas Tittl, Volkan Cevher, Yuri Kivshar, and Hatice Altug. 2019. Ultrasensitive hyperspectral imaging and biodetection enabled by dielectric metasurfaces. _Nature Photonics_ 13, 6 (2019), 390-396.
* Yoon et al. (2020) Gwanho Yoon, Kwan Kim, Daihong Huh, Heon Lee, and Junsuk Rho. 2020. Single-step manufacturing of hierarchical dielectric metalens in the visible. _Nature Communications_ 11, 1 (2020), 2268.
* Yu et al. (2011) Nanfang Yu, Patrice Genevet, Mikhail A. Kats, Francesco Aieta, Jean-Philippe Tetienne, Federico Capasso, and Zeno Gaburro. 2011. Light Propagation with Phase Discontinuities: Generalized Laws of Reflection and Refraction. _Science_ 334, 6054 (2011), 333-337.
## Appendix S1 Additional details on hardware
_Principles of independent phase modulation for orthogonal linear polarization states._ Metasurfaces are two-dimensional arrays of nano-scatterers with a subwavelength period, as shown in Figure S1. Pixel-wise variation of geometric parameters, for instance the length and width of rectangular nanorods, changes the effective refractive indices along the x- and y-axes quasi-independently, so that a separate phase shift is imparted on each orthogonal linear polarization state. This optical behavior can be represented by the Jones matrix of a linearly birefringent waveplate (Arbabi et al., 2015; Mueller et al., 2017).
\[\begin{bmatrix}e^{i\phi_{x}}&0\\ 0&e^{i\phi_{y}}\end{bmatrix}\] (S1)
#### Phase modulation range of orthogonal linear polarization states
In the ideal case, the phase shifts of the transmission coefficients for the two orthogonal linear polarization states each cover the full \(2\pi\) range, which permits completely independent modulation of the orthogonal polarization pair. As explained in the main text, however, fabrication constraints and the choice of dielectric material can prevent fully independent phase modulation. Figure S2 shows the phase coverage actually achievable under the practical constraints here, namely the low refractive index of silicon nitride and the limited height of the nanorod. Each point in the figure represents the phase values of \(t_{xx}\) and \(t_{yy}\), respectively. If the phase could be adjusted completely independently for the two orthogonal polarizations, the points would fill the entire phase chart. As the wavelength of the incident light increases, the range of values attainable with the propagation-phase scheme shrinks for a fixed nanorod height; thus the phase modulation range at a 638 nm wavelength is much narrower than at 450 nm. Using materials with a higher refractive index, such as titanium dioxide or amorphous silicon, is a simple way to tackle this problem. A more sophisticated fabrication recipe enabling a higher aspect ratio can also increase the phase-shift range.
#### Proxy model fitting from the RCWA data
The metasurface proxy model is designed from the pre-simulated transmittance of rectangular nanostructures calculated by the RCWA method. First, we specify several hyper-parameters that are fixed by the experimental conditions. The pixel pitch of the metasurface is set to approximately 283 nm, determined by two considerations: the demagnification factor of the relay optics from the SLM to the metasurface, and the suppression of unwanted resonant phenomena inside the dielectric material that would hinder smooth fitting. The three wavelengths of the laser source are 450, 520, and 638 nm. The refractive index (n) and extinction coefficient (k) of the silicon nitride layer, deposited with a thickness of 800 nm, are measured with a spectroscopic ellipsometer (M2000D, Woollam); Figure S3 shows the measured n and k values. Second, with the hyper-parameters fixed, we use the RCWA method to obtain the transmittance libraries used for the proxy-model fitting. A total of six data sets are generated, covering the combinations of the two co-polarized phase shifts and the three wavelengths, each as a function of the geometric parameters of the nanorod, which vary from 80 to 220 nm in 2 nm steps. For example, the phase shift of the co-polarized transmission coefficient is simulated by RCWA for every width and length value when x-polarized light is normally incident upon the nanostructure. Third, the discrete values of each library are fitted as surface functions using linear quadratic polynomials, as explained in the main text. The phase shifts of the transmitted light are also normalized by \(2\pi\). We use the curve-fitting toolbox of the commercial software MATLAB. With a linear least-squares method, the coefficients of the polynomials are obtained with 95% confidence bounds. Figure S4 plots the six proxy models against the simulated values. Although some simulated points deviate from the fitted functions, especially at the blue wavelength, owing to resonant phenomena inside the dielectric material, these outliers are very sparse and can be neglected. Table 1 lists the equations and the polynomial coefficients for all twelve proxy models.
\begin{table}
\begin{tabular}{c c c c c c c} \hline Physical entity & \(c_{00}\) & \(c_{10}\) & \(c_{01}\) & \(c_{20}\) & \(c_{11}\) & \(c_{02}\) \\ \hline \(\phi_{xx}^{r}\) & -0.0946 & -0.1171 & 0.06675 & 0.3065 & 1.204 & -0.2145 \\ \hline \(\phi_{xx}^{g}\) & -0.3072 & 0.3484 & 0.3064 & 0.05226 & 1.543 & -0.4258 \\ \hline \(\phi_{xx}^{b}\) & -0.7156 & 1.366 & 0.8043 & -0.5976 & 1.743 & -0.8002 \\ \hline \(\phi_{yy}^{r}\) & -0.09458 & 0.06663 & -0.1175 & -0.2144 & 1.204 & 0.3069 \\ \hline \(\phi_{yy}^{g}\) & -0.3072 & 0.3064 & 0.3486 & -0.4258 & 1.543 & 0.05215 \\ \hline \(\phi_{yy}^{b}\) & -0.7157 & 0.8048 & 1.365 & -0.8004 & 1.742 & -0.5967 \\ \hline \end{tabular}
\end{table}
Table 1: Fitted coefficients of the linear quadratic polynomials. The coefficients of \(c_{12}\), \(c_{21}\), and \(c_{22}\) are set to zeros. Superscripts ’r’, ’g’, and ’b’ correspond to red, green, and blue. \(t_{xx}\) defines the co-polarized transmission coefficient when the x-polarized light is normally incident upon the nanostructure.
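To make the fitting procedure concrete, the following is a minimal sketch of the linear-least-squares surface fit described above. The grid spacing follows the text (80 to 220 nm in 2 nm steps), but the variable names and the random stand-in for the RCWA phase data are our own illustrative assumptions, not the authors' code.

```python
import numpy as np

# Sketch of the proxy-model fit (illustration only). `rcwa_phase` is a random
# stand-in for the RCWA-simulated, 2*pi-normalized phase library.
lengths = widths = np.arange(80, 221, 2)                  # nm, as in the text
L, W = np.meshgrid(lengths, widths, indexing="ij")
rcwa_phase = np.random.rand(*L.shape)                     # stand-in data

# Design matrix of the linear quadratic polynomial
# phi(L, W) = c00 + c10*L + c01*W + c20*L^2 + c11*L*W + c02*W^2.
l, w = L.ravel().astype(float), W.ravel().astype(float)
A = np.column_stack([np.ones_like(l), l, w, l**2, l * w, w**2])

# Linear least squares, standing in for MATLAB's curve-fitting toolbox.
coeffs, *_ = np.linalg.lstsq(A, rcwa_phase.ravel(), rcond=None)
c00, c10, c01, c20, c11, c02 = coeffs
```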
### Display prototype
The holographic display prototype used for experimental validation is illustrated in Figure S5. Our prototype follows the basic structure of a conventional holographic display, with a half-wave plate (HWP) and a metasurface (MS) positioned after the \(4f\) system. Additionally, to facilitate metasurface alignment, an extra \(4f\) system is placed after the metasurface. The light from a full-color fiber-coupled laser diode (FISBA READYBeam) is collimated using a collimating lens and directed to the 8-bit SLM (HOLOEYE LETO-3) via a beam splitter (BS). Prior to the beam splitter, a HWP and a linear polarizer (LP) are included to ensure proper polarization alignment for the SLM. The light transmitted through the SLM passes through the \(4f\) system equipped with a low-pass filtering system to eliminate high-order diffraction terms. Following the first \(4f\) system, an LP is positioned to filter out undiffracted terms, and an HWP on a motorized rotation mount is incorporated to control the direction of linear polarization of the light from the SLM. The metasurface is mounted on 3-axis linear stages, comprising two motorized stages in the X-axis (Thorlabs LTS300/M) and Y-axis (Thorlabs Z812B), as well as a Z-axis manual stage. These stages enable precise alignment of the SLM and the metasurface, and the motorized stages enable switching between capturing images with and without the metasurface. Finally, the metasurface plane is relayed through a second \(4f\) system, and the resulting image is captured using a CCD camera (FLIR GS3-U3-51S5M-C) mounted on a motorized stage (Newport FCL100). As real images of the holograms are
captured instead of virtual images with an eyepiece, the propagation distance of the hologram is calculated assuming a 50 mm eyepiece.
### Metasurface alignment
In Section 5.1, we discuss the utilization of the second \(4f\) system in our display prototype for aligning the metasurface and the SLM. The \(4f\) system allows us to directly capture the SLM plane and observe the positioning of both the SLM and the metasurface. Figure S6 shows a captured image of the relayed SLM plane, in which a misalignment of \(30\,\mu\)m between the metasurface and the SLM is present in both the vertical and horizontal directions. The boundary lines of the SLM and the metasurface are clearly visible, enabling manual alignment. It is worth noting that this misalignment corresponds to a shift of 10 pixels in the simulation, representing the maximum misalignment error of the noise function \(f_{\text{noise}}\) employed during metasurface optimization. Since this level of misalignment is detectable by the camera, the misalignment error in our display prototype is evidently much smaller than what is simulated using the noise function \(f_{\text{noise}}\). Therefore, we did not conduct additional calibration steps for more precise alignment and instead relied on camera-in-the-loop training for fine-tuning.
## Appendix S2 Details on camera-in-the-loop training
### Propagation model
We use a camera-in-the-loop (CITL)-calibrated wave propagation model during CGH optimization for the experiments (Peng et al., 2020). Our goal is to clearly show the effect of the polarization-multiplexing metasurface, which is optimized in an idealized simulation; quality degradation arising from discrepancies between the simulation and the real-world system could otherwise weaken the apparent effect of the metasurface in the experiment.
We combine the CNNpropCNN model proposed by Choi et al. (2022) with the physically interpretable model of Jang et al. (2022) in our approach. Since our model aims to accurately simulate the polarization-multiplexing phenomenon, we exclude black-box components such as a CNN before the metasurface. Instead, we model the nonlinear phase response of the SLM using a multi-layer perceptron (Peng et al., 2020) and incorporate SLM pixel-crosstalk noise by convolving a 3\(\times\)3 kernel with the SLM phase pattern (Jang et al., 2022). After the SLM phase mapping through the MLP and the crosstalk kernel, we apply the complex field of the light source \(a_{\text{src}},\phi_{\text{src}}\) and the metasurface, while taking into account the rotation angle of the half-wave plate. To account for potential misalignment between the fast axis of the HWP and the metasurface, we parameterize the rotation-angle error of the HWP as \(\theta_{\text{tilt}}\). The Jones matrix of the HWP therefore becomes
\[\text{J}_{\text{hwp}}\left(\theta;\theta_{\text{tilt}}\right)=\begin{bmatrix} \cos\left(2\left(\theta+\theta_{\text{tilt}}\right)\right)&\sin\left(2\left( \theta+\theta_{\text{tilt}}\right)\right)\\ \sin\left(2\left(\theta+\theta_{\text{tilt}}\right)\right)&-\cos\left(2 \left(\theta+\theta_{\text{tilt}}\right)\right)\end{bmatrix},\] (S2)
where \(\theta\) is the angle of the HWP for polarization rotation, with \(0^{\circ}\), \(45^{\circ}\), and \(22.5^{\circ}\) corresponding to horizontal, vertical, and diagonal linear polarization, respectively. For simplicity, we omit \(\theta\) from Equation 9 in the manuscript.
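As a quick consistency check of Eq. (S2), the short sketch below (our own illustration; the function and variable names are assumptions) builds \(\text{J}_{\text{hwp}}\) and confirms that \(\theta=0^{\circ}\), \(22.5^{\circ}\), and \(45^{\circ}\) rotate a horizontally polarized input to horizontal, diagonal, and vertical linear polarization when \(\theta_{\text{tilt}}=0\).

```python
import numpy as np

def j_hwp(theta, theta_tilt=0.0):
    """Jones matrix of the rotated half-wave plate, Eq. (S2)."""
    a = 2.0 * (theta + theta_tilt)
    return np.array([[np.cos(a), np.sin(a)],
                     [np.sin(a), -np.cos(a)]])

horizontal = np.array([1.0, 0.0])
for deg in (0.0, 22.5, 45.0):
    out = j_hwp(np.deg2rad(deg)) @ horizontal
    print(f"theta = {deg:4.1f} deg -> output Jones vector {out}")
# 0 deg -> [1, 0] (horizontal); 22.5 deg -> diagonal; 45 deg -> [0, 1] (vertical)
```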
The light from the metasurface is propagated using a modeled angular spectrum method (ASM) with a parameterized Fourier plane, which accounts for the iris placed inside the \(4f\) system and for optical aberrations. The phase aberration of this plane is modeled using Zernike polynomials up to the 9th order. After the parameterized ASM, the reconstructed amplitude passes through a CNN for image adjustment. Overall, our propagation model can be expressed as follows:
\[f_{\text{model}}\left(\phi\right)=\text{CNN}_{\text{target}}\left(f_{\text{ASM}}\left(\text{J}_{\text{proxy}}\left(l,w\right)\cdot\text{J}_{\text{hwp}}\left(\theta;\theta_{\text{tilt}}\right)\cdot a_{\text{src}}e^{i\phi_{\text{src}}}e^{i\left(k*\text{MLP}\left(\phi\right)\right)};a_{\mathcal{F}},\phi_{\mathcal{F}}\right)\right).\] (S3)
Since the Jones matrices of the HWP and the metasurface have polarization-dependent elements, we capture the dataset with polarization diversity by changing the rotation angle \(\theta\) of the HWP. We therefore capture the dataset under 4 different settings: without the metasurface, with the metasurface and a \(0^{\circ}\) HWP, with the metasurface and a \(22.5^{\circ}\) HWP, and with the metasurface and a \(45^{\circ}\) HWP. We train our model on a dataset captured with 2,000 SLM phase patterns generated by the stochastic gradient descent method and the alternating direction method of multipliers (Choi et al., 2021). We use a 5-layer U-Net for CNN\({}_{\text{target}}\) and optimize for 10 epochs with a learning rate of \(5e^{-4}\).
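For readers unfamiliar with the propagation backbone of Eq. (S3), the following is a minimal angular-spectrum propagation sketch with an optional parameterized Fourier plane standing in for the trainable \(a_{\mathcal{F}}\), \(\phi_{\mathcal{F}}\). This is our own simplified illustration (band-limiting, the depthwise planes, and the sampling details of the actual model are omitted), and all names are assumptions.

```python
import numpy as np

def asm_propagate(field, wavelength, dx, z, a_fourier=None, phi_fourier=None):
    """Angular-spectrum propagation with an optional parameterized Fourier
    plane (a_fourier, phi_fourier stand in for the trainable a_F, phi_F)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    # Free-space transfer function; evanescent components are set to zero.
    H = np.where(arg > 0,
                 np.exp(2j * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    spectrum = np.fft.fft2(field) * H
    if a_fourier is not None:
        spectrum = spectrum * a_fourier
    if phi_fourier is not None:
        spectrum = spectrum * np.exp(1j * phi_fourier)
    return np.fft.ifft2(spectrum)

# Example: propagate a 100-um square aperture by 5 mm at the 520 nm wavelength.
x = (np.arange(512) - 256) * 1e-6
X, Y = np.meshgrid(x, x)
aperture = (np.abs(X) < 50e-6) & (np.abs(Y) < 50e-6)
u = asm_propagate(aperture.astype(complex), 520e-9, 1e-6, 5e-3)
```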
### Optimized model parameters
Figure S7 visualizes the trained physical parameters of our CITL-calibrated model, including the source intensity \(a_{\text{src}}\), source phase \(\phi_{\text{src}}\), amplitude \(a_{\mathcal{F}}\) and phase \(\phi_{\mathcal{F}}\) of the Fourier plane, the SLM phase mapping through the MLP, and the SLM pixel-crosstalk kernel. Although the phase of the Fourier plane \(\phi_{\mathcal{F}}\) is modeled in a depthwise manner, only the phase of the central plane is shown in the figure as a representative. Additionally, Figure S8 visualizes the trained polarization-dependent transmission coefficients of the metasurface. The model successfully captures misalignment due to shifts and distortions, as well as additional noise from dust and scratches, along with the fabricated phase patterns. The misalignment angles of the HWP are \(-2.84^{\circ}\), \(-2.00^{\circ}\), and \(-1.78^{\circ}\) for the red, green, and blue channels, respectively. We use the CITL-calibrated model for CGH optimization during the experimental validation.
## S3 Additional results
### Metasurface optimization result
Figure S9 visualizes the geometric parameters of the metasurface nanostructure. The left figure illustrates the schematic diagram of the metasurface nanostructures. During the metasurface optimization, the height \(H\) and pixel pitch \(P\) are fixed at 800 nm and 283 nm, respectively, while only the geometry maps of length \(L\) and width \(W\) are optimized.
The first column displays the geometry-maps of a random metasurface utilized in the simulations presented in Figure 5 and Figure 6. The geometry-maps of the random metasurface follow a uniform random distribution. The second column showcases a metasurface optimized without the noise function, which is utilized for the simulation in Figure 4. The last column illustrates the optimized metasurface with the noise function, which is actually fabricated for the experiment. The optimized metasurfaces exhibit coarser geometry-map patterns compared to the random metasurface. However, the metasurface without the noise function displays grainy, randomized patterns that make it more vulnerable to misalignment.
The power spectrum of the optimized metasurface is shown in Fig. S10. The power spectrum is derived from the Fourier transform of the complex amplitude of the metasurface and, for clearer visualization, is displayed on a normalized logarithmic scale. The power spectral distribution is predominantly concentrated on the DC component, similar to a diffuser with a narrow diffusing angle. This aligns with the interpretation of the metasurface in Section 4 of the manuscript, which concludes that the metasurface is optimized to have a tailored randomness.
### Additional simulation results with partially coherent light sources
Figure S11 showcases simulation results with multiple levels of coherence. Consistent with the simulation in the manuscript, the focal length of the collimating lens is fixed to 200 mm, while we adjust the bandwidth and the aperture width of the light source. We modeled the light source's wavelength spectrum as a Gaussian distribution, with wavelength diversity represented by the standard deviation \(\sigma\). During the simulation, we first optimized the SLM phase pattern for a 2D target image using a coherent light source, and then reconstructed this phase pattern with variations of the light source. The results show that the image becomes blurry as the aperture size and the bandwidth increase, illustrating the trade-offs of partially coherent light sources. We note that increased wavelength diversity introduces speckle noise in the image. This is because, although the speckle noise appears absent under the optimized condition in simulation, it re-emerges when the reconstruction condition differs from the optimized one. In practice, however, speckle noise is also inherent to a coherent light source, and increasing wavelength diversity reduces speckle noise at the expense of image contrast.
### Additional simulation and experimental results of depolarized holography
We provide additional simulation results in Figure S12 and experimentally captured results in Figure S13. Both show holograms optimized with focal-stack supervision. The first column presents the hologram reconstructed without the metasurface, which is equivalent to a conventional holographic display. The second column shows the case where the metasurface is inserted into the display but only the hologram of a single polarization state is captured. The third column is depolarized holography, where two holograms with orthogonal polarization states are superimposed as an intensity sum, achieving the best image quality among the three cases. An interesting observation is that even a single polarizer provides better contrast, which was not observed in the simulation; we attribute this finding to the CITL optimization of the focal-stack hologram. Although the peak signal-to-noise ratio (PSNR) is lower due to speckle noise, the distribution remains similar to that of the histogram presented in the manuscript. This indirectly implies that the degree of freedom offered by the polarization channel aids the optimization, and not solely through speckle reduction.
|
2306.00098 | Enhanced-sensitivity interferometry with phase-sensitive unbiased
multiports | Here we introduce interferometric devices by combining optical feedback
(cavities) with unbiased multiports, which unlike traditional beam dividers,
allow light to reflect back out of the port from which it originated. By
replacing the traditional, directionally-biased beam-splitter in a Michelson
interferometer with an unbiased multiport, the functional dependence of the
scattering amplitudes changes. As a result, the derivative of transmittance
with respect to an external phase perturbation can be made substantially large.
This significantly enhances the resolution of phase measurement, and allows the
phase response curves to be altered in real time by tuning an
externally-controllable phase shift. | Christopher R. Schwarze, David S. Simon, Alexander V. Sergienko | 2023-05-31T18:18:02Z | http://arxiv.org/abs/2306.00098v1 | # Enhanced-sensitivity interferometry with phase-sensitive unbiased multiports
###### Abstract
Here we introduce interferometric devices by combining optical feedback (cavities) with unbiased multiports, which unlike traditional beam dividers, allow light to reflect back out of the port from which it originated. By replacing the traditional, directionally-biased beam-splitter in a Michelson interferometer with an unbiased multiport, the functional dependence of the scattering amplitudes changes. As a result, the derivative of transmittance with respect to an external phase perturbation can be made substantially large. This significantly enhances the resolution of phase measurement, and allows the phase response curves to be altered in real time by tuning an externally-controllable phase shift.
## I Introduction
Beam dividing is a fundamental manipulation of light, forming the basis of applications such as interferometry [1] and optical information processing [2]. Many optical apparatuses rely on a variant of the standard beam-splitter to separate and combine light. This might be a cube beam-splitter, a pellicle one, or perhaps a fiber coupler. In any case, these devices share the following characteristic: the light entering a given port cannot exit that same port. This directionally-biased behavior effectively lowers the dimensionality of the device: an output state is unable to populate as many modes as there are ports due to the lack of a coupling between the input port and the mode counter-propagating from that port. Sometimes this feed-forward nature is desirable, for instance, to prevent back-reflections from re-entering a laser cavity. However, the use of strictly feed-forward scattering devices constrains the output states that can be generated, and consequently, limits the capabilities of the systems that use them.
A generalization to the class of feed-forward linear-optical scatterers was introduced in [3]. These new devices are directionally-_unbiased_, allowing light to reflect out the port that it entered. In addition to reducing the number of optical elements needed to enact certain operations, this property has driven a number of theoretical developments in entangled-state information processing [4], Hamiltonian simulation of topologically non-trivial systems [5][6], and optical sensing with the Sagnac effect[7].
In this article, we use a particular directionally-unbiased counterpart of the beam-splitter called the Grover "\(N\)-sided coin", which will be introduced in greater detail later. With the four-port Grover coin, we form a generalization of the Michelson interferometer which can be configured to have an arbitrarily large sensitivity to a phase perturbation. This behavior stems from the optical cavities which get formed by replacing the beam-splitter with the unbiased Grover coin. Intentionally leaky cavities of this sort formed the basis of the free-space realization of the 3-port Grover coin [8]. In this case, we analyze how a controllable phase shift acquired during each round trip in the cavity can be used to create a lower-dimensional device with a tunable scattering matrix.
The outline for the remainder of the article is as follows. In the next section, we review linear-optical scattering theory for monochromatic radiation interacting with devices such as the beam-splitter or Grover coin. In Section III, we compare various unbiased and tunable scattering devices formed from either a single beamsplitter or a 4-port Grover coin. All devices considered are tuned by changing optical phase elements; that is, for new values of each phase shift, a new scattering matrix is produced which is periodic in \(2\pi\). In Section IV, we focus on a two-cavity configuration which resembles the Michelson interferometer. Unlike the standard Michelson interferometer, this Grover-based version can be tuned to have a transmittance with an arbitrarily steep slope, which can be used to obtain enhanced sensing. Conclusions are drawn in Section V.
## II Linear-optical scattering theory
We will mathematically represent a beam-splitter as a particular linear, spatially-coherent scattering transformation. Any optical device that can be expressed this way we will call a "multiport". The action of a multiport is often expressed in the ideal case as a unitary transformation, which operates on the probability amplitudes of an optical state. It is common to express such a state in a Fock basis of spatial modes.
If we consider this basis and additionally assume all excitations of interest are confined to four ports, a general
superposition of arbitrarily polarized, monochromatic radiation takes the form \(|\psi\rangle=(c_{1}a_{1}^{\dagger}+c_{2}a_{2}^{\dagger}+c_{3}a_{3}^{\dagger}+c_{4 }a_{4}^{\dagger})|0\rangle\). The subscripts on the creation operators indicate the port numbers of a given four-port device, such as those depicted in Fig. 1. The field modes could be fiber modes or free space plane waves, depending on the physical implementation of the system. Next we identify each photon creation operator \(a_{j}^{\dagger}\) with the standard basis vector \(e_{j}\), which equals 1 at its \(j\)th element and 0 elsewhere. In our notation, ingoing and outgoing modes of the same port will be associated with the same standard basis vector.
In this formalism, the state \(|\psi\rangle\) may be expressed as a column vector
\[|\psi\rangle=\begin{pmatrix}c_{1}\\ c_{2}\\ c_{3}\\ c_{4}\end{pmatrix}. \tag{1}\]
while a 50:50 beam-splitter would have a scattering matrix given by
\[B=\frac{1}{\sqrt{2}}\begin{pmatrix}0&0&1&1\\ 0&0&1&-1\\ 1&1&0&0\\ 1&-1&0&0\end{pmatrix}, \tag{2}\]
and the output state of \(|\psi\rangle\) interacting with the beam-splitter is then given by \(B|\psi\rangle\)[9][10].
The transformation (2), however common in practice, is relatively sparse: modes 1 and 2 are only routed to modes 3 and 4 and vice versa; no energy returns to the mode from which it came. In this sense, the device is feed-forward or directionally-biased. Hence the scattering matrix above is often depicted by the \(2\times 2\) matrix
\[H=\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\ 1&-1\end{pmatrix} \tag{3}\]
and the feed-forward nature is assumed implicitly.
When static scattering devices such as the beam-splitter are combined with optical phase shift elements, the resulting device may be viewed as a realization of a tunable scattering matrix. Non-trivial device behavior is often obtained with interference, such as when excitations in multiple spatial modes are superimposed after a tunable delay is introduced between them. To obtain exotic forms of interference, an increasing number of optical elements and/or degrees of freedom are typically required. However, as we will discuss later, interference across an infinite number of cavity paths can give rise to novel effects without substantially increasing either.
An important instance of an unbiased multiport is the Grover coin, which originates from a matrix that appeared in Grover's search algorithm [11] and has since been adopted to studies of quantum walks [12]. The \(d\)-port Grover coin (where \(d\geq 3\)) is defined to have a reflection amplitude of \((2/d-1)\) and transmission amplitude of \(2/d\) at each output port, for all input ports. Therefore in addition to being unitary, it is also real and symmetric. Grover coins have been experimentally realized for the case of \(d=3\)[8][13] and fabrication of an integrated version of the \(d=4\) case is underway.
In the \(d=4\) case, the scattering matrix \(G\) is given by
\[G=\frac{1}{2}\begin{pmatrix}-1&1&1&1\\ 1&-1&1&1\\ 1&1&-1&1\\ 1&1&1&-1\end{pmatrix}. \tag{4}\]
As in Eq. (2), the output probabilities are equal at each port. The free-space realization of the device, first presented in [6], is shown next to its abstract circuit symbol in Fig. 1. In order for this realization to possess the scattering matrix (4), the phase imparted in each arm \(\phi\) must equal \(-\pi/2\). Moreover, the source of radiation must be coherent with respect to the length scales of the beam-splitter arrangement. This prevents various internal routes that a photon could propagate along from being distinguished by the photon exit times, causing the amplitudes corresponding to these paths to interfere. These coherence relations are further discussed in [3].
The permutation-symmetry of the matrix (4) manifests as a rotational symmetry in the device in Fig. 1 (left). Since this Grover coin and the beam-splitter both have four ports, it is interesting to consider what results when traditional beam-splitters are replaced with Grover 4-ports in conventional interferometers. It has recently been shown that in the Mach-Zehnder topology, the Grover coin enables simultaneous measurements of different phase shifts [7].
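Since the devices analyzed in the following sections reduce to repeated applications of these two matrices, a small numerical sketch may help. The code below (our own illustration, not from the paper) builds \(B\) of Eq. (2) and \(G\) of Eq. (4), verifies unitarity, and contrasts their diagonals: the vanishing diagonal of \(B\) encodes its directional bias, while the \(-1/2\) diagonal of \(G\) is precisely the unbiased reflection amplitude.

```python
import numpy as np

# The biased beam-splitter of Eq. (2) and the unbiased Grover coin of Eq. (4).
B = np.array([[0, 0, 1, 1],
              [0, 0, 1, -1],
              [1, 1, 0, 0],
              [1, -1, 0, 0]]) / np.sqrt(2)
G = np.array([[-1, 1, 1, 1],
              [1, -1, 1, 1],
              [1, 1, -1, 1],
              [1, 1, 1, -1]]) / 2

for S in (B, G):
    assert np.allclose(S @ S.T.conj(), np.eye(4))  # both are unitary

print(np.diag(B))  # all zero: no amplitude returns out of the input port
print(np.diag(G))  # all -1/2: every port reflects, i.e. directionally unbiased
```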
## III Grover optical cavities
Consider the configuration with one simple cavity affixed to a 4-port Grover coin, such as the sealed 4-port Grover coin pictured in Fig. 2. Light entering the cavity strikes the mirror, accumulating phase \(\pi\) from the mirror
Figure 1: (left) One particular realization of a directionally-unbiased optical multiport, formed by linking four 50:50 beam-splitters to form a leaky cavity. The distance between beam-splitters is twice that between a mirror and neighboring beam-splitter. By fixing \(\phi=-3\pi/4\), so that the total phase per mirror-unit is \(-\pi/2\), one obtains the four-port Grover coin [6]. We abstract this four-port device into the circuit diagram (right), which will be used throughout the rest of the article.
in addition to a round-trip phase \(\phi\). Since the device has three open ports, the scattering matrix will be \(3\times 3\) and its elements will vary \(2\pi\)-periodically with \(\phi\). After each pass through the cavity, \(3/4\) of the energy of the optical state is transferred from the cavity mode uniformly into the spatial modes of the open ports.
When computing the scattering matrix, one can avoid making redundant calculations using the permutation symmetry of the Grover coin; after computing the output amplitudes for input at one port, the output state for input at a different port is the same but with the corresponding amplitudes permuted. In this example, the cavity mode amplitudes map for each round trip into a simple recursive form which can be converted to a geometric series. Ultimately, the reflected amplitude is of the form \(r=(e^{i\phi}-2)^{-1}\) and the transmitted amplitude is \(t=1+r\). One may readily verify these amplitudes are normalized: \(|r|^{2}+2|t|^{2}=1\). For \(\phi=0\), the device behaves as a 3-sided mirror.
More interestingly, for \(\phi=\pi\), the reflection probability reaches a minimum and the device behaves as a Grover 3-port. It can be shown this property generalizes to Grover coins of arbitrary dimension \(d\): a single seal cavity will create a device that interpolates between a \((d-1)\)-port mirror and \((d-1)\)-port Grover coin. Conversely, if one coherently couples an \(n\)-port Grover coin to a \(d\)-port Grover coin with a net 0 phase shift (modulo \(2\pi\)) in the cavity, a \((d+n-2)\)-port Grover coin is formed. Applications of these facts will be explored elsewhere, and in the remainder of this article we focus on applications with low-dimensional devices.
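These closed-form amplitudes are easy to check numerically; the sketch below (our own illustration, with assumed variable names) verifies the normalization \(|r|^{2}+2|t|^{2}=1\) and the two limiting cases quoted above.

```python
import numpy as np

phi = np.linspace(0.0, 2.0 * np.pi, 1001)
r = 1.0 / (np.exp(1j * phi) - 2.0)   # reflected amplitude
t = 1.0 + r                          # transmitted amplitude to each open port

# Normalization over the three open ports holds for every phi.
assert np.allclose(np.abs(r) ** 2 + 2.0 * np.abs(t) ** 2, 1.0)

print(1.0 / (np.exp(0j) - 2.0))           # phi = 0:  r = -1 (3-sided mirror)
print(1.0 / (np.exp(1j * np.pi) - 2.0))   # phi = pi: r = -1/3 (Grover 3-port)
```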
The cavity formed in the previous example will not exist if a beam-splitter is used instead of a Grover coin. It will have a partially-unbiased scattering matrix: input at two ports will be directionally-unbiased while input at the third does not interact with the mirror and is consequently feed-forward. In fact, this device formed the essential building block of the original free-space realization of the Grover multiport [3][8].
Placing a pair of mirrors on adjacent sides of a beam-splitter will not form a cavity either but doing so will form another device of great practical use: the Michelson interferometer, shown in Fig. 3 (left). We can view this interferometer as a tunable, unbiased, two-port scattering device. Its reflection and transmission probabilities are given by
\[R =\cos^{2}\left(\frac{\phi_{1}-\phi_{2}}{2}\right) \tag{5a}\] \[T =1-R=\sin^{2}\left(\frac{\phi_{1}-\phi_{2}}{2}\right) \tag{5b}\]
where \(\phi_{1}\) is the phase in the first arm of the interferometer, and \(\phi_{2}\) is that of the second.
If two mirrors are alternatively arranged to lie on opposite sides of the beam-splitter, as shown in Fig. 3 (right), an optical cavity is formed. The scattering matrix elements are found by coherently summing the probability amplitudes for all paths light can take between a given input port and a given output port. This device is also tunable, with the reflection and transmission probabilities given by
\[R =\frac{1}{5-4\cos(\phi_{1}+\phi_{2})}, \tag{6a}\] \[T =1-R. \tag{6b}\]
In the case of using a conventional beam-splitter, the cavity and Michelson device probabilities may be expressed as functions of either \((\phi_{1}+\phi_{2})\) or \((\phi_{1}-\phi_{2})\). If \(\phi_{2}\) is fixed at some value, the probabilities can be viewed as a \(2\pi\)-periodic curve parameterized by \(\phi_{1}\), or vice versa. Altering \(\phi_{2}\) then produces a new curve as a function of \(\phi_{1}\), with each value of \(\phi_{1}\) prescribing a point on the curve indexed by the fixed \(\phi_{2}\). Due to the functional
Figure 2: (left) A simple optical cavity coupled to a Grover coin (G). The spatial mode at the lower port is converted to a cavity mode by sealing the port with a mirror. Light in this mode oscillates back and forth, depicted schematically with the blue arrow. _As the light bounces within the cavity, some energy leaks into the output ports, interfering with the radiation in these ports._ As \(\phi\) changes, the interference in the output ports changes, modulating the final scattering matrix. (right) The same topology but with a traditional beam-splitter instead of a Grover coin. _Because the beam-splitter is directionally-biased, this configuration does not form an optical cavity._ All energy will leave the sealed port mode after a single round trip.
Figure 3: (left) Traditional Michelson interferometer. (right) A cavity-coupled beam-splitter device. The scattering coefficients for both of these devices can be expressed as a function of the quantity \((\phi_{1}+\phi_{2})\) or \((\phi_{1}-\phi_{2})\).
dependence of the form (\(\phi_{1}\pm\phi_{2}\)), when the fixed phase \(\phi_{2}\) is allowed to vary, _the entire transmission curve is merely translated along the horizontal axis without making changes to its structure_. Hence an entire degree of freedom is redundant. In addition to this, the cavity-based beam-splitter device cannot be tuned to obtain all reflection and transmission probabilities in the range \([0,1]\); this can be seen since the minimum in Eq. (6a) occurs when the denominator is maximized, which in turn occurs when the cosine term equals its extreme value of -1, so the minimum reflection probability obtainable is \(1/9\).
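The translation property and the \(1/9\) bound can both be confirmed in a few lines; the following sketch (our own illustration, with assumed variable names) evaluates Eqs. (5b) and (6a) on a grid.

```python
import numpy as np

phi1 = np.linspace(0.0, 2.0 * np.pi, 2001)
T_michelson = lambda p1, p2: np.sin((p1 - p2) / 2.0) ** 2          # Eq. (5b)
R_cavity = lambda p1, p2: 1.0 / (5.0 - 4.0 * np.cos(p1 + p2))      # Eq. (6a)

# Changing phi_2 only translates the Michelson curve along the phi_1 axis.
assert np.allclose(T_michelson(phi1, 1.0), T_michelson(phi1 - 1.0, 0.0))

# The cavity device cannot reach full extinction: the minimum of R is 1/9.
print(R_cavity(phi1, 0.3).min())   # ~0.1111
```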
## IV Enhanced-sensitivity Grover-Michelson interferometer
A novel two-port device can be created from the 4-port Grover coin by forming separate cavities at two ports. The device topologies in Fig. 3 coalesce into this "Grover-Michelson" device when the beam-splitter is replaced by a Grover coin, as the latter device is permutation symmetric. The resulting configuration is shown schematically in Fig. 4. Light from a coherent source enters a port and the transmission probability is measured at the other open port. Some of the incident energy is coupled into two cavities, reflecting from a mirror and re-entering the Grover coin after accumulating a phase shift \(\phi_{j}\) in cavity \(j\).
Assume the cavities are coupled to ports 3 and 4. If one studies the round-trip propagation of \((a_{3}^{\dagger}+a_{4}^{\dagger})\) and \((a_{3}^{\dagger}-a_{4}^{\dagger})\) rather than \(a_{3}^{\dagger}\) and \(a_{4}^{\dagger}\) individually, the transformation from one round-trip to the next forms a recurrence relation which may be unrolled into a geometric series and then explicitly summed (see Appendix B). Thus these linear combinations somewhat emulate the role of coupled-cavity supermodes. In the appendix we show the output state corresponding to \(|\psi_{0}\rangle=a_{1}^{\dagger}|0\rangle\) is given by
\[|\psi_{\mathrm{out}}\rangle =\bigg{[}\bigg{(}\frac{C(\phi_{1},\phi_{2})^{2}}{2B(\phi_{1},\phi _{2})-2}-\frac{B(\phi_{1},\phi_{2})}{2}-\frac{1}{2}\bigg{)}a_{1}^{\dagger}\] \[\quad+\bigg{(}\frac{C(\phi_{1},\phi_{2})^{2}}{2B(\phi_{1},\phi_{2 })-2}-\frac{B(\phi_{1},\phi_{2})}{2}+\frac{1}{2}\bigg{)}a_{2}^{\dagger}\bigg{]} |0\rangle, \tag{7}\]
where
\[B(\phi_{1},\phi_{2}) \coloneqq\frac{1}{2}(e^{i\phi_{1}}+e^{i\phi_{2}}) \tag{8a}\] \[C(\phi_{1},\phi_{2}) \coloneqq\frac{1}{2}(e^{i\phi_{1}}-e^{i\phi_{2}}). \tag{8b}\]
In fact, we show in the appendix that up to a \(\pi\) phase shift, \(B\) and \(C\) are respectively the \(r\) and \(t\) for the standard Michelson interferometer. Therefore the use of a Grover coin instead of a beam-splitter results in a direct nonlinear transformation of the device's scattering coefficients.
Over the parameters \(\phi_{1}\) and \(\phi_{2}\), both the Michelson and Grover-Michelson interferometers span the line \(R+T=1\) with \(R,T\geq 0\). This line forms the state space of classical scattering transformations derived from a tunable \(U(2)\) device. Despite sharing this space entirely, the Grover-Michelson carries a significant advantage in the way its dependence on \(\phi_{1}\) and \(\phi_{2}\) covers the line \(R+T=1\). To see this, consider how its reflection and transmission probability curves \(R_{\phi_{2}}(\phi_{1})\) and \(T_{\phi_{2}}(\phi_{1})\) vary with \(\phi_{2}\), as in Fig. 5. In the case of the regular Michelson interferometer, we recall from the previous section the scattering probabilities were functions of the quantity \((\phi_{1}-\phi_{2})\). This meant that as one phase was varied, the probability curve was merely translated.
However, in the Grover-Michelson device, the dependence on \(\phi_{1}\) and \(\phi_{2}\) cannot be expressed in this way. The geometric series summation of cavity amplitudes places phase dependence in the denominator in the output amplitudes, resulting in nonlinear behavior in \(R\) and \(T\). Examples of reflection probability curves are plotted in Fig. 5. The deformation of each probability curve constrained on the domain \([0,2\pi]\) is over a family of continuous paths with fixed endpoints, which is an instance of homotopy. At \(\phi_{2}=\pi\) the tuning curves are symmetric about \(\phi_{1}=\pi\). However, as \(\phi_{2}\) increases in distance from \(\pi\), the curves are increasingly skewed toward the periodic endpoint, increasing the maximum slope obtainable on a given curve.
An advantage of the Grover-Michelson configuration over the traditional one is that the sensitivity of the output state can be made arbitrarily large or small by fixing \(\phi_{2}\) at some value near an integer multiple of \(2\pi\). Sensitivity here is quantified by the magnitude of the slope of the probability \(R\) or \(T\) vs. \(\phi_{1}\) curve, such as \(|\partial T/\partial\phi_{1}|\). A comparison of the maximum sensitivity as a function of
Figure 4: Grover-Michelson interferometer, formed by replacing the beam-splitter in a conventional Michelson interferometer with a Grover coin. By attaching a source and detector to the open ports, an enhanced phase sensing device is formed; the interference of cavity amplitudes leads to a nonlinear, continuously modified phase response, allowing the slope of the output probabilities with respect to a phase perturbation be made as steep or flat as desired.
the curve index \(\phi_{2}\) is shown in Fig. 6. The phase sensitivity can be seen to grow arbitrarily large as \(\phi_{2}\) approaches integral multiples of \(2\pi\).
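The skewing of the curves and the divergence of the maximum slope can be reproduced directly from Eqs. (7) and (8); the sketch below (our own illustration; names are assumptions) computes the transmittance \(T=|t|^{2}\) and its numerically estimated slope for several values of \(\phi_{2}\).

```python
import numpy as np

def transmittance(phi1, phi2):
    """|t|^2 of the Grover-Michelson interferometer from Eqs. (7)-(8)."""
    B = 0.5 * (np.exp(1j * phi1) + np.exp(1j * phi2))
    C = 0.5 * (np.exp(1j * phi1) - np.exp(1j * phi2))
    t = C**2 / (2.0 * B - 2.0) - B / 2.0 + 0.5
    return np.abs(t) ** 2

phi1 = np.linspace(1e-6, 2.0 * np.pi - 1e-6, 200000)
for phi2 in (np.pi, np.pi / 4.0, np.pi / 16.0, np.pi / 64.0):
    T = transmittance(phi1, phi2)
    slope = np.abs(np.gradient(T, phi1)).max()
    print(f"phi2 = {phi2:.5f}: max |dT/dphi1| = {slope:.2f}")
# The maximum slope grows without bound as phi2 approaches multiples of 2*pi.
```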
From a metrological standpoint, the controllable sensitivity can be used to substantially increase the resolution of the phase readout. Assuming the phase shift \(\phi_{2}\) can be stably controlled, this sharper response is useful if the phase \(\phi_{1}\) is perturbed about the sensitive region by a small, unknown amount. If the interferometer is calibrated with a controllable phase \(\phi_{1}\) to bias the system about the point of 50:50 power splitting, say in the case \(\phi_{2}=\pi/8\) in Fig. 5, then a small unknown variation in phase \(\delta\) brings the effective cavity round-trip phase \(\phi_{1}\) to \((\phi_{1}+\delta)\). Accordingly, the reflectance and transmittance will see a substantially greater degree of modulation than the same variation would produce in a standard Michelson interferometer. This is illustrated with the shaded gray boxes in Fig. 5; the boxes show that the same value of unknown phase disturbance leads to a substantially larger modulation in transmittance for the Grover-Michelson.
Using a phase-shifting element in the second arm to control \(\phi_{2}\), the slope at a given bias point can be changed, allowing the system to be field-programmed to accommodate varying perturbation strengths, even if the strengths themselves are unknown. If the slope is too large for a given perturbation, the output transmission will jump past the sensitive region and saturate into the flat-sloped, low-transmission region of the next period, where the phase readout becomes difficult to resolve accurately. This exemplifies the trade-off between sensitivity and dynamic range intrinsic to metrology. The larger the slope, the smaller \(\Delta\phi\) must be in order to be extracted unambiguously. Assuming the same perturbation can be reapplied at will, one may repeatedly re-calibrate the bias point at different slopes and then reapply the perturbation until the result is non-saturating. For large perturbations, this may mean operating at a low-curvature bias point to obtain a larger dynamic range. For instance, in Fig. 5, biasing at the inflection point \(\phi_{1}=\pi,\phi_{2}=\pi/8\) in the Grover-Michelson interferometer would be the preferred readout location in comparison to the \(\phi_{1}=\pi\) zero-sloped region of high curvature in the traditional device.
Because this is a coherent device, certain conditions must be met to guarantee that it behaves as predicted by the assumption that amplitudes corresponding to indistinguishable paths interfere. As stated in Section II, the source must be coherent with respect to the length scales of the Grover coin in use, so that a physical realization such as the one in Fig. 1 acts as an ideal coin; the conditions pertaining to that realization are discussed in Ref. [3]. The coherence length of the source must also be larger than the length of a single cavity arm so that amplitudes exiting after a _different_ number of cavity round-trips can
Figure 5: Transmission probability curve in one period \(0\leq\phi_{1}\leq 2\pi\) for the Grover-Michelson interferometer at various values of \(\phi_{2}\). At \(\phi_{2}=\pi\), \(T\) is symmetric, but as \(\phi_{2}\)'s distance from \(\pi\) is increased, the curve becomes increasingly skewed, leaving the periodic endpoints fixed. The homotopy originates from the fact that \(R\) and \(T\) are given by a composition of nonlinear functions of \((\phi_{1},\phi_{2})\), which itself stems from the cavity-coupled nature of the device. Given suitable control of the phases \(\phi_{1}\) and \(\phi_{2}\), perturbations in \(\phi_{1}\) can be made to have a small or large response in \(R\) and \(T\), while in the standard Michelson the maximum slope is always 1. The gray boxes illustrate how this difference in modulation \(\Delta T\) for the same phase variation \(\Delta\phi\) can be made very large.
Figure 6: Maximum sensitivity of the Michelson and Grover-Michelson interferometers versus \(\phi_{2}\), shown from \(10^{-5}\) to \(2\pi-10^{-5}\). The curves are periodic in \(2\pi\). The sensitivity is given by the absolute value of the slope of either the reflection or transmission probability \(P\), \(|\partial P(\phi_{1},\phi_{2})/\partial\phi_{1}|\). The maximum is taken over \(\phi_{1}\). The Grover-Michelson dominates the Michelson for all values of \(\phi_{2}\) and becomes arbitrarily large as \(\phi_{2}\) approaches integral multiples of \(2\pi\).
interfere. Otherwise, the result would be a statistical mixture of the output amplitudes corresponding to interference over an _equal_ number of round-trips only. Fortunately, the geometric series that determines the output amplitudes converges quickly, so interference over only a few round-trips would likely be sufficient in practice.
Nonetheless, if source coherence is an issue, the condition may be improved by bringing the arms closer to the coin; however, if the arms become too close to the coin, they will couple to the internal cavity modes of the coin, thereby changing the coin into a new device of its own. Instead, one might ensure the coin acts instantaneously on each round-trip by using a detector with a response slower than the characteristic lifetime of the coin's internal cavity. Then the time spent within the coin becomes negligible in relation to the time spent inside a cavity arm.
Other configurations have been known to impart a large, possibly nonlinear phase due to the interference of light in a cavity. For instance, a Fabry-Perot cavity could be placed in the arm of a Michelson interferometer as a collinear cavity. If its back surface is fully reflective, a Gires-Tournois interferometer is formed [14][15]. While such a device can be designed to produce a sharp response to a phase perturbation, the curve it produces stems from fixed parameters such as surface reflectivity, thickness, and refractive index, so the device can produce only one curve for each wavelength. One substantial advantage of the tunable response curves of the Grover-Michelson device is that any fixed wavelength can be used, so long as the unbiased coin continues to act according to Eq. (4); at the new wavelength, one obtains the same family of curves by adjusting the value of \(\phi_{2}\).
## V Conclusion
In this work we have shown that replacing the beam-splitter in a standard Michelson interferometer with a Grover coin generates a tunable-sensitivity device. The new device behavior is the result of field interference over an infinite number of cavity round-trip paths, which generates a nonlinear phase dependence in the scattering matrix amplitudes. As a result, the transmittance can be made as sensitive or insensitive as desired by tuning an external phase, and the device can operate at any fixed wavelength. This would provide an enhanced-resolution measurement of phase disturbances in any physical situation. Higher-dimensional extensions of this work will be explored in the future.
###### Acknowledgements.
This research was supported by the Air Force Office of Scientific Research MURI award number FA9550-22-1-0312.
## Appendix
Here we derive the scattering amplitudes for the standard Michelson and Grover-Michelson interferometers. We work in the Heisenberg picture, in which the scattering matrix acts on the photon creation operators. This is equivalent to the Schrodinger picture in which the probability amplitudes of each operator are transformed under the same mappings. Indeed, if a monochromatic but otherwise general optical scattering state is given by
\[|\psi\rangle=\sum_{j}c_{j}a_{j}^{\dagger}|0\rangle=\sum_{j,k}c_{j}a_{k}^{ \dagger}\delta_{jk}|0\rangle,\]
then applying the linear scattering transformation \(A\) on \(|\psi\rangle\) gives
\[A|\psi\rangle=\sum_{j,k,\ell}(A_{kj}c_{j})a_{\ell}^{\dagger}\delta_{k\ell}|0\rangle=\sum_{j,k,\ell}c_{j}(A_{\ell k}a_{\ell}^{\dagger})\delta_{jk}|0\rangle.\]
### Standard Michelson Interferometer
We use the beam-splitter scattering matrix in Eq. (2). Because the beam-splitter is biased there is no cavity summation. Thus the initial excitation is split into each arm, hits the mirror (\(M\)) and phase shifts (\(\Phi\)) once, and then overlaps at the beam-splitter again before exiting. In accordance with this, we see
\[a_{1}^{\dagger} \xrightarrow{B}\frac{1}{\sqrt{2}}(a_{3}^{\dagger}+a_{4}^{ \dagger})\] \[\xrightarrow{\Phi,M}-\frac{1}{\sqrt{2}}(e^{i\phi_{1}}a_{3}^{ \dagger}+e^{i\phi_{2}}a_{4}^{\dagger})\] \[\xrightarrow{B}-\frac{1}{2}\bigg{(}(e^{i\phi_{1}}(a_{1}^{ \dagger}+a_{2}^{\dagger})+e^{i\phi_{2}}(a_{1}^{\dagger}-a_{2}^{\dagger})) \bigg{)}\] \[=-\frac{1}{2}\bigg{(}a_{1}^{\dagger}(e^{i\phi_{1}}+e^{i\phi_{2}}) +a_{2}^{\dagger}(e^{i\phi_{1}}-e^{i\phi_{2}})\bigg{)}\]
Reading off the scattering amplitudes,
\[r =-\frac{1}{2}(e^{i\phi_{1}}+e^{i\phi_{2}}),\] \[t =-\frac{1}{2}(e^{i\phi_{1}}-e^{i\phi_{2}}).\]
The square-modulus of these leads to the scattering probabilities of eqs. (5a) and (5b). A similar calculation starting with \(a_{2}^{\dagger}\) results in the same output amplitudes.
### Grover-Michelson Interferometer
In a Grover coin \(G\), we will seal port 3 with a phase shift \(\phi_{1}\) and mirror and do the same for port 4 with a phase shift \(\phi_{2}\) and mirror. Collectively the linear phase transformations these devices enact during each round
trip will be denoted \(\Phi\) and \(M\). To simplify the calculation, we will introduce some new variables and first show how a single round-trip affects \((a_{3}^{\dagger}+a_{4}^{\dagger})\) and \((a_{3}^{\dagger}-a_{4}^{\dagger})\). We will find that these linear combinations of cavity modes are mapped recursively into themselves, somewhat emulating the role of coupled-cavity supermodes.
To that end, define the following:
\[A \coloneqq\frac{1}{2}(-a_{1}^{\dagger}+a_{2}^{\dagger}),\] \[B \coloneqq\frac{1}{2}(e^{i\phi_{1}}+e^{i\phi_{2}}),\] \[C \coloneqq\frac{1}{2}(e^{i\phi_{1}}-e^{i\phi_{2}}).\]
Next, for an excitation \((a_{3}^{\dagger}+a_{4}^{\dagger})\) making a single round-trip,
\[(a_{3}^{\dagger}+a_{4}^{\dagger}) \xrightarrow{M,\Phi}-e^{i\phi_{1}}a_{3}^{\dagger}-e^{i\phi_{2}} a_{4}^{\dagger} \tag{9a}\] \[\xrightarrow{G}-\frac{1}{2}(e^{i\phi_{1}}(a_{1}^{\dagger}+a_{2}^ {\dagger}-a_{3}^{\dagger}+a_{4}^{\dagger})\] \[\quad+e^{i\phi_{2}}(a_{1}^{\dagger}+a_{2}^{\dagger}+a_{3}^{ \dagger}-a_{4}^{\dagger}))\] (9b) \[=-\frac{1}{2}(e^{i\phi_{1}}+e^{i\phi_{2}})(a_{1}^{\dagger}+a_{2}^ {\dagger})\] \[\quad+\frac{1}{2}(e^{i\phi_{1}}-e^{i\phi_{2}})(a_{3}^{\dagger}-a_ {4}^{\dagger})\] (9c) \[=-B(a_{1}^{\dagger}+a_{2}^{\dagger})+C(a_{3}^{\dagger}-a_{4}^{ \dagger}) \tag{9d}\]
and similarly
\[(a_{3}^{\dagger}-a_{4}^{\dagger}) \xrightarrow{M,\Phi}-e^{i\phi_{1}}a_{3}^{\dagger}+e^{i\phi_{2}} a_{4}^{\dagger} \tag{10a}\] \[\xrightarrow{G}-\frac{1}{2}(e^{i\phi_{1}}(a_{1}^{\dagger}+a_{2}^ {\dagger}-a_{3}^{\dagger}+a_{4}^{\dagger})\] \[-e^{i\phi_{2}}(a_{1}^{\dagger}+a_{2}^{\dagger}+a_{3}^{\dagger}-a_ {4}^{\dagger}))\] (10b) \[=-\frac{1}{2}(e^{i\phi_{1}}-e^{i\phi_{2}})(a_{1}^{\dagger}+a_{2}^ {\dagger})\] \[\quad+\frac{1}{2}(e^{i\phi_{1}}+e^{i\phi_{2}})(a_{3}^{\dagger}-a_ {4}^{\dagger})\] (10c) \[=-C(a_{1}^{\dagger}+a_{2}^{\dagger})+B(a_{3}^{\dagger}-a_{4}^{ \dagger}). \tag{10d}\]
We see that \((a_{3}^{\dagger}-a_{4}^{\dagger})\) maps directly into itself after each round trip. This recursion can be explicitly unrolled into a geometric series and summed as follows; the series converges because \(|B|=|\cos((\phi_{1}-\phi_{2})/2)|<1\) whenever \(\phi_{1}\neq\phi_{2}\pmod{2\pi}\).
\[(a_{3}^{\dagger}-a_{4}^{\dagger}) \xrightarrow{N}-C(a_{1}^{\dagger}+a_{2}^{\dagger})\sum_{n=0}^{N }B^{n}+B^{N+1}(a_{3}^{\dagger}-a_{4}^{\dagger}) \tag{11a}\] \[\xrightarrow{N\rightarrow\infty}-C(a_{1}^{\dagger}+a_{2}^{ \dagger})\sum_{n=0}^{\infty}B^{n}\] (11b) \[=\left(\frac{C}{B-1}\right)(a_{1}^{\dagger}+a_{2}^{\dagger}). \tag{11c}\]
Now we use the above to derive the S-matrix. The Grover coin maps a photon incident on the first port to
\[a_{1}^{\dagger}|0\rangle\rightarrow\frac{1}{2}(-a_{1}^{\dagger}+a_{2}^{\dagger}+a_{3}^{ \dagger}+a_{4}^{\dagger})|0\rangle=(A+\frac{1}{2}(a_{3}^{\dagger}+a_{4}^{ \dagger}))|0\rangle.\]
Combining this with the above formulas, we see
\[\xrightarrow{(9d)}\left[A-\frac{B}{2}(a_{1}^{\dagger}+a_{2}^{ \dagger})+\frac{C}{2}(a_{3}^{\dagger}-a_{4}^{\dagger})\right]|0\rangle\] \[\xrightarrow{(11c)}\left[A-\frac{B}{2}(a_{1}^{\dagger}+a_{2}^{ \dagger})+\frac{C}{2}\left(\frac{C}{B-1}\right)(a_{1}^{\dagger}+a_{2}^{ \dagger})\right]|0\rangle.\]
Grouping by operator yields the output state (7)
\[|\psi_{\text{out}}\rangle =\bigg{[}\bigg{(}\frac{C^{2}}{2B-2}-\frac{B}{2}-\frac{1}{2} \bigg{)}a_{1}^{\dagger}\] \[+\bigg{(}\frac{C^{2}}{2B-2}-\frac{B}{2}+\frac{1}{2}\bigg{)}a_{2}^ {\dagger}\bigg{]}|0\rangle. \tag{12}\]
Up to a \(\pi\) phase shift, \(B\) and \(C\) are respectively the \(r\) and \(t\) of the traditional Michelson interferometer. Hence the above calculation illustrates that the Grover coin nonlinearly maps the scattering parameters of the standard Michelson interferometer. Because the Grover coin is permutation symmetric, there is no need to consider the initial state \(a_{2}^{\dagger}|0\rangle\). Relabeling ports \(1\longleftrightarrow 2\) results in the same permutation of the output amplitudes, so that \(r\) and \(t\) are the same for input on either port.
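As a closing sanity check on Eq. (12), the following minimal sketch evaluates \(r\) and \(t\) at random phase settings and verifies that the scattering remains lossless, \(|r|^{2}+|t|^{2}=1\):

```python
import numpy as np

rng = np.random.default_rng(1)
phi1, phi2 = rng.uniform(0.05, 2 * np.pi - 0.05, size=(2, 1000))
B = 0.5 * (np.exp(1j * phi1) + np.exp(1j * phi2))
C = 0.5 * (np.exp(1j * phi1) - np.exp(1j * phi2))
common = C**2 / (2 * B - 2) - B / 2        # finite away from phi1 = phi2 = 0
r, t = common - 0.5, common + 0.5          # Eq. (12)
assert np.allclose(np.abs(r)**2 + np.abs(t)**2, 1.0)  # lossless scattering
```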
|
2309.11710 | ContextRef: Evaluating Referenceless Metrics For Image Description
Generation | Referenceless metrics (e.g., CLIPScore) use pretrained vision--language
models to assess image descriptions directly without costly ground-truth
reference texts. Such methods can facilitate rapid progress, but only if they
truly align with human preference judgments. In this paper, we introduce
ContextRef, a benchmark for assessing referenceless metrics for such alignment.
ContextRef has two components: human ratings along a variety of established
quality dimensions, and ten diverse robustness checks designed to uncover
fundamental weaknesses. A crucial aspect of ContextRef is that images and
descriptions are presented in context, reflecting prior work showing that
context is important for description quality. Using ContextRef, we assess a
variety of pretrained models, scoring functions, and techniques for
incorporating context. None of the methods is successful with ContextRef, but
we show that careful fine-tuning yields substantial improvements. ContextRef
remains a challenging benchmark though, in large part due to the challenge of
context dependence. | Elisa Kreiss, Eric Zelikman, Christopher Potts, Nick Haber | 2023-09-21T01:17:33Z | http://arxiv.org/abs/2309.11710v1 | # ContextRef: Evaluating Referenceless Metrics For Image Description Generation
###### Abstract
Referenceless metrics (e.g., CLIPScore) use pretrained vision-language models to assess image descriptions directly without costly ground-truth reference texts. Such methods can facilitate rapid progress, but only if they truly align with human preference judgments. In this paper, we introduce ContextRef, a benchmark for assessing referenceless metrics for such alignment. ContextRef has two components: human ratings along a variety of established quality dimensions, and ten diverse robustness checks designed to uncover fundamental weaknesses. A crucial aspect of ContextRef is that images and descriptions are presented in context, reflecting prior work showing that context is important for description quality. Using ContextRef, we assess a variety of pretrained models, scoring functions, and techniques for incorporating context. None of the methods is successful with ContextRef, but we show that careful fine-tuning yields substantial improvements. ContextRef remains a challenging benchmark though, in large part due to the challenge of context dependence.1
Footnote 1: All data and code will be made available at [https://github.com/elisakreiss/contextref](https://github.com/elisakreiss/contextref).
## 1 Introduction
Image description generation is an outstanding application area for image-based natural language generation (NLG). The purpose of an image description is to make the content of an image accessible to someone who can't see it. This most prominently affects people with temporary or long-term vision conditions, but it extends to people online facing image loading issues and those who simply prefer listening to PDFs and website content. Thus, the potential impact of work in this area is large.
In this context, recent proposals for referenceless evaluation metrics for image-based NLG are very welcome. Traditionally, evaluation in this area has been based on comparing a proposed description to a number of ground-truth descriptions (e.g., BLEU, Papineni et al., 2002; CIDEr, Vedantam et al., 2015; SPICE, Anderson et al., 2016; METEOR, Banerjee and Lavie, 2005). Such _reference-based_ metrics heavily rely on high-quality annotations (Anderson et al., 2016), which can be difficult to obtain. In contrast, referenceless metrics use pretrained vision-language models to assess image descriptions directly, without costly ground-truth reference texts. This serves a real-world need where ground-truth descriptions are sparse (Gleason et al., 2019; Williams et al., 2022; Kreiss et al., 2022).
How well correlated are these referenceless metrics with human preferences, though? Unless there is a strong correlation, such metrics will lead us in wrong directions. To address this question, we present ContextRef, a new English-language benchmark for assessing referenceless metrics against human preferences. ContextRef has two components. The first derives from a human-subjects experiment eliciting ratings along a variety of quality dimensions (Figure 1A). The second provides ten diverse robustness checks designed to stress-test metrics via context manipulations, syntactically and semantically meaningful alterations to predicted texts, and changes to the input image (Figure 1B).
A crucial feature of ContextRef is that images and descriptions are presented in context. This reflects much recent work arguing that the context an image is presented in significantly shapes the appropriateness of a description (Stangl et al., 2020, 2021; Muehlbradt and Kane, 2022; Kreiss et al., 2022). For instance, an image of a sculpture in a park presented in the context of a Wikipedia article on "Sculptures" will require a different description than when presented in an article on "Photographic Composition." In the first case, the sculpture and its properties should be prominent; in the second, the sculpture may require only a passing reference.
We use ContextRef to assess a wide variety of referenceless metrics. The metrics we consider vary along three axes. First, we use a number of different pretrained models. Second, we consider two scoring methods: using the _similarity_ of the learned image and description embeddings, and using the _likelihood_ of the description conditioned on the image. Third, since prior referenceless metrics have not accounted for the role of context, we explore methods for integrating context into the metrics themselves.
None of the methods we explore succeed at ContextRef. In particular, while these methods mostly do show positive correlations with our human data, they fall short on our robustness checks, revealing that they are insensitive to fundamental changes to the examples they are evaluating. The main source of variation is the scoring method. In particular, similarity-based metrics tend to be less sensitive to grammaticality and context, while likelihood-based metrics tend to be less sensitive to uninformative but predictable text like repetition or irrelevant sentences.
However, we identify a path forward. Careful fine-tuning regimes can start making potential metrics much more successful at ContextRef. This is encouraging, but ContextRef remains a challenging benchmark. In particular, our fine-tuning experiments do not lead to models that are sufficiently sensitive to context, as reflected in ContextRef itself. However, we are optimistic that ContextRef can facilitate progress on this fundamental challenge for automatically generating useful image descriptions.
## 2 Related Work
Referenceless metrics leverage pretrained vision-language models and provide scores for novel descriptions by considering the image directly (Hessel et al., 2021; Lee et al., 2021; Scott et al., 2023; Lin et al., 2023). The most commonly used metric, CLIPScore (Hessel et al., 2021), assigns a score to each image-description pair based on the cosine similarity of the image and the description in CLIP's embedding space (Radford et al., 2021). CLIPScore often correlates better with human quality judgments than reference-based metrics (Hessel et al., 2021; Kasai et al., 2022), but its inability to integrate context significantly restricts its practical usefulness (Kreiss et al., 2022). Kreiss et al. present initial evidence that context can be successfully integrated into the similarity computation of CLIPScore, and we develop this exploration much further (discussed in Section 3).
Figure 1: **Our proposed benchmark.** (A) ContextRef questions and distributions of averaged human ratings in the dataset for each question type. For simplicity, pre-image rating distributions are omitted (except for _imaginability_ which only has pre-image ratings), since they show similar distribution patterns. Overall, the distributions are robust from the perspective of using the ratings to score referenceless metrics. (B) ContextRef example with illustrative robustness checks. These checks prove invaluable for uncovering undesired behavior of proposed metrics that can’t be detected in naturalistic data.
In addition, recent vision-language models (many directly building on CLIP) have surpassed CLIP on many multimodal tasks and offer new potential scoring opportunities. In this work, we investigate an array of pretrained models potentially capable of functioning as contextual metrics, examine the role of similarity- vs. likelihood-based scoring, and develop new methods for bringing in context.
An important feature of ContextRef is its series of robustness checks. Extensive research has been devoted to evaluating the robustness of models to input perturbations, especially in the context of adversarial attacks (Szegedy et al., 2014), including with multimodal models (Qiu et al., 2022; Kim et al., 2023; Pezzelle, 2023). In particular, works such as Ribeiro et al. (2020) highlight the value of leveraging interpretable changes to the input and confirming the model predictions change (or do not change) as expected. With ContextRef, we build on this work with a variety of previously-identified and novel robustness checks (see Section 5) to better understand the differences across scoring strategies.
## 3 Models and Scoring Strategies
In this section, we describe the models used for our experiments. For all of our approaches, the exact architectures of the visual and text encoders are designed to be easily interchangeable, and we tested many choices for each model. We selected current state-of-the-art vision-language models that cover a wide range of strategies for integrating textual and visual information, with varying degrees of multimodal pretraining. For consistency, we select one variant of each model according to its correlation with the human annotations and discuss the selected variants in Appendix D. We release the details for all models tested with the associated code. Based on how the description quality score is computed, we distinguish between likelihood-based and similarity-based metrics (similar to generative and discriminative scores in Lin et al. 2023).
### Likelihood-based Metrics
Likelihood-based metrics score image descriptions conditional on the image and potentially other information like context. The precise method by which this is done depends on the model. To integrate context into these metrics without any fine-tuning, we considered two intuitive methods: (1) using the likelihood of a positive assessment conditioned on the description, the image, and its context, and (2) using the likelihood of the description conditioned on a positive assessment, the image, and its context. We include the prompt templates used for the models in Appendix G, with all of these components.
In initial experiments, it became clear that (2) is the superior option, so we focus on that method, as approach (1) peaked at about half of its correlational strength. There are multiple possible ways to calculate these scores; we found that using each language model's average per-token log-likelihood across the full sequence was consistently best correlated with human preferences across most models, as opposed to cumulative log-likelihood or only the log-likelihood of the conditioned variable.
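Concretely, given next-token logits that are already aligned with the tokens being scored, the average per-token log-likelihood reduces to a few lines; the sketch below is plain PyTorch, with the model call itself abstracted away:

```python
import torch

def avg_token_loglik(logits, token_ids):
    """Average per-token log-likelihood of `token_ids` under `logits`.

    logits:    (seq_len, vocab) scores, aligned so logits[t] predicts token_ids[t]
    token_ids: (seq_len,) int64 tensor of the tokens being scored
    """
    logprobs = torch.log_softmax(logits, dim=-1)
    per_token = logprobs.gather(-1, token_ids.unsqueeze(-1)).squeeze(-1)
    return per_token.mean().item()

# toy check: 5 positions over a 100-word vocabulary
score = avg_token_loglik(torch.randn(5, 100), torch.randint(100, (5,)))
```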
**Flamingo**: The OpenFlamingo v2 (Awadalla et al., 2023) models all use a CLIP-based image encoder (CLIP ViT-L/14), leveraging frozen, pretrained vision and language models. The visual features are passed into the language model using a cross-attention-based adapter. These models are a replication of the Flamingo work that introduced this cross-attention-based training method (Alayrac et al., 2022).
**Frozen**: One approach to permit a text-only language model to operate as a multimodal model with no additional multimodal fine-tuning is to use a frozen language model (e.g., GPT-2; Radford et al. 2019) and a multimodal embedding model (e.g., CLIP; Radford et al. 2021) to map images to linear combinations of token embeddings. For example, consider an image of a "pluot" that is represented in the multimodal model's embedding space as a linear combination of its embeddings for the words plum and apricot: i.e., \(\operatorname{encode\_image}(pluot\_image)=\alpha*\operatorname{encode\_ text}(plum)+\beta*\operatorname{encode\_text}(apricot)\). Then, a new token would be created in the language model's vocabulary corresponding to the same linear combination of the language model embeddings for plum and apricot: \(\operatorname{new\_token}(pluot\_image)=\alpha*\operatorname{embed\_token}( plum)+\beta*\operatorname{embed\_token}(apricot)\). Then, the image can be passed into the language model as if it were a token. This combines ideas from Tsimpoukelli et al. (2021) and Norouzi et al. (2014) and was first introduced by dzyk (2023).
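The mapping above can be sketched in a few lines; everything here (the mini-vocabulary, the random embeddings, the image built from a known mixture) is an illustrative stand-in for real CLIP and language-model embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["plum", "apricot", "peach", "grape"]   # hypothetical mini-vocabulary
d_clip, d_lm = 512, 768

clip_text = rng.normal(size=(len(vocab), d_clip))    # CLIP text embeddings, one per word
lm_tokens = rng.normal(size=(len(vocab), d_lm))      # LM token embeddings for the same words
image_emb = 0.6 * clip_text[0] + 0.4 * clip_text[1]  # a "pluot" image in CLIP space

# Express the image as a linear combination of CLIP word embeddings ...
coeffs, *_ = np.linalg.lstsq(clip_text.T, image_emb, rcond=None)
# ... and reuse the same coefficients over the LM's token embeddings.
new_token = coeffs @ lm_tokens   # pseudo-token fed to the frozen LM as if it were text
print(np.round(coeffs, 2))       # approximately [0.6, 0.4, 0, 0]
```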
**BLIP**: The BLIP models that we consider (more precisely, BLIP-2 models; Li et al. 2023) use a ViT image encoder (Dosovitskiy et al., 2021), similar to the Flamingo models. Both OpenFlamingo and BLIP support a variety of Transformer-based autoregressive text encoders, some of which are
instruction-tuned (including InstructBLIP, which is instruction-tuned to follow directions; Dai et al., 2023). Unlike the other models, they are trained with both a likelihood-based and a similarity-based objective. We analyze both their likelihood-based and similarity-based metric outputs.
### Similarity-based Metrics
**CLIP**: CLIP is a widely used multimodal technique mapping text and images to a shared embedding space using a contrastive objective (i.e., bringing together the embeddings associated with ground-truth text-image pairs while moving apart unassociated text-image pairs; Radford et al., 2021). Trained on large amounts of data, CLIP-based methods for image description evaluation (in particular, CLIPScore; Hessel et al., 2021) have been proposed.
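A minimal sketch of such a similarity-based score, following the CLIPScore recipe of a rescaled, clipped cosine similarity (the \(w=2.5\) rescaling follows Hessel et al., 2021; the embeddings here are placeholders for real CLIP encoder outputs):

```python
import numpy as np

def clipscore(image_emb, text_emb, w=2.5):
    """CLIPScore-style rating: w * max(cos(image, text), 0)."""
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_emb = text_emb / np.linalg.norm(text_emb)
    return w * max(float(image_emb @ text_emb), 0.0)

# placeholder embeddings; in practice these come from CLIP's two encoders
score = clipscore(np.random.default_rng(0).normal(size=512),
                  np.random.default_rng(1).normal(size=512))
```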
We can incorporate context by including terms that take into account the cosine similarity between the context and the image or between the description and the context. We use the method proposed in Kreiss et al. (2022), which shows a promising correlation with sighted as well as blind and low vision participant quality judgments. Intuitively, the method amends CLIPScore to incorporate the similarity of the description and context and replaces the similarity of the description to the image with the similarity of the description to information added by the image to the context. We use this as our main CLIP method and refer to the original CLIPScore as _Orig. CLIPScore_ elsewhere.
However, despite their widespread use, CLIP-based approaches suffer some key limitations. First, the most widely used Vision Transformer (ViT) models (but not ResNet models; He et al., 2016) expect center-cropped images, which fundamentally limits their usefulness as image-description-evaluation tools. In addition, the default text encoder for CLIP has a 77-token limit, which also applies to the substantial majority of the text encoders in OpenCLIP (note, however, that this doesn't apply to all of the text encoders in OpenCLIP, e.g., to RoBERTa; Ilharco et al., 2021). We also include CoCa under this umbrella, which modifies CLIP by adding an image captioning objective to the language model and is included in OpenCLIP (Yu et al., 2022).
**BLIP**: As mentioned, BLIP is trained with both likelihood and similarity objectives. Consequently, we evaluate both objectives in this study. Notably, BLIP is actually trained with two similarity objectives (an item matching score and an item contrastive score), but in this study we focus on the item contrastive score since it tended to achieve higher correlation with our human judgment data. To compute the description quality scores, we use BLIP embeddings in the same way we use CLIP embeddings.
## 4 ContextRef: Evaluating Correlation with Human Judgments
The first part of ContextRef allows users to correlate model-assigned scores with human preference ratings. Image description quality judgments have been extensively studied; Bernardi et al. (2016) provide an overview of the various dimensions prior research has explored for determining quality, including accuracy, grammaticality, creativity, and human-like content. More recent frameworks include THumB (Kasai et al., 2022) and gamified quality ratings (Scott et al., 2023). Since image accessibility is a fundamental use of image description generation and evaluation at scale, we adopt the evaluation scheme proposed by Kreiss et al. (2022). They introduce a set of 5 questions to assess multiple dimensions of description quality, which show a promising correlation between sighted and blind and low vision (BLV) participant judgments.
### Stimuli selection
The data was randomly sampled from the English language subset of the WIT dataset (Srinivasan et al., 2021). To provide an in-depth understanding of how model scoring behavior corresponds with human description preferences, we prioritized detailed and high-coverage annotations for each description over increased data sample size. As Sections 4.4 and 5.2 show, the dataset size is sufficient to highlight robust patterns in model behavior.
Our dataset contains 204 sampled data points, each of which consists of an alt text description written by Wikipedia editors as well as the corresponding image and context (article title, first paragraph, section title, section text, caption). Sampling was restricted to data where both an alt description (as it appears in the HTML alt tag) and a caption (visible to everyone below the image) were present (Kreiss et al., 2022). In WIT's subset of English Wikipedia, 65% of alt descriptions
are identical to the caption, which is generally discouraged in image accessibility guides (e.g., the WebAIM accessibility guide specifically advises against redundant information2). To obtain the most informative sample, we therefore subsampled such cases to 20% of the crawled data.
Footnote 2: [https://webaim.org/techniques/alttext/](https://webaim.org/techniques/alttext/)
### Procedure
Before starting the main study, participants were introduced to the overall goal of making images nonvisually accessible. Then, participants were given 5 descriptions that they were asked to rate, which were presented within the available context from the Wikipedia article page. The descriptions were randomly sampled, but each participant saw exactly one description that was identical to the caption and 4 descriptions that were distinct from the caption. Participants rated each description twice, once before and once after seeing the image. After the image was revealed, participants saw what they had previously selected so that they could make an informed decision to either keep or change their rating. Each image was rated based on 6 distinct questions.
Question order was randomized between participants, except that the _overall_ quality question always appeared last. Participants were recruited via Prolific (Palan and Schitter, 2018), restricted to US-based workers. The study had a median completion time of 11.5 minutes, and participants received $2.40 compensation ($12.50/hr). We continued recruitment until all descriptions had received at least 3 annotations from workers who passed the attention check (see Appendix A for details).
### Results: Dataset properties
The dataset contains 768 annotations, averaging 3.8 distinct participant ratings for each description (see examples in Appendix Figure A.4). _Overall_ ratings are the most intuitive quality measure, which is why they are the focus of the following dataset analyses. Figure 1A shows the distributions of averaged ratings for each of the questions. Specifically, the _overall_ ratings show encouraging coverage over the whole scale, which is essential for evaluating the effectiveness of metrics. We also find that quality ratings are significantly correlated with the description length, that descriptions are regarded as less useful when they are identical to the associated caption, and that faulty descriptions consistently receive lower ratings from participants. We include details on these analyses in Appendix B.
### Results: Correlation with referenceless metrics
Using the annotated data, we correlate the description quality as predicted by the metrics with the averaged human-annotated description quality. We selected the best-performing model variants based on the highest correlation with the _overall_ post-image ratings (see Appendix D for model details).
Figure 2: Best correlations with human annotations of each model category for predicting description quality. All correlations for overall, imaginability, and relevance are statistically significant Pearson correlations (\(p<0.001\)). No irrelevance correlations are significant. Correlations with ratings participants gave before seeing the image are in light blue, and ratings after seeing the image are in dark blue.
Figure 2 shows the Pearson correlations for each model variant with the human annotations for all quality assessment questions. There is a strong qualitative difference in correlation between the ratings participants provided before seeing the image (presented in light blue) vs. after seeing the image (dark blue), specifically for similarity-based metrics (denoted by circles).
Concretely, similarity-based metrics are uniformly less able to capture pre-image quality judgments than post-image ones, which is not borne out for any of the likelihood-based metrics (denoted by triangles). Most strikingly, this pattern even holds within the same model type (BLIP-2), suggesting that the scoring method itself introduces a robust semantic bias for evaluating descriptions. These differences trace mainly to the descriptions marked as containing inaccurate information (see Appendix E).
While all similarity-based metrics are less successful in predicting pre-image ratings, we place more emphasis on the post-image ratings for two reasons. First, when establishing the annotation scheme, Kreiss et al. (2022) note that sighted participant ratings after seeing the image show slightly higher correlation with blind and low vision participant judgments. Second, it is only after seeing the image that sighted users can evaluate whether descriptions are truthful. In the post-image condition, all potential metrics achieve comparably high correlations with the human ratings (with \(r\approx 0.4\)), except for InstructBLIP (\(r=0.2\)). Nevertheless, the distinction in correlation with the pre-image ratings already points to a qualitative difference between likelihood- and similarity-based metrics and the role that image-text alignment plays for achieving this correlation. This is further supported by high correlations of the predicted ratings within those categories, but not across (see Appendix C).
Based on the correlation with human ratings, these results seem to tell a promising and successful story for the potential of leveraging powerful pretrained models out-of-the-box for referenceless image description evaluation. The by-question and across-metric correlational analyses, however, indicate qualitative differences in the way that the metrics assign these scores.
## 5 ContextRef: Evaluating Robustness
While the high correlations of the metrics with human ratings are reassuring, they provide only limited insight into how the metrics work and where they fail. Based on prior work on what makes descriptions (not) useful and the type of errors language and vision models often make, the second part of ContextRef introduces dataset augmentations which any metric should be expected to be sensitive to. These augmentations are in contrast to many previous approaches testing whether models are insensitive to perturbations (e.g., Qiu et al., 2022; Rohrbach et al., 2018). Here, we expect all augmentations to necessarily result in lower scores than are assigned to the ground-truth data.
### Data Augmentations
The applied data augmentations each target a subset of three potential causes of error: missing image-text alignment, over-reliance on string predictability, and lack of contextual sensitivity. We exemplify each augmentation in Figure 1B.
**Shuffled descriptions**: Descriptions are shuffled to be assigned to a different image from the dataset. This tests whether a metric integrates image and description information jointly and is commonly used to uncover object hallucinations (Radford et al., 2021; Hessel et al., 2021; Cui et al., 2018).

**Shuffled contexts**: The contexts that the images originated from are shuffled. Prior work found that if the connection between the image and the context it appears in isn't apparent from the description, the description receives low quality ratings, especially from BLV participants (Kreiss et al., 2022).

**Shuffled words**: Prior work suggests that grammaticality is an indicator of description quality (Kasai et al., 2022; Mitchell et al., 2012; Elliott and Keller, 2013). Shuffling word order is a long-standing strategy to investigate sensitivity to grammaticality (Barzilay and Lee, 2004; Cao et al., 2020; Parthasarathi et al., 2021), and some Transformer-based language model variants can be trained to effectively perform language modeling without consideration of word order information (Sinha et al., 2021; Abdou et al., 2022). In addition to string predictability, word shuffling can also affect image-text alignment since, for instance, property attribution can become ambiguous (e.g., "a red shirt and blue pants" can become "blue shirt pants a red and").
**Proper name replacement**: We used GPT-4 (OpenAI, 2023) to identify and replace all proper names in the descriptions, such as people's names or locations, with likely alternatives.3 The accuracy of proper nouns is generally difficult to verify from the image alone but essential for error detection. Following the same logic, we also replaced dates in this manipulation. 104 out of the 204 descriptions contain at least one proper name replacement.
Footnote 3: Using GPT-4 allowed for more naturalistic replacements than could be done with pattern-based methods.
**Frequent alignment errors**: Previous work has established a number of common errors that image description generation models make, including the misidentification of colors, clothing items, or people's ages (van Miltenburg and Elliott, 2017). We used GPT-4 to detect and replace those terms with incongruent alternatives, guaranteeing that the description becomes inaccurate. 153 out of the 204 descriptions contain at least one induced common model error.

**Frankenstein images**: A random object (e.g., a golden crown) is saliently placed within the image at a random position (Yu et al., 2022). The score for a description that doesn't mention the added object is expected to be lower due to the salience of the image manipulation. This tests image-text alignment but would likely also be reflected in metrics sensitive to image coherence.

**GPT-2 continuations (long/short)**: To test the effect of string predictability on the predicted rating (Rohrbach et al., 2018), descriptions were extended by an additional sentence (_long_ condition). We used GPT-2 (Radford et al., 2019) to generate likely string continuations that are not grounded in the image. To account for the length artifact, we also created a version where GPT-2 completes the first half of the description (_short_ condition). This tests image-text alignment by adding image-independent information that is highly likely.

**Irrelevant final sentence**: To further exaggerate the condition of adding irrelevant but high-probability strings, we add an irrelevant sentence to the end of a description. The sentence is randomly chosen from 10 sentences from Wikipedia, e.g., "The elephant is the largest existing land animal."

**Exact repetition**: Inspired by the observation that language models tend to repeat phrases (Holtzman et al., 2019; Xu et al., 2022; Tang et al., 2023), we add a test for an exact repetition of the description. Reference-based evaluation metrics can show a bias towards long sentences with repeated phrases (SPICE; Liu et al. 2017). Redundant information should be dispreferred by a metric for two reasons. First, redundant information can lead to undesired pragmatic inferences (Nie et al., 2020), and second, accessibility technologies like screen readers make it hard to skip ahead and avoid redundant parts.
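Three of the text-side augmentations reduce to simple string transformations; a minimal sketch of those follows (the GPT-based augmentations require model calls and are omitted here):

```python
import random

IRRELEVANT = "The elephant is the largest existing land animal."

def shuffled_words(desc, seed=0):
    """Shuffle word order while keeping the word set fixed."""
    words = desc.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

def exact_repetition(desc):
    """Repeat the full description once."""
    return desc + " " + desc

def irrelevant_final_sentence(desc):
    """Append a true but image-irrelevant sentence."""
    return desc + " " + IRRELEVANT
```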
### Results
To contextualize the behavior of the various metrics for each augmentation type, Figure 3 shows the exact number of descriptions for which the metrics assigned the same, lower, or higher scores. Given the nature of the augmentations, a well-calibrated metric should assign a lower score for all augmented descriptions, resulting in all green bars. Cases where the metrics are insensitive to the augmentation are marked in light pink. The most problematic cases are marked in dark pink. Here, the metric considers the augmented data to be of higher quality than the ground truth.
No metric passes all data augmentations out-of-the-box. Across augmentation variants, augmented descriptions often counter-intuitively receive a higher score than their ground-truth counterparts (see Appendix F for a complementary analysis of the average assigned scores). This illustrates fundamental shortcomings of selecting referenceless metrics based on correlation with human judgments alone, and shows how such metrics can mislead model development given their behavior on likely model error patterns.
The data augmentation results further support the previous observation that similarity-based and likelihood-based metrics show distinct semantic sensitivities. Notably, they strongly differ in their sensitivity to _shuffled descriptions_. CLIP correctly decreases the score for almost all shuffled descriptions, providing evidence that the task is well-defined. The original CLIPScore and BLIP-2 are similarly successful, which is perhaps unsurprising given the contrastive learning objective underlying the scores and provides further evidence that similarity-based metrics are sensitive to image-text mismatches. However, the Frozen metric, which showed a comparatively strong correlation with the human data, increases its score for more than 25% of all incompatible descriptions, and the best-performing BLIP-2 does so for more than half. This pattern is similarly reflected in the _Frankenstein images_ augmentation and suggests a key failure case of the likelihood-based metrics.
When it comes to _shuffled contexts_, however, likelihood-based metrics appear comparatively more successful. Even the previously proposed contextual CLIPScore variant that showed encouraging correlations with sighted and BLV user ratings (Kreiss et al., 2022) fails when the contexts are randomly shuffled. Another success story for likelihood-based scores is _shuffled words_, where they achieve ceiling accuracy. For 25% of the descriptions, however, the similarity-based metrics CLIP and BLIP-2 assign a higher score to the shuffled descriptions than to their ordered counterparts.
The most striking failure case of likelihood-based metrics is the strong preference for descriptions that were augmented to increase the predictability of the string (_GPT-2 continuation long, irrelevant final sentence_, and _exact repetition_). For _exact repetition_, all likelihood-based metrics show a categorical preference for the augmented description over the original one, which is only marginally improved for the case where a correct but completely _irrelevant final sentence_ is added. This suggests that increased string predictability (independent of the image) biases especially likelihood-based metrics towards higher scores. This is in line with the prior observation that language models trained for description generation exhibit strong language priors (Rohrbach et al., 2018).
In sum, all models exhibit unexpected behavior and assign higher scores to descriptions that are decidedly worse. However, similarity- and likelihood-based metrics show distinct sensitivity patterns across augmentations. Likelihood-based metrics are highly influenced by added irrelevant information and show comparatively low sensitivity for detecting descriptions that don't belong to an image. However, they are very sensitive to manipulations of word order and context. Interestingly, InstructBLIP had the lowest correlation with human ratings but seems more sensitive to data manipulations than the on-the-surface more promising likelihood-based alternatives.
Based on the behavior on augmented data, similarity-based metrics appear more promising since they consistently judge at least half of all augmented descriptions as worse compared to their original counterpart. However, increased scores for the augmented examples are still present at an alarming rate, and the similarity-based metrics seem to fail to respond meaningfully to context perturbations.
Figure 3: Proportion of augmented descriptions that receive lower scores (green), unchanged scores (light pink), or counter-intuitively higher scores (dark pink). Metrics are sorted according to their correlational performance with the human judgments in Figure 2. Across augmentations, models commonly assign higher scores to augmented descriptions that by definition contain wrong or irrelevant information, omit relevant information, or are ungrammatical.
## 6 Towards Better Metrics via Fine-tuning with ContextRef
The data augmentation results suggest that while out-of-the-box referenceless metrics appear promising in terms of correlation with human judgments, they exhibit a wide range of unexpected behaviors on data augmentations that target image-text alignment, predictability of the string, and context sensitivity. In this section, we explore the extent to which fine-tuning can guide metrics toward capturing the reduced quality associated with these expected model-made errors in the augmentations.
We select CLIP, a similarity-based metric that is the most robust against the data augmentations, and Frozen, a likelihood-based metric that had particularly strong overall correlation with human ratings but still some promising scoring behavior on the data augmentations. We split the data into an 80% train and 20% test split, ensuring that any augmentations involving data shuffling are only shuffled within the respective split to avoid contamination of the test set.
We first trained the best-performing CLIP model for 0.5 epochs with a learning rate of \(5e^{-6}\) and a batch size of 64, with the Adam optimizer (Kingma & Ba, 2014). Fine-tuning CLIP solely on the data augmentations results in deterioration of the human judgment correlation. When reaching 0.5 epochs, CLIP achieves some performance improvements in 7 out of 10 augmentations but only at the cost of reducing the Pearson correlation with the human judgments from 0.36 to 0.27.
To mitigate this issue, we jointly trained on the augmented data and the raw evaluation scores from the human-subjects experiment (Section 4). For this training, we maintain the other hyperparameters but change the learning rate to \(2e^{-6}\). At the cost of a small reduction in the Pearson correlation with human judgments on _overall_ (post-image) ratings (from \(0.36\) to \(0.30\)), fine-tuned CLIP achieves remarkable performance gains on the data augmentations, shown in Table 1. Augmentations with the highest gains are _shuffled words_ (\(+24\%\)) and, with perfect final performance, _GPT-2 continuation long_ (\(+34\%\)), _irrelevant final sentence_ (\(+20\%\)), and _exact repetition_ (\(+34\%\)). For the _shuffled contexts_ augmentation, fine-tuned CLIP also improves performance, but doesn't change its score in 9% of the descriptions and provides a higher score for about 40% of the augmented data compared to the ground truth.
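The exact joint objective is not spelled out here; one plausible formulation, sketched below under that caveat, combines a margin ranking term (each original description should outscore its augmented variant) with an MSE term against the human ratings. The margin value is an assumption, not a reported hyperparameter:

```python
import torch
import torch.nn.functional as F

MARGIN = 0.05  # assumed hyperparameter, not from the paper

def joint_loss(score_orig, score_aug, score_rated, human_rating):
    """score_* are metric outputs in [0, 1]; human_rating is rescaled to [0, 1]."""
    rank = F.relu(MARGIN - (score_orig - score_aug)).mean()  # original beats augmented
    fit = F.mse_loss(score_rated, human_rating)              # match human judgments
    return rank + fit
```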
Fine-tuning Frozen jointly on the human data and data augmentations also improves performance on many of the data augmentations, but it still largely falls behind CLIP. Even with fine-tuning, Frozen can't get any traction on _exact repetition_ and still largely provides higher scores for descriptions containing irrelevant information (_GPT-2 continuation long_ and _irrelevant final sentence_).
These fine-tuning results highlight that aligning existing models with common model shortcomings through fine-tuning can be an effective strategy for developing more intuitive referenceless metrics. For CLIP, a similarity-based metric, fine-tuning alleviates most of the unintuitive behavior. However, context sensitivity remains challenging, suggesting that a successful integration of context in particular may require more fundamental innovations to align metrics with people's judgments.
## 7 Conclusion
Referenceless image description evaluation metrics can support and promote fast progress on image description generation models, but only if they reliably correlate with human preferences. We introduce ContextRef, a benchmark for assessing these metrics against the results of a human-subjects experiment and against data augmentations that should systematically make descriptions worse. We find that no metric excels across all parts of ContextRef, but careful fine-tuning improves metric performance. Integrating context remains a challenge, though; we hope that ContextRef spurs new research on this important aspect of image description generation.
| Dataset variant | CLIP untuned | CLIP tuned | Frozen untuned | Frozen tuned |
| --- | --- | --- | --- | --- |
| shuffled descr. | 100.0 | 100.0 | 66.7 | **69.2** |
| shuffled contexts | 43.9 | **48.8** | 58.5 | **65.9** |
| shuffled words | 67.6 | **91.9** | 100.0 | 100.0 |
| proper name repl. | 76.2 | **81.0** | 85.7 | 85.7 |
| freq. align. errs. | 89.3 | 89.3 | 71.4 | **75.0** |
| frankenstein img. | 100.0 | 100.0 | 53.7 | 53.7 |
| GPT-2 cont. short | 78.1 | **90.2** | 61.0 | **63.4** |
| GPT-2 cont. long | 65.9 | **100.0** | 2.4 | **9.8** |
| irrel. final sent. | 80.5 | **100.0** | 2.4 | **19.5** |
| exact repetition | 65.9 | **100.0** | 0.0 | 0.0 |

Table 1: Model performance (percent) on dataset augmentations before and after jointly fine-tuning on the augmentations and human judgments. Accuracy is the proportion of descriptions in the test set that receive the expected lower score compared to the ground truth.
## Acknowledgements
This research is supported in part by grants from Google and the Generative AI for the Future of Learning program at Stanford.
|
2310.20486 | Optimal Binary Differential Privacy via Graphs | We present the notion of \emph{reasonable utility} for binary mechanisms,
which applies to all utility functions in the literature. This notion induces a
partial ordering on the performance of all binary differentially private (DP)
mechanisms. DP mechanisms that are maximal elements of this ordering are
optimal DP mechanisms for every reasonable utility. By looking at differential
privacy as a randomized graph coloring, we characterize these optimal DP in
terms of their behavior on a certain subset of the boundary datasets we call a
boundary hitting set. In the process of establishing our results, we also
introduce a useful notion that generalizes DP conditions for binary-valued
queries, which we coin as suitable pairs. Suitable pairs abstract away the
algebraic roles of $\varepsilon,\delta$ in the DP framework, making the
derivations and understanding of our proofs simpler. Additionally, the notion
of a suitable pair can potentially capture privacy conditions in frameworks
other than DP and may be of independent interest. | Sahel Torkamani, Javad B. Ebrahimi, Parastoo Sadeghi, Rafael G. L. D'Oliveira, Muriel Médard | 2023-10-31T14:27:42Z | http://arxiv.org/abs/2310.20486v1 | # Optimal Binary Differential Privacy via Graphs
###### Abstract
We present the notion of _reasonable utility_ for binary mechanisms, which applies to all utility functions in the literature. This notion induces a partial ordering on the performance of all binary differentially private (DP) mechanisms. DP mechanisms that are maximal elements of this ordering are optimal DP mechanisms for every reasonable utility. By looking at differential privacy as a randomized graph coloring, we characterize these optimal DP in terms of their behavior on a certain subset of the boundary datasets we call a boundary hitting set. In the process of establishing our results, we also introduce a useful notion that generalizes DP conditions for binary-valued queries, which we coin as suitable pairs. Suitable pairs abstract away the algebraic roles of \(\varepsilon,\delta\) in the DP framework, making the derivations and understanding of our proofs simpler. Additionally, the notion of a suitable pair can potentially capture privacy conditions in frameworks other than DP and may be of independent interest.
## I Introduction
Differential privacy (DP) [2] has emerged as a leading standard in private data analysis [3]. This framework has been instrumental in protecting privacy across a multitude of applications. Most prominently, the United States Census Bureau integrated differential privacy into its 2020 Census release [4]. Furthermore, industry leaders like Google [5], Microsoft [6], and Apple [7] have also incorporated DP into their respective systems. DP is also heavily studied and used in deep learning [8, 9] and federated learning [10, 11].
Differential privacy is often achieved through a randomized perturbation of the true query outputs before sharing them with potentially untrustworthy entities. However, such a perturbation (also known as a DP mechanism) inevitably affects the reliability and utility of the output. Therefore, one of the central and challenging research problems in the field is how to design and implement DP mechanisms to best balance privacy and utility [12]. Under certain parameter settings and assumptions, optimal DP mechanisms have been studied in the literature for real-valued queries [13, 14, 15, 16, 17] and for categorical or binary-valued data [18, 19, 20].
Previous works on differential privacy have considered different utility functions to measure performance. Thus, while a certain DP mechanism might perform well, or even optimally, for a certain utility, it might not do so for another. In Definition 2, we present the notion of _reasonable utility_ for binary mechanisms, which applies to all binary utility functions in the literature. This notion induces a partial ordering on the performance of all binary DP mechanisms. DP mechanisms which are maximal elements of this ordering are optimal DP mechanisms for every reasonable utility. In Theorem 1, we characterize these optimal DP mechanisms. To do so, we look at differential privacy as a randomized graph coloring.
In our graph formulation, each vertex \(v\in V\) of the graph represents a dataset and each edge represents a neighborhood relation. The true value of the binary query is represented by the vertex color (such as blue and red). A DP mechanism is then a randomized coloring of the vertices subject to local privacy constraints. We categorize datasets into boundary and non-boundary datasets. Boundary datasets are those with at least one neighbor with a different true query value (color), and non-boundary datasets are those in which no single individual in the dataset can change the query.
Theorem 1 shows that optimal DP mechanisms are characterized by the values of the DP mechanism on a certain subset of the boundary datasets we call a _boundary hitting set_. Thus, if the values of a DP mechanism on a boundary hitting set are defined and satisfy DP conditions among themselves, then there exists a unique optimal DP mechanism which outperforms all others, for any reasonable utility function.
In the process of establishing our results, we also introduce a useful notion that generalizes DP conditions for binary-valued queries. We coin this as a _suitable pair_, which abstracts away the algebraic roles of \(\varepsilon,\delta\) in the DP framework and instead focuses on the following: a randomized binary mechanism defined on a dataset \(v\) imposes an upper bound and a lower bound on the mechanism on a neighboring dataset \(u\). These bounds at \(u\), in turn, impose upper and lower bounds on the mechanism in the original dataset \(v\). The strength of the notion of suitable pair is that non-local privacy conditions between non-neighboring datasets can be easily understood and manipulated without being entangled in algebraic DP conditions. Thus, simplifying the derivations and understanding of our proofs. Additionally, the notion of a suitable pair can potentially capture privacy conditions in frameworks other than DP and may be of independent interest.
### _Main Contributions_
Our main contributions are as follows.
* In Definition 2, we present the notion of _reasonable utility_ for binary mechanisms, which applies to all utility functions in the literature. This notion induces a partial ordering on the performance of all binary DP mechanisms. DP mechanisms which are maximal elements
of this ordering are optimal DP mechanisms for every reasonable utility.
* In Theorem 1 we characterize optimal DP mechanisms by their values on a certain subset of the boundary datasets, which we call a _boundary hitting set_.
* In Definition 12 we present the notion of a _suitable pair_. This notion generalizes DP conditions for binary-valued queries and abstracts away the algebraic roles of \(\varepsilon,\delta\), thus simplifying our proofs.
* We present Algorithm 1, for finding optimal mechanisms within the suitable pair framework, as well as a more efficient Algorithm 2 for the case where one is solely interested in the output of a mechanism on a specific dataset. The optimality of Algorithm 1 is stated in Theorem 2.
Theorem 1 generalizes the results in [1], which are stated as Corollaries 1 and 2 for the special cases of boundary homogeneous mechanisms and balanced mechanisms, respectively. Definition 12, Algorithms 1 and 2, Theorem 2, and the associated intermediate results are all new in this paper relative to [1].
### _Paper Organization_
Section II contains a statement of the problem and all main results of the paper. In Section II-A, we review basic DP definitions and introduce the notions of reasonable utility, mechanism utility dominance, and optimal mechanisms. Section II-B presents DP mechanisms as randomized colorings of the dataset graph. Section II-C, highlighted by Theorem 1, presents the main results on optimally extending an \((\varepsilon,\delta)\)-DP mechanism that is specified only on a boundary hitting set. Section II-D generalizes the results of Section II-C using the new notion of a suitable pair and presents the necessary and sufficient condition for the existence of the unique optimal extension of a mechanism restricted to a boundary hitting set. It also summarizes the optimal extension in Algorithm 1. All proofs are in Section III.
## II Main Results
### _Differential Privacy_
We denote by \(V\) the family of datasets. We consider a symmetric neighborhood relationship \(\sim\) on \(V\) where \(u,v\in V\) are said to be neighbors if \(u\sim v\). We also consider a finite output space \(Q\), which corresponds to the space over which the output of the queries lies. A randomized mechanism, which we refer to as just a mechanism, is a random function \(\mathcal{M}:V\to Q\), from the family of datasets to the output space.
**Definition 1** (Differential Privacy [21]).: Let \(\varepsilon,\delta\in\mathbb{R}\) be such that \(\varepsilon\geq 0\) and \(0\leq\delta<1\). Let \(V\) be a set and \(\sim\) be a symmetric relation on \(V\). Then, a mechanism \(\mathcal{M}:V\to Q\) is \((\varepsilon,\delta)\)-differentially private if for any \(u\sim v\) and \(S\subseteq Q\), we have \(\Pr[\mathcal{M}(u)\in S]\leq e^{\varepsilon}\Pr[\mathcal{M}(v)\in S]+\delta\). We denote the set of all \((\varepsilon,\delta)\)-DP mechanisms \(\mathcal{M}:V\to Q\) by \(\mathfrak{M}_{\varepsilon,\delta}(V,Q)\). However, when \(V\) and \(Q\) are clear from the context, we refer to \(\mathfrak{M}_{\varepsilon,\delta}(V,Q)\) as \(\mathfrak{M}_{\varepsilon,\delta}\).
In this paper, we consider the case where the size of the output space is \(|Q|=2\), i.e., binary-valued queries. Without loss of generality, we set the output space to \(Q=\{\texttt{blue},\texttt{red}\}\). The DP conditions for any \(u\sim v\) in \(V\) are then as follows.
\[\Pr[\mathcal{M}(u)=\texttt{blue}] \leq e^{\varepsilon}\Pr[\mathcal{M}(v)=\texttt{blue}]+\delta, \tag{1}\] \[1-\Pr[\mathcal{M}(u)=\texttt{blue}] \leq e^{\varepsilon}(1-\Pr[\mathcal{M}(v)=\texttt{blue}])+\delta,\] (2) \[\Pr[\mathcal{M}(v)=\texttt{blue}] \leq e^{\varepsilon}\Pr[\mathcal{M}(u)=\texttt{blue}]+\delta,\] (3) \[1-\Pr[\mathcal{M}(v)=\texttt{blue}] \leq e^{\varepsilon}(1-\Pr[\mathcal{M}(u)=\texttt{blue}])+\delta. \tag{4}\]
Since we only consider binary-valued queries, we have that \(\Pr[\mathcal{M}(u)=\texttt{red}]=1-\Pr[\mathcal{M}(u)=\texttt{blue}]\) and that \(\Pr[\mathcal{M}(v)=\texttt{red}]=1-\Pr[\mathcal{M}(v)=\texttt{blue}]\).
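For concreteness, conditions (1)-(4) can be checked mechanically; the following minimal sketch tests whether a pair of neighboring datasets satisfies all four inequalities:

```python
import math

def is_binary_dp_pair(p_u, p_v, eps, delta):
    """Check inequalities (1)-(4) for neighbors u ~ v, where p_u and p_v
    are Pr[M(u) = blue] and Pr[M(v) = blue]."""
    e = math.exp(eps)
    return (p_u <= e * p_v + delta
            and 1 - p_u <= e * (1 - p_v) + delta
            and p_v <= e * p_u + delta
            and 1 - p_v <= e * (1 - p_u) + delta)
```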
We consider a function \(T:V\to Q\), which we refer to as the true function. Our goal is to approximate the true function \(T\) using an \((\varepsilon,\delta)\)-differentially private mechanism \(\mathcal{M}\). To measure the performance of the mechanism, i.e., how good the approximation is, a utility function \(\mathcal{U}_{T}:\mathfrak{M}_{\varepsilon,\delta}\to\mathbb{R}\) must be defined, where \(\mathcal{U}_{T}[\mathcal{M}]\geq\mathcal{U}_{T}[\mathcal{M}^{\prime}]\) means that the mechanism \(\mathcal{M}\) outperforms \(\mathcal{M}^{\prime}\) with respect to the true function \(T\). In this work, we do not consider a specific utility function, but rather consider a general family of them.
**Definition 2**.: A utility function \(\mathcal{U}:\mathfrak{M}_{\varepsilon,\delta}\to\mathbb{R}\) is _reasonable_ if \(\Pr[\mathcal{M}(u)=T(u)]\geq\Pr[\mathcal{M}^{\prime}(u)=T(u)]\) for every \(u\in V\) implies \(\mathcal{U}[\mathcal{M}]\geq\mathcal{U}[\mathcal{M}^{\prime}]\). When this condition holds, we say that the mechanism \(\mathcal{M}\) dominates \(\mathcal{M}^{\prime}\).
Given the true function \(T\), the notion of domination in Definition 2 induces a partial order on the set \(\mathfrak{M}_{\varepsilon,\delta}\) of all \((\varepsilon,\delta)\)-DP mechanisms. If a mechanism \(\mathcal{M}\) dominates another mechansim \(\mathcal{M}^{\prime}\) then the first one outperforms the second for every reasonable utility function. It is not always the case that two mechanisms can be compared, even when restricted to a reasonable utility. We give an example below.
**Example 1**.: Consider the family of datasets \(V=\{1,2\}\) where \(1\sim 2\) and the true function \(T:V\to Q\) is such that \(T(1)=\texttt{blue}\) and \(T(2)=\texttt{red}\). Let \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\) be two \((\log(2),0.1)\)-DP mechanisms1 defined such that \(\Pr[\mathcal{M}_{1}(1)=\texttt{blue}]=0.58\), \(\Pr[\mathcal{M}_{1}(2)=\texttt{red}]=0.76\), \(\Pr[\mathcal{M}_{2}(1)=\texttt{blue}]=0.64\), and \(\Pr[\mathcal{M}_{2}(2)=\texttt{red}]=0.73\). Then, neither mechanism dominates the other. The reason for this is that there are reasonable utility functions which, for a mechanism \(\mathcal{M}\in\mathfrak{M}_{\varepsilon,\delta}\), might prefer a higher value for \(\Pr[\mathcal{M}(1)=\texttt{blue}]\) over a higher value for \(\Pr[\mathcal{M}(2)=\texttt{red}]\), or vice-versa. Extreme cases of this are the reasonable utility functions \(U[\mathcal{M}]=\Pr[\mathcal{M}(1)=\texttt{blue}]\) and \(U^{\prime}[\mathcal{M}]=\Pr[\mathcal{M}(2)=\texttt{red}]\), which disagree on which of \(\mathcal{M}_{1}\) or \(\mathcal{M}_{2}\) is better.
Footnote 1: In this paper, by \(\log\) we mean the natural logarithm.
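The numbers in Example 1 can be verified directly; the sketch below (using \(e^{\varepsilon}=2\) for \(\varepsilon=\log(2)\), with a tiny tolerance for floating-point equality) confirms that both mechanisms satisfy the four DP inequalities and that neither dominates the other:

```python
e, delta, tol = 2.0, 0.1, 1e-12   # e^eps = 2 for eps = log(2)

def ok(p_u, p_v):  # inequalities (1)-(4) for neighbors u ~ v
    return (p_u <= e * p_v + delta + tol and 1 - p_u <= e * (1 - p_v) + delta + tol
            and p_v <= e * p_u + delta + tol and 1 - p_v <= e * (1 - p_u) + delta + tol)

m1 = {1: 0.58, 2: 1 - 0.76}   # Pr[M_1(v) = blue]
m2 = {1: 0.64, 2: 1 - 0.73}   # Pr[M_2(v) = blue]
assert ok(m1[1], m1[2]) and ok(m2[1], m2[2])   # both are (log 2, 0.1)-DP
# M_2 is more accurate on dataset 1, M_1 on dataset 2: incomparable.
assert m2[1] > m1[1] and 1 - m1[2] > 1 - m2[2]
```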
For more discussion and insight on the notion of reasonable utility and its extensions, see Sections IV and V.
We are interested in characterizing the optimal \((\varepsilon,\delta)\)-DP mechanisms, i.e., the \((\varepsilon,\delta)\)-DP mechanisms \(\mathcal{M}\) which are not dominated by any other mechanism. These correspond to the maximal elements in the partial order \(\mathfrak{M}_{\varepsilon,\delta}\). To find such
mechanisms, we reinterpret the problem as a randomized graph coloring problem, which we describe in Section II-B.
This random graph coloring approach, together with the notion of suitable pairs defined in Section II-D, allows us to abstract the problem of characterizing optimal \((\varepsilon,\delta)\)-DP mechanisms. Through such abstraction, we show in Theorem 1 that if the mechanism is defined only on an appropriate subset of neighboring datasets, between which the true function changes value, then the optimal mechanism can be uniquely found for every other dataset.
### _Differential Privacy as Randomized Graph Colorings_
We interpret differential privacy as a randomized graph coloring problem. The vertices of the graph2\(\mathcal{G}(V,E)\) are the datasets \(u\in V\) and the edges \(E\) encode the neighboring relation on the datasets, i.e., two vertices \(u,v\in V\) have an edge between them if \(u\sim v\). The true function \(T:V\to Q\) is a graph coloring (\(Q\) is the set of colors) of the vertices of \(\mathcal{G}\). For a given color \(j\in Q\), the inverse image \(T^{-1}(j)\) is the set of vertices with true value \(j\). Therefore, for a given vertex \(u\in V\), the inverse image \(T^{-1}(T(u))\) is the set of vertices with the same true value as \(u\). Since differential privacy is a local condition, i.e., a condition on the \(u,v\in V\) such that \(u\sim v\), the notion of a neighborhood is essential.
Footnote 2: We assume the graph is undirected and connected. Otherwise, the results of this paper apply to any connected component of \(\mathcal{G}(V,E)\).
**Definition 3** (Neighborhood).: The neighborhood of a subset \(S\subseteq V\) of vertices, denoted by \(N(S)\), is the set of all vertices in \(V-S\) which are neighbors to at least one element of \(S\).
**Definition 4**.: A sequence of vertices \(u=u_{0},u_{1},\cdots,u_{n}=v\) is said to form a path from \(u\) to \(v\), denoted by \((u,v)\)-path, if \((u_{0},u_{1}),\cdots,(u_{n-1},u_{n})\in E\). We say \(n\) is the path length. The distance between two nodes \(u,v\), denoted by \(\mathrm{dist}(u,v)\), is the shortest path length from \(u\) to \(v\). The distance between two subsets \(A_{1},A_{2}\subset V\) is the shortest path length between any \(a_{1}\in A_{1}\) and any \(a_{2}\in A_{2}\).
**Definition 5** (Boundary Edges).: The _boundary edge_ of \(\mathcal{G}\) with respect to \(T\), denoted by \(\partial_{T}(\mathcal{G},E)\), is the set of the edges in \(\mathcal{G}\) whose two endpoints have different true query values.
**Definition 6** (Boundary Vertices).: The _boundary vertices_ of \(\mathcal{G}\) with respect to \(T\), denoted by \(\partial_{T}(\mathcal{G},V)\), is the set of all the endpoints of the boundary edges.
**Definition 7** (Boundary-hitting Set).: A _boundary-hitting set_ of \(\mathcal{G}\) with respect to \(T\), denoted by \(\mathcal{H}_{T}\), is a subset of the vertices which contains at least one endpoint of every edge in \(\partial_{T}(\mathcal{G},E)\).
**Definition 8** (Boundary Vertices with True Value \(j\)).: The boundary vertices with true value \(j\) is the set of boundary vertices whose true value is \(j\). We denote this set by \(\partial_{T}(\mathcal{G},j):=\partial_{T}(\mathcal{G},V)\cap T^{-1}(j)\).
In Fig. 1 we illustrate these definitions.
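Since Definitions 3–8 are purely combinatorial, they translate directly into code. The following minimal Python sketch is our own illustration (the adjacency-dictionary representation and the function names are assumptions, not the paper's); it computes distances, boundary edges and vertices, and tests the boundary-hitting property.

```python
from collections import deque

def bfs_dist(adj, src):
    """Shortest-path lengths (Definition 4) from src in an unweighted graph,
    given as an adjacency dictionary {vertex: [neighbors]}."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def boundary_edges(adj, T):
    # Definition 5: edges whose two endpoints have different true values
    return {frozenset((u, v)) for u in adj for v in adj[u] if T[u] != T[v]}

def boundary_vertices(adj, T):
    # Definition 6: all endpoints of the boundary edges
    return {u for e in boundary_edges(adj, T) for u in e}

def is_boundary_hitting(adj, T, H):
    # Definition 7: H contains at least one endpoint of every boundary edge
    return all(e & set(H) for e in boundary_edges(adj, T))

# a small path graph with a blue-red coloring and two boundary edges
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
T = {1: "red", 2: "blue", 3: "blue", 4: "red"}
print(is_boundary_hitting(adj, T, {2, 3}))  # True: hits (1,2) and (3,4)
print(is_boundary_hitting(adj, T, {3}))     # False: misses the edge (1,2)
```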
In an \((\varepsilon,\delta)\)-differentially private mechanism, every path connecting two vertices \(u,v\in V\) induces an upper bound on the probability of the mechanism output, i.e., the probability \(\Pr[\mathcal{M}(v)=j]\) induces an upper bound on \(\Pr[\mathcal{M}(u)=j]\) and vice versa. These upper bounds are induced by the \((\varepsilon,\delta)\)-DP conditions (inequalities (1) through (4)). Since each upper bound that \(v\) induces on \(u\) is an increasing function of \(\alpha:=\Pr[\mathcal{M}(v)=j]\) and of the length of the path between \(u\) and \(v\), the shortest path induces the tightest upper bound on \(\Pr[\mathcal{M}(u)=j]\). We formalize this statement through Definition 9 and Proposition 1.
**Definition 9**.: Let \(u,v\) be distinct vertices in \(V\) with distance \(d=\mathrm{dist}(u,v)\) between them and \(\alpha\in[0,1]\) be some fixed value. Then, the probability induced on the vertex \(u\) by the vertex \(v\) with value \(\alpha\) is:
\[p(d,\alpha)=\begin{cases}e^{d\varepsilon}\alpha+\delta\dfrac{e^{d\varepsilon}-1}{e^{\varepsilon}-1},&d\leq\tau,\\[6pt]\min\!\left(1,\;e^{(2\tau-d)\varepsilon}\alpha+1-\dfrac{1}{e^{(d-\tau)\varepsilon}}+\dfrac{\delta\big(e^{\tau\varepsilon}+e^{(d-\tau)\varepsilon}-2\big)}{e^{(d-\tau)\varepsilon}(e^{\varepsilon}-1)}\right),&\tau<d,\end{cases} \tag{5}\]
where
\[\tau=\left\lceil\frac{1}{\varepsilon}\log\frac{(e^{\varepsilon}+2\delta-1)}{( e^{\varepsilon}+1)(e^{\varepsilon}\alpha-\alpha+\delta)}\right\rceil. \tag{6}\]
Taking the minimum in the second line of (5) ensures that \(p(d,\alpha)\) never exceeds \(1\). The next proposition establishes how each vertex \(v\) induces an upper bound on \(\Pr[\mathcal{M}(u)=j]\) for any other vertex \(u\).
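As an illustration, the following Python sketch implements (5) and (6). It is our own transcription: the function names are ours, and we clamp \(\tau\) at zero as a defensive assumption for probabilities above the first-case region (the printed (6) has no clamp). It reproduces, e.g., \(p(3,0.1)=0.7\) and \(p(3,0.3)=0.9\) for \(\varepsilon=\log(2)\), \(\delta=0\), the values that appear in Example 3 below.

```python
import math

def tau(alpha, eps, delta):
    # eq. (6), assuming eps > 0 and 0 < alpha; the clamp at 0 is our own
    # defensive assumption for alpha above the first-case region
    arg = (math.exp(eps) + 2 * delta - 1) / (
        (math.exp(eps) + 1) * (math.exp(eps) * alpha - alpha + delta))
    return max(0, math.ceil(math.log(arg) / eps))

def p(d, alpha, eps, delta):
    # eq. (5): the tightest upper bound induced at distance d
    t = tau(alpha, eps, delta)
    if d <= t:
        return math.exp(d * eps) * alpha \
            + delta * (math.exp(d * eps) - 1) / (math.exp(eps) - 1)
    return min(1.0,
               math.exp((2 * t - d) * eps) * alpha
               + 1 - math.exp(-(d - t) * eps)
               + delta * (math.exp(t * eps) + math.exp((d - t) * eps) - 2)
               / (math.exp((d - t) * eps) * (math.exp(eps) - 1)))

eps = math.log(2)
print(p(3, 0.1, eps, 0.0), p(3, 0.3, eps, 0.0))  # ~0.7 and ~0.9
```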
**Proposition 1**.: _Let \(\mathcal{M}:V\to Q\) be a mechanism. Then, \(\mathcal{M}\) is an \((\varepsilon,\delta)\)-DP mechanism if and only if for every \(j\in\{\texttt{blue},\texttt{red}\}\) and every distinct vertices \(u,v\in V\) with \(d=\mathrm{dist}(u,v)\) and \(\Pr[\mathcal{M}(u)=j]=\alpha\), it holds that \(\Pr[\mathcal{M}(v)=j]\leq p(d,\alpha)\)._
Proposition 1 presents a closed form expression for the differential privacy condition on non-neighboring datasets, which we use, in Theorem 1, to characterize optimal mechanisms.
Fig. 1: In this graph, the vertices are the datasets and the edges are the neighborhood relationships, e.g., since there exists an edge between \(a\) and \(b\), it follows that \(a\sim b\). The true function is a graph coloring of the vertices, e.g., \(T(a)=\texttt{blue}\) and \(T(d)=\texttt{red}\). The set of vertices with true value \(\texttt{blue}\) is \(T^{-1}(\texttt{blue})=\{a,b,g,h,k,\ell,n,s,t,u,v\}\) and the one with true value \(\texttt{red}\) is \(T^{-1}(\texttt{red})=\{d,e,f,i,j,m,n,o,p,q,r\}\). The neighborhood of the vertex set \(\{g,h\}\) is \(N(\{g,h\})=\{a,b,c,i,n\}\). The distance between \(a\) and \(d\) is \(\mathrm{dist}(a,d)=5\), realized by the shortest path \((a,g),(g,c),(c,h),(h,i),(i,d)\) between them. The edges colored in green are the boundary edges \(\partial_{T}(\mathcal{G},E)=\{(\ell,a),(h,n),(i,n),(\ell,v),(r,t),(r,u)\}\). The boundary vertices are the vertices \(\partial_{T}(\mathcal{G},V)=\{\ell,q,h,n,i,r,s,t,u\}\). The sets \(\mathcal{H}_{T}=\{\ell,h,u,s,t\}\) and \(\mathcal{H}^{\prime}_{T}=\{a,q,n,i,s,t,u,v\}\) are boundary-hitting sets, because they contain at least one vertex from each boundary edge, while the set \(\{a,\ell,n,s,t,j\}\) is not a boundary-hitting set since it does not include any vertex from the boundary edge \((h,i)\). Finally, \(\partial_{T}(\mathcal{G},\texttt{blue})=\{h,\ell,s,t,u\}\) and \(\partial_{T}(\mathcal{G},\texttt{red})=\{f,i,n,q,r\}\).
### _Characterizing Optimal Mechanisms_
We show that optimal mechanisms are uniquely characterized by their behavior on the boundary edges, i.e., edges connecting vertices with different true values (see Definition 5). We do this by showing that when a mechanism has been predefined on a boundary-hitting set (see Definition 7), then it can be uniquely extended to an optimal mechanism over all other vertices. These results are shown in Theorem 1 and Algorithm 1. To this end, we introduce the notions of mechanism restriction and extension.
**Definition 10** (Mechanism Restriction).: The restriction of a mechanism \(\mathcal{M}:V\to Q\) to a subset \(A\subseteq V\) is \(\mathcal{M}|_{A}:A\to Q\).
We also refer to \(\mathcal{M}|_{A}\) as a partial mechanism.
**Definition 11** (Mechanism Extension).: Let \(A\subseteq V\) and \(\mathcal{M}^{\prime}:A\to Q\) be a mechanism. Then, a mechanism \(\mathcal{M}:V\to Q\) is an extension of \(\mathcal{M}^{\prime}\) if \(\mathcal{M}|_{A}=\mathcal{M}^{\prime}\).
**Theorem 1**.: _Let \(\mathcal{H}_{T}\) be a boundary-hitting set with respect to the true function \(T\) and \(\mathcal{M}^{\prime}:\mathcal{H}_{T}\to Q\) be a randomized function. Then, there exists a unique optimal \((\varepsilon,\delta)\)-DP mechanism \(\mathcal{M}:V\to Q\) such that \(\mathcal{M}|_{\mathcal{H}_{T}}=\mathcal{M}^{\prime}\) if and only if for every \(u,v\in\mathcal{H}_{T}\) with \(d=\mathrm{dist}(u,v)\), we have \(\Pr[\mathcal{M}^{\prime}(u)=j]\leq p(d,\Pr[\mathcal{M}^{\prime}(v)=j])\) for a fixed \(j\in Q\). Moreover, for every \(u\notin\mathcal{H}_{T}\), the optimal mechanism is_
\[\Pr[\mathcal{M}(u)=T(u)]=\min_{v\in\mathcal{H}_{T}}p(\mathrm{dist }(u,v),\Pr[\mathcal{M}(v)=T(u)]). \tag{7}\]
Theorem 1 establishes that optimal mechanisms are uniquely characterized by their values at the boundary. Moreover, the assumption that \(\mathcal{H}_{T}\) is a boundary-hitting set is essential to the theorem, as the following example shows.
**Example 2**.: Let \(\mathcal{G}(V,E)\) be the graph with vertices \(V=\{v_{1},\ldots,v_{4}\}\) and edges \(E=\{(v_{1},v_{2}),(v_{2},v_{3}),(v_{3},v_{4})\}\), the true function be such that \(T(v_{1})=T(v_{4})=\texttt{red}\) and \(T(v_{2})=T(v_{3})=\texttt{blue}\), and \(A=\{v_{3}\}\). We illustrate this in Fig. 2.
Note that \(A\) is not a boundary-hitting set, since it is missing a vertex from \((v_{1},v_{2})\). We now show a \((\log(2),0)\)-DP mechanism \(\mathcal{M}^{\prime}:A\to Q\) which does not have a unique optimal \((\log(2),0)\)-DP extension to all of \(V\).
Let \(\mathcal{M}^{\prime}\) be such that \(\Pr[\mathcal{M}^{\prime}(v_{3})=\texttt{blue}]=\frac{1}{2}\). Then, the \((\log(2),0)\)-DP mechanisms \(\mathcal{M}_{1}:V\to Q\) such that
\[\Pr[\mathcal{M}_{1}(v_{1})=\texttt{blue}]=\frac{1}{2},\]
\[\Pr[\mathcal{M}_{1}(v_{2})=\texttt{blue}]=\frac{3}{4},\]
\[\Pr[\mathcal{M}_{1}(v_{3})=\texttt{blue}]=\frac{1}{2},\]
\[\Pr[\mathcal{M}_{1}(v_{4})=\texttt{blue}]=\frac{1}{4},\]
and \(\mathcal{M}_{2}:V\to Q\) such that
\[\Pr[\mathcal{M}_{2}(v_{1})=\texttt{blue}]=\frac{1}{8},\]
\[\Pr[\mathcal{M}_{2}(v_{2})=\texttt{blue}]=\frac{1}{4},\]
\[\Pr[\mathcal{M}_{2}(v_{3})=\texttt{blue}]=\frac{1}{2},\]
\[\Pr[\mathcal{M}_{2}(v_{4})=\texttt{blue}]=\frac{1}{4},\]
are extensions of \(\mathcal{M}^{\prime}\) which are not comparable. Indeed, each is a maximal element in the partially ordered set of \((\log(2),0)\)-DP mechanisms on \(\mathcal{G}\).
Theorem 1 generalizes the main results of [1], which we restate as corollaries below.
**Corollary 1**.: Let \(\alpha_{\texttt{blue}},\alpha_{\texttt{red}}\in[0,1]\) be fixed real numbers. Suppose there exists an optimal \((\varepsilon,\delta)\)-DP mechanism \(\mathcal{M}:V\to Q\) satisfying \(\Pr[\mathcal{M}(v)=T(v)]=\alpha_{T(v)}\), for every boundary vertex \(v\in\partial_{T}(\mathcal{G},V)\). Then for every vertex \(u\notin\partial_{T}(\mathcal{G},V)\), the optimal mechanism must satisfy \(\Pr[\mathcal{M}(u)=T(u)]=p(\mathrm{dist}(u,w),\alpha_{T(w)})\), where \(w\in\partial_{T}(\mathcal{G},T(u))\) is the closest boundary vertex to \(u\).3
Footnote 3: Due to a different labelling of vertices in [1], \(\tau\) in (6) is larger than the corresponding \(\tau\) in [1, Definition 8] by one. After appropriate transformations, they both result in the same expression for \(\Pr[\mathcal{M}(u)=T(u)]\).
Corollary 1 states that when the restricted mechanism is homogeneous on the boundary, i.e., \(\Pr[\mathcal{M}^{\prime}(v)=T(v)]=\Pr[\mathcal{M}^{\prime}(w)=T(w)]\) for every \(v,w\in\partial_{T}(\mathcal{G},V)\) such that \(T(v)=T(w)\), finding the optimal extension to a non-boundary vertex \(u\) using (7) reduces to first finding the closest boundary vertex to \(u\), denoted by \(w\). Note that by definition of the boundary and the distance, \(w\) must have the same true value as \(u\). Then, \(\Pr[\mathcal{M}(u)=T(u)]\) will be given by (5) with \(\alpha=\Pr[\mathcal{M}(w)=T(w)]\) and \(d=\mathrm{dist}(u,w)\). A particularly interesting boundary homogeneous case is when the \((\varepsilon,\delta)\)-DP mechanism is balanced, i.e., when \(\alpha_{\texttt{blue}}=\alpha_{\texttt{red}}\), which is stated below.
**Corollary 2**.: In the setting of Corollary 1, suppose \(\alpha_{\texttt{blue}}=\alpha_{\texttt{red}}\). Then, there exists a unique optimal mechanism and it is such that for every \(u\notin\partial_{T}(\mathcal{G},V)\),
\[\Pr[\mathcal{M}(u)=T(u)]=1-\frac{e^{\varepsilon}-1-\delta(e^{\varepsilon(d+1)}+e^{d\varepsilon}-2)}{e^{d\varepsilon}(e^{\varepsilon}+1)(e^{\varepsilon}-1)},\]
where \(d\) is the distance of \(u\) to the boundary \(\partial_{T}(\mathcal{G},V)\).
Fig. 2: The graph \(\mathcal{G}\) in Examples 2 and 3. In Example 2, we show that if the set on which the restricted mechanism is defined is not a boundary-hitting set, then there might not be a unique optimal extension for the setting of Theorem 1. In Example 3, we show how Algorithm 1 works when the restricted mechanism is properly defined on the boundary-hitting set \(\mathcal{H}_{T}=\{v_{1},v_{4}\}\).
### _Suitable Pairs_
To prove our results, we introduce a generalized framework that captures the key conditions of differential privacy. We note that differential privacy imposes local constraints on neighboring vertices. Although non-neighboring vertices ultimately constrain each other, they do so only through intermediate neighboring vertices. These constraints are realized through upper and lower bounds on the probability of the mechanism outputting a value, as captured in (1)-(4). Combining (2) and (3), we obtain the upper bound
\[U_{\texttt{DP}}(\alpha):=\min(e^{\varepsilon}\alpha+\delta,\frac{e^{ \varepsilon}+\delta-1+\alpha}{e^{\varepsilon}},1), \tag{8}\]
where \(\alpha=\Pr[\mathcal{M}(u)=j]\). Analogously, combining (1) and (4), we obtain the lower bound
\[L_{\texttt{DP}}(\alpha):=\max(e^{\varepsilon}\alpha-\delta-e^{ \varepsilon}+1,\frac{\alpha-\delta}{e^{\varepsilon}},0). \tag{9}\]
We generalize this notion in the following definition.
**Definition 12** (Suitable Pair).: Let \(L,U:[0,1]\to[0,1]\) be two increasing functions. We call \((L,U)\) a suitable pair if for every \(\alpha\in[0,1]\) the following three properties hold.
1. \(L(\alpha)\leq\alpha\leq U(\alpha)\),
2. \(L(U(\alpha))\leq\alpha\leq U(L(\alpha))\),
3. \(U(\alpha)\leq 1-L(1-\alpha)\).
The \((\varepsilon,\delta)\)-DP upper and lower bounds in (8) and (9) are then a special case of an \((L,U)\) suitable pair.
**Proposition 2**.: _The functions \(U_{\texttt{DP}}\) and \(L_{\texttt{DP}}\) are a suitable pair._
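Proposition 2 can be illustrated numerically. The following sketch of ours implements (8) and (9) directly and spot-checks the three properties of Definition 12 on a grid; the parameter values are arbitrary.

```python
import math

def U_dp(a, eps, delta):
    # upper bound function, eq. (8)
    return min(math.exp(eps) * a + delta,
               (math.exp(eps) + delta - 1 + a) / math.exp(eps),
               1.0)

def L_dp(a, eps, delta):
    # lower bound function, eq. (9)
    return max(math.exp(eps) * a - delta - math.exp(eps) + 1,
               (a - delta) / math.exp(eps),
               0.0)

eps, delta, tol = 0.5, 0.05, 1e-12
for k in range(101):
    a = k / 100
    assert L_dp(a, eps, delta) <= a <= U_dp(a, eps, delta)           # property 1
    assert L_dp(U_dp(a, eps, delta), eps, delta) <= a + tol          # property 2
    assert a <= U_dp(L_dp(a, eps, delta), eps, delta) + tol
    assert U_dp(a, eps, delta) <= 1 - L_dp(1 - a, eps, delta) + tol  # property 3
```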
The notion of a suitable pair abstracts away the detailed algebraic expressions of differential privacy, e.g., those appearing in (1)-(4). Specifically, the composition \(U^{d}\) captures the upper bound condition that a vertex \(u\) imposes on other vertices at distance \(d\).4 We now define the notion of privacy in the suitable pair framework.
Footnote 4: For \(d\geq 1\), \(U^{d}\) denotes \(d\) compositions of the function \(U\). For function \(L\), \(L^{d}\) is defined similarly.
**Definition 13** (\((L,U)\)-Privacy).: Let \(\mathcal{G}(V,E)\) be a graph and \((L,U)\) be a suitable pair. We say that a randomized mechanism \(\mathcal{M}:V\to Q\) is \((L,U)\)-private if, for any \(u\sim v\) and \(j\in Q\), it holds that \(\Pr[\mathcal{M}(v)=j]\in[L(\alpha),U(\alpha)]\), where \(\alpha=\Pr[\mathcal{M}(u)=j]\).
In Theorem 2 we generalize Theorem 1 to suitable pairs. We begin by showing an intermediate lemma that specifies the necessary and sufficient conditions for the existence of a mechanism extension.
**Lemma 1**.: _Let \((L,U)\) be a suitable pair and \(\mathcal{M}^{\prime}:\mathcal{H}_{T}\to Q\) be a randomized function on a boundary-hitting set \(\mathcal{H}_{T}\). Then, there exists an \((L,U)\)-private mechanism \(\mathcal{M}:V\to Q\), extending \(\mathcal{M}^{\prime}\), if and only if, for every \(u,v\in\mathcal{H}_{T}\) and \(d=\mathrm{dist}(u,v)\), we have \(\alpha\leq U^{d}(\beta)\) and \(\beta\leq U^{d}(\alpha)\), where \(\alpha=\Pr[\mathcal{M}^{\prime}(u)=j]\) and \(\beta=\Pr[\mathcal{M}^{\prime}(v)=j]\) for an arbitrarily chosen \(j\) from \(Q\)._
**Theorem 2**.: _Let \((L,U)\) be a suitable pair and \(\mathcal{M}^{\prime}:\mathcal{H}_{T}\to Q\) be a randomized function on a boundary-hitting set \(\mathcal{H}_{T}\). Then, Algorithm 1 either outputs the optimal \((L,U)\)-private extension of \(\mathcal{M}^{\prime}\) or no \((L,U)\)-private extension \(\mathcal{M}:V\to Q\) of \(\mathcal{M}^{\prime}\) exists._
```
Input: Graph \(\mathcal{G}(V,E)\), true function \(T\), suitable pair functions \((L,U)\),
       boundary-hitting set \(\mathcal{H}_{T}\subseteq V\), randomized function \(\mathcal{M}^{\prime}:\mathcal{H}_{T}\to Q\).
Output: Optimal \((L,U)\)-private extension \(\mathcal{M}:V\to Q\) of \(\mathcal{M}^{\prime}\) if one exists.

Choose \(j\in Q\)
for \(w,v\in\mathcal{H}_{T}\) do
    \(d=\mathrm{dist}(w,v)\)
    if \(\Pr[\mathcal{M}^{\prime}(w)=j]>U^{d}(\Pr[\mathcal{M}^{\prime}(v)=j])\) or \(\Pr[\mathcal{M}^{\prime}(v)=j]>U^{d}(\Pr[\mathcal{M}^{\prime}(w)=j])\) then
        return "No \((L,U)\)-private extension exists."
    end if
end for
for \(w\in V-\mathcal{H}_{T}\) do
    \(\Pr[\mathcal{M}(w)=T(w)]=\min_{u\in\mathcal{H}_{T}}U^{\mathrm{dist}(w,u)}(\Pr[\mathcal{M}^{\prime}(u)=T(w)])\)
end for
return Optimal extension \(\mathcal{M}:V\to Q\) of \(\mathcal{M}^{\prime}\)
```
**Algorithm 1** The Optimal \((L,U)\)-Private Extension
Algorithm 1 works as follows. First, it checks whether the randomized function \(\mathcal{M}^{\prime}\) is extensible at all. If there exists a vertex \(w\in\mathcal{H}_{T}\) such that the probability assigned by \(\mathcal{M}^{\prime}\) at \(w\) exceeds the \((L,U)\) bound imposed by some other vertex in \(\mathcal{H}_{T}\), then an extension is not possible.5 Otherwise, for each \(u\) not in \(\mathcal{H}_{T}\) the algorithm assigns \(\Pr[\mathcal{M}(u)=T(u)]\) to be the minimum upper bound imposed by the vertices in \(\mathcal{H}_{T}\). In this way, it obtains the unique optimal \((L,U)\)-private extension of \(\mathcal{M}^{\prime}\). Theorem 1 follows from Algorithm 1 by setting the upper and lower bound functions for DP according to (8) and (9).
Footnote 5: In Algorithm 1, we have fixed \(j\in Q\) at the beginning of the algorithm. However, this is not necessary. Based on Lemma 9, it is possible to select a different \(j\) in each iteration of the for-loop.
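For concreteness, here is a compact Python rendering of Algorithm 1 for binary \(Q\). This is a sketch under our own conventions: `Mp[u]` stores \(\Pr[\mathcal{M}^{\prime}(u)=j]\), `U` is a one-step upper bound function, `bfs_dist` is the helper from the earlier sketch, and the names and tolerance are ours, not the paper's.

```python
def compose_U(U, d, a):
    # the d-fold composition U^d(a)
    for _ in range(d):
        a = U(a)
    return a

def optimal_extension(adj, T, U, H, Mp, j, tol=1e-12):
    """Sketch of Algorithm 1 for binary Q. Returns {vertex: Pr[M(vertex)=j]}
    for the unique optimal extension, or None if no extension exists."""
    dist = {u: bfs_dist(adj, u) for u in H}   # bfs_dist: earlier sketch
    for u in H:                               # extensibility check
        for v in H:
            if u != v and Mp[u] > compose_U(U, dist[v][u], Mp[v]) + tol:
                return None
    M = dict(Mp)
    for w in adj:                             # optimal assignment (min over H)
        if w in H:
            continue
        # Pr[M(w)=T(w)] = min over u in H of U^dist(w,u)(Pr[M'(u)=T(w)])
        p_true = min(compose_U(U, dist[u][w],
                               Mp[u] if T[w] == j else 1 - Mp[u])
                     for u in H)
        M[w] = p_true if T[w] == j else 1 - p_true
    return M
```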
From a computational complexity point of view, the significance of Algorithm 1 is as follows. For a given vertex \(u\), a naive approach would consider every possible path between \(u\) and all other vertices in the graph to determine if it satisfies the privacy constraints. In contrast, Algorithm 1 shows that as long as \(\mathcal{H}_{T}\) is a boundary-hitting set, it is sufficient to consider paths between \(u\) and \(\mathcal{H}_{T}\), thus reducing computational complexity. Moreover, one does not need to consider all paths, but only the shortest path between \(u\) and each \(w\in\mathcal{H}_{T}\).
However, in many applications, one might not necessarily be interested in retrieving the whole optimal mechanism but instead in evaluating it on a particular dataset \(u\in V\), reducing complexity even further. For this, we present Algorithm 2.
Whereas in Algorithm 1 we must compute all shortest paths between vertices in \(V-\mathcal{H}_{T}\) and those in \(\mathcal{H}_{T}\), in Algorithm 2 we need only to compute the shortest path between \(u\) and
\(\mathcal{H}_{T}\). If we denote by \(\texttt{B}\) the ball centered at the vertex \(u\) whose radius is the maximum distance from \(u\) to \(\mathcal{H}_{T}\), then the complexity of finding the shortest path between \(u\) and \(\mathcal{H}_{T}\) using Dijkstra's algorithm [22] is \(\Theta(|E(\texttt{B})|+|V(\texttt{B})|\log|V(\texttt{B})|)\), where \(E(\texttt{B})\) and \(V(\texttt{B})\) are the edges and vertices included in the ball, respectively. The complexity of checking for the existence of an extension is \(\mathcal{O}(|\mathcal{H}_{T}|^{2})\). Thus, the time complexity of Algorithm 2 is \(\Theta(|E(\texttt{B})|+|V(\texttt{B})|\log|V(\texttt{B})|)+\mathcal{O}(|\mathcal{H}_{T}|^{2})\).
In Proposition 1, we show that when the \((L,U)\) suitable pair in Algorithm 1 comes from the differential privacy framework, \(U^{d}(\alpha)=p(d,\alpha)\) where \(p(d,\alpha)\) was defined in Definition 9. The following example shows how Algorithm 1 works.
```
Input: Graph \(\mathcal{G}(V,E)\), true function \(T\), suitable pair functions \((L,U)\),
       boundary-hitting set \(\mathcal{H}_{T}\subseteq V\), randomized function \(\mathcal{M}^{\prime}:\mathcal{H}_{T}\to Q\),
       a vertex \(u\in V-\mathcal{H}_{T}\).
Output: Optimal \((L,U)\)-private extension \(\mathcal{M}(u)\).

Choose \(j\in Q\)
for \(w,v\in\mathcal{H}_{T}\) do
    \(d=\mathrm{dist}(w,v)\)
    if \(\Pr[\mathcal{M}^{\prime}(w)=j]>U^{d}(\Pr[\mathcal{M}^{\prime}(v)=j])\) or \(\Pr[\mathcal{M}^{\prime}(v)=j]>U^{d}(\Pr[\mathcal{M}^{\prime}(w)=j])\) then
        return "No \((L,U)\)-private extension exists."
    end if
end for
return \(\Pr[\mathcal{M}(u)=T(u)]=\min_{w\in\mathcal{H}_{T}}U^{\mathrm{dist}(w,u)}(\Pr[\mathcal{M}^{\prime}(w)=T(u)])\)
```
**Algorithm 2** The Optimal \((L,U)\)-Private Extension Evaluated on a Particular Dataset
**Example 3**.: Consider the graph in Fig. 2 again and let the boundary hitting set be \(\mathcal{H}_{T}=\{v_{1},v_{4}\}\). Let \(\epsilon=\log(2)\), \(\delta=0\) and fix the mechanism on \(\mathcal{H}_{T}\) as \(\alpha=\Pr[\mathcal{M}(v_{1})=\texttt{blue}]=1-\Pr[\mathcal{M}(v_{1})= \texttt{red}]=0.3\) and \(\beta=\Pr[\mathcal{M}(v_{4})=\texttt{blue}]=1-\Pr[\mathcal{M}(v_{4})= \texttt{red}]=0.1\). Note that \(\mathrm{dist}(v_{1},v_{4})=3\).
Algorithm 1 first checks that the mechanism can be extended. Let \(j=\texttt{blue}\). Algorithm 1 checks that \(0.3\leq U^{3}(0.1)=p(3,0.1)=0.7\) and \(0.1\leq U^{3}(0.3)=p(3,0.3)=0.9\). Therefore, the partial mechanism can be extended.6
Footnote 6: For \(\epsilon=\log(2)\) and \(\delta=0\), (6) gives \(\tau=2\) for \(\alpha=0.1\) and \(\tau=1\) for \(\alpha=0.3\). Therefore, \(p(3,0.1)=0.7\) and \(p(3,0.3)=0.9\).
Next, Algorithm 1 assigns optimal values to \(\Pr[\mathcal{M}(v_{2})=\texttt{blue}]\) and \(\Pr[\mathcal{M}(v_{3})=\texttt{blue}]\). Let us first consider \(v_{2}\) and the upper bound that each of the vertices \(v_{1}\) and \(v_{4}\) imposes on \(v_{2}\). From Definition 9, we have \(U^{\mathrm{dist}(v_{1},v_{2})}(0.3)=p(1,0.3)=0.6\) and \(U^{\mathrm{dist}(v_{2},v_{4})}(0.1)=p(2,0.1)=0.4\). Therefore, \(\Pr[\mathcal{M}(v_{2})=\texttt{blue}]=0.4\). This is remarkable in the sense that \(v_{1}\), which is the closest vertex to \(v_{2}\) in the boundary-hitting set, is not the one that imposes the tightest upper bound on \(\Pr[\mathcal{M}(v_{2})=\texttt{blue}]\). Instead, \(v_{4}\), which is farther from \(v_{2}\), has a more restrictive effect on \(v_{2}\) taking its true value \(\texttt{blue}\). Finally, consider \(v_{3}\). We have \(U^{\mathrm{dist}(v_{1},v_{3})}(0.3)=p(2,0.3)=0.8\) and \(U^{\mathrm{dist}(v_{3},v_{4})}(0.1)=p(1,0.1)=0.2\). Therefore, \(\Pr[\mathcal{M}(v_{3})=\texttt{blue}]=0.2\).
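Running the sketches above on Example 3 reproduces these numbers (again, an illustration with our own variable names, reusing `U_dp` and `optimal_extension` from the earlier snippets):

```python
import math

eps, delta = math.log(2), 0.0
U = lambda a: U_dp(a, eps, delta)             # one-step bound from eq. (8)
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}  # the path v1-v2-v3-v4
T = {1: "red", 2: "blue", 3: "blue", 4: "red"}
M = optimal_extension(adj, T, U, H={1, 4}, Mp={1: 0.3, 4: 0.1}, j="blue")
print(M)  # {1: 0.3, 4: 0.1, 2: 0.4, 3: 0.2}
```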
## III Proofs
In this section, we prove all our results. We start by showing some intermediate lemmas that we use.
### _Intermediate Lemmas_
The following lemma states a graph-theoretic result that we use in various proofs.
**Lemma 2**.: _Let \(\mathcal{G}(V,E)\) be a graph and \(u\sim v\in V\). Then for every vertex \(w\in V\), we have \(\mathrm{dist}(u,w)\leq\mathrm{dist}(v,w)+1\)._
Proof.: Consider a shortest path \(w=w_{0},w_{1},\ldots,w_{\mathrm{dist}(w,v)}=v\) connecting \(w\) to \(v\). Since \(u\) is a neighbor of \(v\), \(w=w_{0},w_{1},\ldots,w_{\mathrm{dist}(w,v)}=v,u\) is a path connecting \(w\) to \(u\) with length \(\mathrm{dist}(w,v)+1\). Thus, \(\mathrm{dist}(w,u)\leq\mathrm{dist}(w,v)+1\).
The triangle inequality follows by extension: for any \(u,v,w\in V\), \(\mathrm{dist}(u,w)\leq\mathrm{dist}(u,v)+\mathrm{dist}(v,w)\).
In the next two lemmas, we provide explicit forms for the upper bound function \(U_{\texttt{Dp}}\) and the lower bound function \(L_{\texttt{Dp}}\) in the suitable pair, corresponding to the \((\varepsilon,\delta)\)-DP mechanism.
**Lemma 3**.: _The upper bound function in (8) can be rewritten as follows._
\[U_{\texttt{Dp}}(\alpha)=\begin{cases}e^{\varepsilon}\alpha+\delta&\text{ if } \alpha\leq\frac{1-\delta}{e^{\varepsilon}+1},\\ \frac{e^{\varepsilon}+\delta-1+\alpha}{e^{\varepsilon}}&\text{ if }\frac{1-\delta}{e^{ \varepsilon}+1}\leq\alpha\leq 1-\delta,\\ 1&\text{ if }1-\delta\leq\alpha.\end{cases}\]
Proof.: In the first case,
\[\alpha\leq\frac{1-\delta}{e^{\varepsilon}+1} \Leftrightarrow(e^{2\varepsilon}-1)\alpha\leq(e^{\varepsilon}-1)(1-\delta)\] \[\Leftrightarrow e^{2\varepsilon}\alpha-\alpha\leq e^{\varepsilon}-1+ \delta-e^{\varepsilon}\delta\] \[\Leftrightarrow e^{2\varepsilon}\alpha+e^{\varepsilon}\delta\leq \alpha-1+e^{\varepsilon}+\delta\] \[\Leftrightarrow e^{\varepsilon}\alpha+\delta\leq\frac{e^{ \varepsilon}+\delta-1+\alpha}{e^{\varepsilon}},\]
where the first equivalence holds if \(e^{\varepsilon}-1>0\); if \(e^{\varepsilon}-1=0\) the last inequality also holds. Also,
\[\alpha\leq\frac{1-\delta}{e^{\varepsilon}+1} \Leftrightarrow\alpha e^{\varepsilon}+\alpha\leq 1-\delta\] \[\Leftrightarrow\alpha e^{\varepsilon}+\delta\leq 1-\alpha\] \[\Rightarrow\alpha e^{\varepsilon}+\delta\leq 1.\]
In the second case,
\[\alpha\leq 1-\delta \Leftrightarrow\alpha-1+e^{\varepsilon}+\delta\leq e^{\varepsilon}\] \[\Leftrightarrow\frac{e^{\varepsilon}+\delta-1+\alpha}{e^{ \varepsilon}}\leq 1.\]
Finally,
\[1-\delta\leq\alpha \Leftrightarrow e^{\varepsilon}\leq e^{\varepsilon}+\delta-1+\alpha\] \[\Leftrightarrow 1\leq\frac{e^{\varepsilon}+\delta-1+\alpha}{e^{ \varepsilon}}.\]
**Lemma 4**.: _The lower bound function in (9) can be rewritten as follows._
\[L_{\text{\tiny{\sc DP}}}(\alpha)=\begin{cases}0&\text{if }\alpha\leq\delta,\\ \dfrac{\alpha-\delta}{e^{\varepsilon}}&\text{if }\delta\leq\alpha\leq\dfrac{ \delta+e^{\varepsilon}}{e^{\varepsilon}+1},\\ e^{\varepsilon}\alpha-\delta-e^{\varepsilon}+1&\text{if }\alpha\geq\dfrac{ \delta+e^{\varepsilon}}{e^{\varepsilon}+1}.\end{cases}\]
Proof.: Note that \(L_{\text{\tiny{\sc DP}}}(\alpha)=0\) if and only if \(e^{\varepsilon}\alpha-\delta-e^{\varepsilon}+1\leq 0\) and \(\frac{\alpha-\delta}{e^{\varepsilon}}\leq 0\). This will happen if and only if \(\alpha\leq\frac{e^{\varepsilon}-1+\delta}{e^{\varepsilon}}\) and \(\alpha\leq\delta\), respectively. Since \(\delta\leq\frac{e^{\varepsilon}-1+\delta}{e^{\varepsilon}}\), the first case follows.
For the second and third cases,
\[\alpha\leq\dfrac{\delta+e^{\varepsilon}}{e^{\varepsilon}+1} \Leftrightarrow(e^{\varepsilon}+1)\alpha\leq(\delta+e^{\varepsilon})\] \[\Leftrightarrow(e^{2\varepsilon}-1)\alpha\leq\delta(e^{ \varepsilon}-1)+e^{\varepsilon}(e^{\varepsilon}-1)\] \[\Leftrightarrow e^{2\varepsilon}\alpha-e^{\varepsilon}\delta-e^{2 \varepsilon}+e^{\varepsilon}\leq\alpha-\delta\] \[\Leftrightarrow e^{\varepsilon}\alpha-\delta-e^{\varepsilon}+1 \leq\frac{\alpha-\delta}{e^{\varepsilon}}.\]
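The piecewise forms of Lemmas 3 and 4 can be spot-checked numerically against the min/max definitions (8) and (9). The snippet below is a sketch of ours with arbitrary parameters, reusing `U_dp` and `L_dp` from the Proposition 2 sketch.

```python
import math

def U_piecewise(a, eps, delta):   # Lemma 3
    if a <= (1 - delta) / (math.exp(eps) + 1):
        return math.exp(eps) * a + delta
    if a <= 1 - delta:
        return (math.exp(eps) + delta - 1 + a) / math.exp(eps)
    return 1.0

def L_piecewise(a, eps, delta):   # Lemma 4
    if a <= delta:
        return 0.0
    if a <= (delta + math.exp(eps)) / (math.exp(eps) + 1):
        return (a - delta) / math.exp(eps)
    return math.exp(eps) * a - delta - math.exp(eps) + 1

eps, delta = 0.7, 0.03
for k in range(1001):
    a = k / 1000
    assert abs(U_piecewise(a, eps, delta) - U_dp(a, eps, delta)) < 1e-12
    assert abs(L_piecewise(a, eps, delta) - L_dp(a, eps, delta)) < 1e-12
```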
The next lemma establishes the symmetric nature of suitable pair functions.
**Lemma 5**.: _Let \(\alpha_{1},\alpha_{2}\in[0,1]\) and \((L,U)\) be a suitable pair. Then, \(\alpha_{2}\in[L(\alpha_{1}),U(\alpha_{1})]\) if and only if \(\alpha_{1}\in[L(\alpha_{2}),U(\alpha_{2})]\)._
Proof.: Let \((L,U)\) be a suitable pair and suppose that \(L(\alpha_{1})\leq\alpha_{2}\leq U(\alpha_{1})\). In particular, we have \(L(\alpha_{1})\leq\alpha_{2}\). Since by definition, \(U\) is an increasing function, it follows that \(U(L(\alpha_{1}))\leq U(\alpha_{2})\). On the other hand, according to the definition of \((L,U)\) suitable pair, \(\alpha_{1}\leq U(L(\alpha_{1}))\). Therefore, \(\alpha_{1}\leq U(L(\alpha_{1}))\leq U(\alpha_{2})\).
Similarly, we have \(\alpha_{2}\leq U(\alpha_{1})\). Since \(L\) is also an increasing function, we have \(L(\alpha_{2})\leq L(U(\alpha_{1}))\). According to the definition of \((L,U)\) suitable pair, \(L(U(\alpha_{1}))\leq\alpha_{1}\). Therefore, \(L(\alpha_{2})\leq L(U(\alpha_{1}))\leq\alpha_{1}\). Thus, \(\alpha_{1}\in[L(\alpha_{2}),U(\alpha_{2})]\).
The reverse implication follows analogously.
**Lemma 6**.: _Mechanism \(\mathcal{M}:V\to Q\) is \((L,U)\)-private if and only if for every color \(j\in Q\) and every two datasets \(u,v\in V\) we have:_
\[\beta\in[L^{d}(\alpha),U^{d}(\alpha)],\]
_where \(\alpha:=\Pr[\mathcal{M}(u)=j]\), \(\beta:=\Pr[\mathcal{M}(v)=j]\) and \(d=\operatorname{dist}(u,v)\) is the distance between them._
Proof.: We first prove the forward direction through induction. For the case \(d=1\), the claim follows directly from Definition 13. Suppose that the claim holds for \(d\). Let \(u,v\in V\) such that \(\operatorname{dist}(u,v)=d+1\). Consider a path of length \(d+1\) from \(u\) to \(v\). Let \(w\) be the one-before-the-last vertex in the path, where \(\operatorname{dist}(u,w)=d\). Denote \(\gamma:=\Pr[\mathcal{M}(w)=j]\). By the induction hypothesis for the vertices \(u,w\) we have \(\gamma\in[L^{d}(\alpha),U^{d}(\alpha)]\). Also, since \(L,U\) are increasing functions, we have
\[\gamma\geq L^{d}(\alpha) \Rightarrow L(\gamma)\geq L^{d+1}(\alpha)\] \[\gamma\leq U^{d}(\alpha) \Rightarrow U(\gamma)\leq U^{d+1}(\alpha)\]
Thus, \(L^{d+1}(\alpha)\leq L(\gamma)\leq U(\gamma)\leq U^{d+1}(\alpha)\). Since, \(w\) and \(v\) are adjacent, we have \(\beta\in[L(\gamma),U(\gamma)]\). Therefore,
\[L^{d+1}(\alpha)\leq L(\gamma)\leq\beta\leq U(\gamma)\leq U^{d+1}(\alpha).\]
The backward direction follows by considering only neighboring vertices (\(d=1\)), for which the claim is exactly Definition 13. This completes the proof.
The next lemma extends the third condition of the suitable pair in Definition 12.
**Lemma 7**.: _Let \(d\) be a positive integer and \((L,U)\) be a suitable pair. Then for every \(\alpha\in[0,1]\) we have:_
\[U^{d}(\alpha)\leq 1-L^{d}(1-\alpha).\]
Proof.: We prove this by induction on \(d\) and the proof only uses the monotonicity of the upper bound function \(U\). For a fixed \(\alpha\in[0,1]\) and \(d=1\), we have the following equations directly from the third condition in Definition 12:
\[U(\alpha)+L(1-\alpha)\leq 1\iff U(\alpha)\leq 1-L(1-\alpha).\]
Assume that the condition in the lemma is satisfied for \(d\). Then, we have:
\[U^{d}(\alpha) \leq 1-L^{d}(1-\alpha),\] \[\Rightarrow U\big{(}U^{d}(\alpha)\big{)} \leq U\big{(}1-L^{d}(1-\alpha)\big{)}\] \[\leq 1-L\Big{(}1-\big{(}1-L^{d}(1-\alpha)\big{)}\big{)}\] \[=1-L\big{(}L^{d}(1-\alpha)\big{)}\] \[=1-L^{d+1}(1-\alpha),\]
which completes the proof.
The following Lemma is a simple extension of the second property of an \((L,U)\) suitable pair, which we will use.
**Lemma 8**.: _For every \(\alpha\in[0,1]\) the following holds:_
\[\alpha\leq U^{d}(L^{d}(\alpha)).\]
Proof.: We prove this by iteratively applying the definition of \((L,U)\) suitable pair as follows:
\[U^{d}(L^{d}(\alpha)) =U^{d-1}\Big{(}U\big{(}L(L^{d-1}(\alpha)\big{)}\Big{)}\geq\ U^{d-1 }(L^{d-1}(\alpha))\] \[\geq\ldots\geq U(L(\alpha))\geq\alpha.\]
**Lemma 9**.: _Let \(\alpha,\beta\in[0,1]\) and \(d\geq 1\) be given and assume that the following two relations hold: \(\alpha\leq U^{d}(\beta)\) and \(\beta\leq U^{d}(\alpha)\). Then, we have:_
\[1-\beta \geq L^{d}(1-\alpha), \tag{10}\] \[1-\alpha \leq U^{d}(1-\beta),\] (11) \[1-\alpha \geq L^{d}(1-\beta),\] (12) \[1-\beta \leq U^{d}(1-\alpha),\] (13) \[\alpha \geq L^{d}(\beta),\] (14) \[\beta \geq L^{d}(\alpha). \tag{15}\]
Proof.: \[\beta\leq U^{d}(\alpha)\Rightarrow 1-\beta\geq 1-U^{d}(\alpha)\underset{(a)}{ \geq}L^{d}(1-\alpha),\] (16)
where \((a)\) follows from Lemma 7. Similarly, we have \(1-\alpha\geq L^{d}(1-\beta)\).
Now, we apply \(U^{d}\) to the derived relation in (16) to write
\[U^{d}(1-\beta)\geq U^{d}(L^{d}(1-\alpha))\underset{(a)}{\geq}1-\alpha, \tag{17}\]
where \((a)\) follows from Lemma 8. Similarly, we have \(U^{d}(1-\alpha)\geq 1-\beta\).
The second-to-last inequality, (14), follows from (17) and the fact that \(U^{d}(1-\beta)\leq 1-L^{d}(\beta)\) according to Lemma 7, which together give \(1-\alpha\leq 1-L^{d}(\beta)\). Similarly, we obtain (15), \(L^{d}(\alpha)\leq\beta\).
The above lemma gives a sufficient condition for checking the feasibility of mechanism extension only based on the upper bound function \(U\) (instead of both \(L\) and \(U\)) and for one mechanism value, say \(j\in Q\) (instead of both values in \(Q\)).
### _Proof of Lemma 1_
We first prove the forward direction. Assume that there exists an \((L,U)\)-private mechanism \(\mathcal{M}:V\to Q\), extending \(\mathcal{M}^{\prime}\). By the definition of mechanism extension \(\mathcal{M}|_{\mathcal{H}_{T}}=\mathcal{M}^{\prime}\). Therefore, for any \(u,v\in\mathcal{H}_{T}\), \(\Pr[\mathcal{M}(u)=j]=\Pr[\mathcal{M}^{\prime}(u)=j]\) and \(\Pr[\mathcal{M}(v)=j]=\Pr[\mathcal{M}^{\prime}(v)=j]\). The result then follows directly from Lemma 6.
Now we prove the reverse direction by an explicit construction of \(\mathcal{M}\). Fix \(j\in Q\). Assume that for every \(u,v\in\mathcal{H}_{T}\) and \(d=\mathrm{dist}(u,v)\), we have \(\alpha\leq U^{d}(\beta)\) and \(\beta\leq U^{d}(\alpha)\), where \(\alpha=\Pr[\mathcal{M}^{\prime}(u)=j]\) and \(\beta=\Pr[\mathcal{M}^{\prime}(v)=j]\). We show how to construct a valid \((L,U)\)-private extension \(\mathcal{M}\) from \(\mathcal{M}^{\prime}\). First, set \(\mathcal{M}|_{\mathcal{H}_{T}}=\mathcal{M}^{\prime}\). If \(\mathcal{H}_{T}=V\), we are done. Otherwise, for every \(w\notin\mathcal{H}_{T}\) assign
\[\Pr[\mathcal{M}(w)=T(w)]=\min_{u\in\mathcal{H}_{T}}U^{\mathrm{ dist}(w,u)}(\Pr[\mathcal{M}^{\prime}(u)=T(w)]). \tag{18}\]
For \(w\notin\mathcal{H}_{T}\), let \(u_{w}\in\mathcal{H}_{T}\) be a minimizer of (18) and let \(\alpha_{u_{w}}:=\Pr[\mathcal{M}^{\prime}(u_{w})=T(w)]\). That is, \(U^{\mathrm{dist}(w,u_{w})}(\alpha_{u_{w}})\leq U^{\mathrm{dist}(w,u)}(\Pr[\mathcal{M}^{\prime}(u)=T(w)])\) for any \(u\in\mathcal{H}_{T}\). We must prove that this mechanism is \((L,U)\)-private according to Definition 13. However, it suffices to fix a \(j\in Q\) a priori and, for every \(v\sim w\in V\), only prove \(\alpha_{w}\leq U(\alpha_{v})\) and \(\alpha_{v}\leq U(\alpha_{w})\), where \(\alpha_{v}=\Pr[\mathcal{M}(v)=j]\) and \(\alpha_{w}=\Pr[\mathcal{M}(w)=j]\). The remaining \((L,U)\) relations follow from Lemmas 6 and 9. We need to consider three cases.
**Case 1 (\(v,w\in\mathcal{H}_{T}\)):** This is automatically satisfied by the assumption in the reverse direction of the Lemma (for \(d=\mathrm{dist}(v,w)=1\)).
**Case 2 (\(w\notin\mathcal{H}_{T}\), \(v\in\mathcal{H}_{T}\)):** We consider two subcases depending on the true value of dataset \(w\).
**Subcase 2.1 (\(j=T(w)\)):** We have
\[\alpha_{w} =\Pr[\mathcal{M}(w)=T(w)]=U^{\mathrm{dist}(w,u_{w})}(\alpha_{u_{w}})\] \[\underset{(a)}{\leq}U^{\mathrm{dist}(w,v)}(\Pr[\mathcal{M}^{ \prime}(v)=T(w)])\] \[\underset{(b)}{=}U(\Pr[\mathcal{M}(v)=T(w)]):=U(\alpha_{v}),\]
where \((a)\) follows because \(v\in\mathcal{H}_{T}\) and hence is part of the minimization in (18). In \((b)\) we used the fact that \(v,w\) are neighbors and \(\mathcal{M}|_{\mathcal{H}_{T}}=\mathcal{M}^{\prime}\). Furthermore,
\[\alpha_{v} =\Pr[\mathcal{M}(v)=T(w)]\underset{(a)}{\leq}U^{\mathrm{dist}(v,u _{w})}(\alpha_{u_{w}})\] \[\underset{(b)}{\leq}U^{\mathrm{dist}(v,w)}(U^{\mathrm{dist}(w,u _{w})}(\alpha_{u_{w}})):=U(\alpha_{w}),\]
where \((a)\) follows from the assumption of the reverse part of the lemma for \(v,u_{w}\in\mathcal{H}_{T}\). Inequality \((b)\) follows from the triangle inequality and the property of an \((L,U)\) pair that for \(d_{1}\geq d_{2}\) and any \(\alpha\in[0,1]\), we have \(U^{d_{1}}(\alpha)\geq U^{d_{2}}(\alpha)\). The last equality follows from construction (18) and the fact that \(v\sim w\).
**Subcase 2.2 (\(j\neq T(w)\)):** This subcase follows from the results just proved for \(j=T(w)\) by invoking Lemma 9, since we then have \(1-\alpha_{w}\leq U(1-\alpha_{v})\) and \(1-\alpha_{v}\leq U(1-\alpha_{w})\).
**Case 3 (\(v,w\notin\mathcal{H}_{T}\)):** Note that we must have \(T(v)=T(w)\). Otherwise, \(T(v)\neq T(w)\) means that two neighbors \(v\sim w\) with different true values are both outside of \(\mathcal{H}_{T}\), which contradicts the definition of a boundary-hitting set. We consider two subcases depending on the true value of these vertices.
**Subcase 3.1: (\(j=T(w)=T(v)\))** We have
\[\alpha_{w} :=U^{\mathrm{dist}(w,u_{w})}(\alpha_{u_{w}})\] \[\underset{(a)}{\leq}U^{\mathrm{dist}(w,u_{v})}(\alpha_{u_{v}})\underset{(b)}{\leq}U^{\mathrm{dist}(w,v)+\mathrm{dist}(v,u_{v})}(\alpha_{u_{v}}) \tag{19}\] \[=U^{\mathrm{dist}(w,v)}(U^{\mathrm{dist}(v,u_{v})}(\alpha_{u_{v}})):=U(\alpha_{v}),\]
where \((a)\) follows because \(j=T(v)=T(w)\) and hence, according to construction (18), there must exist \(u_{v}\in\mathcal{H}_{T}\) such that \(\alpha_{v}:=U^{\mathrm{dist}(v,u_{v})}(\alpha_{u_{v}})\). Inequality \((b)\) follows from applying the triangle inequality.
By swapping the roles of \(\alpha_{v}\) and \(\alpha_{w}\) in the above, we obtain \(\alpha_{v}\leq U(\alpha_{w})\).
**Subcase 3.2 (\(j\neq T(w)=T(v)\)):** This follows directly from case 3.1 just proved and Lemma 9.
### _Proof of Theorem 2_
We note that the value assigned to \(\Pr[\mathcal{M}(w)=T(w)]\) in Algorithm 1 is the same as in (18). Thus, by Lemma 1 the algorithm finds an \((L,U)\)-private extension if it exists.
We now show that the private mechanism is the unique optimal mechanism.
Suppose there exists another \((L,U)\)-private mechanism \(\mathcal{M}_{2}\) which extends \(\mathcal{M}^{\prime}\). Let \(w\in V\) be a vertex with \(T(w)=j\). Then, by Lemma 6, for every \(u\in\mathcal{H}_{T}\), it follows that \(\Pr[\mathcal{M}_{2}(w)=j]\leq U^{\mathrm{dist}(w,u)}\left(\Pr[\mathcal{M}^{\prime}(u)=j]\right)\). Taking the minimum over \(u\in\mathcal{H}_{T}\) gives \(\Pr[\mathcal{M}_{2}(w)=j]\leq\Pr[\mathcal{M}(w)=j]\), i.e., \(\Pr[\mathcal{M}_{2}(w)=T(w)]\leq\Pr[\mathcal{M}(w)=T(w)]\) for every \(w\in V\). Thus, \(\mathcal{M}\) dominates every other extension \(\mathcal{M}_{2}\) and is therefore the unique optimum.
### _Proof of Proposition 2_
The functions \(U_{\texttt{Dp}}\) and \(L_{\texttt{Dp}}\) are a suitable pair if they are increasing functions and, for every \(\alpha\in[0,1]\) they satisfy the following conditions:
1. \(L_{\texttt{DP}}(\alpha)\leq\alpha\leq U_{\texttt{DP}}(\alpha)\),
2. \(L_{\texttt{DP}}(U_{\texttt{DP}}(\alpha))\leq\alpha\leq U_{\texttt{DP}}(L_{ \texttt{DP}}(\alpha))\),
3. \(U_{\texttt{DP}}(\alpha)\leq 1-L_{\texttt{DP}}(1-\alpha)\).
We prove each condition as its own Lemma. We first prove the third condition as it will be used in proving the first condition.
**Lemma 10**.: \(U_{\texttt{DP}}(\alpha)=1-L_{\texttt{DP}}(1-\alpha)\)_._
Proof.: Assume \(L_{\texttt{DP}}(1-\alpha)>0\) and \(U_{\texttt{DP}}(\alpha)<1\). Then,
\[U_{\texttt{DP}}(\alpha):=\min(e^{\varepsilon}\alpha+\delta,\frac{e^{ \varepsilon}+\delta-1+\alpha}{e^{\varepsilon}}). \tag{20}\]
And,
\[L_{\texttt{DP}}(1-\alpha):=\max(1-e^{\varepsilon}\alpha-\delta,\frac{1-\alpha -\delta}{e^{\varepsilon}}). \tag{21}\]
Therefore, we have:
\[U_{\texttt{DP}}(\alpha)+L_{\texttt{DP}}(1-\alpha) =\min(e^{\varepsilon}\alpha+\delta,\frac{e^{\varepsilon}+\delta- 1+\alpha}{e^{\varepsilon}})\] \[\quad+\max(1-e^{\varepsilon}\alpha-\delta,\frac{1-\alpha-\delta} {e^{\varepsilon}}).\]
Then, we have:
\[U_{\texttt{DP}}(\alpha)=e^{\varepsilon}\alpha+\delta \iff\] \[e^{\varepsilon}\alpha+\delta\leq\frac{e^{\varepsilon}+\delta-1+ \alpha}{e^{\varepsilon}} \iff\] \[e^{\varepsilon}\alpha+\delta\leq 1+\frac{\delta-1+\alpha}{e^{ \varepsilon}} \iff\] \[\frac{1-\delta-\alpha}{e^{\varepsilon}}\leq 1-e^{\varepsilon}\alpha-\delta\iff\] \[L_{\texttt{DP}}(1-\alpha)=1-e^{\varepsilon}\alpha-\delta.\]
Therefore, \(U_{\texttt{DP}}(\alpha)+L_{\texttt{DP}}(1-\alpha)=1\) whenever \(U_{\texttt{DP}}(\alpha)=e^{\varepsilon}\alpha+\delta\). Similarly,
\[U_{\texttt{DP}}(\alpha)=\frac{e^{\varepsilon}+\delta-1+\alpha} {e^{\varepsilon}} \iff\] \[e^{\varepsilon}\alpha+\delta\geq\frac{e^{\varepsilon}+\delta-1+ \alpha}{e^{\varepsilon}} \iff\] \[e^{\varepsilon}\alpha+\delta\geq 1+\frac{\delta-1+\alpha}{e^{ \varepsilon}} \iff\] \[\frac{1-\delta-\alpha}{e^{\varepsilon}}\geq 1-e^{\varepsilon}\alpha-\delta\iff\] \[L_{\texttt{DP}}(1-\alpha)=\frac{1-\delta-\alpha}{e^{\varepsilon}}.\]
Therefore, \(U_{\texttt{DP}}(\alpha)+L_{\texttt{DP}}(1-\alpha)=1\) again. Finally, Lemma 3 implies that:
\[U_{\texttt{DP}}(\alpha)=1 \iff 1-\delta\leq\alpha \iff \tag{22}\] \[1-\delta-\alpha\leq 0 \iff\frac{1-\delta-\alpha}{e^{\varepsilon}}\leq 0, \tag{23}\]
and
\[U_{\texttt{DP}}(\alpha)=1 \iff 1-\delta\leq\alpha \iff \tag{24}\] \[1-\delta-\alpha\leq 0 \Rightarrow 1-\delta-e^{\varepsilon}\alpha\leq 0. \tag{25}\]
Therefore, \(L_{\texttt{DP}}(1-\alpha)=0\) and \(U_{\texttt{DP}}(\alpha)+L_{\texttt{DP}}(1-\alpha)=1\).
**Lemma 11**.: \(L_{\texttt{DP}}(\alpha)\leq\alpha\leq U_{\texttt{DP}}(\alpha)\)_._
Proof.: We first prove \(U_{\texttt{DP}}(\alpha)\geq\alpha\). We have 3 cases based on the value of \(U_{\texttt{DP}}(\alpha)\) following from Lemma 3.
**Case 1** (when \(U_{\texttt{DP}}(\alpha)=e^{\varepsilon}\alpha+\delta\)): Since \(e^{\varepsilon}\geq 1\) and \(\delta\geq 0\) it holds that \(U_{\texttt{DP}}(\alpha)=e^{\varepsilon}\alpha+\delta\geq\alpha\).
**Case 2** (when \(U_{\texttt{DP}}(\alpha)=\frac{e^{\varepsilon}+\delta-1+\alpha}{e^{\varepsilon}}\)): Since \(e^{\varepsilon}\geq 1\), \(\alpha\leq 1\) and \(\delta\geq 0\) we have
\[1+\frac{\delta}{(e^{\varepsilon}-1)}\geq 1\geq\alpha \iff\] \[1+\frac{\delta}{(e^{\varepsilon}-1)}\geq\alpha \iff\] \[(e^{\varepsilon}-1)+\delta\geq(e^{\varepsilon}-1)\alpha \iff\] \[e^{\varepsilon}+\delta-1\geq e^{\varepsilon}\alpha-\alpha \iff\] \[e^{\varepsilon}+\delta-1+\alpha\geq e^{\varepsilon}\alpha \iff\] \[\frac{e^{\varepsilon}+\delta-1+\alpha}{e^{\varepsilon}}\geq\alpha.\]
**Case 3** (when \(U_{\texttt{DP}}(\alpha)=1\)): Follows from \(\alpha\leq 1\).
We now show that \(L_{\texttt{DP}}(\alpha)\leq\alpha\). It follows from Lemma 10 and what we just proved that \(1-L_{\texttt{DP}}(\alpha)=U_{\texttt{DP}}(1-\alpha)\geq 1-\alpha\). Therefore, \(\alpha\geq L_{\texttt{DP}}(\alpha)\).
**Lemma 12**.: \(L_{\texttt{DP}}(U_{\texttt{DP}}(\alpha))\leq\alpha\leq U_{\texttt{DP}}(L_{ \texttt{DP}}(\alpha))\)_._
Proof.: First, we prove the first inequality. We write \(L_{\texttt{DP}}(U_{\texttt{DP}}(\alpha))\) as follows:
\[L_{\texttt{DP}}(U_{\texttt{DP}}(\alpha))=\max(0,e^{\varepsilon}U_{\texttt{DP} }(\alpha)-\delta-e^{\varepsilon}+1,\frac{U_{\texttt{DP}}(\alpha)-\delta}{e^{ \varepsilon}}). \tag{26}\]
We will have \(L_{\texttt{DP}}(U_{\texttt{DP}}(\alpha))\leq\alpha\) if and only if each expression in the \(\max\) above is at most \(\alpha\). Obviously \(0\leq\alpha\) is true. The other conditions are equivalently written below:
\[e^{\varepsilon}U_{\texttt{DP}}(\alpha)-\delta-e^{\varepsilon}+1 \leq\alpha \iff U_{\texttt{DP}}(\alpha)\leq\frac{\alpha+\delta+e^{\varepsilon}-1}{e^{ \varepsilon}},\] \[\frac{U_{\texttt{DP}}(\alpha)-\delta}{e^{\varepsilon}} \leq\alpha \iff U_{\texttt{DP}}(\alpha)\leq e^{\varepsilon}\alpha+\delta,\]
where the inequalities are true according to the definition of \(U_{\texttt{DP}}\) in (8). Similarly, we have \(\alpha\leq U_{\texttt{DP}}(L_{\texttt{DP}}(\alpha))\).
The last step in completing the proof of Proposition 2 is to prove that both functions \(U_{\texttt{DP}}\) and \(L_{\texttt{DP}}\) are increasing. This can be seen from the equivalent definition of \(U_{\texttt{DP}}(\alpha)\) in Lemma 3: in the first two cases the function is linear with positive slope, in the third case it is constant, and the function is continuous. To prove the same for \(L_{\texttt{DP}}(\alpha)\), we use the fact that \(U_{\texttt{DP}}(1-\alpha)=1-L_{\texttt{DP}}(\alpha)\). Therefore \(L_{\texttt{DP}}(\alpha)=1-U_{\texttt{DP}}(1-\alpha)\) is an increasing function.
### _Proof of Proposition 1_
Via Proposition 2, \((L_{\texttt{DP}},U_{\texttt{DP}})\) is a suitable pair. Therefore, by Lemma 6, \(\mathcal{M}\) will be \((\varepsilon,\delta)\)-DP if and only if for every color \(j\in Q\) and every \(u,v\in V\) we have:
\[\beta\in[L^{d}(\alpha),U^{d}(\alpha)],\]
where \(\alpha:=\Pr[\mathcal{M}(u)=j]\), \(\beta:=\Pr[\mathcal{M}(v)=j]\) and \(d=\operatorname{dist}(u,v)\) is the distance between them. Therefore, we first compute and simplify \(U^{d}_{\texttt{DP}}(\alpha)\). We use Lemma 3 for this.
**Part 1** (Showing that \(\beta\leq U^{d}_{\texttt{DP}}(\alpha)\)): First assume that there exists some \(\tau\) such that for all \(1\leq d\leq\tau\), we have
\(\min(e^{\varepsilon}U_{\textsc{DP}}^{d-1}(\alpha)+\delta,\frac{U_{\textsc{DP}}^{d-1 }(\alpha)-1+e^{\varepsilon}+\delta}{e^{\varepsilon}},1)=e^{\varepsilon}U_{ \textsc{DP}}^{d-1}(\alpha)+\delta\). That is, the first case in Lemma 3 is the tightest upper bound on \(U_{\textsc{DP}}^{d}(\alpha)=U(U_{\textsc{DP}}^{d-1}(\alpha))\). We will soon find the largest \(\tau\) for which this can happen. Iterating over \(d=\tau,\tau-1,\cdots,1\), we will calculate the closed-form expression through induction as follows.
\[U_{\textsc{DP}}^{d}(\alpha) =e^{\varepsilon}U_{\textsc{DP}}^{d-1}(\alpha)+\delta\] \[=e^{\varepsilon}\big{(}e^{\varepsilon}U_{\textsc{DP}}^{d-2}(\alpha)+\delta\big{)}+\delta\] \[\cdots\] \[=e^{\varepsilon}\Bigg{(}e^{\varepsilon}\Big{(}\cdots\big{(}e^{\varepsilon}U_{\textsc{DP}}(\alpha)+\delta\big{)}\cdots+\delta\Big{)}+\delta\Bigg{)}+\delta\] \[=e^{\varepsilon}\Bigg{(}e^{\varepsilon}\Big{(}\cdots\big{(}e^{\varepsilon}(e^{\varepsilon}\alpha+\delta)+\delta\big{)}\cdots+\delta\Big{)}+\delta\Bigg{)}+\delta\] \[=e^{d\varepsilon}\alpha+\delta\frac{e^{d\varepsilon}-1}{e^{\varepsilon}-1}. \tag{27}\]
We want to find the last index for which the iterations (27) hold. First, on the one hand, \(\tau\) satisfies
\[U_{\textsc{DP}}^{\tau}(\alpha)=e^{\tau\varepsilon}\alpha+\delta\frac{e^{\tau \varepsilon}-1}{e^{\varepsilon}-1}. \tag{28}\]
On the other hand, by the definition of \(\tau\), for \(d=\tau+1\), the second case in Lemma 3 will give the tightest upper bound on \(U_{\textsc{DP}}^{\tau+1}(\alpha)\). That is,
\[\min(e^{\varepsilon}U_{\textsc{DP}}^{\tau}(\alpha)+\delta,\frac{U_{\textsc{DP}}^{\tau}(\alpha)-1+e^{\varepsilon}+\delta}{e^{\varepsilon}},1)=\frac{U_{\textsc{DP}}^{\tau}(\alpha)-1+e^{\varepsilon}+\delta}{e^{\varepsilon}}.\]
Therefore, from Lemma 3, we must have
\[U_{\textsc{DP}}^{\tau}(\alpha)\geq\frac{1-\delta}{e^{\varepsilon}+1}. \tag{29}\]
Combining (29) and (28) gives the \(\tau\) as defined in (6).
Now, note that the terms in the three conditions of Lemma 3 are all monotonically non-decreasing. Also, the rate of increase with respect to \(\alpha\) for the conditions of Lemma 3 is, respectively, \(e^{\varepsilon}\), \(1/e^{\varepsilon}\), and \(0\). Therefore, once the second case in Lemma 3 becomes the tightest, it will remain so; the same holds for the third case. In summary, the cases in Lemma 3 do not "toggle" or "alternate" in providing the tightest upper bound for \(d>\tau\).
The last step is to provide a closed-form expression for the iterations \(\tau<d\), where the second case of Lemma 3 is active. Starting with \(d=\tau+1\), we will have
\[U_{\textsc{DP}}^{\tau+1}(\alpha) =\frac{U_{\textsc{DP}}^{\tau}(\alpha)-1+e^{\varepsilon}+\delta}{e^{\varepsilon}}\] \[=\frac{\big{(}e^{\tau\varepsilon}\alpha+\delta\frac{e^{\tau\varepsilon}-1}{e^{\varepsilon}-1}\big{)}-1+e^{\varepsilon}+\delta}{e^{\varepsilon}}\] \[=e^{(\tau-1)\varepsilon}\alpha+1-\frac{1}{e^{\varepsilon}}+\frac{\delta(e^{\tau\varepsilon}+e^{\varepsilon}-2)}{e^{\varepsilon}(e^{\varepsilon}-1)}, \tag{30}\]
which matches the second case in (5) for \(d=\tau+1\). One can use (30) to continue with \(d>\tau+1\).
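As a numerical cross-check of Part 1 (a sketch of ours, reusing `U_dp` and `p` from the earlier snippets, with arbitrary parameters), iterating the one-step bound \(U_{\texttt{DP}}\) reproduces the closed form (5):

```python
eps, delta = 0.4, 0.02
for a in (0.05, 0.2, 0.5):
    for d in range(1, 8):
        x = a
        for _ in range(d):
            x = U_dp(x, eps, delta)          # iterate the one-step bound
        assert abs(x - p(d, a, eps, delta)) < 1e-10
```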
**Part 2** (Showing that \(\beta\geq L_{\textsc{DP}}^{d}(\alpha)\)): From Lemma 7, we have that \(U_{\textsc{DP}}^{d}(1-\alpha)\leq 1-L_{\textsc{DP}}^{d}(\alpha)\) or \(L_{\textsc{DP}}^{d}(\alpha)\leq 1-U_{\textsc{DP}}^{d}(1-\alpha)\). From Lemma 9, we have that \(\beta\leq U_{\textsc{DP}}^{d}(\alpha)\) implies \(\beta\geq 1-U_{\textsc{DP}}^{d}(1-\alpha)\). The result follows immediately.
### _Proof of Theorem 1_
By Proposition 2, \((L_{\textsc{DP}},U_{\textsc{DP}})\) is a suitable pair. Thus, Algorithm 1 can be utilized to find the unique optimal \((\epsilon,\delta)\)-DP mechanism. Then, according to the proof of Proposition 1, the expression for \(U_{\textsc{DP}}^{d}\) is equal to \(p(d,\alpha)\) in Definition 9.
## IV On the Generality of Reasonable Utility
One of our main contributions is the notion of reasonable utility that we present in Definition 2 for binary DP mechanisms. To the best of our knowledge, when restricted to binary mechanisms, all utilities previously suggested in the literature conform to this concept, including those in [13, 14, 15, 16, 17, 18, 19, 20]. Another general notion of utility for binary DP mechanisms is the following.
**Definition 14**.: (Strong reasonable utility). Let \(T\) be the true function and \(\mathfrak{M}_{\epsilon,\delta}\) the family of \((\epsilon,\delta)\)-DP mechanisms. Let \(\mathcal{U}:\mathfrak{M}_{\epsilon,\delta}\rightarrow\mathbb{R}^{\geq 0}\) be a utility function that assigns non-negative real numbers to the mechanisms in \(\mathfrak{M}_{\epsilon,\delta}\). We say \(\mathcal{U}\) is a strong reasonable utility function if the following conditions hold.
1. If \(\mathcal{M}_{1},\mathcal{M}_{2}\in\mathfrak{M}_{\epsilon,\delta}\) and for all \(v\in V\), \(\Pr[\mathcal{M}_{1}(v)=T(v)]\geq\Pr[\mathcal{M}_{2}(v)=T(v)]\) then \(\mathcal{U}(\mathcal{M}_{1})\geq\mathcal{U}(\mathcal{M}_{2})\).
2. If \(\mathcal{M}_{1},\mathcal{M}_{2}\in\mathfrak{M}_{\epsilon,\delta}\) and for all \(v\in V\), \(\Pr[\mathcal{M}_{1}(v)=T(v)]\geq\Pr[\mathcal{M}_{2}(v)=T(v)]\) and for at least one \(v\), the inequality is strict, then \(\mathcal{U}(\mathcal{M}_{1})>\mathcal{U}(\mathcal{M}_{2})\).
Similar to the proof of Theorem 2, one can argue that for any partial mechanism and any strong reasonable utility function \(\mathcal{U}\), if an \((\varepsilon,\delta)\) differentially private extension exists, then there exists a unique optimal extension with respect to \(\mathcal{U}\). This optimal extension is independent of the actual utility function, and, moreover, Algorithm 1 outputs this unique optimum extension mechanism.
## V Extending to Non-Binary Mechanisms
It is not clear how to extend the notion of reasonable utility to non-binary DP mechanisms. The difficulty arises because, in binary mechanisms (e.g., \(Q=\{\texttt{blue},\texttt{red}\}\)), optimizing the output probability for the true value (e.g., blue) is straightforward and is equivalent to minimizing its incorrect outcome (e.g., red). In contrast, with more options (e.g., \(Q=\{\texttt{blue},\texttt{red},\texttt{green}\}\)), it is clear that the probability of the true outcome (blue) should be optimized, but the treatment of other outcomes (red, green) lacks clarity without further assumptions. This issue has been explored using assumptions based on a lexicographical ordering [23] and a dominance ordering [24]. Optimal mechanisms under these assumptions have been proposed for the special boundary-homogeneous case.
|
2309.14640 | Ab initio surface chemistry with chemical accuracy | First-principles calculations are a cornerstone of modern surface science and
heterogeneous catalysis. However, accurate reaction energies and barrier
heights are frequently inaccessible due to the approximations demanded by the
large number of atoms. Here we combine developments in local correlation and
periodic correlated wavefunction theory to solve the many-electron
Schr\"odinger equation for molecules on surfaces with chemical accuracy,
commonly defined as 1~kcal/mol. As a demonstration, we study water on the
surface of \ce{Al2O3} and \ce{TiO2}, two prototypical and industrially
important metal oxides for which we obtain converged energies at the level of
coupled-cluster theory with single, double, and perturbative triple excitations
[CCSD(T)], commonly known as the "gold-standard" in molecular quantum
chemistry. We definitively resolve the energetics associated with water
adsorption and dissociation, enabling us to address recent experiments and to
analyze the errors of more commonly used approximate theories. | Hong-Zhou Ye, Timothy C. Berkelbach | 2023-09-26T03:43:31Z | http://arxiv.org/abs/2309.14640v2 | # Ab initio surface chemistry with chemical accuracy
###### Abstract
First-principles calculations are a cornerstone of modern surface science and heterogeneous catalysis. However, accurate reaction energies and barrier heights are frequently inaccessible due to the approximations demanded by the large number of atoms. Here we show that these approximations can be systematically eliminated to solve the many-electron Schrodinger equation for molecules on surfaces with chemical accuracy, commonly defined as 1 kcal/mol. As a demonstration, we study water on the surface of Al\({}_{2}\)O\({}_{3}\) and TiO\({}_{2}\), two prototypical and industrially important metal oxides for which we obtain converged energies at the level of coupled-cluster theory with single, double, and perturbative triple excitations [CCSD(T)], commonly known as the "gold-standard" in molecular quantum chemistry. We definitively resolve the energetics associated with water adsorption and dissociation, enabling us to address recent experiments and to analyze the errors of more commonly used approximate theories.
## I Introduction
The structure, bonding, and chemistry of molecules and materials is governed by the many-electron Schrodinger equation, which, for all but the simplest systems, must be solved approximately using numerical techniques. Especially for solids and surfaces, containing a semi-infinite number of atoms, severe approximations have historically been necessary, and a primary effort of computational materials science has been the gradual elimination of these approximations. Early work used noninteracting and mean-field theories of band structure, evolving into the popular density functional theory (DFT), all of which reduce the many-electron Schrodinger equation to a set of self-consistent one-electron Schrodinger equations. However, the limitations of DFT have been noted in the context of chemical reactions,[1; 2] surface adsorption,[3; 4] and heterogeneous catalysis,[5; 6] encouraging the development of more accurate methods applicable to complex systems epitomized by molecules on periodic solid surfaces.
To go beyond one-electron theories, explicit electron correlations can be reintroduced with finite- or infinite-order perturbation theories that can, in principle, be systematically converged to a numerically exact solution. Here, we show that, with new methodological developments, this convergence can be achieved along all necessary axes--including the description of electron correlation, the one-electron basis set, and the size of the model surface--to provide surface chemistry energetics with chemical accuracy, comparable to that which is achievable for small-molecule main-group chemistry.[7; 8] Specifically, as our highest level of theory, we apply coupled-cluster theory with single, double, and perturbative triple excitations [CCSD(T)],[9] commonly known as the "gold standard" in molecular quantum chemistry. The application of such methods to solids has been increasingly pursued over the last few years.[10; 11; 12; 13] We leverage several recent developments in periodic integral evaluation[14; 15; 16] and Gaussian basis sets[17] along with a new implementation of periodic CCSD(T) with local correlation to enable a quantitative study of reactive chemistry on solid surfaces at this high level of theory.
## II Computational methods
Within the Born-Oppenheimer approximation, the total electronic energy is expressed as a sum of the mean-field energy and the correlation energy, \(E=E_{0}+E_{\mathrm{c}}\). We use a local, fragment-based approach wherein the correlation energy of a supercell containing \(N\) electrons is expressed as a sum of contributions from all \(N\) localized occupied orbitals \(i\), \(E_{\mathrm{c}}=\sum_{i=1}^{N}E_{\mathrm{c}}^{(i)}\). Importantly, each contribution \(E_{\mathrm{c}}^{(i)}\) is evaluated independently in a truncated set of occupied and unoccupied orbitals that are optimized for local orbital \(i\); specifically, we use local natural orbitals (LNOs) from second-order perturbation theory.[18; 19] For the insulating materials studied here, the number of LNOs needed for a target accuracy is independent of the total system size. Thus, the cost of calculating the total correlation energy grows only linearly with \(N\) and each calculation of \(E_{\mathrm{c}}^{(i)}\) is independent of all others, which enables highly efficient simulation of large systems through parallel computing. By increasing the number of LNOs, we converge to the exact CCSD(T) energy at a fraction of the cost. This low cost allows us to simulate periodic solids with supercells containing over 100 atoms and almost 1000 electrons using high-quality correlation-consistent one-electron basis sets[17] and thus to reliably eliminate errors stemming from incomplete basis sets, cluster models, or small simulation cells.
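The fragment decomposition makes the computation embarrassingly parallel. The following schematic Python sketch is purely illustrative: the function names and the stub fragment solver are our own placeholders, not the actual implementation. It shows the structure of assembling \(E=E_{0}+\sum_{i}E_{\mathrm{c}}^{(i)}\) over independent fragments.

```python
from concurrent.futures import ProcessPoolExecutor

def fragment_correlation_energy(i):
    """Placeholder for solving CCSD(T) in the truncated LNO subspace of
    localized occupied orbital i (hypothetical stub; a real code would
    construct the LNOs for orbital i and call a CCSD(T) solver here)."""
    return 0.0  # E_c^(i), e.g., in hartree

def total_energy(e_mean_field, n_occ):
    # E = E_0 + sum_i E_c^(i): the fragment problems are mutually independent,
    # so the correlation energy can be accumulated in parallel over the N
    # localized occupied orbitals
    with ProcessPoolExecutor() as pool:
        e_corr = sum(pool.map(fragment_correlation_energy, range(n_occ)))
    return e_mean_field + e_corr
```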
## III Water on metal oxides
As a demonstrative application of these developments, we study the interaction between water and solid metal oxides, which are two of the most abundant substances on Earth. Understanding the chemistry of their interaction is important for myriad technological applications, including electronics, catalysis, and corrosion.[20; 21; 22; 23; 24; 25; 26; 27] For example, semiconducting metal oxides such as TiO\({}_{2}\) are popular photocatalysts for solar water splitting.[23; 24] The chemistry of water on metal oxide surfaces also serves as an important model for general surface chemistry and heterogeneous catalysis, motivating extensive experimental and theoretical research
efforts,[21; 25; 26; 27] with experiments primarily performed using temperature-programmed desorption,[28; 29] vibrational sum frequency generation,[30] and scanning probe microscopies.[31; 32] Specifically, we study Al\({}_{2}\)O\({}_{3}\) and TiO\({}_{2}\), two prototypical metal oxides and subjects of ongoing debates about the fate of a molecularly adsorbed water molecule,[27; 29; 32] which we aim to resolve in the present work.
We first consider Al\({}_{2}\)O\({}_{3}\), which is a common support in heterogeneous catalysis and has been intensively studied as a model metal-oxide surface for water reactivity.[25; 26; 29; 33] In particular, the most stable \(\alpha\)-Al\({}_{2}\)O\({}_{3}\)(0001) surface has been characterized, computationally by DFT and experimentally under ultrahigh vacuum, to be aluminum-terminated with significant structural distortions.
Water undergoes molecular adsorption through an interaction between the water molecule's oxygen lone pair and a three-fold coordinated surface aluminum, after which it can potentially dissociate, transferring a hydrogen atom to a neighboring surface oxygen atom, yielding OH\({}_{\text{ads}}\) and OH\({}_{\text{surf}}\) fragments (Fig. 1A). DFT calculations spanning 25 years[34; 35; 25; 23] all predict that dissociation is favorable by about 10 kcal/mol, with a small barrier of about 4 kcal/mol, suggesting fast and complete dissociation within a few nanoseconds at room temperature (Fig. 1B). The first experimental support for this process came in 2014 by observing signatures of surface hydroxyls in vibrational spectroscopy.[26] However, later experiments using vibrational spectroscopy and temperature-programmed desorption found that the dissociated products can take days to form even in ambient conditions.[29; 30] This timescale is in stark contrast to the small reaction barrier predicted by DFT, highlighting the challenge of studying elementary chemical reactions on well-defined surfaces and raising questions about the origin of this discrepancy.
With high-level periodic quantum chemistry methods in hand, we can accurately quantify the surface reaction energetics, which must be carefully converged with respect to the number of correlated orbitals, the basis set size, the surface size, and the slab thickness (Fig. 1C-H). Figure 1C visualizes the unoccupied LNO subspace for a representative localized occupied orbital, where the number of LNOs can be increased systematically by tightening the truncation threshold. Figure 1D shows the convergence of the LNO-CCSD(T) reaction energy and barrier height for a small \(1\times 1\) surface model containing six atomic layers (6L/\(1\times 1\)) with a TZ basis set, which is the largest system where canonical CCSD(T) results can be generated for comparison. Both energies converge quickly to the canonical results within chemical accuracy using about 100 LNOs per occupied orbital, which is a small fraction of the total number of orbitals (about 630). This fast convergence is consistent with the large gap of Al\({}_{2}\)O\({}_{3}\) and its weakly correlated, main-group electronic structure. As shown in Fig. 1F, the smaller number of LNOs results in significant speedups of CCSD(T); for the large basis sets necessary to eliminate basis set incompleteness errors, the speedup is more than a factor of 100. This high computational efficiency allows us to apply LNO-CCSD(T) to much larger surface models beyond the reach of canonical CCSD(T), as exemplified in Fig. 1E for a 12L/\(2\times 2\) surface model that contains over 80 atoms and 2000 orbitals in the TZ basis. The LNO-CCSD(T) energies again show quick convergence, requiring a number of LNOs comparable to that of the smaller system, demonstrating the ability to scale to large systems without the significant increase in cost that accompanies canonical CCSD(T). We are thus able to fully converge the reaction energetics with respect to the surface size and the basis set size (Fig. 1G) as well as the slab thickness (Fig. 1H).
Our largest system studied (6L/\(3\times 3\) in the TZ basis) contains over 90 atoms and 2500 orbitals.
Our final results obtained using CCSD(T) are presented in Fig. 2A, where they are compared to those obtained by DFT with the popular PBE functional[36] (DFT@PBE), which was used by previous theoretical studies of the same system.[33] CCSD(T) confirms that the dissociative adsorption product predicted by DFT@PBE is thermodynamically more stable than the molecularly adsorbed water by about 9 kcal/mol. However, the reaction barrier from CCSD(T) is about 9 kcal/mol, which is almost twice that predicted by DFT@PBE.
The underestimation of reaction barriers by DFT with a semilocal functional like PBE is well-known and can be traced back to the systematic self-interaction error (SIE) of semilocal functionals.[1] In Fig. 2A, we also show the reaction energetics obtained using DFT with a hybrid functional, PBE0,[37] and second-order Møller–Plesset perturbation theory, MP2. PBE0 mitigates the SIE of PBE through inclusion of the exact exchange energy,[2] while MP2 completely removes the SIE and includes approximate many-body electron correlation to second order. Both methods are seen to improve upon DFT@PBE in the calculated reaction barrier and approach the CCSD(T) result. However, PBE0 and MP2 still underestimate the barrier by about 2 and 1 kcal/mol, respectively, highlighting the challenge of achieving chemical accuracy even with improved functionals or correlated wavefunction methods.
The higher barrier predicted by CCSD(T) has a significant impact on the kinetics of the surface reaction over a wide range of temperature. We approximate the reaction rate using harmonic transition state theory,[38]\(k(T)=\left(k_{\text{B}}T/h\right)\exp\left\{-\left[\Delta E^{\ddagger}+ \Delta G^{\ddagger}_{\text{vib}}(T)\right]/k_{\text{B}}T\right\}\), where \(h\) is Planck's constant, \(k_{\text{B}}\) is Boltzmann's constant, \(T\) is temperature, \(\Delta E^{\ddagger}\) is the reaction barrier shown in Fig. 2A, and \(\Delta G^{\ddagger}_{\text{vib}}(T)\) is the vibrational activation free energy for which we take a constant value of \(-0.5\) kcal/mol as estimated in previous work.[35] The reaction timescales using CCSD(T) and DFT@PBE barrier heights are shown in Fig. 2B for the temperature range of 100-300 K.
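For concreteness, the rate expression above can be evaluated directly. The short script below does so using the rounded barrier heights quoted in the text (about 9 kcal/mol for CCSD(T) and about 4 kcal/mol for DFT@PBE) together with \(\Delta G^{\ddagger}_{\text{vib}}=-0.5\) kcal/mol; because these inputs are rounded, the printed timescales match the quoted values only at the order-of-magnitude level.

```python
# Harmonic transition-state-theory timescales,
# k(T) = (kB*T/h) * exp(-(dE + dG_vib)/(kB*T)).
import math

KB = 0.0019872041      # Boltzmann constant, kcal/(mol K)
KB_SI = 1.380649e-23   # Boltzmann constant, J/K
H_SI = 6.62607015e-34  # Planck constant, J s

def inverse_rate(barrier_kcal, T, dG_vib=-0.5):
    prefactor = KB_SI * T / H_SI                    # attempt frequency, 1/s
    k = prefactor * math.exp(-(barrier_kcal + dG_vib) / (KB * T))
    return 1.0 / k                                  # reaction timescale, s

# Rounded barriers quoted in the text; results are order-of-magnitude only.
for method, barrier in [("CCSD(T)", 9.0), ("DFT@PBE", 4.0)]:
    for T in (100.0, 300.0):
        print(f"{method:8s} T={T:5.0f} K  1/k = {inverse_rate(barrier, T):.2e} s")
```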
At 300 K, all levels of theory predict the dissociation of the water O-H bond to be fast on the surface, with CCSD(T) predicting \(k^{-1}\approx 0.1\)\(\mu\)s, which is three orders of magnitude slower than \(k^{-1}\approx 0.1\) ns from DFT@PBE. The difference between CCSD(T) and DFT@PBE becomes even more prominent at lower temperature. At 100 K, DFT@PBE still predicts fast dissociation with \(k^{-1}\approx 1\) ms. By contrast, CCSD(T) predicts slow kinetics with a time scale of about a day, i.e., about eight orders of magnitude slower than that predicted by DFT@PBE. While the slow kinetics at 100 K predicted by the CCSD(T) barrier height are consistent with cryogenic data,[29]
the prediction at 300 K does not agree with experiments performed at room temperature.[30]
Although classical transition state theory is approximate, due to its neglect of recrossing events and quantum tunnelling,[38] it is highly unlikely that these corrections would be large enough[35] to predict a lifetime consistent with room-temperature experiments,[30] especially in ultrahigh vacuum. Therefore, we conclude that unimolecular adsorption and dissociation of water on a pristine \(\alpha\)-Al\({}_{2}\)O\({}_{3}\)(0001) surface requires about a day at 100 K, but should occur by this mechanism on the sub-microsecond time scale at 300 K. We also predict the dissociation to be essentially irreversible: at 300 K, we predict \(k^{-1}\approx 10\) s for the recombination of the two surface hydroxyl groups, which is \(10^{8}\) times slower than the dissociation reaction. Deviations seen in experiment must be associated with mechanisms not considered here, such as alternative surface motifs, competing reaction pathways, or cooperative effects.[29]
Figure 1: (A) Atomic structure of a single water molecule adsorbed on the \(\alpha\)-Al\({}_{2}\)O\({}_{3}\)(0001) surface. The molecularly adsorbed water molecule (left) may transfer a hydrogen to a neighboring surface oxygen via the transition state (middle), resulting in OH\({}_{\text{ads}}\) and OH\({}_{\text{surf}}\) fragments (right). (B) Schematic illustration of the potential energy surface associated with the water dissociation reaction predicted by DFT. (C) Isosurface visualization of a representative localized occupied orbital and the density of the corresponding unoccupied LNOs, the number of which increases with tightening truncation threshold (left to right). (D) The convergence of the reaction energy (blue) and barrier (red) calculated by LNO-CCSD(T) with the LNO subspace size for a small surface model of 6 atomic layers and \(1\times 1\) surface using a TZ basis set. With about 100 LNOs per occupied orbital, the LNO-CCSD(T) energies converge to the canonical CCSD(T) results (solid horizontal line) to an accuracy better than 1 kcal/mol (shaded area). (E) The same as in (D), but for a larger surface model with 12 atomic layers and \(2\times 2\) surface size, where canonical CCSD(T) is unavailable. (F) Wall time of LNO-CCSD(T) calculations for the 6L/\(1\times 1\) surface model using basis sets of increasing size compared to canonical CCSD(T). (G) Reaction energy (blue) and barrier (red) calculated by LNO-CCSD(T) for a 6L slab with increasing surface size and basis set size. The canonical CCSD(T) results are also shown in hollow markers for the \(1\times 1\) surface. Both energies are converged to better than 1 kcal/mol of accuracy using a \(2\times 2\) surface and a TZ basis set. (H) Reaction energy (blue) and barrier (red) calculated by LNO-CCSD(T) in a TZ basis set for a \(2\times 2\) surface with increasing slab thickness. Both energies are converged to better than 1 kcal/mol of accuracy using a 12L model.
Inaccuracies of the electronic structure theory, which we have shown to be large with common density functional approximations, have been eliminated by our work.
We test the transferability of our findings by studying TiO\({}_{2}\), which has a more complicated electronic structure due to the 3d electrons of the transition metal Ti. The water-TiO\({}_{2}\) interface has been a focus of intensive research activities since the 1970s for its importance in photocatalytic water splitting for hydrogen generation.[23; 24] Particularly, the most stable rutile TiO\({}_{2}\)(110) surface has been characterized, computationally by DFT and experimentally under ultrahigh vacuum, to be terminated by alternating rows of five-fold coordinated titanium (Ti\({}_{\rm 5c}\)) and bridging oxygens (O\({}_{\rm b}\)). Water undergoes molecular adsorption via interaction of its lone pair with Ti\({}_{\rm 5c}\) and a weak hydrogen bond with a neighboring O\({}_{\rm b}\).
Like for Al\({}_{2}\)O\({}_{3}\), the possibility of subsequent dissociative adsorption is debated--not only the kinetics but also the thermodynamics. The hypothesized dissociation occurs through water transferring a proton to its neighboring O\({}_{\rm b}\), yielding OH\({}_{\rm ads}\) and OH\({}_{\rm surf}\) fragments (Fig. 3A). Conflicting results emerged in early experimental studies, wherein temperature-programmed desorption[41] and scanning tunneling microscope[31; 42] experiments found signals of only molecularly adsorbed water in the absence of surface defects, while photoemission spectroscopy[43] observed signatures of OH\({}_{\rm surf}\) and supported mixed molecular and dissociative adsorption. In a recent combined scanning tunneling microscope-molecular beam experiment, dissociation was measured to be slightly endoergic by about 0.8 kcal/mol, with a kinetic barrier of about 8.3 kcal/mol.[32] By contrast, DFT calculations spanning 25 years found conflicting predictions on the relative stability of the two adsorption states,[32; 44; 45; 42] where the results depend sensitively on the choice of approximate functional,[46; 47] dispersion corrections,[48] and the treatment of strong electron correlation.[48; 39]
We revisit this problem with periodic CCSD(T). Despite the more strongly correlated electronic structure of a transition metal oxide, the LNOs are well-localized around their associated localized occupied orbital (Fig. 3B). We have performed a convergence study analogous to that for Al\({}_{2}\)O\({}_{3}\) and determined that the reaction energy and barrier height can be converged to chemical accuracy by using about 400 LNOs per occupied orbital and a 7L/1 \(\times\) 3 surface model (Fig. 3C) that contains about 130 atoms and 5000 orbitals when using a TZ basis set. Our final results are presented in Fig. 3D, where they are compared to DFT with the PBE functional.[36] CCSD(T) predicts the molecular adsorption to be slightly more favorable than the dissociative product by about 2.4 kcal/mol with a kinetic barrier of about 7.9 kcal/mol, which are in reasonable agreement with experimental findings,[32] especially considering that our calculations neglect vibrational and finite-temperature effects. DFT@PBE underestimates both the reaction energy and barrier by about 2 kcal/mol compared to CCSD(T). By contrast, MP2 overestimates the barrier by about the same amount and slightly underestimates the reaction energy, suggesting that third- and higher-order electron correlation effects play an important role in the water-TiO\({}_{2}\) chemistry.
A common approach to improving the performance of semilocal functionals like PBE, especially in transition-metal containing materials, is the DFT+\(U\) method.[49] For TiO\({}_{2}\), this method introduces a local Coulomb interaction to the Ti 3\(d\) atomic orbital, raising the energy of the Ti 3\(d\) band and yielding an increased band gap in better agreement with experiment. For bulk rutile TiO\({}_{2}\), the experimental band gap is about 3.0 eV and PBE predicts 1.8 eV; PBE+\(U\) with the value \(U=8\) eV predicts 2.9 eV.[39]
Applying the same approach to the water dissociation reaction, we find that it worsens agreement with CCSD(T) and experiment: it predicts an exoergic reaction by about 2.5 kcal/mol and a low barrier height of 1.5 kcal/mol (Fig. 3D). This behavior can be understood by considering the bonding of the molecularly adsorbed water molecule. Raising the energy of the empty accepting Ti 3\(d\) orbital weakens the bond and reduces the adsorption energy.
Figure 2: (A) Reaction energetics associated with the water dissociation reaction calculated using different electronic structure methods. (B) Inverse rate for the water dissociation reaction in the temperature range of 100 K to 300 K evaluated using harmonic transition state theory. For CCSD(T), the shaded area encompasses an energy uncertainty of \(\pm\)0.5 kcal/mol.
In other words, with PBE+\(U\), the reactant is destabilized, leading to a smaller barrier height and a more exoergic reaction.
In Fig. 3E and F, we show the bulk band gap and the water dissociation barrier height as a function of the parameter \(U\). Improving one worsens the other, and there is no single value of \(U\) that accurately predicts both properties. This competing behavior is typical of DFT[3] and highlights the challenge of using DFT for computational heterogeneous catalysis, where stretched bonds, dispersion, and strong correlation are all important. By contrast, high-level quantum chemical wavefunction approaches, such as CCSD(T), offer balanced and unbiased treatments of many-body electron correlation without empirical parameters.
## IV Conclusions
To summarize, we computationally investigated the chemistry of water on two prototypical metal-oxide surfaces, Al\({}_{2}\)O\({}_{3}\) and TiO\({}_{2}\), using state-of-the-art periodic quantum chemistry. The quantitative reaction energetics made possible by this work shed new light on the long-standing discrepancies between previous computational studies and experimental observations regarding the chemical equilibrium and kinetics of water molecules on these surfaces. The results of our high-level quantum chemistry calculations also allowed an unbiased examination of the performance of DFT for elementary steps of reactions on real surfaces, for which comparisons to experiment are extremely challenging. Such accurate calculations can immediately be used for validation, selection, or design of more affordable density functional approximations or as training data for machine learning of force fields.[50]
Figure 3: (A) Atomic structure of a single water molecule adsorbed on the rutile TiO\({}_{2}\)(110) surface (left), which may transfer a hydrogen to a neighboring surface oxygen via the transition state (middle), leaving OH\({}_{\text{ads}}\) and OH\({}_{\text{surf}}\) fragments (right). (B) Isosurface visualization of a representative localized occupied orbital and the density of the corresponding unoccupied LNOs obtained with tightening truncation (left to right). (C) The convergence of the LNO-CCSD(T) reaction energy (blue) and barrier (red) with the LNO subspace size for a 7L/1 \(\times\) 3 model using a TZ basis set. Both energies are converged to an accuracy better than 1 kcal/mol (shaded area) with about 400 LNOs per occupied orbital, which is a small fraction of the total orbital count (about 5000). (D) Reaction energetics associated with the water dissociation reaction calculated using different electronic structure methods. For PBE+\(U\), \(U=8\) eV is applied to the 3\(d\) band of Ti as suggested by previous work.[39] (E) Band gap of bulk rutile TiO\({}_{2}\) predicted by PBE+\(U\) with different values of \(U\), compared with the experimental value.[40] (F) Barrier height for the water dissociation predicted by PBE+\(U\) with different values of \(U\), compared with the CCSD(T) reference.
Although alternatives to CCSD(T) for the multiconfigurational electronic structure of strongly correlated solids must be pursued,[6] we anticipate that high-level periodic quantum chemistry approaches will play an increasingly important role in the toolbox of surface science. The large system sizes needed for convergence essentially demand the use of local correlation methods, such as the one used here.
## Acknowledgements
We thank Garnet Chan and Sandeep Sharma for useful discussions. T.C.B. acknowledges the hospitality of the Center for Computational Quantum Physics, Flatiron Institute, where a portion of this work was completed. The Flatiron Institute is a division of the Simons Foundation. This work was supported by the National Science Foundation under Grant Nos. OAC-1931321 and CHE-1848369. We acknowledge computing resources from Columbia University's Shared Research Computing Facility project, which is supported by NIH Research Facility Improvement Grant 1G20RR030893-01, and associated funds from the New York State Empire State Development, Division of Science Technology and Innovation (NYSTAR) Contract C090171, both awarded April 15, 2010.
## Supporting Information
See the supporting information for (i) overall computational details, (ii) convergence of calculations for water on \(\alpha\)-Al\({}_{2}\)O\({}_{3}\)(0001), and (iii) convergence of calculations for water on rutile TiO\({}_{2}\)(110).
## Data Availability Statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2309.10426 | Multi-Object Graph Affordance Network: Goal-Oriented Planning through
Learned Compound Object Affordances | Learning object affordances is an effective tool in the field of robot
learning. While the data-driven models investigate affordances of single or
paired objects, there is a gap in the exploration of affordances of compound
objects composed of an arbitrary number of objects. We propose the Multi-Object
Graph Affordance Network which models complex compound object affordances by
learning the outcomes of robot actions that facilitate interactions between an
object and a compound. Given the depth images of the objects, the object
features are extracted via convolution operations and encoded in the nodes of
graph neural networks. Graph convolution operations are used to encode the
state of the compounds, which are used as input to decoders to predict the
outcome of the object-compound interactions. After learning the compound object
affordances, given different tasks, the learned outcome predictors are used to
plan sequences of stack actions that involve stacking objects on top of each
other, inserting smaller objects into larger containers and passing through
ring-like objects through poles. We showed that our system successfully modeled
the affordances of compound objects that include concave and convex objects, in
both simulated and real-world environments. We benchmarked our system with a
baseline model to highlight its advantages. | Tuba Girgin, Emre Ugur | 2023-09-19T08:40:46Z | http://arxiv.org/abs/2309.10426v3 | Multi-Object Graph Affordance Network: Enabling Goal-Oriented Planning through Compound Object Affordances
###### Abstract
Learning object affordances is an effective tool in the field of robot learning. While data-driven models delve into the affordances of single or paired objects, there is a notable gap in the investigation of affordances of compound objects that are composed of an arbitrary number of objects with complex shapes. In this study, we propose the Multi-Object Graph Affordance Network (MOGAN), which models compound object affordances and predicts the effect of placing new objects on top of the existing compound. Given different tasks, such as building towers of specific heights or properties, we used search-based planning to find the sequence of stack actions with the objects of suitable affordances. We showed that our system was able to correctly model the affordances of very complex compound objects that include stacked spheres and cups, poles, and rings that enclose the poles. We demonstrated the applicability of our system in both simulated and real-world environments, comparing our system with a baseline model to highlight its advantages.
## I Introduction
The affordances concept, introduced by J.J. Gibson to refer to the action possibilities provided by the environment [1], has been significantly influential in robotics research in the last decade [2, 3]. Especially developmental aspects of affordances have been widely adopted in robot learning research [4, 5, 6, 7]. While the previous works explore the affordances of single or paired object interactions, the affordances of compound objects that are composed of an arbitrary number of objects of complex shapes and different sizes have not been sufficiently investigated [8].
Consider an infant trying to build a tower with its toys. Because of the different shapes and sizes of the objects, each one affords different actions. The affordances of the objects may change according to their relations with the other objects in the environment: while an empty cup is insertable by spheres, a cup with a cube placed on top of it is no longer insertable. However, if a large ring is placed above the cup instead, the cup remains insertable. Therefore, the affordance of a tower is formed by the objects in it, but it is not straightforward to model, as it also depends on the relative positions of objects with different affordances. We tackle this problem by representing the objects in the compound as a graph, as the graph representation preserves the spatial relations between objects, enabling effective reasoning.
In our study, we propose the Multi-Object Graph Affordance Network (MOGAN), which learns features from the graph representations utilizing graph neural networks (GNN) and predicts the effects of an action applied. An action is defined as a pick and place operation of a new object onto the current compound object. The effects are the spatial displacements between the new objects and the objects in the compound structure. Because the objects have complex shapes, sophisticated effects are considered, and a suitable novel effect representation is used. In the end, the learned affordances are represented as the relations of the compound object, the new single object, and the effects.
We designed six different tasks using an inventory of single objects, including poles, cups and rings of different sizes, boxes, and spheres. The affordances learned through our model, MOGAN, were utilized to plan a sequence of pick and place actions for constructing a compound object to accomplish a desired task. The results of our model were compared with those of a baseline model that concatenates the features of the compound object and the single object instead of using a graph structure.
In summary, we propose the MOGAN model that learns affordances given an input graph and an action, and a novel effect encoding to represent the generated sophisticated effects. We showed the applicability of our system, achieving the various tasks in the simulation environment and the real world.
Fig. 1: Execution of the plan generated using our MOGAN model to build the shortest compound object in the real world.
## II Related Work
### _Affordances_
The study of affordances has attracted significant attention in recent years. [9] detects affordances of objects in images along with their classes, with affordances being labeled at the pixel level. This study focuses on designing a novel network architecture to predict affordances and classes simultaneously, with no emphasis on robotic applications such as robot-environment interactions and planning. Various lines of robotics research leverage affordances to enhance precision in grasping, picking, and placing operations. [10] designs a ROS package enabling operators to specify grasp poses. [11] learns grasping policies utilizing 3D thermal affordance maps. Learning contact information as affordances, as studied in [12] and [13], is another way to tackle these problems. [14] designs a self-supervised affordance learning model that labels gripper open and close points while the robot is controlled through human teleoperation. [15] extends this work, grounding large language models in robotic applications. While these studies supervise their models to learn affordances using contact points and gripper signals, we consider the effects of the manipulator's actions to explore the affordances.
Multiple approaches have studied the exploration of affordances by learning the effects of interactions [16, 17]. [18] explores affordance categories according to the effects of tool usage. They map the features extracted from the observations to affordance classes discovered by clustering the effects with the k-means algorithm. [19] defines affordances as the probability of effects given the object features, tool features, and action. With the formulation of goals as symbols, they achieve probabilistic planning. [20, 21] also performed sub-symbolic and symbolic planning, using affordances of only single or paired objects. In our study, we adapt deep neural network architectures to exploit their representational capacity for compound object affordances.
### _Graph Neural Networks_
Graph Neural Networks are effective for learning meaningful representations of compound structures and their relations. Consequently, they have gained extensive adoption for reasoning about relations in multi-object systems [22, 23, 24, 25].
[26] represents multi-object scenes as fully connected graph structures based on partial observations. They design their tasks using logical rules and learn them as relations between nodes. A search algorithm plans an action sequence to accomplish the desired task. In our study, we learn relations between complex objects through observed effects without defining logical symbols. [27] designs a GNN architecture that learns point-wise affordances from a point cloud dataset. [28] learns actions depicted in images, designing a GNN model based on the concept of affordances. They represent humans and objects in the images as nodes of the graph to learn the relations between them. That study focuses on human-object interactions in images, while our study takes advantage of observations, actions, and effects derived from a dynamic setup.
Overall, in our study, we represent compound objects as graph structures, learn their features utilizing GNNs, and learn the affordances through effect predictions. We plan a sequence of actions (selecting an object to place it on the compound object) with a search algorithm utilizing the learned affordances.
## III Method
Our proposed method models the affordances of compound objects, which are composed of an arbitrary number of objects that are placed on top of each other. Given the compound object and a new object, it learns to predict the effects generated by placing the new object on top of the compound object. In our framework, an affordance, which is denoted as \(A\), is defined as the relation between the compound object (\(T\)) that resides on the table, the object (\(o\)) that is placed on top of the compound object, and the
Fig. 2: MOGAN: Multi-Object Graph Affordance Network Architecture. The depth images of single objects are encoded with the autoencoder, and the encodings are used to construct the graph representation of the compound object. The proposed model, MOGAN, extracts meaningful features from the graph and predicts the resulting effects between a single object and a queried object within the compound object.
effect (\(E\)) generated: \(A=(T,o,E)\). Given \(T\) and \(o\), our system is expected to learn to predict \(E\).
For learning, at each step, the robot randomly selects and picks up an object, places it on top of the current object compound, and observes the effects until either the new object falls down or the object compound collapses. At the start of each exploration cycle, the size of the object compound is initialized to 0. In the rest of this section, we first describe how compound and single objects (\(T\) and \(o\)) and effects (\(E\)) are represented, along with the details of the learning algorithm. Finally, we describe how the learned affordances can be used to make plans in order to achieve different goals.
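The sketch below illustrates this exploration loop. `MockEnv` is a hypothetical stand-in for the simulator interface; a real implementation would execute the pick-and-place in PyBullet and measure the effect representation defined in the following subsections.

```python
import random
from dataclasses import dataclass, field

@dataclass
class MockEnv:
    """Minimal stand-in for the simulator: placements 'collapse' with a fixed
    probability; real code would run PyBullet and measure E1/E2/E3."""
    inventory: list = field(default_factory=lambda: list(range(6)))
    tower: list = field(default_factory=list)

    def place_on_top(self, obj):
        self.inventory.remove(obj)
        self.tower.append(obj)
        collapsed = random.random() < 0.2          # placeholder for E3
        return {"tower": list(self.tower), "E3": int(collapsed)}

def explore_episode(env):
    samples = []
    while env.inventory:
        obj = random.choice(env.inventory)
        state = list(env.tower)                    # compound before the action
        effect = env.place_on_top(obj)
        samples.append((state, obj, effect))
        if effect["E3"]:                           # new object fell / collapse
            break
    return samples

print(explore_episode(MockEnv()))
```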
### _Single Object Representation_
The single objects are represented with autoencoder features obtained from their depth images. The encoder part of the autoencoder receives a 32x32 normalized depth image and consists of 3 linear layers with neuron sizes 256, 256, and 64, respectively. A latent space of size 4 was empirically found to be sufficient to represent the images of the set of objects used in this study. The decoder part is the reverse of the encoder. When collecting the learned hidden representations, the maximum and minimum values of the depth images are appended to prevent the information loss caused by the normalization operation. Therefore, the single objects \(o\) are represented with a feature vector of size 6.
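A minimal PyTorch sketch of this autoencoder is given below. The exact layer wiring (in particular, the projection from the 64-neuron layer down to the 4-dimensional latent space) is our assumption from the description above, not the paper's verified architecture.

```python
import torch
import torch.nn as nn

class DepthAutoencoder(nn.Module):
    def __init__(self, latent=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, latent),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 32 * 32),
        )

    def forward(self, depth):                      # depth: (B, 32, 32), in [0, 1]
        z = self.encoder(depth)
        return z, self.decoder(z).view_as(depth)

def object_feature(model, depth_raw):
    """6-d feature: latent code plus the raw depth min/max that the
    normalization would otherwise discard."""
    lo, hi = depth_raw.min(), depth_raw.max()
    norm = (depth_raw - lo) / (hi - lo + 1e-8)
    z, _ = model(norm.unsqueeze(0))
    return torch.cat([z.squeeze(0), torch.stack([lo, hi])])

feat = object_feature(DepthAutoencoder(), torch.rand(32, 32))
print(feat.shape)  # torch.Size([6])
```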
### _Compound Object Representation_
The compound objects are composed of different objects placed on top of each other. In order to represent a compound object, both the features of the single objects inside the compound and the spatial relations between the objects must be captured. For this, we utilize a graph-based structure. A graph, denoted as \(G\), is defined as a tuple of nodes \(N\) and edges \(E\).
\[G=(N,E)\]
\[N=\{n_{1},n_{2},...,n_{k}\},n_{i}\in\mathbb{R}^{p},1\leq i\leq k\]
\[E=\{e_{1},e_{2},...,e_{k-1}\},e_{i}\in\mathbb{R}^{q},1\leq i\leq k-1.\]
Each node, \(n\), consists of the object features acquired through the autoencoder. As described before, the feature size \(p\) is set as 6. \(k\) indicates the number of nodes within the graph, with no specific limits on this count. A directed edge between two nodes is defined if the objects were placed sequentially in the tower in a direction from the top to the bottom object. All nodes form self-connections.
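The sketch below shows one way to build this graph as a PyTorch Geometric `Data` object, with directed edges from each object to the one below it plus self-loops; the indexing convention (node 0 is the bottom object) is our choice for illustration.

```python
import torch
from torch_geometric.data import Data

def compound_graph(node_features):                 # list of k 6-d tensors
    x = torch.stack(node_features)                 # (k, 6)
    k = x.size(0)
    # Directed edges from each object (placed later, on top) to the object
    # directly below it, plus a self-loop on every node.
    src = list(range(1, k)) + list(range(k))
    dst = list(range(0, k - 1)) + list(range(k))
    edge_index = torch.tensor([src, dst], dtype=torch.long)
    return Data(x=x, edge_index=edge_index)

g = compound_graph([torch.randn(6) for _ in range(4)])
print(g)  # Data(x=[4, 6], edge_index=[2, 7])
```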
### _Effect Representation_
When an object is placed on the compound object, different types of effects, such as insertion in different ways, stacking, or toppling, are observed. Instead of categorizing each effect instance into a pre-defined effect category, we propose a generic continuous effect representation that captures the 3D spatial relations between the placed object and each object in the compound. In other words, the effect represents the spatial outcome of placing the new object on the compound and is encoded as a combination \(E=[E_{1},E_{2},E_{3}]\). \(E_{1}\) describes the height differences between the top and bottom surfaces of each object pair.
\[E_{1}=\{E_{1}^{1},E_{1}^{2},..,E_{1}^{k}\},E_{1}^{i}\in\mathbb{R}^{2},1\leq i\leq k\]
\[E_{1}^{i}=\{s(|z_{i}^{+}-z_{k+1}^{+}|),s(|z_{i}^{-}-z_{k+1}^{-}|)\}\]
\(E_{1}^{i}\) corresponds to the effect between the new object and the \(i^{\text{th}}\) object in the object compound. \(z^{+}\) and \(z^{-}\) describe the maximum and minimum height values of an object. \(s\) is a sign function that assigns signs to the effect values. The faces of the \(i^{\text{th}}\) object are considered as planes that divide the Cartesian space into positive and negative regions. If the concerned face of the new object remains on the positive side, the sign of the effect becomes positive. Otherwise, it becomes negative.
\(E_{2}\) encodes the lateral spatial differences between objects. The differences are calculated by sending imaginary rays through the new object, as shown in Fig. 3. If a ray does not intersect with the object of interest (outlined in green), the corresponding effect becomes 0. The signs of the differences are calculated with the sign function \(s\), considering the faces that the imaginary rays cut.
\[E_{2}=\{E_{2}^{1},E_{2}^{2},..,E_{2}^{k}\},E_{2}^{i}\in\mathbb{R}^{4},1\leq i\leq k\]
\[E_{2}^{i}=\{s(|x_{i}^{+}-x_{k+1}^{+}|),s(|x_{i}^{-}-x_{k+1}^{-}|),\]
\[s(|y_{i}^{+}-y_{k+1}^{+}|),s(|y_{i}^{-}-y_{k+1}^{-}|)\}\]
\[E_{3}=\begin{cases}1,&\text{if }\mathrm{pos}(o_{i})\geq t_{1}\text{ or }\mathrm{ori}(o_{i})\geq t_{2}\text{ for any }1\leq i\leq k+1\\ 0,&\text{otherwise}\end{cases}\]
Finally, \(E_{3}\) encodes whether the newly placed object falls down or the compound object collapses/topples when the new object is placed on top, i.e., whether the position change \(\mathrm{pos}(o_{i})\) or orientation change \(\mathrm{ori}(o_{i})\) of any object exceeds the threshold \(t_{1}\) or \(t_{2}\), respectively.
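As an illustration, the snippet below computes the two components of \(E_{1}^{i}\) from the z-extents of an object pair, under one plausible reading of the sign convention (positive when the new object's face lies above the corresponding face of object \(i\)).

```python
# Sketch of the vertical effect E1 between the new object (index k+1) and a
# compound object i, using each object's z-extents. The sign convention here
# is our interpretation of the text, not verified against the paper's code.
def e1_pair(z_i, z_new):
    """z_i, z_new: (z_min, z_max) tuples for object i and the new object."""
    top = abs(z_i[1] - z_new[1]) * (1 if z_new[1] >= z_i[1] else -1)
    bottom = abs(z_i[0] - z_new[0]) * (1 if z_new[0] >= z_i[0] else -1)
    return top, bottom

# A sphere inserted into a cup: its top sits below the cup's rim, its bottom
# above the cup's base, so the two components have opposite signs.
print(e1_pair(z_i=(0.0, 0.10), z_new=(0.02, 0.08)))  # approx (-0.02, 0.02)
```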
### _Multi-Object Graph Affordance Network (MOGAN)_
The proposed MOGAN model, shown in Fig. 2, outputs the effect (\(E\)) expected to be generated when a new object (\(o\)) is placed on the compound object (\(T\)).
Fig. 3: Visualization of the calculation of lateral spatial displacements: Imaginary rays are projected through the center of the new object. Red points illustrate the intersections with both the compound object and the newly added object. The faces of the green object that the rays pass through create positive and negative regions. If the intersection point on the blue object remains in the positive region, the effect's sign becomes positive.
As the compound object was formed by placing the objects one by one on top of each other, the depth images and the corresponding autoencoder features (\(n_{1},n_{2},..n_{k}\)) were already collected and available for processing. The autoencoder features of the new object to be placed on the compound object are also processed and are represented by \(n_{k+1}\) in the figure. MOGAN processes the features of the objects in the compound object and the features of the new object and produces the effect of the placement action.
MOGAN consists of two components: an encoder and a decoder of a graph neural network (GNN). In order to build the layers of the encoder, the GCNConv module from the PyTorch Geometric library is used. The encoder generates a latent representation for the input graph. The mean and maximum values of these latent representations are calculated and concatenated to the feature vector of the object to be placed on the compound object. The decoder, with linear layers, takes these concatenated values along with the hidden representation of the queried node to predict the effects between that node and the new object placed on the top of the tower. The network includes two GCNConv layers and three linear layers, totaling five layers. The parameter count of the network is 46,786. Leaky ReLU is utilized as the activation function. Mean Squared Error (MSE) loss and a custom sign loss are utilized as the loss functions. The sign loss, which is used for \(E_{1}\) and \(E_{2}\), penalizes predictions that do not align with the correct signs of the ground truth data.
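A hedged sketch of this architecture in PyTorch Geometric follows. The hidden sizes and decoder widths are our assumptions; only the overall structure (two GCNConv layers, mean/max pooling, concatenation with the new object's features and the queried node's hidden state, three decoding linear layers) follows the description above.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

class MOGAN(nn.Module):
    def __init__(self, feat=6, hidden=32, effect_dim=7):  # E1(2)+E2(4)+E3(1)
        super().__init__()
        self.conv1 = GCNConv(feat, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.act = nn.LeakyReLU()
        self.decoder = nn.Sequential(
            nn.Linear(2 * hidden + feat + hidden, 64), nn.LeakyReLU(),
            nn.Linear(64, 64), nn.LeakyReLU(),
            nn.Linear(64, effect_dim),
        )

    def forward(self, x, edge_index, new_obj, query):
        h = self.act(self.conv1(x, edge_index))
        h = self.act(self.conv2(h, edge_index))
        graph = torch.cat([h.mean(0), h.max(0).values])   # pooled compound code
        return self.decoder(torch.cat([graph, new_obj, h[query]]))

model = MOGAN()
x = torch.randn(4, 6)                                     # 4-object compound
edges = torch.tensor([[1, 2, 3, 0, 1, 2, 3], [0, 1, 2, 0, 1, 2, 3]])
print(model(x, edges, new_obj=torch.randn(6), query=2).shape)  # [7]
```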
### _Planning and Tasks_
We aim to provide a variety of tasks to demonstrate the prediction capacity of MOGAN for planning to achieve different goals. The goals include obtaining object compounds of specified heights, structures, and sub-structures. A tree search algorithm is used to discover the optimal plan to achieve a specific goal. At each iterative step, the graph representation of the existing object compound is generated, and the object that will be placed on the tower is encoded. The MOGAN network predicts the effects \(E\) based on the graph representation of the compound and the feature vector of the new object. If the predicted \(E_{3}\) indicates a fall/collapse, the current branch of the search is terminated.
In detail, six different tasks can be specified. The first two tasks correspond to building the tallest and shortest compounds/towers. In order to predict the height of the object compounds, the \(E_{1}\) effect predictions are summed up. The third and fourth tasks correspond to obtaining structures where the placed objects are required to enclose the top part of the object compound and become invisible in the compound (inserted inside). The accumulated \(E_{2}\) predictions are used for this purpose. The fifth task corresponds to building a tower of a specific height. Finally, the sixth task enables the selection of two objects from the set of objects that will be used in the object compound and puts constraints on their relative placements, such as maximizing or minimizing their relative distances.
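The sketch below shows the search pattern for the tallest-tower objective: branches whose predicted \(E_{3}\) signals a collapse are pruned, and the height is scored by accumulating predicted vertical offsets. `predict_effects` stands in for a trained MOGAN forward pass, and the dictionary keys are illustrative, not the paper's interface.

```python
def search(remaining, tower, height, predict_effects, best=None):
    if best is None:
        best = {"height": -1.0, "plan": []}
    if height > best["height"]:                       # tallest-tower objective
        best["height"], best["plan"] = height, list(tower)
    for obj in remaining:
        eff = predict_effects(tower, obj)
        if eff["E3"] > 0.5:                           # predicted collapse: prune
            continue
        search([o for o in remaining if o is not obj],
               tower + [obj], height + eff["dz"], predict_effects, best)
    return best

# Toy predictor: each object adds its nominal height; towers never topple.
toy = lambda tower, obj: {"E3": 0.0, "dz": obj["h"]}
objs = [{"name": "cup", "h": 0.10}, {"name": "pole", "h": 0.17}]
print(search(objs, [], 0.0, toy))
```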
### _Experimental Setup_
A manipulator robot and a set of objects with different shapes and sizes are used in the simulation and real-world experiments. The PyBullet environment is used for simulating actions and interactions. A 6-DOF UR10 manipulator with a Robotiq 3-Finger Adaptive Robot Gripper is used in the real world. A custom gripper is attached to the wrist of the UR10 manipulator in the simulator in order to speed up the pick and place action executions. The objects that are commonly used in the simulator and the real world correspond to sticks, rings, cups, balls, and cubes, as shown in Fig. 5 and Fig. 4. A subset of the inventory is spawned in a rectangular area at random positions. The positions of the objects can be acquired by a segmentation algorithm using the depth image of the scene. A RealSense depth camera is attached to the top of the table to obtain the depth images of the real objects. The depth images of the simulated scene are taken with a virtual depth camera. The depth image of the scene is segmented to acquire the depth images of the objects individually. In order to segment the depth image, the lowest depth values are grouped according to their pixel positions. The image is cropped around the center pixel position of each group. The positions of the objects are calculated using the center pixel positions and values and are used during pick and place action executions.
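A minimal sketch of such a depth-based segmentation is shown below; the margin and crop size are illustrative assumptions rather than the values used in the paper.

```python
# Pixels noticeably closer to the camera than the table plane are grouped into
# connected components; a fixed-size crop is taken around each component's
# center, mirroring the procedure described above.
import numpy as np
from scipy import ndimage

def segment_objects(depth, table_depth, margin=0.01, crop=32):
    mask = depth < (table_depth - margin)             # object pixels only
    labels, n = ndimage.label(mask)
    crops, centers = [], []
    for cy, cx in ndimage.center_of_mass(mask, labels, range(1, n + 1)):
        cy, cx = int(round(cy)), int(round(cx))
        half = crop // 2                              # assumes objects away from edges
        crops.append(depth[cy - half:cy + half, cx - half:cx + half])
        centers.append((cy, cx))
    return crops, centers

depth = np.full((240, 320), 1.0)                      # flat table 1 m away
depth[100:120, 150:170] = 0.9                         # one object closer to camera
crops, centers = segment_objects(depth, table_depth=1.0)
print(len(crops), centers)
```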
## IV Experiments and Results
### _Baseline Model_
A baseline model is trained to examine the advantage of graph neural networks. The features of the nodes are concatenated into a single tensor instead of being represented as a graph. The size of the tensor is the product of the feature size and the maximum number of objects in a tower.
Fig. 4: A PyBullet environment featuring a UR10 robot and various objects, including cubes, poles, spheres, cups, and rings.
Fig. 5: Various objects used in the real-world setup: sticks, rings, cups, cubes, and balls.
The maximum number of objects extracted from the dataset is 14 in our case. The remaining part of the input tensor remains 0 for smaller object compounds. Two linear layers are utilized to encode the concatenated features. The effect is the concatenation of all effects for each node in a tower. The parameter count of the baseline model is 50,178, which is close to that of our proposed model. Training and test results are compared with the MOGAN model in the Experiments and Results section.
### _Results_
#### IV-B1 Effect Prediction Accuracy
In this section, we analyze the prediction error of our model on unseen test data and provide the results in Table II. The errors are grouped according to the compound object sizes to analyze the relation between compound object size and prediction error. Effect 1 is the predicted height difference between two objects, as explained in the Method section. The inventory contains objects with maximum, minimum, and mean height values of 17 cm, 1.5 cm, and 6.5 cm, respectively. The errors in Effect 1 predictions result in deviations of less than 1 cm in predicted height differences when the compound object size is 8 or less. If the object size exceeds 8, we observe a maximum error of 1.41 cm. Although these prediction errors do not significantly impact the majority of predictions due to the presence of considerably larger objects, they can lead to failures when predicting effects between smaller objects, such as small rings. The error for Effect 2 does not increase along with the compound object size. We can confidently state that our model is capable of predicting x and y displacements of objects without being affected by the number of objects. The ground truth value of Effect 3 is 1 when the tower collapses and 0 otherwise. When we inspect the prediction errors for Effect 3, we see that they increase as the number of objects increases. However, the errors in Effect 3 have minimal impact on the overall results due to the margin between the ground truth values.
### _Simulation Experiments & Comparison with Baseline_
To evaluate the generated plans for the six different tasks, we sample 10 different configurations for each compound object size, ranging from 2 to 5, in the simulator. For the fifth task, which is to build a compound object with a desired height, we calculated the possible height values for the sampled configuration, selected one as the goal, and compared it to the resulting height. For the last task, we randomly selected two objects from the sampled set of objects to maximize or minimize their distances. Please see the generated and executed plans for a number of sample tasks in Fig. 6. Out of 300 planning tasks, our system was able to generate 283 successful plans, as shown in Table I. The success rate was observed to drop slightly as the number of objects increases. This was an expected result: as the number of objects in the compound increases, predicting the affordance of the compound object and how it is affected by placing another object on top becomes more difficult.
Fig. 6: A number of sample plan executions in the simulator. The task is (1) to build the shortest compound object using a cube, three different-sized cups, and a sphere, (2) to build a compound object under the constraint of maximizing the relative height differences of the rings, (3) to maximize the invisibility of objects, and (4) to build the tallest compound object given a ring and a pole.
Additionally, as the number of objects increases, the number of predictions made during planning increases exponentially. One erroneous prediction among all the correct predictions may cause a failure in planning. It is important to note that our MOGAN model significantly outperformed the baseline model in planning, as shown in Table I, demonstrating the effectiveness of using graph structures, where the features of the objects in the compound are embedded in the nodes of the GNN, for modeling multi-object affordances and for multi-object planning problems.
### _Real-world Experiments_
In the real-world setup, we test our system's planning capacity with the first two tasks: building the shortest and the tallest compound objects. We sampled 5 different sets of objects for compound object sizes 2, 3, and 4. A number of plan execution snapshots from sampled tasks are provided in Fig. 7. Out of the 30 real-world planning tasks, 28 of the generated plans were found to be successful, as shown in Fig. 8. The system is able to build the desired compound objects by 1) using the depth images from the RealSense camera, 2) predicting effects with the MOGAN model, 3) planning an optimal path with the tree search algorithm, and 4) executing it with the UR10 manipulator. The success rate slightly decreases as the number of objects in the inventory increases. Along with the reasons explained above, another cause of failure is the unpredictability of real-world systems. Because the objects we use are soft and deformable, unexpected outcomes may arise during the open and close operations of the gripper; e.g., when the robot grips the pole too tightly, the pole gets stuck between the fingers when the gripper opens and does not fall.
## V Conclusion
In this research, we proposed the novel Multi-Object Graph Affordance Network, MOGAN, which models affordances of compound objects for manipulation and planning. We showed that our system was able to correctly predict the affordances of complex compound objects that include spheres, cups, poles, and several rings that enclose the poles. This prediction capability was effectively used to plan and build different structures with highly complex affordances. In the future, we plan to diversify the object inventory and action repertoire and investigate symbolic planning capabilities in this complex affordance setting.
Fig. 8: Plan success rates in the real world. The goals are to build the shortest and tallest compound objects. 5 trials were conducted for each set of different sizes.
Fig. 7: A number of snapshots from real-world planning experiments. In the first, second, and fourth images, the objective is to construct the shortest compound objects. In the third image, the goal is to create the tallest compound object. The system observes the scene, predicts the effects of each potential plan using MOGAN, and executes the optimal one. |
2302.14254 | Measurements of charm lifetimes at Belle II | We report on absolute lifetime measurements of charmed hadrons using the data
collected by the Belle II experiment between 2019 and 2021. The measured
lifetimes of $D^0$, $D^+$, and $\Lambda_c^+$ are the most precise to date and
consistent with previous measurements. Our result indicates that $\Omega_c^0$
is not the shortest-living singly charmed baryon. | N. K. Nisar | 2023-02-28T02:25:56Z | http://arxiv.org/abs/2302.14254v1 | # Measurements of charm lifetimes at Belle II
###### Abstract
We report on absolute lifetime measurements of charmed hadrons using the data collected by the Belle II experiment between 2019 and 2021. The measured lifetimes of \(D^{0}\), \(D^{+}\), and \(\Lambda_{c}^{+}\) are the most precise to date and consistent with previous measurements. Our result indicates that \(\Omega_{c}^{0}\) is not the shortest-living singly charmed baryon.
## 1 Introduction
Predictions of beauty and charm hadron lifetimes are achieved with the heavy quark expansion (HQE) model [1, 2, 3, 4, 5, 6]. Charm lifetime predictions are particularly challenging due to significant higher-order corrections and spectator quark effects. Thus, charm lifetime measurements allow for HQE validation and refinement, which increases the reliability and precision of Standard Model predictions in flavor dynamics. The best measurements of charm meson lifetimes date back to FOCUS [7], while LHCb recently reported precise measurements of charm baryon lifetimes relative to the \(D^{+}\) lifetime [8, 9, 10].
We report absolute lifetime measurements of the charm hadrons using the data collected by the Belle II detector [11], which is built around the interaction region (IR) of the SuperKEKB [12] asymmetric energy \(e^{+}e^{-}\) collider. SuperKEKB adopts a nano-beam scheme that squeezes the IR to achieve large instantaneous luminosity. The Belle II detector consists of a tracking system, a particle identification system, and an electromagnetic calorimeter kept inside a 1.5 T superconducting magnet. The outer layer consists of a dedicated muon and \(K^{0}_{L}\) detector. The details of the Belle II detector can be found in Ref. [11]. Excellent vertex resolution, precise alignment of the vertex detector, and accurate calibration of particle momenta in Belle II are crucial in the measurements of lifetimes.
## 2 Lifetime extraction
The proper decay times of charm hadrons are calculated as \(t=m(\vec{L}\cdot\hat{p})/p\), where \(m\) is the known mass of the hadron, \(\vec{L}\) is the flight vector between the production and decay vertices, and \(p\) is the momentum of the hadron. Lifetimes are extracted using unbinned maximum-likelihood fits to the \(t\) and its uncertainty, \(\sigma_{t}\), of the candidates populating the signal regions of the data. The signal probability-density function (PDF) is the convolution of an exponential function in \(t\) with a resolution function that depends on \(\sigma_{t}\), multiplied by the PDF of \(\sigma_{t}\). The time constant of the exponential function corresponds to the lifetime. The \(\sigma_{t}\) PDF is a histogram template derived directly from the signal region of the data. In all cases but \(D^{0}\), the template is obtained from the candidates in the signal region after having subtracted the distribution of the sideband data. Simulation demonstrates that for \(D^{+}\), \(\Lambda^{+}_{c}\), and \(\Omega^{0}_{c}\), a single Gaussian resolution function is sufficient, whereas for \(D^{0}\), a double Gaussian function with a common mean is required.
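The convolution of an exponential decay with a Gaussian resolution function has a closed form (an exponentially modified Gaussian), which the generic sketch below evaluates. This is an illustration of the signal model, not the Belle II fit code; in the actual fit, the per-candidate resolution width \(\sigma_{t}\) enters event by event.

```python
import numpy as np
from scipy.special import erfc

def signal_pdf(t, tau, mu, sigma):
    """Exponential with lifetime tau convolved with Gauss(mu, sigma)."""
    z = (sigma / tau - (t - mu) / sigma) / np.sqrt(2.0)
    return (0.5 / tau) * np.exp((sigma / tau) ** 2 / 2 - (t - mu) / tau) * erfc(z)

t = np.linspace(-2.0, 5.0, 7001)                      # decay time, picoseconds
pdf = signal_pdf(t, tau=0.4105, mu=0.0, sigma=0.1)    # D0-like lifetime, toy width
print(pdf.sum() * (t[1] - t[0]))                      # ~1: normalized on this range
print(t[np.argmax(pdf)])                              # peak near small positive t
```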
## 3 \(D^{0}\) and \(D^{+}\) lifetimes
We measured the \(D^{0}\) and \(D^{+}\) lifetimes using 72 fb\({}^{-1}\) of Belle II data with samples of reconstructed \(D^{0}\to K^{-}\pi^{+}\) and \(D^{+}\to K^{-}\pi^{+}\pi^{+}\) decays, respectively. \(171\times 10^{3}\) signal candidates are reconstructed for \(D^{*+}\to D^{0}(\to K^{-}\pi^{+})\pi^{+}\) decays in the signal region \(1.851<m(K^{-}\pi^{+})<1.878\) GeV\(/c^{2}\). In the \(D^{0}\) case, the per-mille-level fraction of background candidates in the signal region is neglected, and a systematic uncertainty is assigned for this. \(59\times 10^{3}\) signal candidates are reconstructed for \(D^{*+}\to D^{+}(\to K^{-}\pi^{+}\pi^{+})\pi^{0}\) decays in the signal region \(1.855<m(K^{-}\pi^{+}\pi^{+})<1.883\) GeV\(/c^{2}\). For the \(D^{+}\) case, a sizable background contamination in the signal region is accounted for using the data sidebands \(1.758<m(K^{-}\pi^{+}\pi^{+})<1.814\) GeV\(/c^{2}\) and \(1.936<m(K^{-}\pi^{+}\pi^{+})<1.992\) GeV\(/c^{2}\).
The background PDF consists of a zero-lifetime component and two exponential components, all convolved with the resolution function. The decay-time distributions of the data, with fit projections overlaid, are shown in Fig. 1. The \(D^{0}\) and \(D^{+}\) lifetimes are measured to be \(410.5\pm 1.1\pm 0.8\) fs and \(1030.4\pm 4.7\pm 3.1\) fs, respectively [13], where the first uncertainty is statistical and the second is systematic (all relevant effects are studied, as summarized in Table 1). The results are consistent with their respective world-average values [14].
## 4 \(\Lambda_{c}^{+}\) lifetime
The most precise measurement of the \(\Lambda_{c}^{+}\) lifetime is reported by the LHCb experiment [8]. We report a preliminary result on the absolute measurement of the \(\Lambda_{c}^{+}\) lifetime in \(\Lambda_{c}^{+}\to pK^{-}\pi^{+}\) decays reconstructed using 207 fb\({}^{-1}\) of the Belle II data. We reconstruct \(116\times 10^{3}\) candidates for the decay \(\Lambda_{c}^{+}\to pK^{-}\pi^{+}\) in the signal region: \(2.283<m(pK^{-}\pi^{+})<2.290\) GeV\(/c^{2}\), with a background contamination of 7.5%. The \(\Lambda_{c}^{+}\) lifetime is extracted in the same way as the \(D^{+}\) lifetime. Background events in the signal region are constrained using data sideband (\(2.249<m(pK^{-}\pi^{+})<2.264\) GeV\(/c^{2}\), \(2.309<m(pK^{-}\pi^{+})<2.324\) GeV\(/c^{2}\)).
\begin{table}
\begin{tabular}{l c c} \hline Source & \(\tau(D^{0}\to K^{-}\pi^{+})\) [fs] & \(\tau(D^{+}\to K^{-}\pi^{+}\pi^{+})\) [fs] \\ \hline Resolution model & 0.16 & 0.39 \\ Backgrounds & 0.24 & 2.52 \\ Detector alignment & 0.72 & 1.70 \\ Momentum scale & 0.19 & 0.48 \\ \hline Total & 0.80 & 3.10 \\ \hline \end{tabular}
\end{table}
Table 1: Systematic uncertainties for \(D^{0}\) and \(D^{+}\) lifetimes.
Figure 1: Decay-time distributions of (top) \(D^{0}\to K^{-}\pi^{+}\) and (bottom) \(D^{+}\to K^{-}\pi^{+}\pi^{+}\) candidates in their respective signal regions with fit projections overlaid.
Decays of \(\Xi^{0}_{c}\to\pi^{-}\Lambda^{+}_{c}\) and \(\Xi^{+}_{c}\to\pi^{0}\Lambda^{+}_{c}\) may bias the measurement of the \(\Lambda^{+}_{c}\) lifetime, since the \(\Xi^{0}_{c}\) and \(\Xi^{+}_{c}\) have non-zero lifetimes and may shift the production vertex of the \(\Lambda^{+}_{c}\) away from the IR. A veto is applied to suppress such candidates, and a systematic uncertainty is assigned for the remaining contamination (details can be found in Ref. [15]). We measure the \(\Lambda^{+}_{c}\) lifetime to be \(203.20\pm 0.89\pm 0.77\) fs, where the uncertainties are statistical and systematic (summarized in the Table 2), respectively [15]. Our result is consistent with the current world average [14].
## 5 \(\Omega^{0}_{c}\) lifetime
The \(\Omega^{0}_{c}\) was believed to be the shortest-living singly charmed baryon that decays weakly. In 2018, LHCb measured a large value of the \(\Omega^{0}_{c}\) lifetime [9], and this observation inverted the lifetime hierarchy of singly charmed baryons. LHCb confirmed their result in 2022 using a different data sample [10]. We performed the first independent measurement of the \(\Omega^{0}_{c}\) lifetime using 207 fb\({}^{-1}\) of data collected at Belle II. We reconstructed 90 signal candidates in the signal region (\(2.68<m(\Omega^{-}\pi^{+})<2.71\) GeV\(/c^{2}\)) for the decay \(\Omega^{0}_{c}\to\Omega^{-}\pi^{+}\), where \(\Omega^{-}\to\Lambda^{0}(\to p\pi^{-})K^{-}\). It is a complex decay chain with two extra decay vertices in addition to the \(\Omega^{0}_{c}\) decay vertex.
\begin{table}
\begin{tabular}{l c} \hline Source & Uncertainty (fs) \\ \hline \(\Xi_{c}\) contamination & 0.34 \\ Resolution model & 0.46 \\ Non-\(\Xi_{c}\) background model & 0.20 \\ Detector alignment & 0.46 \\ Momentum scale & 0.09 \\ \hline Total & 0.77 \\ \hline \end{tabular}
\end{table}
Table 2: Systematic uncertainties for \(\Lambda^{+}_{c}\) lifetime.
Figure 2: Decay-time distributions of \(\Lambda^{+}_{c}\to pK^{-}\pi^{+}\) candidates in their (top) signal and (bottom) sideband regions with fit projections overlaid.
The lifetime is extracted by fitting the signal and sideband regions simultaneously. The signal region has a background contamination of 33% that is constrained using events in the sideband (\(2.55<m(\Omega^{-}\pi^{+})<2.65\,\mbox{GeV}/c^{2},2.75<m(\Omega^{-}\pi^{+})<2.85 \,\mbox{GeV}/c^{2}\)). The \(\Omega^{0}_{c}\) lifetime is measured to be \(243\pm 48\pm 11\) fs, where the uncertainties are statistical and systematic (summarized in Table 3), respectively [16]. The result is consistent with LHCb measurements and inconsistent with previous measurements at 3.4 standard deviations.
Figure 3: Decay-time distributions of \(\Omega^{0}_{c}\rightarrow\Omega^{-}\pi^{+}\) candidates in their (top) signal and (bottom) sideband regions with fit projections overlaid.
\begin{table}
\begin{tabular}{l c} \hline Source & Uncertainty (fs) \\ \hline Fit bias & 3.4 \\ Resolution model & 6.2 \\ Background model & 8.3 \\ Detector alignment & 1.6 \\ Momentum scale & 0.2 \\ Input \(\Omega^{0}_{c}\) mass & 0.2 \\ \hline Total & 11.0 \\ \hline \end{tabular}
\end{table}
Table 3: Systematic uncertainties for \(\Omega^{0}_{c}\) lifetime.
## 6 Conclusions
In conclusion, the \(D^{0}\), \(D^{+}\), \(\Lambda_{c}^{+}\), and \(\Omega_{c}^{0}\) lifetimes are measured using data collected by the Belle II experiment. The results for the \(D^{0}\), \(D^{+}\), and \(\Lambda_{c}^{+}\) lifetimes are the most precise to date and are consistent with previous measurements. Our result for the \(\Omega_{c}^{0}\) lifetime is consistent with the LHCb results [9, 10] and inconsistent at 3.4 standard deviations with the pre-LHCb world average [17]. The Belle II result, therefore, confirms that the \(\Omega_{c}^{0}\) is not the shortest-living weakly decaying charmed baryon.
|
2309.16731 | Merging automatic differentiation and the adjoint method for photonic
inverse design | Optimizing shapes and topology of physical devices is crucial for both
scientific and technological advancements, given its wide-ranging implications
across numerous industries and research areas. Innovations in shape and
topology optimization have been seen across a wide range of fields, notably
structural mechanics, fluid mechanics, and photonics. Gradient-based inverse
design techniques have been particularly successful for photonic and optical
problems, resulting in integrated, miniaturized hardware that has set new
standards in device performance. To calculate the gradients, there are
typically two approaches: implementing specialized solvers using automatic
differentiation or deriving analytical solutions for gradient calculation and
adjoint sources by hand. In this work, we propose a middle ground and present a
hybrid approach that leverages and enables the benefits of automatic
differentiation and machine learning frameworks for handling gradient
derivation while using existing, proven solvers for numerical solutions.
Utilizing the adjoint method, we turn existing numerical solvers differentiable
and seamlessly integrate them into an automatic differentiation framework.
Further, this enables users to integrate the optimization environment with
machine learning applications which could lead to better photonic design
workflows. We illustrate the approach through two distinct examples: optimizing
the Purcell factor of a magnetic dipole in the vicinity of an optical
nanocavity and enhancing the light extraction efficiency of a µLED. | Alexander Luce, Rasoul Alaee, Fabian Knorr, Florian Marquardt | 2023-09-27T12:37:14Z | http://arxiv.org/abs/2309.16731v1 | # Merging automatic differentiation and the adjoint method for photonic inverse design
###### Abstract
Optimizing shapes and topology of physical devices is crucial for both scientific and technological advancements, given its wide-ranging implications across numerous industries and research areas. Innovations in shape and topology optimization have been seen across a wide range of fields, notably structural mechanics, fluid mechanics, and photonics. Gradient-based inverse design techniques have been particularly successful for photonic and optical problems, resulting in integrated, miniaturized hardware that has set new standards in device performance. To calculate the gradients, there are typically two approaches: implementing specialized solvers using automatic differentiation or deriving analytical solutions for gradient calculation and adjoint sources by hand. In this work, we propose a middle ground and present a hybrid approach that leverages and enables the benefits of automatic differentiation and machine learning frameworks for handling gradient derivation while using existing, proven solvers for numerical solutions. Utilizing the adjoint method, we turn existing numerical solvers differentiable and seamlessly integrate them into an automatic differentiation framework. Further, this enables users to integrate the optimization environment with machine learning applications which could lead to better photonic design workflows. We illustrate the approach through two distinct examples: optimizing the Purcell factor of a magnetic dipole in the vicinity of an optical nanocavity and enhancing the light extraction efficiency of a \(\mu\)LED.
Shape optimization · Adjoint method · Automatic differentiation · Gradient-based optimization · Light extraction efficiency · Nanophotonic devices · Finite-difference time-domain (FDTD) · Outcoupling structures
## 1 Introduction
Optimization is a crucial aspect in the development of structural devices that dictate the physical properties of waves and fields in order to yield higher performance compared to those created using traditional approaches. The application domain for optimization for light is vast and rapidly evolving [1], encompassing numerous techniques that modify parameters or geometries based on specific update algorithms. Generally, physical systems subject to optimization do not exhibit a convex loss landscape, resulting in inherent limitations when seeking global optima. Consequently, most optimization algorithms can be categorized as either global or local optimization. Global optimization techniques for photonic optimization include evolutionary algorithms [2] and Bayesian optimization techniques [3, 4]. Although global optima are typically preferred over local optima, these algorithms face significant limitations regarding their
applicability. For instance, Bayesian optimization becomes increasingly expensive for large problems with many parameters and data points [5]. Similarly, evolutionary algorithms are often sample-inefficient and unsuitable for computationally expensive evaluations. Emerging machine-learning approaches offer new possibilities for global optimization and have demonstrated promising results in photonic optimization applications [6, 7, 8, 9, 10]. However, they cannot overcome the fundamental issue of the curse of dimensionality [11]. Conversely, identifying local optima in high-dimensional problems is much more manageable than obtaining a global minimum [12], as evidenced by the vast number of neural network parameters being optimized during training for deep learning and machine learning [13]. By utilizing gradient-based optimization and adaptive stepsize methods such as ADAM [14] or line-search [15], it is feasible to optimize numerous parameters simultaneously and achieve a local optimum. Gradient-based optimization is often applied in numerical optimization algorithms for device parameter, shape, or topology optimization, which is commonly referred to as inverse design [16, 17, 18, 19, 20].

To employ gradient-based optimization, it is essential to compute the gradients of the loss or target value with respect to the design parameters. This computation can be challenging, as the loss often depends on the solution of the system of equations governing the underlying physical problem. The adjoint method [21, 22] allows for obtaining analytical gradients by deriving adjoint equations, which can then be manually integrated into numerical solvers and update equations for the geometry parameters [23, 18, 17, 24, 25]. However, this approach can be tedious and problem-specific, requiring new derivations for different physical settings or optimization targets. Automatic differentiation (AD) offers an alternative by automating the complex and elaborate process of deriving gradients [26]. With AD, one only needs to implement the forward function, provide a suitable parameterization, define functions for solving the governing partial differential equation (PDE), postprocess the PDE solution, and define the target/loss function. Although implementing functions for the postprocessing and the loss is typically straightforward, obtaining gradients of a physical solution for a PDE can be challenging with AD, especially when established numerical solvers do not support it. Consequently, an end-to-end approach using AD requires implementing a new solver directly within the AD framework to derive the PDE solution and perform backpropagation [27, 28, 29, 30]. This task can be impractical since migrating an existing, validated physical model to an AD solver and framework poses significant work overhead.
In this work, we propose a "hybrid approach" that merges the benefits of automatic differentiation (AD) with the applicability of the adjoint equation to established solvers. By utilizing analytically derived results from previous works on the physical adjoint problem, we directly integrate established numerical solvers into an AD framework. Importantly, the internal workings of the numerical solver need not be accessible to the user. Hence, we consider the adjoint computation to be an atomic step in the computational graph of the AD framework. This combination creates an end-to-end AD-enabled process incorporating established numerical solvers. The solvers are then seamlessly integrated into the computational graph of the AD framework, effectively rendering them auto-differentiable for the optimization. This approach allows users to leverage the functionality and efficiency of modern AD frameworks while selecting the optimal numerical solver for their specific problem, regardless of AD compatibility\({}^{1}\).
Footnote 1: It is crucial for the solver to provide an interface that enables loading external geometries or parameters, and sources, as well as exporting numerical solutions, which typically is a supported functionality [31, 32].
We demonstrate the application of AD integration by performing shape optimization on two distinct photonic problems of scientific and engineering interest. In the first example, we aim to enhance the Purcell factor of a photonic nanocavity by deforming the cavity geometry. In the second example, we apply shape optimization to the outcoupling structure of a \(\upmu\)LED. In both cases, we implement the analytically derived equations for the shape gradient into PyTorch, while utilizing PyTorch-provided functionality for postprocessing and loss calculations [33].
## 2 Combining the adjoint method with autodifferentiation
A general optimization problem for photonic applications is the overarching goal to achieve the highest possible value of a physical property. The problem is typically described by a partial differential equation (PDE) \(A\) that governs the dynamics. Here, we consider only linear PDEs for brevity, but in general, the dynamics could also encompass non-linear equations or be described by an eigenvalue problem [21]. With a set of geometrical design parameters \(p\), the solution \(u\) of the physical system is given by the equation \(A_{p}u=b_{p}\) where \(b_{p}\) denotes source terms. The system under consideration is embedded in a simulation region \(\mathcal{D}\) on which the solution is computed. A figure of merit or loss \(J(u)\) must also be defined which evaluates the solution of the physical system given by the parameters \(p\). Typically, this loss functional is given by an integral over the computational domain \(J=\int_{\mathcal{D}}u(s)\mathrm{d}s\) or a sum over discrete physical properties of the solution. The optimization problem is formulated by \(\min_{p}J(u_{p})\) such that \(A_{p}u=b_{p}\). Typical examples are increasing the quality factor of a cavity, focusing the emission of light into a particular solid angle, or increasing the Purcell effect for an emitter [34, 2]. The problem of computing the optical characteristics for a given task is usually tackled by employing various numerical solvers such as rigorous coupled wave analysis (RCWA), finite difference time domain (FDTD), finite-difference frequency-domain (FDFD), or finite element method (FEM).
The approximations and discretizations performed by each type of solver make it particularly well suited to specific classes of problems. As a result, many specialized types of solvers and software solutions exist today. However, the situation is usually not as straightforward as choosing a set of parameters that can be applied to the PDE and then be evaluated directly. For more complex problems, geometry generation and postprocessing of the raw solution of the PDE require additional work on top of finding the solution. The evaluation of a design during an optimization process can be separated into the following steps: the geometry definition (Figure 1a)), the numerical simulation of the physical problem (Figure 1b)), the postprocessing of the results (Figure 1c)), and the evaluation by a loss functional (Figure 1d)).
Optimizing a large set of parameters \(p\) by gradient-based optimization requires the gradient of the loss functional with respect to the design parameters \(\delta J(u)\). Obtaining gradients of a loss functional \(J\) with respect to the input parameters \(p\) is difficult at first glance since it involves computing the variation of the loss functional with respect to all input parameters. This would lead to a computational complexity that scales linearly with the number of input parameters. For large problems, this poses a heavy computational burden. Fortunately, the computational complexity can be improved to a constant dependency on the input parameters by either employing the adjoint method or using (backward mode) automatic differentiation to compute the parameter gradients. Both approaches are elucidated and combined in the following.
### Automatic differentiation
Backward mode automatic differentiation or backpropagation is the idea of applying the chain rule algorithmically to a numerically executed computation. The gradient is separated into atomic computations for which the derivatives are known analytically. For a simple computation \(y=f(g(x))\), the chain rule gives the derivative of \(y\) with respect to \(x\) as \(\frac{\partial y}{\partial x}=\frac{\partial f}{\partial g}\frac{\partial g}{\partial x}\). The functions \(f\) and \(g\) can now be differentiated individually and the analytical solution of the respective derivative reused whenever the functions appear again. For arbitrarily big computations, it suffices to separate the computation into individual function applications and to keep track of the order in which all functions have been applied. This concept is known as the computational graph. Consider a complicated computational graph \(J=f_{G}(p)\) that represents solving the linear PDE of the aforementioned physical system \(A_{p}u=b_{p}\) and computing the loss of the system \(J(u)\). Computing \(\frac{\partial J(u)}{\partial p}\) is then reduced to tracing back the individual steps of the computational graph from the loss \(J\) to the parameters \(p\) and chaining the respective derivatives of the intermediate function applications. To distinguish which part of the computation is executed, any mathematical function of the AD framework provides a forward and a backward method. The normal function evaluation is applied when the forward method is invoked, since the computational graph is traversed in the forward direction from the input to the loss. The derivative is computed by the backward method, which receives the gradient information in backward order, starting from the loss and propagating to the input values. Automatic differentiation is straightforward to use, and many scientific AD frameworks are readily available, such as JAX, PyTorch, HIPS/autograd, and more [35, 33, 36, 37, 38]. The drawback of using AD frameworks for optimization is that the numerical solver for the PDE must support automatic differentiation and create a computational graph during the numerical solution for \(u\). Although this is a rapidly evolving field and a number of solvers employing automatic differentiation have been developed [28, 27, 30], many popular choices and industry-standard solvers do not support AD [31, 32].
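As a concrete illustration of the forward/backward pair described above, the following minimal PyTorch sketch defines a custom function with a hand-written backward method; the chain rule through any surrounding computation is then handled by the framework. The squaring function chosen here is purely illustrative.

```python
import torch

class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # forward pass: evaluate the function and stash what backward will need
        ctx.save_for_backward(x)
        return x * x

    @staticmethod
    def backward(ctx, grad_output):
        # backward pass: receive dJ/dy from the graph, return dJ/dx = dJ/dy * 2x
        (x,) = ctx.saved_tensors
        return grad_output * 2 * x

x = torch.tensor([1.5, -2.0], requires_grad=True)
J = Square.apply(x).sum()   # J = sum(x^2)
J.backward()                # traverses the computational graph in reverse
print(x.grad)               # tensor([ 3., -4.]) = 2x, as expected
```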
### Adjoint Method
On the other hand, adjoint methods approach the gradient derivation from the manual side. Here, we briefly recall the basis of the adjoint method [21, 23, 18]. Consider again a system governed by the linear PDE \(A_{p}u=b_{p}\) with loss functional \(J\). The functional derivative of \(J(u)\) can be expressed by \(\frac{\mathrm{d}J}{\mathrm{d}p}=\frac{\partial J}{\partial p}+\frac{\partial J}{\partial u}\frac{\partial u}{\partial p}\). The term \(\frac{\partial u}{\partial p}\) exhibits the undesired effect of linear scaling of the computational cost with the number of input parameters \(p\) if it is evaluated via finite differences. However, this term can be expressed by taking the derivative of the PDE with respect to \(p\), \(\frac{\partial Au}{\partial p}-\frac{\partial b}{\partial p}=\frac{\partial A}{\partial p}u+A\frac{\partial u}{\partial p}-\frac{\partial b}{\partial p}=0\), which is then rearranged to
\[\frac{\partial u}{\partial p}=A^{-1}\left(\frac{\partial b}{\partial p}-\frac {\partial A}{\partial p}u\right). \tag{1}\]
Putting everything together, we are given an equation that splits into three parts, namely the derivative of the loss with respect to the parameters, the so-called _adjoint solution_, and the _forward solution_:
\[\frac{\mathrm{d}J(u)}{\mathrm{d}p}=\frac{\partial J}{\partial p}+\underbrace{ \left(\frac{\partial J}{\partial u}A^{-1}\right)}_{\text{adjoint solution}}\underbrace{\left(\frac{\partial b}{\partial p}-\frac{\partial A}{ \partial p}u\right)}_{\text{forward solution}}. \tag{2}\]
Here, the name adjoint solution derives from rewriting the corresponding term of Equation 2 as \(A^{\dagger}v=\frac{\partial J}{\partial u}\), where \(v\) is the solution of this adjoint PDE. For many linear PDEs, the adjoint system equations \(A^{\dagger}\) are straightforward to derive. The boundary terms are given by the sensitivity of the loss functional with respect to the solution of the original PDE. The forward-solution term of Equation 2 then comprises the solution \(u\) and the sensitivities of the system matrix \(A\) and the boundary conditions of the original PDE. The adjoint method is appealing due to its generality. It is applicable to physical optimization problems in many settings, also outside of photonics. In particular, it is possible to compute the forward and adjoint solution with many different types of numerical solvers, as long as the adjoint system equations can be used by the solver. However, it poses an analytical overhead before the optimization, since the derivation of the required equations is done by hand. Especially for complicated problems where the system equations have a complicated dependency on the parameters in \(\frac{\partial A}{\partial p}\) and \(\frac{\partial b}{\partial p}\), or if the loss functional involves lengthy and tedious postprocessing \(J=f\circ g\circ h\dots\), the adjoint method becomes impractical.
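Equation 2 can be verified numerically on a toy problem. The following self-contained sketch uses a small, randomly parameterized linear system as a stand-in for a discretized PDE (all matrices here are invented for illustration): with \(b\) independent of \(p\) and \(J(u)=c^{\top}u\), one forward solve and one adjoint solve yield all parameter gradients, which we check against finite differences.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 3
A0 = rng.normal(size=(n, n)) + 10 * np.eye(n)   # well-conditioned base operator
Ai = rng.normal(size=(m, n, n))                 # dA/dp_i for each parameter
b = rng.normal(size=n)
c = rng.normal(size=n)                          # loss J(u) = c @ u
p = rng.normal(size=m)

def A(p):
    return A0 + np.tensordot(p, Ai, axes=1)     # A(p) = A0 + sum_i p_i * Ai[i]

u = np.linalg.solve(A(p), b)                    # one forward solve
v = np.linalg.solve(A(p).T, c)                  # one adjoint solve: A^T v = dJ/du
grad_adjoint = np.array([-v @ (Ai[i] @ u) for i in range(m)])   # Eq. (2), db/dp = 0

eps = 1e-6                                      # central finite-difference check
grad_fd = np.array([
    (c @ np.linalg.solve(A(p + eps * e), b)
     - c @ np.linalg.solve(A(p - eps * e), b)) / (2 * eps)
    for e in np.eye(m)])
print(np.allclose(grad_adjoint, grad_fd, rtol=1e-4))   # True
```

Note that the cost of the adjoint gradient is two linear solves in total, independent of the number of parameters \(m\), while the finite-difference check requires two solves per parameter.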
### Integrating the adjoint method into automatic differentiation
Interestingly, the advantages and disadvantages of AD and the adjoint method complement each other. While the adjoint method is difficult to use but generally compatible with most numerical solvers, AD is easy to use but only applicable with appropriate solvers. Here, we show how to combine both methods and leverage the advantages of each to cancel the disadvantages of the other.
The key idea is to incorporate the adjoint method directly into the computational graph of the AD framework. This integration provides the benefit of the AD framework's flexibility without the necessity of rewriting efficient numerical solvers. In order to integrate the adjoint method into the backward calculation of an AD framework, we need to identify the appropriate terms in the derivation. Fortunately, the forward and adjoint solution derivation is similar to the distinction between the forward and backward methods for AD. The forward method should receive the input parameters \(p\) that determine the system equations \(A_{p}\) and source terms \(b_{p}\) and return the solution of the PDE \(u\) back to the computational graph. During gradient computation, we crucially depend on the input from the backward methods to receive the _adjoint source_\(\frac{\partial J}{\partial u}\) with which the adjoint solution \(v\) can be computed. In the backward method, we should therefore receive gradient information from the loss function and any other postprocessing steps that were computed from the forward solution \(u\). Then the adjoint solution and the forward solution are multiplied (Equation 2) and the solution is returned back to the computational graph for further processing back to the root of the parameters. The remaining terms \(\frac{\partial A}{\partial p}\) and \(\frac{\partial b}{\partial p}\) depend on the discretization of the solver and the applied optimization scheme and must be treated accordingly if they are not part of the AD framework.
There are two popular schemes that are mostly used for the geometric optimization of photonics - topology and shape optimization. In topology optimization, the entire distribution of material is considered to change point-wise throughout the optimization domain. In shape optimization, the boundary \(\partial\Omega\) of a shape \(\Omega\) is continuously deformed but the shape remains connected during the deformation. Importantly, the approaches require different treatments of the gradients. We will focus on shape optimization in the following but similar steps can be applied for topology optimization [39]. The optimization target must be reformulated slightly since in shape optimization the target is to optimize over the possible geometrical shapes instead of the parameters. A shape \(\Omega\) is a connected region within the computational domain \(\mathcal{D}\subset\mathbb{R}^{n}\) with fixed optical material properties. The PDE can then be solved on \(\mathcal{D}\) which yields solution \(u_{\Omega}\in\mathbb{R}^{k}\). A loss functional \(J\) is then applied to evaluate the solution. Shape optimization derives how to deform the boundary of the shape \(\Omega\) to improve the loss \(J(u_{\Omega})=\int_{\mathcal{D}}\text{d}s\,u_{\Omega}\). By taking the variation of the loss functional to first order, we obtain [23, 20, 40]
\[\delta J_{\Omega}(\delta\Omega)=\int_{\partial\Omega}\text{d}s\,\delta\Omega \,\hat{n}\,\left(c_{1}-c_{2}\right)u_{\Omega}v_{\Omega}=\int_{\partial\Omega} \text{d}s\,\delta\Omega\,\hat{n}\,V_{\Omega}(s). \tag{3}\]
\(\delta\Omega\) denotes the variation of the shape, which is equivalent to a test function in functional analysis. At iteration \(i\), the variation deforms the shape \(\Omega_{i+1}=(\mathds{1}+\delta\Omega)(\Omega_{i})\). \(\hat{n}\) denotes the normal vector on the boundary, and \(V_{\Omega}(s)\) denotes the so-called sensitivity field (also known as gradient field [17] or velocity field [41]). Since our goal is to minimize the loss functional, we see that this can be achieved by setting the geometry deformation to \(\delta\Omega=-\hat{n}\,V_{\Omega}(s)\). Then, the loss functional is guaranteed to decrease to first order in every iteration. The sensitivity field acts on the shape as a vector field that drags the boundary along the direction of the vector field [42, 41].
The forward and adjoint solutions \(u_{\Omega}\) and \(v_{\Omega}\) are directly inserted in the shape gradient. \(c_{1}\) and \(c_{2}\) are parameters of the computational domain inside and outside of the shape \(\Omega\). For photonic optimization, the parameters are given by the relative electric permittivity \(\varepsilon_{i}\) for the parallel component of the electric fields in \(u\) and \(v\) and \(1/\varepsilon_{i}\) for the normal component [17].
In many interesting scenarios, the functional \(J(u_{\Omega})\) has a complicated dependence on the solution from the postprocessing and the shape has parameter dependencies. Generally, this postprocessing dependence is described by a
function \(f:\mathbb{R}^{k}\times\mathcal{D}\mapsto\mathcal{F}\), where \(\mathcal{F}\) is a vector space, usually chosen to be \(\mathbb{R}^{m}\). The postprocessing, which can be arbitrarily complicated, acts on the solution \(u_{\Omega}\) before integrating the result of the postprocessing \(J(f(u_{\Omega(p)}))=\int_{\mathcal{F}}\mathrm{d}y\,f(u_{\Omega(p)})\) to obtain the loss \(J\). An example of such a postprocessing function is the farfield transformation used in subsection 3.2, which projects the boundary values of the solution \(u_{\Omega}\) from the 2D computational domain \(\mathcal{D}\) onto a linear farfield [43]. The chain rule for functionals [44, 45], which allows chaining functionals with ordinary differentiable functions, lets us separate the postprocessing and the geometry definition from the shape gradients
\[\delta J_{\Omega}(\delta\Omega)=\int_{\partial\Omega}\text{\,d}s\,\delta \Omega\,\hat{n}\,V_{\Omega(p)}(s)\frac{\partial\Omega(p)}{\partial p}. \tag{4}\]
The functional derivative can then be written as
\[\frac{\delta J_{\Omega}}{\delta p}=\hat{n}\left(c_{1}-c_{2}\right)u_{\Omega} \underbrace{\left(\frac{\partial J}{\partial f}\frac{\partial f}{\partial u _{\Omega(p)}}A^{-1}\right)}_{v_{\Omega}}\frac{\partial\Omega(p)}{\partial p}. \tag{5}\]
Here, we see again how to employ automatic differentiation on the adjoint source computation for the adjoint solution \(v_{\Omega}\) and then continue with the backpropagation of the shape. For illustrative purposes, the process is also depicted in Figure 1.
### Software considerations
On a practical level, integrating the adjoint method into an AD framework and thus making AD compatible with conventional solvers boils down to implementing a _differentiable simulation_ function that receives a numerical representation of the geometry boundary \(\partial\Omega\). The differentiable simulation is shown in pseudocode in subsection 5.2. Boundary support points that give rise to the shape \(\Omega\) are an ideal representation since they easily integrate with numerical AD frameworks. Here, we focus only on 2D shapes but this can be extended to 3D by modifying the equations appropriately. The differentiable simulation method starts by initializing the simulation by evaluating the geometry support points. Then, the solution of the simulation is computed and returned to the AD framework. The AD framework then undertakes the postprocessing of the solution, along with the evaluation of the loss functional. Taking the derivative of the loss functional up to the adjoint simulation is handled by the AD framework using backpropagation. Crucially, the AD framework returns the _adjoint source_\(\frac{\partial J}{\partial u}\), see Figure 1e-f), which is usually derived manually in former applications [17, 25, 18, 23].
The adjoint source is then passed to the _backward_ method of the _differentiable simulation_ function as the _gradient input_, shown in Figure 1g). It initializes the adjoint simulation and computes the adjoint solution \(v\), which is particularly easy for Maxwell's equations with linear materials due to their time-reversal properties [18]. For the Maxwell equations, the adjoint system is given by \(A^{\dagger}v_{\Omega}=TAT\,v_{\Omega}=\frac{\partial J}{\partial u}\) with \(T\), the time reversal operator. Then, computing the adjoint solution can be done by solving \(ATv_{\Omega}=T\frac{\partial J}{\partial u}\).
Taking the derivative of a function often involves retaining results from the forward function evaluation. In the case of the adjoint method, this is most importantly the solution \(u_{\Omega}\), but it is useful to also save the boundary reference and other additional parameters for the solver. Since AD frameworks are often required to store the solution of the forward pass, the framework provides the functionality to store intermediate results which are necessary for the gradient calculation during backpropagation. Together with the forward solution, the backward method computes the sensitivity field for the given geometry with the shape calculus shown in Equation 5 [46, 23, 18].
The sensitivity field will act as a gradient by deforming the geometry since \(\delta\Omega\) will decrease the loss functional to first order. The movement of the boundary is projected on the normal since the tangential movement of the boundary has no influence on the loss functional which is shown in Equation 5. We need to take into account that the geometry is represented by boundary support points where movement of the support points drags the connecting edge along. The edge displacement must be carefully taken into account which is detailed in subsection 5.1. The support point sensitivities are interpreted as the support point gradients and returned by the backward method. The AD framework continues with the backpropagation to the geometry parameters \(p\). In this way, it is particularly straightforward to create parameterized geometries which, for example, serve to introduce geometry constraints that ensure favorable properties such as manufacturability. To illustrate the core functionality of the _differentiable simulation_ function we present it in pseudo-code in subsection 5.2.
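Independent of the pseudo-code in the appendix, a minimal PyTorch sketch of the same structure could look as follows. The `solver` object with its `solve_forward`/`solve_adjoint` methods and material-contrast attributes `c1`/`c2` is a hypothetical placeholder for whatever interface the chosen solver exposes, and the careful edge-displacement treatment of subsection 5.1 is omitted for brevity; this is a sketch of the idea, not the authors' exact implementation.

```python
import torch

def boundary_normals(points):
    # Outward unit normals of a closed 2D polygon given as an (N, 2) tensor,
    # estimated from the central difference of neighboring support points.
    tangent = torch.roll(points, -1, dims=0) - torch.roll(points, 1, dims=0)
    normal = torch.stack([tangent[:, 1], -tangent[:, 0]], dim=1)
    return normal / normal.norm(dim=1, keepdim=True)

class DifferentiableSimulation(torch.autograd.Function):
    @staticmethod
    def forward(ctx, boundary_points, solver):
        # Forward pass: run the external (non-AD) solver on the current geometry.
        # The entire solve is one atomic node of the computational graph.
        u = torch.as_tensor(solver.solve_forward(boundary_points.detach().cpu().numpy()))
        ctx.solver = solver
        ctx.save_for_backward(boundary_points, u)
        return u

    @staticmethod
    def backward(ctx, grad_u):
        boundary_points, u = ctx.saved_tensors
        # grad_u is the adjoint source dJ/du, delivered by backpropagation through
        # the postprocessing and loss; for Maxwell's equations with linear materials,
        # the adjoint run is a time-reversed forward run driven by this source.
        v = torch.as_tensor(ctx.solver.solve_adjoint(grad_u.detach().cpu().numpy()))
        # Sensitivity V = (c1 - c2) u v on the boundary (Equation 5), projected
        # onto the outward normals to yield support-point gradients.
        V = (ctx.solver.c1 - ctx.solver.c2) * torch.real(u * v)
        grads = boundary_normals(boundary_points) * V.unsqueeze(-1)
        return grads, None  # no gradient with respect to the solver argument
```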
Finally, the shape can be updated via an optimizer depicted in Figure 1h). The optimizer uses the computed gradients in order to update the shape parameters with a stepsize estimated by the selected algorithm. Many different approaches exist and exhibit advantages and disadvantages. The simplest optimizer is gradient descent where the parameters are updated based on a stepsize \(\eta\) selected at the start following the rule \(p_{i+1}=p_{i}\pm\eta\nabla p\). However, many more refined
techniques exist, such as quasi-Newton methods [15], which estimate the Hessian iteratively from gradient information, or moment-estimation methods such as ADAM [14], which approximate first- and second-order moments of the gradients to adapt the stepsize.
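Tying the pieces together, a hypothetical optimization loop using the wrapper sketched above and PyTorch's built-in ADAM could look as follows; `shape_from_parameters`, `postprocess_and_loss`, and `solver` are placeholders for a problem-specific geometry parameterization, postprocessing chain, and solver interface.

```python
import torch

p = torch.zeros(64, requires_grad=True)     # abstract design parameters
optimizer = torch.optim.Adam([p], lr=1e-2)  # adaptive-stepsize update rule

for iteration in range(150):
    optimizer.zero_grad()
    boundary = shape_from_parameters(p)     # hypothetical differentiable generator
    u = DifferentiableSimulation.apply(boundary, solver)
    loss = postprocess_and_loss(u)          # AD-enabled postprocessing and loss J
    loss.backward()                         # the adjoint run happens inside here
    optimizer.step()                        # parameter update from the gradients
```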
## 3 Application examples
To showcase the integration of a conventional simulation into an AD framework, we apply it to two different problems of current interest. In the first example, we enhance the spontaneous emission rate for an optical nanocavity while in the second example, we increase the farfield intensity distribution within a given solid angle by optimizing the outcoupling structure of a 2D \(\upmu\)LED.
### Purcell effect optimization
In the example shown in Figure 2, we aim to increase the spontaneous emission rate for an optical nanocavity. Increasing the spontaneous emission rate is a research problem both for inverse design [47, 48] and classical approaches [49]. More precisely, our goal is to increase the Purcell factor, which is proportional to the time-averaged Poynting vector on the boundary of the domain, \(\max\frac{P}{P_{0}}=\max\int_{\partial\mathcal{D}}\hat{\mathbf{n}}\cdot\mathrm{Re}\big{[}\mathbf{E}_{\Omega}\times\mathbf{H}_{\Omega}^{\dagger}\big{]}/2\,\mathrm{d}s\), where \(P_{0}\) is the dipole emission in free space,
Figure 1: Overview of a typical geometry optimization problem. **a)** The optimization starts with a parametrized initial shape \(\Omega(p)\) within a computational domain \(\mathcal{D}\). **b)** Then, the physical problem is solved using a numerical solver with the source distribution appropriate for the problem at hand. **c)** Next, the physical solution is evaluated in the domain or on its boundary and transformed during postprocessing, for example by projecting a recorded near field to the far field or selecting different waveguide modes. **d)** Finally, the result from postprocessing is evaluated and integrated by a loss functional \(J\). **e-f)** Differentiating the loss functional up to the solution from the numerical solver is easy due to automatic differentiation (AD). **g)** However, it becomes more difficult to obtain gradients for the simulation parameters from the simulation solution since many numerical simulations do not provide AD or it is not efficient to use. Taking the derivative with respect to the shape parameters is therefore difficult. By combining AD with the adjoint method, backpropagation computes the gradients up to the solution from the solver at which point the adjoint method is employed to continue with the gradient computation and passes the gradients further to the shape parameters. Computing the gradient of a given shape is then reduced to backpropagating through the computational graph of the AD framework, which is equivalent to backpropagation through the simulation itself by means of the adjoint method. **h)** Finally, the gradient of the shape is evaluated by AD, and the shape parameters are updated by an optimization algorithm. Then the next iteration begins with the updated shape.
which is used for normalization, while \(\hat{n}\) denotes the boundary normals and \(\mathbf{E}_{\Omega}\) and \(\mathbf{H}_{\Omega}\) denote the electric and magnetic fields on the boundary \(\Omega\). We employ the FDTD solver from Lumerical [31] to obtain the electric and magnetic field solutions for the selected wavelength of 600 nm. For simplicity and to reduce the simulation time, we consider a 2D problem that is infinitely extended in the z-direction. In the center of the simulation domain, we place a magnetic dipole emitter with the magnetic current oriented in the x-direction and surround it with a torus structure with a real refractive index of \(n=3.5\). Both the inner and the outer torus boundary, shown in Figure 2(a), are subject to optimization and are represented by a tensor of an AD framework. In the presented case, we employ PyTorch as AD framework. For the update scheme, we employ simple gradient descent but choose a physically inspired step size on a length scale for which we expected a change of the Purcell factor. We also decrease the step size proportionally to the Purcell factor. The optimization shows converging behavior after 150 iterations, in which the Purcell factor increased to \(\frac{P}{P_{0}}\approx 83\) starting from \(\frac{P}{P_{0}}\approx 0.4\), which is in the range of expected improvement. Since we use a simple gradient update scheme, the presented solution is highly dependent on the initial geometry and is potentially far away from a global maximum for the Purcell factor. The optimization also results in a dumbbell shape around the dipole, which has been observed in other works, too [49, 50, 47, 51, 48].
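For this objective, the loss handed to the AD framework can be written directly in PyTorch. The following is a minimal sketch, assuming the solver returns complex \(\mathbf{E}\) and \(\mathbf{H}\) sampled on a closed monitor boundary together with outward normals and segment lengths; all names are illustrative, not a fixed API.

```python
import torch

def cross3(a, b):
    # complex-valued cross product of (N, 3) tensors along the last dimension
    return torch.stack([a[:, 1]*b[:, 2] - a[:, 2]*b[:, 1],
                        a[:, 2]*b[:, 0] - a[:, 0]*b[:, 2],
                        a[:, 0]*b[:, 1] - a[:, 1]*b[:, 0]], dim=1)

def negative_purcell_factor(E, H, normals, ds, P0):
    """E, H: (N, 3) complex boundary fields; normals: (N, 3); ds: (N,); P0: scalar."""
    S = 0.5 * torch.real(cross3(E, torch.conj(H)))  # time-averaged Poynting vector
    P = torch.sum((S * normals).sum(dim=1) * ds)    # net power leaving the domain
    return -P / P0                                  # minimize -P/P0 to maximize it
```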
### Shape optimization for spatially distributed dipole emission of a \(\mu\)LED
In the example shown in Figure 3, our primary objective is to enhance the farfield emission of a \(\mu\)LED (micro light-emitting diode) within a specific solid angle, denoted as \(\Gamma\), a quantity known as the LEE (light extraction efficiency). The field of \(\mu\)LED development is rapidly advancing, attracting substantial research interest and holding considerable industrial relevance [52, 53, 54, 55]. The targeted optimization can be mathematically represented by the function
Figure 2: Overview of the gradient descent optimization results for the enhancement of the Purcell factor in a torus nanocavity. **a)** depicts the initial torus geometry. The dipole emitter sits in the center of the torus at \(\vec{r}=(0,0)^{T}\). The refractive index of the torus has a value of \(n=3.5+0i\). The blue contour represents the deformable boundary which is subject to optimization. The optimized torus geometry after 150 iterations is shown in **b)**. Geometry deformations during optimization led to a loss reduction, as demonstrated in **c)**. The loss exhibits a steady decline throughout the optimization process with slight oscillations which emerge if large parts of the boundary are close to a local minimum. For this optimization, we chose the negative Purcell factor for the loss with gradient descent. After the optimization, the Purcell factor was increased by about \(\times 220\). The deformations are computed via the adjoint method, see section 2. The sensitivity field is shown in **d)** together with the shape gradients that are obtained by evaluating the sensitivity field on the boundary of the deformable geometry.
\(\max_{\Omega}\text{LEE}_{\Gamma}=\max_{\Omega}P_{\Gamma}/P_{0}=\max_{\Omega}\frac{1 }{P_{0}}\int_{\Gamma}\int_{\Lambda}\text{Re}\big{[}E_{\Omega,\text{ Farfield}}\times H_{\Omega,\text{ Farfield}}^{\dagger}\big{]}/2\,\text{d}\Gamma\text{d}\lambda\). LEDs exhibit incoherent dipole emissions that originate from all over the quantum well region. To approximate this emission behavior, we distribute dipole emitters over the quantum well and compute the average emitted intensity within the solid angle \(\Gamma\) and normalize with respect to the injected power \(P_{0}\) into the simulation. By solving the emission problem for each individual dipole, we can account for the incoherent nature of the \(\mu\)LED's emission. To simplify the problem and reduce computational time, we adopt a two-dimensional FDTD model. The wavelength range of interest for this example is from 600 to 650 nanometers (\(\Lambda=[600,650]\) nm). The initial \(\mu\)LED model is depicted in Figure 3a). The \(\mu\)LED features a gold substrate passivated with a thin layer of silicon dioxide and a semiconductor material composed of gallium nitride, which is connected to the gold substrate via a thin layer of indium tin oxide. The quantum well region is made up of indium gallium phosphide. To enhance the light extraction efficiency within the solid angle \(\Gamma\), we deform the upper boundary of the \(\mu\)LED's top side, referred to as the outcoupling structure, which consists of gallium nitride. The emission then radiates into the air.
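In differentiable form, the double integral over wavelengths and emission angles reduces to two quadratures over the far-field intensity. A sketch with illustrative names, assuming `S_far` holds the radial far-field intensity on a (wavelength, angle) grid produced by the AD-enabled far-field projection:

```python
import torch

def negative_lee(S_far, theta, lam, gamma_deg, P0):
    """S_far: (n_lam, n_theta) far-field intensity; theta in radians; lam in meters."""
    inside = theta.abs() <= torch.deg2rad(torch.tensor(float(gamma_deg)))
    per_wavelength = torch.trapezoid(S_far[:, inside], theta[inside], dim=1)
    P_gamma = torch.trapezoid(per_wavelength, lam)  # integrate over the band
    return -P_gamma / P0                            # maximize LEE = P_gamma / P0
```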
To enable the use of automatic differentiation, we represent the outcoupling structure geometry with a PyTorch tensor. The differentiable simulation receives a reference to this boundary representation and creates an STL file from the tensor that is imported into the solver after each iteration. Furthermore, the fields on the boundary of the \(\mu\)LED are projected into the farfield by a PyTorch implementation of the equivalence principle in order to use AD
Figure 3: Illustration of the \(\mu\)LED optimization process: **a)** presents the initial model, where dipole emitters are distributed within the quantum well region (green). The averaged farfield emission of the initial \(\mu\)LED and LEE into solid angle \(\Gamma=25^{\circ}\) is shown on the left plot of **b)**. For reference, **b)** also displays the farfield for a range of individual wavelengths and the total LEE. Using the adjoint method with shape calculus (section 2), we obtain the averaged sensitivity field over the entire domain, which is shown in **c)**. By following the direction of the steepest descent and updating the boundary geometry according to the shape gradients for 30 iterations, we deform the boundary and decrease the loss over the optimization duration, see **d)**. To avoid obtaining a shape with very small features, we smooth the gradients locally over the deformable boundary. At the end of the optimization, the outcoupling structure of the \(\mu\)LED is shown in **a)** on the right. The corresponding farfield for the optimized \(\mu\)LED shown on the right in **b)** experiences a reshaping with an improvement of the overall LEE directed into \(\Gamma\) of \(\Delta\text{LEE}_{\pm 25}=0.0428\) and a total LEE improvement of \(\Delta\text{LEE}_{\pm 90}=0.0315\).
for the postprocessing [43]. Employing a standard gradient descent optimization technique, we run the optimization for 30 iterations. During the optimization, the outcoupling structure is continuously deformed following the mean sensitivity field computed by the adjoint method. The shape gradients and mean sensitivity field for the initial and final structure are shown in Figure 3 c) in the left and right figure, respectively. The boundary approximation of the outcoupling structure is much finer than the mesh size of the solver, thus we interpolate the field solution to obtain smooth field values on the boundary. For optimization stability, we decay the stepsize after 5, 10, and 20 iterations from 2 nm to 1 nm, 0.5 nm, and 0.1 nm. By increasing the mesh resolution after 10 and 20 iterations, we make sure to resolve the geometry changes sufficiently when the stepsize becomes small. To avoid creating small features on the outcoupling structure boundary, we smooth the shape gradients locally by sliding a Gaussian kernel over the boundary. The standard deviation of the kernel thereby controls the size of the features. This explains the oscillating gradients on the boundary after the optimization (see Figure 3 c), plot on the right), which are smoothed out after applying the Gaussian kernel. The final optimized \(\mu\)LED structure is presented in Figure 3 a), while the loss throughout the optimization process is depicted in Figure 3 d). As the main objective is proportional to the farfield intensity within the solid angle \(\Gamma\), we also provide the mean farfield at both the beginning and end of the optimization in Figure 3 b). Additionally, we show the farfield intensity for a range of wavelengths. After the optimization, the averaged farfield is focused in the target solid angle \(\Gamma\), and the \(\text{LEE}_{\Gamma}\) is increased by 0.0428, an improvement of \(63.01\%\) compared to the initial farfield.
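The local smoothing step can be implemented as a circular 1D convolution of the per-point boundary gradients with a Gaussian kernel, for instance as in the following sketch (the kernel width `sigma_pts`, given in support points, is the feature-size control mentioned above; this is one possible implementation consistent with the description, not necessarily the one used here).

```python
import torch
import torch.nn.functional as F

def smooth_gradients(grad, sigma_pts=5.0):
    """Circularly smooth per-point normal gradients (shape (N,)) of a closed boundary."""
    half = int(4 * sigma_pts)                        # truncate kernel at 4 sigma
    x = torch.arange(-half, half + 1, dtype=grad.dtype)
    kernel = torch.exp(-0.5 * (x / sigma_pts) ** 2)
    kernel = (kernel / kernel.sum()).view(1, 1, -1)  # normalized Gaussian
    padded = F.pad(grad.view(1, 1, -1), (half, half), mode='circular')
    return F.conv1d(padded, kernel).view(-1)         # same length as the input
```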
## 4 Conclusion
In this work, we have presented a general approach to integrate existing numerical solvers into an automatic differentiation framework, making the optimization of photonic structures simpler, faster, and more convenient, particularly with existing models and solvers. To this end, the adjoint method is the key to implementing the forward and backward methods for automatic differentiation, which allows us to make conventional solvers end-to-end auto-differentiable. Due to our focus on continuous geometries, we presented the optimization in the context of shape optimization and computed shape gradients on geometry boundaries, which enable gradient-based optimization algorithms to improve the optical characteristics of the geometry with respect to the loss function. Generally, the approach is also suitable for topology optimization.
We demonstrate the approach with two different physical problems, optimizing an optical nanocavity to increase the spontaneous emission rate and optimizing the outcoupling structure of a \(\mu\)LED to increase the light extraction efficiency into a solid angle in the farfield. For both cases, we show a significant reduction of the loss while employing Lumerical FDTD to solve the Maxwell equations.
Two key advantages of employing automatic differentiation (AD) for numerical optimization are parallelized multiphysics optimization and compatibility with machine learning. By integrating various physics solvers into AD, conducting joint optimization (e.g., thermal and optical optimization) becomes considerably more straightforward. Integrating the adjoint method into AD frameworks could also facilitate the development of novel AI applications for designing and optimizing optical devices.
#### Acknowledgments
We thank the supporters of this work, particularly Heribert Wankerl (ams-OSRAM Group) and Maike Stern (OTH Regensburg) for constructive feedback on this manuscript. Furthermore, we thank Daniel Grunbaum (ams-OSRAM Group) and Philipp Schwarz (ams-OSRAM Group & University Regensburg) for fruitful and helpful discussions. Finally, we thank Harald Laux (ams-OSRAM Group) for his organizational support.
#### Publication Funding Acknowledgments
This work was funded by the ams-OSRAM Group.
|
2309.13649 | Effects of surface roughness and top layer thickness on the performance
of Fabry-Perot cavities and responsive open resonators based on distributed
Bragg reflectors | Optical and acoustic resonators based on distributed Bragg reflectors (DBRs)
hold significant potential across various domains, from lasers to quantum
technologies. In ideal conditions with perfectly smooth interfaces and
surfaces, the DBR resonator quality factor primarily depends on the number of
DBR pairs and can be arbitrarily increased by adding more pairs. Here, we
present a comprehensive analysis of the impact of top layer thickness variation
and surface roughness on the performance of both Fabry-Perot and open-cavity
resonators based on DBRs. Our findings illustrate that even a small,
nanometer-scale surface roughness can appreciably reduce the quality factor of
a given cavity. Moreover, it imposes a limitation on the maximum achievable
quality factor, regardless of the number of DBR pairs. These effects hold
direct relevance for practical applications, which we explore further through
two case studies. In these instances, open nanoacoustic resonators serve as
sensors for changes occurring in dielectric materials positioned on top of
them. Our investigation underscores the importance of accounting for surface
roughness in the design of both acoustic and optical DBR-based cavities, while
also quantifying the critical significance of minimizing roughness during
material growth and device fabrication processes. | Konstantinos Papatryfonos, Edson Rafael Cardozo de Oliveira, Norberto Daniel Lanzillotti-Kimura | 2023-09-24T14:27:57Z | http://arxiv.org/abs/2309.13649v1 | Effects of surface roughness and top layer thickness on the performance of Fabry-Perot cavities and responsive open resonators based on distributed Bragg reflectors
###### Abstract
Optical and acoustic resonators based on distributed Bragg reflectors (DBRs) hold significant potential across various domains, from lasers to quantum technologies. In ideal conditions with perfectly smooth interfaces and surfaces, the DBR resonator quality factor primarily depends on the number of DBR pairs and can be arbitrarily increased by adding more pairs. Here, we present a comprehensive analysis of the impact of top layer thickness variation and surface roughness on the performance of both Fabry-Perot and open-cavity resonators based on DBRs. Our findings illustrate that even a small, nanometer-scale surface roughness can appreciably reduce the quality factor of a given cavity. Moreover, it imposes a limitation on the maximum achievable quality factor, regardless of the number of DBR pairs. These effects hold direct relevance for practical applications, which we explore further through two case studies. In these instances, open nanoacoustic resonators serve as sensors for changes occurring in dielectric materials positioned on top of them. Our investigation underscores the importance of accounting for surface roughness in the design of both acoustic and optical DBR-based cavities, while also quantifying the critical significance of minimizing roughness during material growth and device fabrication processes.
## 1 Introduction
Distributed Bragg reflector (DBR) devices based on semiconductor heterostructures are pivotal components in both fundamental and applied fields of photonics and nanophononics. A pair of DBRs enclosing an optical spacer constitutes a Fabry-Perot optical cavity, capable of shaping the optical local density of states. Over the past three decades, optical cavities have been used in a plethora of applications encompassing quantum technologies, optoelectronics, photonics, and spectroscopy [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]. Similarly, acoustic cavities utilizing the same DBR configurations can confine and enhance phononic fields, offering prospects for ultra-high frequency applications [13, 14, 15]. Furthermore, these cavities were recently used in optomechanics, and as platforms for simulating solid-state physics phenomena [16, 17, 18, 19, 20, 21, 22, 23, 24].
The widespread use of DBRs and their resonant structures emphasizes the importance of developing accurate tools for optimized device design. Current designs and evaluations of optical and nanophononic devices often focus solely on the number of DBR periods and materials [25]. However, the reality of material growth and fabrication always entails some surface roughness, even in samples produced using cutting-edge techniques. While the role of interface roughness has been discussed in some cases [26], the impact of surface thickness variation and surface roughness is commonly overlooked, presumably due to the assumption that its influence might be negligible. To justify this conjecture, one would have to assume surface roughness of negligible magnitude, combined with the fact that the confined mode is mainly localized within the cavity spacer. However, in reality, some variations of the top layer thickness often occur during device processing, or due to oxidation of the top layer of the structure. Additionally, some degree of surface roughness is invariably present even when using state-of-the-art growth techniques [27, 28, 29]. The resulting top layer variations and roughness can significantly affect the performance of DBR resonators, which rely on a precisely defined layered structure to set the resonant frequencies.
In this study, we delve into the impact of top layer thickness and surface roughness on the phonon dynamics and quality factor (Q-factor) in Fabry-Perot (FP) cavities and nanoacoustic open resonators based on DBRs. Our investigation centers on GaAs/AlAs DBR superlattices, which are extensively utilized in both acoustic and optical micro-cavities. We examine both Fabry-Perot and open-cavity resonators, unraveling the influence that even small variations in the last layer thickness and roughness can exert on the Q-factor, which is the metric that quantifies energy dissipation in these resonators.
Moreover, open resonators are particularly useful for applications that demand environmental responsivity. What is more, additional materials can be deposited on them to form an acoustic resonator responsive to external stimuli. We explore practical scenarios where these cavities function as sensors with dielectric materials positioned atop them, allowing for a direct assessment of the impact of roughness on device performance. Specifically, we incorporate VO\({}_{2}\) and mesoporous SiO\({}_{2}\) materials, conducting a detailed exploration of the parameter space to assess the combined influence of roughness and number of DBR periods on the quality factor. Materials sensitive to external stimuli, such as mesoporous thin films reacting to humidity changes [30; 31; 32], and VO\({}_{2}\) responding to thermally or optically induced phase transitions [33; 34], undergo changes in their elastic properties under such stimuli [35; 36]. For instance, mesoporous thin films are known to adsorb liquids into their pores, and changes in the relative humidity lead to water condensation inside the pores, which alters their mechanical properties. On the other hand, VO\({}_{2}\) exhibits two phases (monoclinic and tetragonal), and phase transitions can modify its mechanical properties. These changes influence the speed of sound, affecting the response of such resonators. Such materials and designs could potentially impact responsive nanoacoustic and nanophotonic devices.
In Section 2, we analyze the main sources and types of roughness and thickness variation in the top layer of GaAs/AlAs structures, and present the model that we employed to analyze their impact. The initial part of the study focuses on examining the influence of the top layer thickness on the responses of both Fabry-Perot resonators and open cavities, as elaborated in Section 3. Building upon this analysis, Section 4 delves into quantifying the impact of layer thickness and surface roughness on these resonators. In Sections 5 and 6, we introduce the dielectric materials mesoporous SiO\({}_{2}\) and VO\({}_{2}\) on top of a DBR and explore the resonator response as a function of their roughness. Within these sections, we also quantify the effectiveness of methods used to mitigate the effects of roughness, such as polishing or planarization.
## 2 Model Implementation
To simulate the structure, we employ a model based on the transfer matrix method (TMM). We consider free-strain boundary conditions and solve the standard 1D wave equation. Detailed implementation specifics of the TMM for studying transmission and reflectivity spectra in multi-layered superlattices are outlined in [37; 38]. Here, we extend this method to account for surface roughness, introducing a simplified model that simulates the effects of roughness on acoustic FP and open-cavity resonators. Surface roughness can have various sources, resulting in either more (additive roughness) or less (subtractive roughness) material than originally designed. In GaAs/AlAs structures, additive or subtractive roughness may arise during material growth or device processing, and as a result, it can vary in terms of its type and direction.
For example, a relatively small variation on the order of one monolayer (ML) (\(\sim 0.5\,nm\)) is commonly observed in high-quality epitaxially grown flat layers, such as MBE-grown GaAs/AlAs or InGaAs/InP structures [27; 28; 39]. This roughness tends to be discrete, usually resulting in 1-ML-thick plateaus in the lateral direction. Non-flat layers, like quantum dot (QD) nanostructures, frequently integrated within DBRs due to their exceptional optical and quantum properties, might contribute to nm-scale roughness in the layers grown on top of them. Larger roughness is expected when an additional layer is placed atop the MBE-grown cavity, or for samples grown using different techniques. In such cases, the roughness can vary significantly based on the specific technique and material, typically ranging from a few nanometers to a few tens of nanometers. Furthermore, additional roughness or width variations of the top layer might arise due to oxidation [27], or during device fabrication processes, such as etching or mask removal steps. The texture of this roughness is random and continuous in the vertical direction, and it varies from sample to sample.
Given the diversity of roughness types, and the practical impossibility of knowing its precise structure in each sample, developing a full atomic-scale model for roughness becomes highly impractical. For this reason, in this work we have developed and implemented a model that approximates the roughness of one sample as a set of samples that exhibit planar surfaces and different thicknesses, averaging their simulated surface displacements. Specifically, we model the surface roughness by considering a normal distribution of thickness variation for the
top layer of the structure, with a standard deviation \(\sigma\). For each case, we average the resulting spectra over 2000 iterations of random thickness distributions with a specific \(\sigma\), while keeping the parameters of the other layers of the DBR constant. Subsequently, we fit the resonant modes using a Gaussian function and calculate the Q-factor by computing the ratio between the resonant frequency and the linewidth extracted from the fitting. Unless explicitly stated otherwise, this averaging process is implied whenever we refer to roughness simulations in this paper.
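A self-contained numerical sketch of this procedure is given below. It implements a minimal 1D acoustic transfer-matrix calculation of the surface displacement for a GaAs/AlAs DBR with a \(\lambda/2\) GaAs top layer terminated by a stress-free surface, draws the top layer thickness from a normal distribution, averages the spectra, and extracts the Q-factor from the full width at half maximum. A numerical linewidth estimate stands in for the Gaussian fit, and the material parameters, design frequency, and iteration count are representative values chosen for illustration, not the exact ones used in the paper.

```python
import numpy as np

# representative longitudinal sound velocities (m/s) and densities (kg/m^3)
v1, rho1 = 4730.0, 5317.0        # GaAs
v2, rho2 = 5660.0, 3760.0        # AlAs
f0 = 19e9                        # design frequency, ~19 GHz
d1, d2 = v1/(4*f0), v2/(4*f0)    # quarter-wave DBR thicknesses
d_top = v1/(2*f0)                # lambda/2 GaAs top layer (the rough one)

def surface_displacement(freqs, layers):
    """|u_surface| per unit plane wave incident from a semi-infinite GaAs substrate.
    `layers` lists (thickness, velocity, density) from substrate to free surface."""
    omega = 2*np.pi*freqs
    u, s = np.ones_like(omega), np.zeros_like(omega)   # free surface: u = 1, stress = 0
    for d, v, rho in reversed(layers):                  # propagate downward
        k, Z = omega/v, rho*v
        c, sn = np.cos(k*d), np.sin(k*d)
        u, s = c*u - sn/(omega*Z)*s, omega*Z*sn*u + c*s
    incident = 0.5*(u + s/(1j*omega*rho1*v1))           # substrate wave decomposition
    return 1.0/np.abs(incident)

def averaged_spectrum(freqs, n_pairs, sigma_nm, n_iter=200, seed=0):
    """Average |u_surface|^2 over random top-layer thicknesses (paper: 2000 draws)."""
    rng = np.random.default_rng(seed)
    dbr = [(d1, v1, rho1), (d2, v2, rho2)] * n_pairs
    avg = np.zeros(len(freqs))
    for _ in range(n_iter):
        d = d_top + rng.normal(0.0, sigma_nm*1e-9)
        avg += surface_displacement(freqs, dbr + [(max(d, 0.0), v1, rho1)])**2
    return avg/n_iter

def q_factor(freqs, spec):
    """Q from the FWHM of the dominant peak; the grid must resolve the linewidth."""
    i = int(np.argmax(spec)); half = spec[i]/2
    lo, hi = i, i
    while lo > 0 and spec[lo] > half: lo -= 1
    while hi < len(spec)-1 and spec[hi] > half: hi += 1
    return freqs[i]/(freqs[hi]-freqs[lo])

freqs = np.linspace(18.8e9, 19.2e9, 2001)
for sigma in (0.5, 1.0, 2.0):
    spec = averaged_spectrum(freqs, n_pairs=20, sigma_nm=sigma)
    print(f"sigma = {sigma:.1f} nm -> Q ~ {q_factor(freqs, spec):.0f}")
```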
In the outlined approach, we work under the assumption that the inhomogeneous broadening approximation remains valid. Specifically, surface roughness introduces inhomogeneous broadening of the resonant modes, which influences the resonator's response and compromises its Q-factor. The studied structures are based on GaAs/AlAs DBR superlattices with flat interfaces, finalized with the top layer which contains a rough surface. The periodicity of the superlattice introduces Brillouin zone folding and miniband openings at frequencies \(\Omega_{m}=m\pi v/d\), where \(m\in\mathbb{N}\), and \(d\) and \(v\) represent the unit cell thickness and phonon group velocity, respectively [15; 40; 41]. Throughout our analysis, we consider appropriate thickness variations and roughness ranges, taking into account the typical values observed in practical samples. The material parameters for GaAs, AlAs, mesoporous SiO\({}_{2}\), and VO\({}_{2}\), utilized in our analysis, are drawn from relevant literature sources [27; 35; 42; 43].
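For reference, the location of the first minibands of a quarter-wave GaAs/AlAs unit cell can be checked in a few lines; the layer thicknesses below are illustrative values for a ~19 GHz design, and the effective group velocity of the unit cell is its thickness divided by the total traversal time of the two layers.

```python
v1, v2 = 4730.0, 5660.0          # m/s, representative GaAs and AlAs velocities
f0 = 19e9
d1, d2 = v1/(4*f0), v2/(4*f0)    # quarter-wave thicknesses (~62 nm and ~74 nm)
d = d1 + d2                      # unit cell thickness
v = d / (d1/v1 + d2/v2)          # effective phonon group velocity of the cell
for m in (1, 2, 3):
    # Omega_m = m*pi*v/d, i.e. f_m = m*v/(2d); for quarter-wave layers f_1 = f0
    print(f"Omega_{m}/(2*pi) = {m * v / (2*d) / 1e9:.1f} GHz")
```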
## 3 Influence of Top Layer Thickness on Fabry-Perot and Open Resonators
Our study begins with an analysis of how variations in the top layer thickness impact the response of Fabry-Perot and open resonators. These fluctuations in top layer thickness are common in practical samples and can arise due to processes like oxidation of the top layer upon exposure to the atmosphere [27], or during device fabrication steps such as etching or mask removal [6]. Schematics of the investigated resonators are presented in Figure 1(a,b).
The first structure (Fig. 1(a)) is a conventional FP cavity, incorporating a \(\lambda/2\)-cavity between two GaAs/AlAs DBR superlattices, with \(\lambda\) representing the acoustic wavelength. This configuration results in a high-quality cavity for both photons and phonons due to the nearly identical refractive index contrast and acoustic impedance contrast of GaAs and AlAs [44]. Moreover, the closely matched lattice constants of these materials facilitate the growth of thick, high-quality layers using standard epitaxial techniques, making them ideal for applications involving high Q-factor microcavities.
The second design under investigation is the open-cavity resonator (Fig. 1(b)), formed by a \(3\lambda/2\) low-acoustic-loss spacer cavity on top of a single DBR. In the absence of surface roughness, the free surface in this arrangement acts as a perfect mirror for phonons at the studied frequencies, since they cannot propagate into the air. Consequently, when combined with a high-reflection bottom DBR, this arrangement generates a cavity with a high Q-factor, rendering it an ideal platform for our investigation.
Figure 1: Effects of top layer thickness variation on Fabry-Perot and open-cavity resonators. (a,b) Schematics of (a) Fabry-Perot resonator with a 16-period top DBR and a 20-period bottom DBR and (b) open cavity with 20 DBR periods and \(\lambda/2\) spacer. (c,d) Colormaps of the optical reflectivity as a function of the top layer thickness for (c) Fabry-Perot and (d) open cavity. (e,f) Colormaps of the acoustic displacement as a function of the top layer thickness for (e) Fabry-Perot and (f) open cavity.
We performed a comprehensive study of the parameter space to assess the combined effects of the top layer thickness and DBR periods on the quality factor. In all configurations, we maintain a constant difference of 4 GaAs/AlAs layer pairs between the top and bottom DBRs. This difference ensures a symmetric optical cavity due to the higher refractive index contrast at the DBR/air interface compared to the DBR/substrate interface. Figure 1(c) and (d) present colormaps of the optical reflectivity as a function of the top layer thickness, illustrating the dependence on the top layer thickness for both FP and open cavities, respectively. Figure 1(c) shows a minimal sensitivity of the FP frequency over the whole thickness range except for \(\lambda/2\) which leads to a small frequency shift. In the open cavity (Figure 1(d)), we observe a sharp frequency shift for a small thickness change around \(\lambda/2\), corresponding to a slope of 0.34 THz/nm. Additionally, the transmission is much weaker than the FP case because the optical quality factor is lower in the open cavity due to the missing upper high-reflectivity DBR.
Figures 1(e) and (f) depict colormaps of the acoustic displacement for a cavity with the same DBRs as Fig. 1(c) and (d). Notably, as shown in Fig. 1(e), the FP cavity exhibits a range between approximately 30 and 100 nm in which the mode frequency demonstrates minimal sensitivity to the thickness change, featuring a slope of \(\sim 0.1\) MHz/nm. This linear regime over a broad thickness range results in a frequency shift of the acoustic resonance of less than 10 MHz. Conversely, the open cavity exhibits a strong dependence of the acoustic resonance on the top layer thickness, with the acoustic mode frequency inversely proportional to the layer thickness. Around the midpoint of \(\lambda/2\) this curve exhibits a slope of \(\sim 18\) MHz/nm, significantly larger than the FP cavity case. In this regime, small fluctuations in the thickness of a rough open cavity are expected to induce a pronounced broadening of the acoustic resonance, which we will investigate further in the subsequent section.
Comparing optical and acoustic resonators, we observe two main differences. The first one is the anti-crossing observed in the acoustic Fabry-Perot (FP) resonator at 18.5 GHz (Fig.1(e)), which is not evident in the optical FP (Fig.1(c)). At a thickness of \(\lambda/2\), a second cavity due to reflection at the surface is formed in both cases. However, the acoustic cavity is much stronger, thus lifting the degeneracy of the modes. The second difference concerns the open cavity, where the transmission for the optical mode (Fig.1(d)) is much weaker than for the acoustic mode (Fig.1(f)). These differences arise from the fact that the free surface acts as a perfect mirror for acoustic phonons, whereas this is not the case for photons, as the electromagnetic field can penetrate into the air, creating a much higher quality factor cavity for phonons compared to photons.
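Colormaps such as those in Fig. 1 can be reproduced qualitatively with a standard one-dimensional transfer-matrix calculation. The sketch below is our own illustrative reconstruction rather than the code behind the figures: it propagates the (pressure, particle velocity) field pair through a \(\lambda/4\)-\(3\lambda/4\) GaAs/AlAs stack and evaluates the acoustic reflectivity. The material constants are typical literature values and should be checked against Refs. [42; 43].

```python
import numpy as np

# Approximate longitudinal acoustic parameters (density kg/m^3, sound speed m/s);
# typical literature values, used here for illustration only
MAT = {"GaAs": (5317.0, 4730.0), "AlAs": (3760.0, 5630.0), "air": (1.2, 343.0)}

def layer_matrix(omega, d, rho, v):
    """Characteristic matrix relating (pressure, particle velocity) across one layer."""
    k, z = omega / v, rho * v
    return np.array([[np.cos(k * d), 1j * z * np.sin(k * d)],
                     [1j * np.sin(k * d) / z, np.cos(k * d)]])

def reflectivity(omega, layers, z_in, z_out):
    """|r|^2 of a stack `layers` = [(d, rho, v), ...] between media z_in and z_out."""
    m = np.eye(2, dtype=complex)
    for d, rho, v in layers:                    # first entry = incidence side
        m = m @ layer_matrix(omega, d, rho, v)
    p0, v0 = m @ np.array([1.0, 1.0 / z_out])   # unit transmitted pressure wave
    r = (p0 - z_in * v0) / (p0 + z_in * v0)
    return abs(r) ** 2

# 20-period lambda/4 - 3lambda/4 GaAs/AlAs DBR designed for ~19 GHz
f_design, stack = 19e9, []
for _ in range(20):
    for name, frac in (("GaAs", 0.25), ("AlAs", 0.75)):
        rho, v = MAT[name]
        stack.append((frac * v / f_design, rho, v))   # thickness = frac * wavelength

z_sub = MAT["GaAs"][0] * MAT["GaAs"][1]               # incidence from GaAs substrate
z_air = MAT["air"][0] * MAT["air"][1]
for f in np.linspace(15e9, 23e9, 9):
    print(f"{f / 1e9:5.1f} GHz  R = {reflectivity(2 * np.pi * f, stack, z_sub, z_air):.4f}")
```

Near the design frequency the reflectivity approaches unity, reflecting the nearly perfect acoustic mirror formed by the free surface and the DBR stopband.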
## 4 Influence of Surface Roughness on the Quality Factor of Fabry-Perot and Open Resonators
We now delve into the influence of surface roughness on the Q-factor of both FP and open resonators near their resonant frequencies, considering various DBR configurations. Figure 2(a) focuses on Fabry-Perot resonators, analyzing five configurations with varying numbers of periods in the bottom DBR (ranging from 5 to 25 in 5-bilayer intervals). In all cases, the top DBR comprises 4 periods less than the bottom DBR (1, 6, 11, 16, and 21). Corresponding results for open cavities are shown in Figure 2(b). The two key parameters, roughness and the number of DBR layers, exert a pronounced and predictable influence on the Q-factor.
For structures with flat surfaces (\(\sigma=0\) nm), the Q-factor is solely determined by the number of DBR periods. However, as surface roughness increases, the effectiveness of stacking additional layers significantly wanes. This is due to the upper Q-factor limit imposed by increased fluctuations in resonant frequencies, introduced by surface roughness. Although negligible for applications where ultrahigh Q-factors are not a requirement, this effect might have drastic consequences in specific applications, such as polaritonics and single photon sources [2, 3]. This effect is considerably more prominent in open-cavity resonators than in Fabry-Perot ones, as depicted in Figure 2. In essence, higher roughness dictates a reduced number of DBR periods required to attain the maximum Q-factor in the structure, consequently leading to diminished maximum Q-factor values.
For instance, an open resonator composed of GaAs, with its characteristic low roughness (approximately one monolayer), would benefit from an increased number of DBRs. Specifically, for zero roughness, a GaAs/AlAs acoustic Bragg mirror consisting of 20 periods achieves a Q-factor of 6459, 25 periods yield a Q-factor of 24711, and the Q-factor continues to rise as more layers are added. Conversely, a 1-nm roughness would result in a Q-factor of 476 for 20 DBR layers and 505 for 25 DBR layers or more (the Q-factor saturates above 25 layers), demonstrating a significant reduction of the Q-factor even for such minor roughness. Additionally, when the same resonator possesses a 2 nm roughness on its top layer, or a material with this roughness is deposited on top of it, the Q-factor saturates already at 20 DBR layers and adding more layers fails to increase it, as indicated in Fig. 2(b).
A comparison between the FP and open cavity configurations reveals that the open-cavity Q-factor is more sensitive to surface roughness than that of the FP cavity. Notably, even under relatively high surface roughness conditions (10 nm), the Q-factor of the FP cavity does not saturate, although it decreases to approximately 50% of its maximum value. Overall, our study suggests that factoring in roughness becomes imperative when engineering high-quality photonic or acoustic resonators. Doing so would enable precise Q-factor evaluations for each design and prevent superfluous layer stacking. This, in turn, would improve device design, enhance operational efficiency, and minimize costs. When working with acoustic resonators at higher frequencies, and hence thinner layers, the effect of roughness is extremely critical.
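The saturation behaviour can be rationalised with a crude additive-linewidth model: the roughness-induced frequency spread sets a ceiling \(Q_{\rm rough}\), which combines in parallel with the mirror-limited \(Q_{\rm DBR}\). The sketch below is an order-of-magnitude estimate under this assumption, using the ~19 GHz mode frequency and ~18 MHz/nm slope quoted above; the full calculation in the text differs in detail, so the printed values only approximate those in Fig. 2.

```python
def q_with_roughness(q_dbr, f0, df_dd, sigma, fwhm_factor=2.355):
    """Crude additive-linewidth estimate: 1/Q_tot = 1/Q_DBR + 1/Q_rough.

    Q_rough = f0 / (fwhm_factor * |df/dd| * sigma); fwhm_factor converts the
    rms frequency spread into a Gaussian FWHM. Order-of-magnitude only.
    """
    if sigma == 0.0:
        return q_dbr
    q_rough = f0 / (fwhm_factor * abs(df_dd) * sigma)
    return 1.0 / (1.0 / q_dbr + 1.0 / q_rough)

# Open-cavity numbers quoted in the text: Q = 6459 (20 periods), 24711 (25 periods)
for n_periods, q_dbr in ((20, 6459.0), (25, 24711.0)):
    for sigma_nm in (0.0, 1.0, 2.0):
        q = q_with_roughness(q_dbr, 19e9, 18e6 / 1e-9, sigma_nm * 1e-9)
        print(f"{n_periods} periods, sigma = {sigma_nm:.0f} nm -> Q ~ {q:.0f}")
```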
Regarding the DBR unit cell layers' thicknesses, Fig. 2 presents results for \(\lambda/4-3\lambda/4\) (with respect to the acoustic wavelength \(\lambda\)). It is worth noting that analogous conclusions hold for \(\lambda/4-\lambda/4\) DBRs. Since the overarching findings remain similar, we present here detailed results solely for the former case. We note two differences when the \(\lambda/4-\lambda/4\) unit cell is used: first, the stop band becomes twice as broad, and second, the mode's resonant frequency exhibits an increased dependency on the top layer thickness.
## 5 Influence of Surface Roughness on Resonators Responsive to External Stimuli
We now proceed to analyze the effects of roughness on the responsivity of GaAs/AlAs DBR-based resonators when an additional top layer of a different material, responsive to external stimuli, is introduced. We focus solely on open-cavity resonators here, due to their anticipated exceptional responsivity to external changes. We simulate resonators with one of two spacer materials as the top layer: mesoporous SiO\({}_{2}\) or VO\({}_{2}\). Our analysis encompasses three different roughnesses of 0 nm, 2 nm, and 5 nm, and two different phases for each material: 0 and 100% relative humidity for the mesoporous cavity, and the monoclinic and tetragonal phases for the VO\({}_{2}\) cavity. The selected roughness values of 2-5 nm were based on minimal (optimal) realistic values achievable for optimized SiO\({}_{2}\) and VO\({}_{2}\) materials, while the 0 nm case serves as a reference point. In all these cases, the open-cavity DBR consists of 15 GaAs/AlAs pairs. This choice strikes a balance between structure complexity and performance based on the outcomes of Figure 2, for the considered roughness values. Specifically, for a 5-nm roughness, stacking additional DBR layers would have a minimal impact on the Q-factor, thereby needlessly increasing the complexity of the structure. Conversely, employing fewer layers substantially diminishes the Q-factor.
We simulate the humidity in the mesoporous material by considering a weighted average for the material density and speed of sound, taken from a combination of dense SiO\({}_{2}\), air, and water. As for VO\({}_{2}\), the acoustic properties in its two phases are drawn from Reference [35]. Central findings are illustrated in Figure 3, displaying the phonon displacement spectrum of the simulated structures. Figure 3(a) focuses on the mesoporous material, comparing its response to 0 % and 100 % relative humidity, for three different surface roughness values. As demonstrated, the cavity's response to humidity change is evident, with peak amplitude reduction and frequency shift as humidity rises [31, 45]. However, increased roughness leads to broader peaks, potentially making frequency shifts harder to resolve. For a 5 nm roughness, the largest considered, the shift remains resolvable.
Figure 2: Effects of roughness on the Q-factor of (a) a \(\lambda/2\) cavity Fabry-Perot resonator, and (b) a \(3\lambda/2\) open-cavity resonator for various DBR configurations. In both panels, the bottom DBR varies from 5 to 25 bilayers in intervals of 5, while in (a) the top DBR of the Fabry-Perot comprises 4 bilayers less than the bottom one in all cases.
Nonetheless, the trend suggests that roughness exceeding 5 nm, realistic for non-optimized SiO\({}_{2}\) samples, or smaller humidity changes could render the system response insufficient.
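The weighted-average description of the humid mesoporous spacer can be sketched as a simple effective-medium rule. In the snippet below, the porosity value (50%), the linear mixing of density and sound speed, and the quoted material constants are our own illustrative assumptions; the paper's exact mixing rule and parameters should be taken from the literature sources cited in Sec. 2.

```python
def mesoporous_properties(humidity, porosity=0.5):
    """Weighted-average density and sound speed for mesoporous SiO2.

    Pores (volume fraction `porosity`) are filled with a water/air mixture set
    by the relative humidity in [0, 1]. Porosity and the linear mixing rule are
    illustrative assumptions, not the authors' exact model.
    """
    # (density kg/m^3, sound speed m/s) - typical literature values
    dense_sio2 = (2200.0, 5900.0)
    water = (1000.0, 1480.0)
    air = (1.2, 343.0)

    fractions = {dense_sio2: 1.0 - porosity,
                 water: porosity * humidity,
                 air: porosity * (1.0 - humidity)}
    rho = sum(frac * mat[0] for mat, frac in fractions.items())
    v = sum(frac * mat[1] for mat, frac in fractions.items())
    return rho, v

for h in (0.0, 0.5, 1.0):
    rho, v = mesoporous_properties(h)
    print(f"RH = {h:4.0%}: rho = {rho:7.1f} kg/m^3, v = {v:6.1f} m/s, Z = {rho * v:.3e}")
```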
Figure 3(b) reveals changes in the acoustic displacement spectra of VO\({}_{2}\) across its two phases, given the same three surface roughness values. Results show that minor roughness, around 2 nm or less, yields well-defined peaks, making phase changes clearly resolvable. Conversely, roughness around 5 nm leads to larger broadening, affecting the resolution of frequency shifts. The broadening here is especially pronounced compared to the mesoporous case. This is due to the speed of sound being considerably larger in VO\({}_{2}\) compared to the mesoporous material. This means that a similar thickness change causes a larger frequency shift in VO\({}_{2}\), thus enhancing the roughness-induced inhomogeneous broadening. These outcomes indicate that surface roughness considerably impacts Q-factors of GaAs/AlAs DBR open cavities utilized for sensing. Surface roughness reduces the system's ability to resolve external changes, even for modest roughness levels, with similar or larger values expected in realistic samples. The subsequent section will explore methods to mitigate roughness impact and enhance device sensitivity.
## 6 Strategy to Enhance the Q-Factor
As we have seen in the preceding sections, surface roughness can significantly reduce the quality factor of the cavity, and consequently its sensitivity to external changes. Although Section 5 showed that open-cavity resonators with rough surfaces can still resolve frequency changes induced by external stimuli, the effective reduction in Q-factor compared to an ideal no-roughness scenario remains substantial. It would therefore be highly desirable to minimize the roughness as much as possible in practical structures, especially in those designed for sensing applications [31]. The "flattening" of rough surfaces can be achieved in various ways, most commonly either by polishing the surface or by depositing additional composite thin films. The latter is usually a much simpler approach, and it is preferred when adding such a layer does not have a negative effect on the other properties of the sample, while the former is preferred otherwise.
The polishing method removes the top rough part of the material by either rubbing it or by applying a chemical treatment resulting in a smooth surface, while the planarization method deposits a polymer resist on top of the sample. As the polymer composite is spin-coated on the sample in liquid form, it tends to even out the surface roughness before it is cured into a solid polymer with a smooth surface. We have studied the effect of such roughness-reduction methods on our structure, and the results are summarized in Figure 4. To account for both methods, we have simulated two similar structures that both have an additional layer placed on top. We simulate the planarization method as follows: first, we consider a rough layer of 243.24 nm of monoclinic VO\({}_{2}\). Then, we add a 20-nm-thick (\(d_{2}\)) layer of a virtual material. The roughness of the two top layers is complementary in shape in such a way that the total thickness \(D\) remains constant at 263.24 nm, without any surface roughness. The added virtual material has the same elastic parameters as the monoclinic VO\({}_{2}\), as shown in the left schematic of Fig. 4.
Figure 3: Acoustic displacement spectrum of the structure containing 15 GaAs/AlAs DBR periods and a top layer sensitive to external stimuli. The top layers considered are (a) mesoporous SiO\({}_{2}\) at 0 % and 100 % relative humidity, and (b) VO\({}_{2}\) at the monoclinic and tetragonal phases. A roughness of \(\sigma=0\), 2, and 5 nm is considered in both cases.
The acoustic resonance corresponds to a monoclinic-VO\({}_{2}\)\(\lambda\)/2 cavity. In the second case, we assume the same structure as the previous case, but with the bottom layer in the tetragonal VO\({}_{2}\) phase (different mass density and speed of sound), as illustrated in the right schematic of Fig. 4.
Figure 4 illustrates the results for both material combinations, for different values of original roughness (i.e. before adding a planarization layer). Our original structures consisted of 20 GaAs/AlAs DBR periods with a thin VO\({}_{2}\) layer on top. In the case of 0-nm roughness, both peaks have relatively high Q-factors. The structure with bottom and top layers with VO\({}_{2}\) in the monoclinic phase is entirely insensitive to roughness as both layers have the same acoustic properties. For the other structure, as the simulated roughness progressively increases to 5 nm and 20 nm, the quality factor decreases. Even though the top surface is flat, there is still an effective roughness remaining in the second structure, in-between the last two layers, coming from the fact that the initial VO\({}_{2}\) layer was rough and the second layer had different properties. However, there is still a clear improvement compared to the case when the additional layer is not deposited (see Fig. 3). The acoustic mode associated with the monoclinic phase presents an improved quality factor due to the absence of surface roughness. In the case of the tetragonal phase, the impedance matching between the air and the two layers also increases the quality factor of the resonator with respect to the case where the roughness is directly present at the surface of the device.
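Why the property-matched structure is immune to roughness can be seen from the acoustic phase accumulated in the two top layers, \(\phi=\omega(d_{1}/v_{1}+d_{2}/v_{2})\) with \(d_{1}+d_{2}=D\) fixed by planarization: for identical sound speeds the phase is independent of how the buried interface partitions the thickness. The Monte Carlo sketch below illustrates this; the VO\({}_{2}\) sound speeds are rough placeholder values, not the ones used in the paper.

```python
import numpy as np

def phase_spread(v1, v2, d1_mean, total, sigma, omega, n=100000, seed=1):
    """Std of the acoustic phase through two stacked layers whose interface is
    rough but whose total thickness `total` is fixed (planarized top surface)."""
    rng = np.random.default_rng(seed)
    d1 = d1_mean + rng.normal(0.0, sigma, n)   # rough buried interface
    d2 = total - d1                            # flat top surface
    return (omega * (d1 / v1 + d2 / v2)).std()

omega = 2 * np.pi * 19e9            # ~19 GHz mode
v_mono, v_tetra = 4000.0, 4400.0    # placeholder VO2 sound speeds (two phases)
for sigma_nm in (5.0, 20.0):
    same = phase_spread(v_mono, v_mono, 243.24e-9, 263.24e-9, sigma_nm * 1e-9, omega)
    diff = phase_spread(v_tetra, v_mono, 243.24e-9, 263.24e-9, sigma_nm * 1e-9, omega)
    print(f"sigma = {sigma_nm:4.1f} nm: matched layers {same:.2e} rad, "
          f"mismatched {diff:.2e} rad")
```

The matched case produces essentially zero phase spread (only floating-point noise), while the mismatched case retains a residual spread that grows with roughness, mirroring the behaviour of the two structures in Fig. 4.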
## 7 Conclusions
In this study, we conducted an analysis of the impact of top layer thickness and surface roughness on the quality factor and performance of acoustic and optical resonators based on GaAs/AlAs DBRs. Our findings reveal that thickness inaccuracies appreciably influence the mode frequency, while even relatively small surface roughness, on the order of a few nanometers, significantly influences the quality factor. Consequently, these effects can have a substantial impact on the device performance of DBR microcavities. This important aspect is often overlooked in the design process, making it essential to consider surface roughness for optimal performance. Notably, our developed model allows us to determine the maximum achievable quality factor and the minimum number of DBR layers required to achieve it, for different surface roughness values. By doing so, we can eliminate the need for growing unnecessary additional layers that do not contribute effectively, leading to more efficient and cost-effective designs.
We observed that surface roughness affects acoustic cavities more prominently than their optical counterparts, although both are significantly influenced. Additionally, open-cavity designs are much more sensitive to roughness compared to FP cavities, which is expected due to the closer proximity of roughness to the cavity region, leading to a more pronounced interaction. Moreover, we explored practical case studies in which such microcavities were designed to serve as sensors. Our investigations provided valuable insights as to how surface roughness impacts these applications, and enabled us to assess strategies for mitigating its effects. We believe that the insights obtained in this study have broad implications for the design of efficient optoelectronic, photonic, and acoustic devices.

Figure 4: Acoustic displacement spectrum of an open cavity with 20 GaAs/AlAs DBR periods, finalized with a rough VO\({}_{2}\) and a flat capping layer containing the elastic properties of VO\({}_{2}\) in the monoclinic phase. The black peak corresponds to the displacement spectrum of monoclinic VO\({}_{2}\), whereas the red, blue, and green peaks, to the tetragonal VO\({}_{2}\) with 0-nm, 5-nm, and 20-nm roughness, respectively.
## 8 Acknowledgments
The authors acknowledge funding from European Research Council Consolidator Grant No. 101045089 (T-Recs). This work was supported by the European Commission in the form of the H2020 FET Proactive project No. 824140 (TOCHA).
|
2309.14126 | Non-equilibrium steady states of electrolyte interfaces | The non-equilibrium steady states of a semi-infinite quasi-one-dimensional
univalent binary electrolyte solution, characterised by non-vanishing electric
currents, are investigated by means of Poisson-Nernst-Planck (PNP) theory.
Exact analytical expressions of the electric field, the charge density and the
number density are derived, which depend on the electric current density as a
parameter. From a non-equilibrium version of the Grahame equation, which
relates the total space charge per cross-sectional area and the corresponding
contribution of the electric potential drop, the current-dependent differential
capacitance of the diffuse layer is derived. In the limit of vanishing electric
current these results reduce to those within Gouy-Chapman theory. It is shown
that improperly chosen boundary conditions lead to non-equilibrium steady state
solutions of the PNP equations with negative ion number densities. A necessary
and sufficient criterion on surface conductivity constitutive relations is
formulated which allows one to detect such unphysical solutions. | Markus Bier | 2023-09-25T13:29:09Z | http://arxiv.org/abs/2309.14126v2 | # Non-equilibrium steady states of electrolyte interfaces
###### Abstract
The non-equilibrium steady states of a semi-infinite quasi-one-dimensional univalent binary electrolyte solution, characterised by non-vanishing electric currents, are investigated by means of Poisson-Nernst-Planck (PNP) theory. Exact analytical expressions of the electric field, the charge density and the number density are derived, which depend on the electric current density as a parameter. From a non-equilibrium version of the Grahame equation, which relates the total space charge per cross-sectional area and the corresponding contribution of the electric potential drop, the current-dependent differential capacitance of the diffuse layer is derived. In the limit of vanishing electric current these results reduce to those within Gouy-Chapman theory. It is shown that improperly chosen boundary conditions lead to non-equilibrium steady state solutions of the PNP equations with negative ion number densities. A necessary and sufficient criterion on surface conductivity constitutive relations is formulated which allows one to detect such unphysical solutions.
Poisson-Nernst-Planck theory; non-equilibrium steady state; electrolyte interface; Gouy-Chapman model
## I Introduction
The dynamics of ions in external electric fields determines the properties of numerous important natural and technological processes such as the formation of a membrane potential in biological cells via ion channels [1; 2; 3; 4; 5; 6; 7], the charging and discharging of batteries by charge transfer reactions at electrodes [8; 9], the motion of colloids exploiting electrokinetic effects of ionic solvent components [10; 11; 12] as well as the suppression of electric currents by insulation fluids in high electric fields [13; 14; 15; 16].
Poisson-Nernst-Planck (PNP) theory [17; 18; 10] is an established and widely used theoretical framework in order to address these questions by considering the distributions of the electric field, the charge density, the ion number densities, etc. However, in studies of ion channels, highly concentrated battery electrolytes and colloid migration, steric effects of ions have been noted to be relevant [19; 20; 21], so that extensions of the PNP theory are currently under investigation [22; 23].
In contrast, ideal insulation fluids for high voltage applications would be void of ions, and carefully prepared real insulation fluids contain only a few. Hence ionic steric effects should be weak in this type of system, such that PNP theory can be considered a well justified starting point [24]. However, in order to sustain the insulation property the ion concentration has to stay low for long times. Hence, understanding the mechanisms of charge generation in insulation fluids is an important topic which has been under investigation for many decades [15; 25; 26; 27; 28; 29].
Unlike for nano-sized ion channels or colloids, the functioning of insulation fluids requires a truly macroscopic spatial extension on length scales of the order of \(1\,\mathrm{cm}\) and above. Hence a clear separation of length scales occurs: Within distances of molecular size from (typically metallic) surfaces electrochemical processes can occur, which provide the most important sources for ion generation, as electric fields are strongest there. Further away from the surfaces an extended bulk fluid with smooth distributions of the electric field and ions is present. The smoothness of these distributions allows for a description in terms of (partial) differential equations and the smallness of the ion number densities justifies the applicability of the PNP equations (see Sec. II.2 below). Moreover, the macroscopic character of high-voltage insulation systems quite naturally motivates the study of electric currents in terms of a simple one-dimensional model comprising a planar electrode in contact with a semi-infinite electrolyte solution, similar to the model considered by Gouy and Chapman [30; 31; 32; 10; 33; 11] more than a century ago for conditions of thermodynamic equilibrium (see Fig. 1).

Figure 1: A semi-infinite electrolyte solution of univalent cations (positive ions, red) and univalent anions (negative ions, green) dispersed in a dielectric continuum (solvent, yellow) is in contact with a planar electrode (blue). Charge transfer processes at the electrode surface and a non-vanishing electric field \(E_{\mathrm{bulk}}\) in the bulk give rise to a non-equilibrium steady state with a non-vanishing, spatially uniform charge current density \(j_{Q}\) (violet arrow).
Surprisingly, such a simple _semi-infinite_ Gouy-Chapman model for the diffuse layer out of thermodynamic equilibrium has apparently _not_ yet been studied analytically, whereas analytical solutions have well been obtained for the mathematically more complicated situation of _finite_ one-dimensional electrolytes, involving Jacobi elliptic functions, perturbation expansions or simplifying approximations [34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46].
It is the purpose of the present work to analytically solve the PNP equations (Sec. II.2) for non-equilibrium steady states of a semi-infinite quasi-one-dimensional univalent binary electrolyte solution (Sec. II.1). It turns out that the solutions can be represented in terms of elementary functions and the expressions are of similar complexity as the Gouy-Chapman results for thermodynamic equilibrium (Sec. II.3). Moreover, a generalisation of the Grahame equation, which for thermodynamic equilibrium expresses the surface charge density in terms of the surface potential [33; 10; 11], to non-equilibrium steady states is derived (Sec. II.4). This allows one to discuss the dependence of the distributions of the electric field as well as of the charge and ion number density on the electric current (Sec. III.1). Moreover, it is shown that the profiles as well as the Grahame relation and the differential capacitance of the space charge region reduce to the well-known Gouy-Chapman results for vanishing electric current (Sec. III.2). A novel feature of the considered semi-infinite model, which is shown to occur only out of equilibrium, is the occurrence of solutions of the PNP equations with negative ion number densities (Sec. III.3). This leads to the conclusion that particular care is required in choosing appropriate boundary conditions in order to avoid such unphysical solutions (Sec. IV).
## II Model and Formalism
### Setup
Consider a semi-infinite univalent binary electrolyte solution, which is bounded by a single planar electrode (see Fig. 1). The solvent is described as a dielectric continuum of temperature \(T\) and dielectric constant \(\varepsilon\). Cations (positive ions, valency \(Z_{+}=1\), diffusion constant \(D_{+}\), number density \(\varrho_{+}\)) and anions (negative ions, valency \(Z_{-}=-1\), diffusion constant \(D_{-}\), number density \(\varrho_{-}\)) migrate in an electric field \(E\), which is oriented in normal direction of the electrode. Due to the planar symmetry, the electric field \(E(x)\) and the number densities \(\varrho_{\pm}(x)\) are functions of the distance \(x\geq 0\) from the electrode, but they are independent of the lateral position. Charge transfer processes close to the electrode surface allow for an electric current to occur in the system.
It is assumed that a sufficiently long waiting time has elapsed such that the system has attained a steady state in which the electric field \(E\) and the number densities \(\varrho_{\pm}\) are time-independent. Moreover, at large distances \(x\to\infty\) from the electrode these quantities are assumed to approach the constant bulk limits \(E(x)\to E_{\rm bulk}\) and \(\varrho_{\pm}(x)\to\varrho_{\rm bulk}/2\). Consequently, for \(E_{\rm bulk}=0\) the system is in thermodynamic equilibrium, whereas for \(E_{\rm bulk}\neq 0\) it is in a non-equilibrium steady state. In a non-equilibrium steady state a non-vanishing, spatially uniform electric current density \(j_{Q}\neq 0\) is present, which requires concomitant charge transfer processes to occur at the electrode surface.
### Governing equations
In order to quantify the steady states of the system described in the previous Sec. II.1 the governing equations are derived within PNP theory in the following.
Given ion number densities \(\varrho_{\pm}(x)\) the _total ion number density_
\[\varrho(x)=\varrho_{+}(x)+\varrho_{-}(x) \tag{1}\]
and the _charge density_
\[q(x)=e\big{(}\varrho_{+}(x)-\varrho_{-}(x)\big{)} \tag{2}\]
with the elementary charge \(e\) are defined. At large distances \(x\to\infty\) from the electrode these quantities approach the limits \(\varrho(x)\to\varrho_{\rm bulk}\) and \(q(x)\to 0\).
By means of Gauss' law [47] the derivative \(E^{\prime}(x)\) of the electric field is related to the charge density \(q(x)\):
\[\varepsilon_{0}\varepsilon E^{\prime}(x)=q(x), \tag{3}\]
where \(\varepsilon_{0}\) is the vacuum permittivity. This shows that approaching a constant bulk electric field \(E(x)\to E_{\rm bulk}\) in the limit \(x\to\infty\) implies local charge neutrality \(q(x)\to 0\) in the bulk.
The Nernst-Planck equations [10; 17; 18]
\[j_{\pm}(x)=D_{\pm}\big{(}-\varrho_{\pm}^{\prime}(x)\pm\beta e\varrho_{\pm}(x )E(x)\big{)}, \tag{4}\]
where \(1/\beta=k_{B}T\) with the Boltzmann constant \(k_{B}\) denotes the thermal energy, describe the current densities of ions due to diffusion in a density gradient and drift in the electric field. Note that no advection contribution occurs in the present quiescent solvent.
In general the time dependence of the ion number densities \(\varrho_{\pm}(x,t)\) is given by the continuity equations
\[\dot{\varrho}_{\pm}(x,t)=-j_{\pm}^{\prime}(x,t), \tag{5}\]
which describe conservation of the number of ions. However, a steady state is time-independent (\(\dot{\varrho}_{\pm}=0\)) such that the current densities \(j_{\pm}(x,t)\) are spatially uniform and time-independent:
\[0=j_{\pm}^{\prime}(x,t)\qquad\Leftrightarrow\qquad j_{\pm}(x,t)={\rm const}=j_{ \pm}. \tag{6}\]
The same is true for the _total number current density_
\[j_{N}=j_{+}+j_{-} \tag{7}\]
and the _charge current density_
\[j_{Q}=e\big{(}j_{+}-j_{-}\big{)}. \tag{8}\]
In order to simplify the later calculations the following _reduced current densities_ are introduced:
\[J_{N}=\frac{j_{+}}{D_{+}}+\frac{j_{-}}{D_{-}}=-\varrho^{\prime}(x)+\beta q(x)E(x) \tag{9}\] \[J_{Q}=e\Big{(}\frac{j_{+}}{D_{+}}-\frac{j_{-}}{D_{-}}\Big{)}=-q^{\prime}(x)+\beta e^{2}\varrho(x)E(x). \tag{10}\]
As \(\varrho^{\prime}(x)\to 0\) (due to \(\varrho(x)\to\varrho_{\rm bulk}\)), \(q(x)\to 0\) and \(E(x)\to E_{\rm bulk}\) for \(x\to\infty\), Eq. (9) implies \(J_{N}=0\) and hence
\[\frac{j_{+}}{D_{+}}=-\frac{j_{-}}{D_{-}}. \tag{11}\]
Consequently
\[j_{Q}=e\Big{(}1+\frac{D_{-}}{D_{+}}\Big{)}j_{+}=-e\Big{(}\frac{D_{+}}{D_{-}}+1 \Big{)}j_{-}, \tag{12}\]
which allows one to express the ionic current densities \(j_{\pm}\) in terms of the charge current density \(j_{Q}\). This leads to the relation
\[J_{Q}=\frac{ej_{+}}{D_{+}}-\frac{ej_{-}}{D_{-}}=\frac{2}{D_{+}+D_{-}}j_{Q}= \frac{j_{Q}}{D} \tag{13}\]
between the charge current density \(j_{Q}\) and the reduced charge current density \(J_{Q}\) with the _average diffusion constant_
\[D=\frac{D_{+}+D_{-}}{2}. \tag{14}\]
Similarly, as \(q^{\prime}(x)\to 0\) (due to \(q(x)\to 0\)), \(\varrho(x)\to\varrho_{\rm bulk}\) and \(E(x)\to E_{\rm bulk}\) for \(x\to\infty\), Eq. (10) leads to
\[J_{Q}=\beta e^{2}\varrho_{\rm bulk}E_{\rm bulk}. \tag{15}\]
By using Eq. (13) one infers
\[j_{Q}=D\beta e^{2}\varrho_{\rm bulk}E_{\rm bulk}=S_{\rm bulk}E_{\rm bulk} \tag{16}\]
with the _bulk conductivity_
\[S_{\rm bulk}=D\beta e^{2}\varrho_{\rm bulk}. \tag{17}\]
### Analytical solution
In the following the PNP equations (3)-(5) of the considered system are solved analytically.
In a first step inserting Eq. (3) in Eq. (9) yields
\[0 =-\varrho^{\prime}(x)+\beta\varepsilon_{0}\varepsilon E^{\prime}( x)E(x)\] \[=\Big{(}-\varrho(x)+\frac{\beta\varepsilon_{0}\varepsilon}{2}E(x )^{2}\Big{)}^{\prime}, \tag{18}\]
which implies
\[-\varrho(x)+\frac{\beta\varepsilon_{0}\varepsilon}{2}E(x)^{2}={\rm const}. \tag{19}\]
Evaluation of the constant by taking the limit \(x\to\infty\) leads to
\[\varrho(x)=\varrho_{\rm bulk}+\frac{\beta\varepsilon_{0}\varepsilon}{2} \big{(}E(x)^{2}-E_{\rm bulk}^{2}\big{)}. \tag{20}\]
By inserting Eqs. (3) and (20) in Eq. (10) one obtains the inhomogeneous non-linear ordinary differential equation for the electric field \(E(x)\)
\[\varepsilon_{0}\varepsilon E^{\prime\prime}(x) =\beta e^{2}\Big{(}\varrho_{\rm bulk}-\frac{\beta\varepsilon_{0} \varepsilon}{2}E_{\rm bulk}^{2}\Big{)}E(x)\] \[\quad+\frac{\beta^{2}e^{2}\varepsilon_{0}\varepsilon}{2}E(x)^{3 }-J_{Q}. \tag{21}\]
This equation resembles Eq. (A.9) of Ref. [48] and Eq. (48) of Ref. [35].
Introducing the excess electric field \(\Delta E(x)=E(x)-E_{\rm bulk}\) transforms the differential equation (21) to the homogeneous differential equation
\[\Delta E^{\prime\prime}(x) =\frac{\beta e^{2}}{\varepsilon_{0}\varepsilon}\big{(}\varrho_{ \rm bulk}+\beta\varepsilon_{0}\varepsilon E_{\rm bulk}^{2}\big{)}\Delta E(x) \tag{22}\] \[\quad+\frac{3}{2}\beta^{2}e^{2}E_{\rm bulk}\Delta E(x)^{2}+\frac{ 1}{2}\beta^{2}e^{2}\Delta E(x)^{3}.\]
With the Debye length \(1/\kappa\) defined by \(\kappa^{2}=4\pi\ell_{B}\varrho_{\rm bulk}\), where \(\ell_{B}=\beta e^{2}/(4\pi\varepsilon_{0}\varepsilon)\) is the Bjerrum length [10; 11; 49; 50], one can introduce the dimensionless _electric flux_ parameter
\[\eta=\frac{\beta ej_{Q}}{\kappa S_{\rm bulk}}=\frac{\beta eE_{\rm bulk}}{\kappa}, \tag{23}\]
which quantifies the deviation of a steady state from thermodynamic equilibrium (\(\eta=0\)) in terms of the charge current density \(j_{Q}\). The values \(\eta=\pm 1\) correspond to a charge current density \(j_{Q}=\pm S_{\rm bulk}\kappa/(\beta e)\) generated by a bulk electric field \(E_{\rm bulk}=\pm\kappa/(\beta e)\). The notion of electric flux \(\eta\) allows one to rewrite Eq. (22) in the form
\[\Delta E^{\prime\prime}(x) =\kappa^{2}(1+\eta^{2})\Delta E(x) \tag{24}\] \[\quad+\frac{3}{2}\beta e\kappa\eta\Delta E(x)^{2}+\frac{1}{2} \beta^{2}e^{2}\Delta E(x)^{3}.\]
Finally the transformation
\[y=\kappa x\sqrt{1+\eta^{2}},\qquad\widehat{E}(y)=\frac{\beta e\Delta E(x)}{\kappa} \tag{25}\]
is used to rewrite Eq. (24) in the form
\[\widehat{E}^{\prime\prime}(y)=\widehat{E}(y)+\frac{3\eta}{2(1+\eta^{2})} \widehat{E}(y)^{2}+\frac{1}{2(1+\eta^{2})}\widehat{E}(y)^{3}. \tag{26}\]
A first integral of Eq. (26) is found by multiplication with \(\widehat{E}^{\prime}(y)\) and the integration constant is fixed by using \(\widehat{E}^{\prime}(y)\to 0\) and \(\widehat{E}(y)\to 0\) for \(y\to\infty\):
\[\widehat{E}^{\prime}(y)^{2}=\widehat{E}(y)^{2}\Big{(}1+\frac{\eta}{1+\eta^{2}} \widehat{E}(y)+\frac{1}{4(1+\eta^{2})}\widehat{E}(y)^{2}\Big{)}. \tag{27}\]
It can be shown, that the expression inside the parentheses is bounded from below by \(1/(1+\eta^{2})\) for any value of \(\widehat{E}(y)\), hence
\[|\widehat{E}^{\prime}(y)|\geq|\widehat{E}(y)|/\sqrt{1+\eta^{2}}\qquad\mbox{ for all }y\geq 0. \tag{28}\]
Moreover, from Eq. (26) one infers that for \(y\to\infty\) the asymptotic behaviour of \(\widehat{E}(y)\) is monotonic, i.e. constant or exponential, but not oscillatory. Consequently, due to the limit \(\widehat{E}(y)\to 0\) for \(y\to\infty\), only three cases can occur: \(\widehat{E}(y)\) for \(y\geq 0\) is either (i) constantly zero (\(\widehat{E}(y)=0\)) or (ii) positive (\(\widehat{E}(y)>0\)) and strictly monotonically decreasing (\(\widehat{E}^{\prime}(y)<0\)) or (iii) negative (\(\widehat{E}(y)<0\)) and strictly monotonically increasing (\(\widehat{E}^{\prime}(y)>0\)). Otherwise if, say, \(\widehat{E}(y)\) approaches the limit \(\widehat{E}(y)\to 0\) for \(y\to\infty\) from above, but \(\widehat{E}(y)\) was not monotonically decreasing for all \(y\geq 0\), there would be a local maximum at some position \(y=y^{*}\) with \(\widehat{E}^{\prime}(y^{*})=0\) but \(\widehat{E}(y^{*})>0\), which contradicts Eq. (28). A similar contradiction arises from the assumption of a non-monotonically increasing behaviour when approaching the bulk limit from below.
In the above cases (ii) and (iii) \(\widehat{E}^{\prime}(y)\) and \(\widehat{E}(y)\) have opposite sign, so that Eq. (27) leads to
\[\widehat{E}^{\prime}(y)=-\widehat{E}(y)\sqrt{1+\frac{\eta}{1+\eta^{2}} \widehat{E}(y)+\frac{1}{4(1+\eta^{2})}\widehat{E}(y)^{2}}, \tag{29}\]
which relates the value \(\widehat{E}(y)\) and the derivative \(\widehat{E}^{\prime}(y)\). This equation is obviously true also for case (i).
For cases (ii) and (iii) separation of variables leads to the solution [51]
\[\widehat{E}(y)=\frac{2(1+\eta^{2})\widehat{E}_{0}}{\sinh(y)\sqrt{\widehat{E}_ {0}^{2}+\big{(}2(1+\eta^{2})+\widehat{E}_{0}\eta\big{)}^{2}}+\cosh(y)\big{(}2 (1+\eta^{2})+\widehat{E}_{0}\eta\big{)}-\widehat{E}_{0}\eta} \tag{30}\]
with the value \(\widehat{E}(0)=\widehat{E}_{0}\) at the electrode surface \(y=0\) playing the role of the integration constant. The solution \(\widehat{E}(y)\equiv 0\) of case (i) is obtained from Eq. (30) with \(\widehat{E}_{0}=0\).
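As a consistency check, the closed-form profile Eq. (30) can be evaluated numerically and tested against the first integral Eq. (27). The short Python sketch below does this on a grid; it is a verification aid written for this text, and the residual it reports is limited by the finite-difference derivative, not by Eq. (30).

```python
import numpy as np

def e_hat(y, e0, eta):
    """Reduced excess electric field of Eq. (30); e0 = E-hat at the electrode."""
    a = 2.0 * (1.0 + eta**2)
    root = np.sqrt(e0**2 + (a + e0 * eta) ** 2)
    return a * e0 / (np.sinh(y) * root + np.cosh(y) * (a + e0 * eta) - e0 * eta)

# Check the first integral Eq. (27):
# E'^2 = E^2 (1 + eta*E/(1+eta^2) + E^2/(4*(1+eta^2)))
y = np.linspace(0.0, 8.0, 20001)
for e0, eta in ((10.0, 0.0), (10.0, 2.0), (-3.0, 5.0)):
    e = e_hat(y, e0, eta)
    lhs = np.gradient(e, y) ** 2            # finite-difference derivative squared
    rhs = e**2 * (1 + eta * e / (1 + eta**2) + e**2 / (4 * (1 + eta**2)))
    rel = np.max(np.abs(lhs - rhs) / (1.0 + rhs))
    print(f"E0 = {e0:5.1f}, eta = {eta:3.1f}: max relative residual = {rel:.1e}")
```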
### Grahame equation
An important result in the thermodynamic equilibrium theory of electrolyte interfaces is an expression for the surface charge density in terms of the surface potential, which is commonly referred to as _Grahame equation_[10; 11; 33]. Here an analogous expression is derived for steady state conditions.
Integration of the excess electric field \(\Delta E(x)\) leads to the _excess voltage_
\[\Delta U=\int\limits_{0}^{\infty}\mathrm{d}x\,\Delta E(x), \tag{31}\]
which measures deviations from local charge neutrality expressed by the charge density \(q(x)=\varepsilon_{0}\varepsilon\Delta E^{\prime}(x)\). It adds to the voltage required to sustain the bulk electric field \(E_{\mathrm{bulk}}\). Using Eqs. (25) and (29) one obtains [51]
\[\beta e\Delta U =\frac{1}{\sqrt{1+\eta^{2}}}\int\limits_{0}^{\infty}\mathrm{d}x \,\kappa\sqrt{1+\eta^{2}}\ \frac{\beta e\Delta E(x)}{\kappa}\] \[=\frac{1}{\sqrt{1+\eta^{2}}}\int\limits_{0}^{\infty}\mathrm{d}y \,\widehat{E}(y)\] \[=-\frac{1}{\sqrt{1+\eta^{2}}}\int\limits_{0}^{\infty}\mathrm{d}y \,\frac{\widehat{E}^{\prime}(y)}{\sqrt{1+\frac{\eta\widehat{E}(y)}{1+\eta^{2} }+\frac{\widehat{E}(y)^{2}}{4(1+\eta^{2})}}}\] \[=2\Big{(}\operatorname{arsinh}\Big{(}\frac{1}{2}\widehat{E}(0)+ \eta\Big{)}-\operatorname{arsinh}(\eta)\Big{)}. \tag{32}\]
Using Eq. (3) the total charge per cross-sectional area of the diffuse layer can be calculated by
\[\int\limits_{0}^{\infty}\mathrm{d}x\,q(x)=\varepsilon_{0}\varepsilon\int \limits_{0}^{\infty}\mathrm{d}x\,\Delta E^{\prime}(x)=-\varepsilon_{0} \varepsilon\Delta E(0). \tag{33}\]
This total charge per cross-sectional area of the space charge region is balanced by an _excess surface charge density_\(\Delta\sigma\) at the electrode surface, which has the same magnitude but the opposite sign: \(\Delta\sigma=\varepsilon_{0}\varepsilon\Delta E(0)\). It
expresses the excess of the _total surface charge density_
\[\sigma=\sigma_{\rm bulk}+\Delta\sigma \tag{34}\]
compared to the surface charge density \(\sigma_{\rm bulk}=\varepsilon_{0}\varepsilon E_{\rm bulk}\), which generates the bulk electric field \(E_{\rm bulk}\). By introducing the _saturation charge density_[52]
\[\sigma_{\rm sat}=\frac{e\kappa}{\pi\ell_{B}}=4\varepsilon_{0}\varepsilon \frac{\kappa}{\beta e} \tag{35}\]
one obtains the relation
\[\widehat{E}(0)=\frac{\beta e\Delta E(0)}{\kappa}=4\frac{\Delta\sigma}{\sigma_ {\rm sat}} \tag{36}\]
by means of which Eq. (32) can be rewritten as
\[\beta e\Delta U=2\Big{(}\operatorname{arsinh}\Big{(}2\frac{\Delta\sigma}{ \sigma_{\rm sat}}+\eta\Big{)}-\operatorname{arsinh}(\eta)\Big{)}. \tag{37}\]
Solving Eq. (37) for the excess surface charge density \(\Delta\sigma\) one obtains the _Grahame equation_
\[\Delta\sigma=\frac{\sigma_{\rm sat}}{2}\Big{(}\sqrt{1+\eta^{2}} \sinh\Big{(}\frac{\beta e\Delta U}{2}\Big{)}\] \[\qquad\qquad+\eta\Big{(}\cosh\Big{(}\frac{\beta e\Delta U}{2} \Big{)}-1\Big{)}\Big{)}. \tag{38}\]
The derivative of the excess surface charge density \(\Delta\sigma\) w.r.t. the excess voltage \(\Delta U\) leads to the _differential capacitance_ of the diffuse layer
\[C =\frac{\partial\Delta\sigma}{\partial\Delta U} \tag{39}\] \[=\frac{\beta e\sigma_{\rm sat}}{4}\Big{(}\sqrt{1+\eta^{2}}\cosh \Big{(}\frac{\beta e\Delta U}{2}\Big{)}\!+\!\eta\sinh\Big{(}\frac{\beta e \Delta U}{2}\Big{)}\Big{)}.\]
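For reference, Eqs. (38) and (39) are straightforward to evaluate in reduced units (\(\Delta\sigma\) in units of \(\sigma_{\rm sat}\), the excess voltage as the dimensionless \(u=\beta e\Delta U\), and \(C\) in units of \(\beta e\sigma_{\rm sat}\)). The sketch below, written for this text, also verifies that the numerical derivative of Eq. (38) reproduces Eq. (39).

```python
import numpy as np

def grahame_dsigma(u, eta):
    """Eq. (38): excess surface charge Delta-sigma / sigma_sat as a function of
    the reduced excess voltage u = beta*e*Delta-U."""
    return 0.5 * (np.sqrt(1 + eta**2) * np.sinh(u / 2) + eta * (np.cosh(u / 2) - 1))

def diff_capacitance(u, eta):
    """Eq. (39): differential capacitance in units of beta*e*sigma_sat."""
    return 0.25 * (np.sqrt(1 + eta**2) * np.cosh(u / 2) + eta * np.sinh(u / 2))

u = np.linspace(-4.0, 4.0, 4001)
for eta in (0.0, 2.0, 10.0):
    c_numeric = np.gradient(grahame_dsigma(u, eta), u)   # derivative of Eq. (38)
    err = np.max(np.abs(c_numeric - diff_capacitance(u, eta)))
    print(f"eta = {eta:4.1f}: max |numeric - Eq. (39)| = {err:.2e}")
```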
## III Results and discussion
### Steady state distributions
Some typical examples of steady state distributions of the electric field \(E(x)\), the cation number density \(\varrho_{+}(x)=(\varrho(x)+q(x)/e)/2\) and the anion number density \(\varrho_{-}(x)=(\varrho(x)-q(x)/e)/2\) are displayed in Fig. 2. These profiles correspond to the largely arbitrary choice of total surface charge density \(\sigma=2.5\,\sigma_{\rm sat}\) and electric flux \(\eta\in\{0,2,4,6,8,10\}\).
Figure 2: Steady state distributions of (a) the electric field \(E\), (b) the cation number density \(\varrho_{+}\) and (c) the anion number density \(\varrho_{-}\) as functions of the normal distance \(x\) from the electrode surface for fixed total surface charge density \(\sigma=2.5\,\sigma_{\rm sat}\) and various values of the electric flux \(\eta\) defined in Eq. (23). These quantities approach their bulk values on a length scale \(\lambda(\eta)\), which decreases with increasing magnitude \(|\eta|\) of the bulk flux (see Fig. 3). For \(E_{\rm bulk}=0\) (thermodynamic equilibrium) the electric field \(E(x)\) within Gouy-Chapman theory is reproduced, whereas purely exponential solutions are approached for \(\eta\gg 1\).

Hypothetical pure water of \(p\)H = 7, i.e. with total ion number density in the bulk of \(\varrho_{\rm bulk}=2\cdot 10^{-7}\,\)M, at \(T=300\,\)K has Bjerrum length \(\ell_{B}\approx 0.7\,\)nm and Debye length \(1/\kappa\approx 1\,\mu\)m so that \(\sigma_{\rm sat}\approx 7\cdot 10^{-5}\,\)C/m\({}^{2}\), which corresponds to an electric field \(E_{\rm sat}=\sigma_{\rm sat}/(\varepsilon_{0}\varepsilon)\approx 100\,\)V/mm at the electrode. Furthermore, assuming a bulk conductivity \(S_{\rm bulk}=5\cdot 10^{-6}\,\)S/m (see Ref. [53]) an electric flux \(\eta=1\) corresponds to a charge current density of \(j_{Q}\approx 0.1\,\)A/m\({}^{2}\). However, these values can vary in a wide range for various materials, and the purpose of the present work is not to discuss a particular system in detail.
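The order-of-magnitude estimates just quoted are easy to reproduce from the definitions. The small verification script below was written for this text; the only assumed inputs are \(T=300\,\)K, \(\varepsilon=80\) for water, and the conductivity value cited above.

```python
import numpy as np

# SI constants and assumed inputs
e, kB, eps0, NA = 1.602176634e-19, 1.380649e-23, 8.8541878128e-12, 6.02214076e23
T, eps_r, S_bulk = 300.0, 80.0, 5e-6
rho_bulk = 2e-7 * 1e3 * NA                        # 2e-7 mol/L -> ions per m^3

beta = 1.0 / (kB * T)
l_B = beta * e**2 / (4 * np.pi * eps0 * eps_r)    # Bjerrum length
kappa = np.sqrt(4 * np.pi * l_B * rho_bulk)       # inverse Debye length
sigma_sat = e * kappa / (np.pi * l_B)             # Eq. (35)

print(f"Bjerrum length : {l_B * 1e9:.2f} nm      (text: ~0.7 nm)")
print(f"Debye length   : {1e6 / kappa:.2f} um     (text: ~1 um)")
print(f"sigma_sat      : {sigma_sat:.1e} C/m^2 (text: ~7e-5)")
print(f"E_sat          : {sigma_sat / (eps0 * eps_r) / 1e3:.0f} V/mm   (text: ~100 V/mm)")
print(f"j_Q at eta = 1 : {S_bulk * kappa / (beta * e):.2f} A/m^2  (text: ~0.1)")
```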
The electric field \(E(x)=E_{\rm bulk}+\Delta E(x)\) is obtained from the analytical solution Eq. (30) in conjunction with Eq. (25). The total number density \(\varrho(x)\) is obtained from Eq. (20) and the charge density \(q(x)\) is calculated via Eq. (3).
For \(\eta=0\) (thermodynamic equilibrium, \(j_{Q}=0\)) the bulk electric field vanishes, \(E_{\rm bulk}=0\) (see Eq. (23)), hence \(\Delta E(x)=E(x)\), and Eq. (30) reduces to (see Eq. (25))
\[\frac{\beta e\Delta E(x)}{\kappa}=\frac{\widehat{E}_{0}}{\sqrt{1+\Big{(}\frac {1}{2}\widehat{E}_{0}\Big{)}^{2}}\sinh(\kappa x)+\cosh(\kappa x)}, \tag{40}\]
which coincides with the electric field derived within Gouy-Chapman theory [30; 31; 32; 10; 33].
For \(|\eta|\gg 1\) Eq. (30) simplifies to
\[\frac{\beta e\Delta E(x)}{\kappa}\simeq\widehat{E}_{0}\exp\Big{(}-\frac{x}{ \lambda(\eta)}\Big{)} \tag{41}\]
with the length scale
\[\lambda(\eta)=\frac{1}{\kappa\sqrt{1+\eta^{2}}}, \tag{42}\]
i.e. the excess electric field \(\Delta E(x)\) becomes purely exponential. This is consistent with the fact that Eq. (26) reduces to the linear equation \(\widehat{E}^{\prime\prime}(y)=\widehat{E}(y)\) for \(|\eta|\to\infty\). It is remarkable that the governing equation (26) becomes simple far away from thermodynamic equilibrium.
According to Eq. (25) the position dependence of the excess electric field \(\Delta E(x)\) for arbitrary values of the electric flux \(\eta\) is determined by the length scale \(\lambda(\eta)\) in Eq. (42):
\[\frac{\beta e\Delta E(x)}{\kappa}=\widehat{E}\Big{(}\frac{x}{\lambda(\eta)} \Big{)}. \tag{43}\]
In particular, according to Eq. (30), for \(x\gg\lambda(\eta)\), \(\Delta E(x)\) exhibits an exponential asymptotic decay on the length scale \(\lambda(\eta)\).
The dependence of the length scale \(\lambda(\eta)\) on the electric flux parameter \(\eta\) is displayed in Fig. 3. For \(\eta=0\) (thermodynamic equilibrium, \(j_{Q}=0\)) it equals the Debye length, \(\lambda(0)=1/\kappa\), whereas it decreases as \(\lambda(\eta)\simeq 1/(\kappa|\eta|)\) for \(|\eta|\to\infty\). However, the detailed position dependence of \(\Delta E(x)\) is irrelevant from the practical point of view if \(|\eta|\) is so large that the length scale \(\lambda(\eta)\) is of molecular size or below.
In summary, with increasing electric flux \(|\eta|\sim|j_{Q}|\) the electric field changes from the Gouy-Chapman form in thermodynamic equilibrium to a purely exponential dependence far away from thermodynamic equilibrium. In parallel the corresponding relevant length scale \(\lambda(\eta)\) decays from the Debye length \(1/\kappa\) to zero.
### Space charge
At distances \(x\gg\lambda(\eta)\) the electric field and the densities become spatially uniform (see Fig. 2). Depending on the processes taking place at the electrode surface deviations from spatial uniformity can occur there, which are commonly referred to as the formation of _space charges_. In the theory of electrolyte solutions in thermodynamic equilibrium this space charge region is traditionally called the _diffuse layer_[33; 54; 10; 11].
In the notation of Sec. II.4 the amount of space charge per cross-sectional area is given by \(-\Delta\sigma\), where \(\Delta\sigma\) denotes the excess surface charge density, which is the charge per cross-sectional area on the electrode surface induced by the space charge. The relation of \(\Delta\sigma\) to the excess voltage \(\Delta U\) of the space charge region (see Eq. (31)) is called the Grahame equation (38), in analogy to the equation of the same name within the theory of electrolyte solutions in thermodynamic equilibrium [33; 10; 11].
Figure 4(a) displays this relation for electric flux \(\eta\in\{0,2,4,6,8,10\}\). For \(\eta=0\) (thermodynamic equilibrium, \(j_{Q}=0\)) the traditional Grahame equation is found, whereas for a given excess voltage \(\Delta U>0\) the amount of space charge increases upon increasing the electric flux \(\eta\). According to Eq. (38), for \(\eta\to\infty\) this increase is asymptotically proportional to \(\eta\). Hence the mean capacitance \(\Delta\sigma/\Delta U\) increases upon increasing the electric flux, which can be attributed to the decrease of the diffuse layer thickness \(\lambda(\eta)\) upon increasing \(|\eta|\).
Figure 4(b) displays the differential capacitance \(C\), i.e. the derivative of the excess surface charge \(\Delta\sigma\) w.r.t. the excess voltage \(\Delta U\), given in Eq. (39). For \(\eta=0\) (thermodynamic equilibrium, \(j_{Q}=0\)) the well-known Gouy-Chapman capacitance is found [10; 11], whereas \(C\) increases upon increasing the electric flux \(\eta\). From Eq. (39) one infers an asymptotically proportional dependence of \(C\) on \(\eta\) for \(\eta\to\infty\). This dependence on \(\eta\) can again be attributed to the decrease of the diffuse layer thickness \(\lambda(\eta)\) upon increasing \(|\eta|\).

Figure 3: Decay length \(\lambda\) of correlations in a steady state with electric flux parameter of magnitude \(|\eta|\) (see Eq. (23)). For \(\eta=0\) (thermodynamic equilibrium) it is identical to the Debye length \(1/\kappa\).
Note that within PNP theory no packing effects due to finite molecular sizes of the ions are taken into account. This precludes the decrease of the differential capacitance \(C\) for large values of the excess voltage \(\Delta U\), which otherwise would set in once the inner Helmholtz plane is fully occupied such that additionally adsorbed ions have to be accommodated at larger distances from the electrode surface [19; 21; 55; 56].
### Surface conductivity models
Physically meaningful number densities \(\varrho_{\pm}(x)\) have to be non-negative, i.e. \(\varrho_{\pm}(x)\geq 0\), everywhere. Rewriting this condition by means of Eqs. (1) and (2) as \(\varrho_{\pm}=(\varrho\pm q/e)/2\geq 0\) leads to \(e\varrho\geq\mp q\), which is equivalent to \(e\varrho\geq|q|\). The latter inequality in turn is equivalent to the two conditions
\[\varrho\geq 0\qquad\text{and}\qquad e|\varrho|\geq|q|. \tag{44}\]
Writing
\[\varrho =\varrho_{\text{bulk}}\Big{(}1+\eta\widehat{E}+\frac{1}{2} \widehat{E}^{2}\Big{)} \tag{45}\] \[q =-e\varrho_{\text{bulk}}\widehat{E}\sqrt{1+\eta^{2}+\eta\widehat {E}+\frac{1}{4}\widehat{E}^{2}} \tag{46}\]
by using Eqs. (20), (3) and (29) one can reformulate \(e|\varrho|\geq|q|\), i.e. \(e^{2}\varrho^{2}\geq q^{2}\), as
\[1+2\eta\widehat{E}\geq 0. \tag{47}\]
Moreover, if Eq. (47), i.e. \(e|\varrho|\geq|q|\), is fulfilled, one immediately finds from Eq. (45) and \(\widehat{E}^{2}\geq 0\) that
\[\varrho=\frac{\varrho_{\text{bulk}}}{2}(1+\underbrace{1+2\eta\widehat{E}}_{ \geq 0}+\widehat{E}^{2})\geq 0, \tag{48}\]
i.e. the second inequality in Eq. (44) implies the first. To summarise the above reasoning: The physical condition of non-negative number densities \(\varrho_{\pm}\) is fulfilled if and only if Eq. (47) holds.
Whereas Eq. (47) is fulfilled for \(\eta\widehat{E}\geq 0\), it might be violated for \(\eta\widehat{E}<0\). As \(\widehat{E}\) is monotonic (see Sec. II.3) Eq. (47) is fulfilled if and only if it holds at the electrode surface, i.e. \(1+2\eta\widehat{E}(0)\geq 0\). Using Eq. (36) the physical requirement Eq. (47) of non-negative number densities \(\varrho_{\pm}\) can be formulated in terms of the excess surface charge density:
\[1+8\eta\frac{\Delta\sigma}{\sigma_{\text{sat}}}\geq 0. \tag{49}\]
Figure 5 provides a graphical representation of Eq. (49) in the \(\Delta\sigma\)-\(\eta\)-plane. For \(\eta\neq 0\) (non-equilibrium steady state, \(j_{Q}\neq 0\)) condition Eq. (49) is equivalent to
\[\frac{\Delta\sigma}{\sigma_{\text{sat}}}\geq-\frac{1}{8\eta} \qquad\text{for $\eta>0$ and} \tag{50}\] \[\frac{\Delta\sigma}{\sigma_{\text{sat}}}\leq-\frac{1}{8\eta} \qquad\text{for $\eta<0$.} \tag{51}\]
If Eq. (50) is violated the PNP solution leads to \(\varrho_{-}(0)<0\), whereas if Eq. (51) is violated the PNP solution yields \(\varrho_{+}(0)<0\) (see the grey regions in Fig. 5). Note that the parameter range \(\eta,\Delta\sigma,\Delta U\geq 0\) of the examples in Secs. III.1 and III.2 has been chosen on purpose in order to fulfil condition Eq. (49).
For \(\eta=0\) (thermodynamic equilibrium, \(j_{Q}=0\)) condition Eq. (49) is fulfilled for all excess surface charge densities \(\Delta\sigma\), i.e. negative number density solutions within PNP theory can occur for non-equilibrium steady states, but not for states in thermodynamic equilibrium.
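The criterion Eq. (49) and the surface values of the densities following from Eqs. (36), (45) and (46) can be bundled into a small check routine (written for this text; all quantities are in the reduced units indicated in the comments):

```python
import numpy as np

def surface_densities(dsigma, eta):
    """2*rho_+(0) and 2*rho_-(0) in units of rho_bulk, from Eqs. (36), (45), (46);
    dsigma is Delta-sigma in units of sigma_sat."""
    E0 = 4.0 * dsigma                                   # E-hat(0), Eq. (36)
    rho = 1.0 + eta * E0 + 0.5 * E0**2                  # Eq. (45)
    q = -E0 * np.sqrt(1.0 + eta**2 + eta * E0 + 0.25 * E0**2)   # Eq. (46)
    return rho + q, rho - q

def is_physical(dsigma, eta):
    """Criterion Eq. (49): 1 + 8*eta*Delta-sigma/sigma_sat >= 0."""
    return 1.0 + 8.0 * eta * dsigma >= 0.0

for dsigma, eta in ((0.5, 2.0), (-0.5, 2.0), (-0.5, 0.0)):
    rp, rm = surface_densities(dsigma, eta)
    print(f"dsigma = {dsigma:5.2f} sigma_sat, eta = {eta:3.1f}: "
          f"physical = {is_physical(dsigma, eta)}, "
          f"2 rho_+(0) = {rp:6.3f}, 2 rho_-(0) = {rm:6.3f}")
```

The second case violates Eq. (50) and indeed returns a negative \(\varrho_{-}(0)\), while the equilibrium case (\(\eta=0\)) is physical for the same negative excess surface charge.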
Figure 4: (a) Excess surface charge density \(\Delta\sigma\) and (b) differential capacitance of the diffuse space charge region close to the electrode as functions of the corresponding voltage drop \(\Delta U\) for various values of the electric flux \(\eta\) in Eq. (23). For \(\eta=0\) (thermodynamic equilibrium) these quantities are identical to those within Gouy-Chapman theory.
In order to demonstrate the applicability of the previous result, consider the particular case of surface processes which give rise to a strictly linear constitutive relation between the charge current \(j_{Q}\) and the total electric field \(E(0)\) at the electrode surface:
\[j_{Q}=S_{\rm surf}E(0). \tag{52}\]
The proportionality factor \(S_{\rm surf}\) is called the _surface conductivity_ here. Using Eqs. (23) and (36) one can rewrite Eq. (52) of the linear surface conductivity model as
\[\frac{\Delta\sigma}{\sigma_{\rm sat}}=\frac{1}{4}\Big{(}\frac{S_{\rm bulk}}{S _{\rm surf}}-1\Big{)}\eta. \tag{53}\]
The case of a surface-to-bulk conductivity ratio \(S_{\rm surf}/S_{\rm bulk}=2\) is represented by the blue line labelled with "(1)" in Fig. 5. The fact that this line crosses over into the grey regions, where the conditions Eqs. (50) and (51) are violated, shows that the purely linear surface conductivity model cannot be applied under these conditions for too large electric fluxes. Obviously the same argument applies to any system with \(S_{\rm surf}>S_{\rm bulk}\) (high surface conductivity), because then the slope of the corresponding line is negative so that intersections with the unphysical grey regions occur for sufficiently large electric flux \(|\eta|\). It should be noted that in the grey regions of Fig. 5 no mathematical problems arise: Equations (30), (45) and (46) are the solutions of the PNP equations for steady states (see Sec. II.2). But these steady state solutions of the PNP equations may be physically meaningless due to the occurrence of negative number densities.
In systems with \(S_{\rm surf}\leq S_{\rm bulk}\) (low surface conductivity) the linear model Eq. (52) leads to a line Eq. (53) in Fig. 5 with non-negative slope, which does not intersect the grey regions, i.e. no negative number densities occur under these conditions. However, it is possible that other quantities exist, for which the PNP solutions exhibit unphysical properties. Moreover, whether the linear surface conductivity model Eq. (52), even if no unphysical values occur, is able to quantitatively describe real systems is an unrelated question.
In fact, the linear surface conductivity model Eq. (52) can be expected to be an acceptable description for sufficiently small surface fields only, because for large surface fields saturation of the charge density current \(j_{Q}\) sets in due to an exhaustion of ions. Such diffusion limited surface processes can be described by the constitutive relation [8; 9]
\[E(0)=-\operatorname{sign}(j_{Q})\frac{j_{Q\rm sat}}{S_{\rm surf}}\ln\Big{(}1- \frac{|j_{Q}|}{j_{Q\rm sat}}\Big{)}, \tag{54}\]
where \(j_{Q\rm sat}>0\) is the saturation charge current density and \(S_{\rm surf}\) is the differential conductivity in the limit of infinitesimal surface electric fields. In Fig. 5 the cases of \(S_{\rm surf}/S_{\rm bulk}=2\) with two different values of the saturation charge current density \(j_{Q\rm sat}\) are shown by the curves labelled "(2)" and "(3)". The green curve "(2)" demonstrates that for sufficiently small \(|j_{Q\rm sat}|\) no negative ion number densities occur, although a highly conductive surface is present at weak surface fields. However, the violet curve "(3)" for too large \(|j_{Q\rm sat}|\) does exhibit unphysical solutions inside the grey regions. Hence, great care is required to choose appropriate surface conductivity models, which, in conjunction with the PNP equations, lead to physical solutions.
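Both constitutive relations can be screened against criterion Eq. (49) in reduced variables, where \(\eta\) plays the role of \(j_{Q}\) and \(\beta eE(0)/\kappa=\eta+4\Delta\sigma/\sigma_{\rm sat}\) follows from Eqs. (23) and (36). In the sketch below (written for this text), the reduced saturation currents \(\eta_{\rm sat}=1\) and \(\eta_{\rm sat}=10\) are illustrative stand-ins for the two curves of Fig. 5, whose actual parameter values are not quoted in the text; for the linear model with \(S_{\rm surf}/S_{\rm bulk}=2\) the criterion fails exactly beyond \(\eta=1\).

```python
import numpy as np

def dsigma_linear(eta, s_ratio):
    """Eq. (53): Delta-sigma/sigma_sat for the linear model; s_ratio = S_surf/S_bulk."""
    return 0.25 * (1.0 / s_ratio - 1.0) * eta

def dsigma_diffusion_limited(eta, s_ratio, eta_sat):
    """Eq. (54) in reduced variables: beta*e*E(0)/kappa = eta + 4*dsigma."""
    e_surf = -np.sign(eta) * (eta_sat / s_ratio) * np.log(1.0 - np.abs(eta) / eta_sat)
    return 0.25 * (e_surf - eta)

def violates(eta, dsigma):
    """True where criterion Eq. (49) fails, i.e. negative densities occur."""
    return 1.0 + 8.0 * eta * dsigma < 0.0

etas = np.linspace(0.01, 3.0, 300)
first_bad = etas[np.argmax(violates(etas, dsigma_linear(etas, 2.0)))]
print(f"linear model (ratio 2): first violation at eta ~ {first_bad:.2f} (analytic: 1)")

for eta_sat, label in ((1.0, "small"), (10.0, "large")):
    ok = etas < 0.999 * eta_sat        # Eq. (54) is defined only for |j_Q| < j_Qsat
    bad = np.any(violates(etas[ok], dsigma_diffusion_limited(etas[ok], 2.0, eta_sat)))
    print(f"diffusion-limited, eta_sat = {eta_sat:4.1f} ({label}): violates Eq. (49): {bad}")
```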
## IV Conclusions
In the present work the analytical solution Eq. (30) of the PNP equations for steady states (see Sec. II.2) of a semi-infinite univalent binary electrolyte solution in contact with a planar electrode (Fig. 1) has been derived. It can be expected to play a similar role as the Gouy-Chapman solution [10; 11; 30; 31; 32; 33] for thermodynamic equilibrium, to which the derived solutions reduce for the case of a vanishing charge current density (Fig. 2). The characteristic length scale of the electric field as well as the number and charge density profiles, which in thermodynamic equilibrium is given by the Debye length, decreases for non-equilibrium steady states upon increasing the magnitude of the charge current density (Fig. 3). The Grahame equation, which expresses the surface charge density at the electrode in terms of the voltage [10; 11; 33], is generalised to non-equilibrium steady states (Eq. (38)). The excess surface charge density at the electrode and the differential capacitance of the space charge layer for given excess voltage are found to vary with the charge current density of non-equilibrium steady states (Fig. 4). Finally it is found that, in contrast to the case of thermodynamic equilibrium within Gouy-Chapman theory [10; 30; 31; 32; 33; 11], the excess surface charge density may not take an arbitrary value for a given non-vanishing charge current density of a non-equilibrium steady state: Steady state solutions of the PNP equations exist which give rise to physically meaningless negative ion number densities (Fig. 5). A concise criterion is formulated which can serve to identify such unphysical solutions (Eq. (49)).

Figure 5: Relations between excess surface charge density \(\Delta\sigma\) and electric flux \(\eta\) (see Eq. (23)) for three cases of surface conductivity models: (1) linear relation with surface-to-bulk conductivity ratio \(S_{\rm surf}/S_{\rm bulk}=2\) (blue line), (2) diffusion limited process with small saturation current (green curve) and (3) diffusion limited process with large saturation current (violet curve). The grey regions, bounded by red curves, correspond to unphysical conditions where solutions of the PNP equations (3)–(5) occur which exhibit negative values of the number densities \(\varrho_{\pm}(0)\) close to the electrode.
The most remarkable observation of the present work, i.e. the existence of steady state solutions of the PNP equations which are physically meaningless, calls for further investigation. Two approaches are conceivable: First, as Gauss' law Eq. (3) and the continuity equation (5) are unexceptionable for fundamental physical reasons, one could suggest to modify the Nernst-Planck equation (4) in order to avoid unphysical solutions. Second, one could try to keep the Nernst-Planck equation (4) unchanged, but require boundary conditions to fulfil Eq. (49). Whereas the second approach is the more pragmatic one, it will be the first one which provides more fundamental insight into the non-equilibrium properties of electrolyte solutions.
|
2303.00108 | Ranked Choice Voting And Condorcet Failure in the Alaska 2022 Special
Election: How Might Other Voting Systems Compare? | The August 2022 special election for the U.S. House of Representatives in
Alaska featured three main candidates and was conducted by the single-winner
ranked choice voting system known as "Instant Runoff Voting." The results of
this election displayed a well-known but relatively rare phenomenon known as
"Condorcet failure:" Nick Begich was eliminated in the first round despite
being more broadly acceptable to the electorate than either of the other two
candidates. More specifically, Begich was the "Condorcet winner" of this
election: Based on the Cast Vote Record, he would have defeated each of the
other two candidates in head-to-head contests, but he was eliminated in the
first round of ballot counting due to receiving the fewest first-place votes.
The purpose of this paper is to use the data in the Cast Vote Record to
explore the range of likely outcomes if this election had been conducted under
two alternative voting systems: Approval Voting and STAR ("Score Then Automatic
Runoff") Voting. We find that under the best assumptions available about voter
behavior, it is likely -- but not at all certain -- that Peltola would still
have won the election under Approval Voting, while Begich would almost
certainly have won under STAR Voting. | Jeanne N. Clelland | 2023-02-28T22:14:50Z | http://arxiv.org/abs/2303.00108v2 | Ranked choice voting and the center squeeze in the Alaska 2022 special election: how might other voting methods compare?
###### Abstract.
The August 2022 special election for U.S. House Representative in Alaska featured three main candidates and was conducted by the single-winner ranked choice voting method known as "instant runoff voting." The results of this election displayed a well-known but relatively rare phenomenon known as the "center squeeze:" The most centrist candidate, Nick Begich, was eliminated in the first round despite winning an overwhelming majority of second-place votes. In fact, Begich was the _Condorcet winner_ of this election: Based on the cast vote record, he would have defeated both of the other two candidates in head-to-head contests, but he was eliminated in the first round of ballot counting due to receiving the fewest first-place votes.
The purpose of this paper is to use the data in the cast vote record to explore the range of likely outcomes if this election had been conducted under two alternative voting methods: Approval Voting and STAR ("Score Then Automatic Runoff") Voting. We find that under the best assumptions available about voter behavior, the most likely outcomes are that Peltola would still have won the election under Approval Voting, while Begich would have won under STAR Voting.
Key words and phrases:Ranked Choice Voting, Approval Voting, STAR Voting, Alaska Special Election The author was partially supported by a Collaboration Grant for Mathematicians from the Simons Foundation.
## 1. Introduction
On August 16, 2022, Alaska held a special election to fill the seat of deceased U.S. House Representative Don Young. For the special general election, there were three candidates: Democrat Mary Peltola and Republicans Nick Begich and Sarah Palin. The election was Alaska's first statewide election conducted by Instant Runoff Voting (IRV), commonly referred to as Ranked Choice Voting (RCV).1 For this election, voters were allowed to rank all three candidates, and ballots were counted as follows:2
Footnote 1: “Ranked Choice Voting” is an umbrella term, referring to a variety of voting and tabulation methods for both single-winner and multi-winner elections. In this paper, we will use the more precise term “Instant Runoff Voting,” which refers to the specific single-winner method used for the Alaska election.
Footnote 2: Official results obtained from [https://www.elections.alaska.gov/results/22SSPG/RcvDetailedReport.pdf](https://www.elections.alaska.gov/results/22SSPG/RcvDetailedReport.pdf), accessed Nov. 4, 2022.
* Round 1: Only first-place rankings were counted. The results of this round are shown in Table 1. At the end of this round, the candidate with the fewest first-place votes (Begich) was eliminated. Ballots on which Begich was ranked first were "transferred" to their second-ranked candidate, if any. Any ballot with no second-choice candidate indicated was considered "exhausted" and was not counted in the second round. Of the 53,810 ballots on which Begich was ranked first, 11,290 were exhausted and the remainder were transferred as shown in Table 2.
* Round 2: After Begich's first-place ballots were transferred to their second-choice candidates, the votes were counted again. The results of this round are shown in Table 3. Since Peltola received more than 50% of the votes in Round 2, she was declared the winner.
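To make the two-round tally concrete, here is a minimal Python sketch of the procedure just described; the function name and the toy ballots are our own illustrations, not part of the official count.

```python
from collections import Counter

def irv_three_way(ballots):
    """Tally a three-candidate IRV election as described above.

    Each ballot is a tuple of candidate names in preference order,
    e.g. ("Begich", "Peltola"); later preferences may be omitted.
    """
    # Round 1: count first-place rankings only.
    round1 = Counter(b[0] for b in ballots if b)
    eliminated = min(round1, key=round1.get)

    # Round 2: transfer the eliminated candidate's ballots to their
    # next choice; ballots with no remaining choice are exhausted.
    round2 = Counter()
    for b in ballots:
        remaining = [c for c in b if c != eliminated]
        if remaining:
            round2[remaining[0]] += 1
    return round1, eliminated, round2

# Toy example (not the real cast vote record):
print(irv_three_way([("Palin", "Begich"), ("Peltola",), ("Peltola",),
                     ("Begich", "Palin"), ("Palin",)]))
```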
Instant Runoff Voting is often billed as a solution to many of the problems of traditional, "choose one" plurality voting, also known as "first past the post voting." In plurality voting, voters may only vote for one candidate, and the candidate with the most votes wins, regardless of whether they win a majority of the votes. With IRV, the winning candidate must receive a majority of the ballots that are still active in the final round. Additionally, IRV can often eliminate the "spoiler effect" seen in many plurality elections, where a candidate with only a small degree of support can attract votes that would otherwise have gone to a different candidate, thereby changing the outcome of the election. With IRV, voters are frequently assured that they can safely rank their honest first choice candidate first, and then if that candidate is eliminated, they can still vote for their second choice candidate in the next round. IRV is also widely claimed to reduce polarization by encouraging candidates to appeal to a wider variety of voters in order to attract second-place rankings from supporters of other candidates.
However, no voting method is perfect. All of these supposed advantages of IRV can fail, and this election demonstrates how:
1. Peltola's final vote count of 91,266 represented 51.48% of votes that were still active in Round 2, but this statistic fails to take into account that 11,290 ballots from Round 1 were exhausted and not counted in Round 2. Peltola's 91,266 votes represent only 48.40% of the 188,582 ballots that were active in Round 1. Thus the oft-repeated claim that IRV "guarantees a majority for the winning candidate" is not true if by "majority" we mean "majority of all votes cast."
2. While voters who ranked Begich first had the opportunity to vote for their second choice candidate in Round 2, voters who ranked Palin or Peltola first never had their second choice vote considered. In particular, voters who were assured that they could safely rank Palin first and still have their second-place vote for Begich counted if Palin were eliminated never got to express their support for Begich.
3. When the full cast vote records were released, it became clear that a great deal of information about voter preferences was lost in the IRV tallying procedure. Among voters expressing a second choice, Begich won an overwhelming majority of second-place votes--but these votes were never counted. In fact, Begich was the _Condorcet winner_ of this election: Based on the preferences expressed on the IRV ballots, Begich would have defeated both Palin and Peltola in head-to-head contests. (More details will be given in Section 3.) This phenomenon, in which the Condorcet winner has broad support as a second choice candidate but is eliminated prior to the final round due to a lack of first-place votes, is referred to as a "center squeeze." When this happens, the candidates who remain in the final round are often the more polarizing candidates who have the strongest bases of first-choice supporters.

Table 1. First round results

| Candidate | Votes  | Percentage |
|-----------|--------|------------|
| Peltola   | 75,799 | 40.19%     |
| Palin     | 58,973 | 31.27%     |
| Begich    | 53,810 | 28.53%     |

Table 2. Transferred votes

| Transferred from | Transferred to | Votes  |
|------------------|----------------|--------|
| Begich           | Peltola        | 15,467 |
| Begich           | Palin          | 27,053 |

Table 3. Second round results

| Candidate | Votes  | Percentage |
|-----------|--------|------------|
| Peltola   | 91,266 | 51.48%     |
| Palin     | 86,026 | 48.52%     |
This election displayed other anomalies of IRV as well, including the "monotonicity paradox" and the "no-show paradox;" see [2] for a more thorough discussion.
The goal of this paper is to explore how this election might have played out under two other alternative voting methods: Approval Voting and STAR ("Score Then Automatic Runoff") Voting. We will use the actual cast vote records from the election and explore a range of possibilities for how the preferences expressed in these IRV ballots might translate to votes in each of these methods.
## 2. Approval and STAR Voting
Instant Runoff Voting is an _ordinal_ (also known as a _ranked_) voting method: Voters rank candidates in order of preference. Importantly, tied rankings are not allowed; if a voter assigns the same ranking to more than one candidate, that ballot is considered invalid and is not counted. In this Alaska election, voters could express a tied ranking for their second- and third-place candidates by ranking only their first-place candidate and leaving the other two rankings blank, but voters had no way to express a tied ranking for their first- and second-place candidates.
### Approval Voting
Approval Voting is the simplest example of a _cardinal_ (also known as a _score_ or _range_) voting method, in which voters give each candidate a score, the scores are added, and the candidate with the highest score wins the election. In Approval Voting, the ballot is similar to a plurality voting ballot, except that voters may vote for as many candidates as they like, and the candidate who receives the most votes wins. (So voters effectively give each candidate a score of either 0 or 1.) This can help eliminate the spoiler effect, as voters do not have to choose between voting for their true favorite candidate and a less favored candidate who is more likely to win.
Approval Voting does require some strategic thinking on the part of voters: After voting for their favorite candidate and declining to vote for their least-favorite, is it better to vote for intermediate candidates in order to minimize the chance of a least-favorite candidate winning, or to decline to support them in order to maximize the chance of a favorite candidate winning? If every voter decided to support only their first choice candidate--a strategy known as "bullet voting"--then the outcome would be the same as in a plurality election. In practice, the specific details of each election and each voter's attitudes towards the various candidates may lead to a variety of outcomes.
### STAR Voting
STAR stands for "Score Then Automatic Runoff;" it is a combination of cardinal and ordinal voting methods that was first introduced in 2014 [4]. In STAR Voting, voters give each candidate a score, typically in the range of 0-5. STAR ballots are tallied in two rounds: In the first round (the "score" round), all scores are added and the candidates with the top two scores advance to the second round. In the second round (the "automatic runoff" round), every ballot that gives one of the two final candidates a higher score than the other counts as one vote in favor of that candidate, just as in a standard plurality election between two candidates. (Any ballot that gave the final two candidates the same score is recorded as "no preference" in the runoff.) The winner of the runoff wins the election.
STAR Voting allows voters a greater range of expression than either IRV (unless there are more than 6 candidates) or Approval Voting, as they can more fully express their degree of support
for each candidate. Voters can express a range of ranked preferences by giving different scores to different candidates, and they can indicate tied preferences by giving the same score to multiple candidates. The runoff round incentivizes voters to use the full range of scores available (unless they truly have no preference between candidates), so that their vote will be counted in the runoff round.
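As with IRV above, the two-stage STAR tally is easy to state in code; the following Python sketch is our own illustration of the procedure (the ballot format and names are assumptions, not from the paper).

```python
def star_winner(score_ballots, candidates):
    """Tally a STAR election: score round, then automatic runoff.

    Each ballot is a dict mapping candidate name -> score in 0..5.
    """
    # Score round: the two highest total scores advance.
    totals = {c: sum(b.get(c, 0) for b in score_ballots) for c in candidates}
    a, b = sorted(candidates, key=totals.get, reverse=True)[:2]

    # Automatic runoff: each ballot counts as one vote for whichever
    # finalist it scored higher; equal scores record no preference.
    a_votes = sum(s.get(a, 0) > s.get(b, 0) for s in score_ballots)
    b_votes = sum(s.get(b, 0) > s.get(a, 0) for s in score_ballots)
    return a if a_votes >= b_votes else b
```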
### Condorcet winners and losers
Since IRV allows voters to rank all candidates, it is possible to determine from cast ballots how each voter would (presumably) vote in a theoretical head-to-head matchup between any pair of candidates, and consequently which candidate would win any such head-to-head election. If there is a candidate who would defeat all other candidates in head-to-head elections, that candidate is called the _Condorcet winner_ of the election. In practice, IRV elects the Condorcet winner most of the time, but as the Alaska election shows, this result is not guaranteed. This was also famously the case in an IRV election for Mayor in Burlington, VT in 2009; see, e.g., [3] for details.
Similarly, if there is a candidate who would lose all head-to-head elections, that candidate is called the _Condorcet loser_ of the election. In this Alaska election, Begich was the Condorcet winner and Palin was the Condorcet loser.3
Footnote 3: Note that it was not possible to determine the Condorcet winner/loser from the official election reporting; this determination requires a full knowledge of the cast vote record, including the record of second choice preferences for Palin and Peltola voters.
While IRV does not guarantee the election of the Condorcet winner, it _does_ guarantee that the Condorcet loser will _not_ be elected. The Condorcet loser _may_ survive until the final round--as indeed happened in the Alaska election--but by definition will lose to whichever other candidate survives until the final round.
By virtue of the automatic runoff step, STAR Voting also guarantees that the Condorcet loser will not be elected. In Approval Voting, however, this result is theoretically _not_ guaranteed. While it is extremely unlikely that the Condorcet loser would win any particular Approval Voting election, we will see in Section 4 how it is mathematically possible that, under precisely the right (or wrong?) circumstances, Palin could have won the Alaska election if it had been conducted by Approval Voting.
## 3. Cast vote record analysis
The complete cast vote record was downloaded from The Alaska Division of Elections website [1] at [https://www.elections.alaska.gov/election-results/e/](https://www.elections.alaska.gov/election-results/e/). Importantly, the cast vote record contains only the votes and rankings from ballots that were scanned electronically, while the official results also reflect votes and rankings from ballots that were counted manually. In order to gauge the impact of the missing manually counted votes, we applied the tallying procedure described at [https://www.elections.alaska.gov/election-results/](https://www.elections.alaska.gov/election-results/) under "Sample RCV Report and Definitions" to the ballots in the cast vote record and obtained the results shown in Table 4. (Compare with the official results in Tables 1 and 3.)
Table 4. IRV results computed from cast vote record

| Candidate | First round votes | Percentage | Second round votes | Percentage |
|-----------|-------------------|------------|--------------------|------------|
| Peltola   | 74,119            | 40.39%     | 90,587             | 51.72%     |
| Palin     | 57,084            | 31.11%     | 84,548             | 48.28%     |
| Begich    | 52,317            | 28.51%     | N/A                | N/A        |
As expected, there were slightly fewer votes in the cast vote record than in the reported results. Percentage-wise, results from the cast vote record are very similar to those from the official results, and we expect that our analysis based on the cast vote record will not be significantly affected by the absence of the manually counted ballots.
For our analysis of how this election might have played out under Approval or STAR Voting, we processed the cast vote record slightly differently. Since a relatively small number of write-in votes do not affect these methods in the same way as IRV, we ignored votes for write-in candidates entirely. We also disregarded skipped rankings--unlike the official counting process, which ignores all rankings below two successive skipped rankings.
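A minimal preprocessing sketch along these lines follows; the ballot representation is our own invention, and the real cast vote record format is more involved.

```python
MAIN = {"Begich", "Palin", "Peltola"}

def clean_ballot(rankings):
    """Reduce a raw ranking list (rank 1 to rank 3) to the three main
    candidates, dropping write-ins and skipped rankings entirely."""
    return [c for c in rankings if c in MAIN]

print(clean_ballot(["Write-in", "Begich", None]))  # -> ['Begich']
```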
There were 188,985 ballots that ranked at least one of the three main candidates. Of these, 234 were "overvotes" that gave 2 or 3 candidates their highest ranking. These votes are considered invalid in an IRV election, but we will include them in our Approval and STAR Voting models; they are shown in Table 5.
The remaining 188,751 ballots all gave their highest ranking to a single candidate. Of these, 55,965 indicated no preference between the other two candidates, while 132,786 ranked all three candidates. The frequencies of the various rankings are shown in Table 6 and depicted graphically in Figure 1. From this data, we see that among voters who ranked all three candidates, Begich was the overwhelming second choice of both Palin and Peltola voters. Indeed, Begich received dramatically more second-place votes overall (81,546) than either Palin (31,985) or Peltola (19,255), but since Begich was eliminated after Round 1, his second-place votes were never counted in the official results.
Table 5. Overvotes with first-place ties in cast vote record

| Highest-ranked candidates | Votes |
|---------------------------|-------|
| Begich, Palin, Peltola    | 56    |
| Begich, Palin             | 86    |
| Begich, Peltola           | 62    |
| Palin, Peltola            | 30    |

Table 6. Valid ranked votes in cast vote record

| Second place candidate | First place: Begich | First place: Palin | First place: Peltola |
|------------------------|---------------------|--------------------|----------------------|
| None                   | 11,179              | 21,139             | 23,647               |
| Begich                 | N/A                 | 34,117             | 47,429               |
| Palin                  | 27,258              | N/A                | 4,727                |
| Peltola                | 15,572              | 3,683              | N/A                  |

We can also use the data in Table 6 to compute the results of theoretical head-to-head matchups for all three pairs of candidates:

* **Begich vs. Palin:** Begich would receive votes from all ballots that ranked him first, plus votes that ranked Peltola first and Begich second. Likewise, Palin would receive votes from all ballots that ranked her first, plus votes that ranked Peltola first and Palin second. Thus we would have:
  * Begich: 11,179 + 27,258 + 15,572 + 47,429 = 101,438
  * Palin: 21,139 + 34,117 + 3,683 + 4,727 = 63,666

  Thus Begich would defeat Palin with approximately 61.44% of the vote.
* **Begich vs. Peltola:** Begich would receive votes from all ballots that ranked him first, plus votes that ranked Palin first and Begich second. Likewise, Peltola would receive votes from all ballots that ranked her first, plus votes that ranked Palin first and Peltola second. Thus we would have:
  * Begich: 11,179 + 27,258 + 15,572 + 34,117 = 88,126
  * Peltola: 23,647 + 47,429 + 4,727 + 3,683 = 79,486

  Thus Begich would defeat Peltola with approximately 52.58% of the vote.
* **Palin vs. Peltola:** Palin would receive votes from all ballots that ranked her first, plus votes that ranked Begich first and Palin second. Likewise, Peltola would receive votes from all ballots that ranked her first, plus votes that ranked Begich first and Peltola second. Thus we would have:
  * Palin: 21,139 + 34,117 + 3,683 + 27,258 = 86,197
  * Peltola: 23,647 + 47,429 + 4,727 + 15,572 = 91,375

  Thus Peltola would defeat Palin with approximately 51.46% of the vote.
These calculations show that for this election, Begich was the Condorcet winner and Palin was the Condorcet loser.
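These pairwise tallies can be reproduced mechanically from the counts in Table 6; the short Python sketch below does exactly that (the data structure is our own transcription of the table).

```python
# Table 6 counts: first-choice candidate -> second choice -> ballots.
# "None" marks ballots that ranked only one candidate.
counts = {
    "Begich":  {"None": 11179, "Palin": 27258, "Peltola": 15572},
    "Palin":   {"None": 21139, "Begich": 34117, "Peltola": 3683},
    "Peltola": {"None": 23647, "Begich": 47429, "Palin": 4727},
}

def head_to_head(a, b):
    """Votes for a and b if the third candidate were removed."""
    tally = {a: 0, b: 0}
    for first, seconds in counts.items():
        if first in tally:
            # Every ballot ranking `first` on top stays with it here.
            tally[first] += sum(seconds.values())
        else:
            # The removed candidate's ballots go to their second choice.
            tally[a] += seconds.get(a, 0)
            tally[b] += seconds.get(b, 0)
    return tally

print(head_to_head("Begich", "Palin"))    # {'Begich': 101438, 'Palin': 63666}
print(head_to_head("Begich", "Peltola"))  # {'Begich': 88126, 'Peltola': 79486}
print(head_to_head("Palin", "Peltola"))   # {'Palin': 86197, 'Peltola': 91375}
```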
Figure 1. First and second place votes based on cast vote record
In the next two sections, we will explore a range of possibilities for how this election might have played out under either Approval or STAR Voting.
## 4. Possible election outcomes with Approval Voting
In order to model an Approval Voting election based on the cast vote record, we make the following assumptions:
* The 178 voters who overvoted by giving two candidates their highest ranking will vote for those two candidates and will not vote for the remaining candidate. (The 56 voters who gave all three candidates their highest ranking will not be considered, since they expressed no preference among the candidates.)
* The 55,965 voters who ranked only one candidate will vote for that candidate and will not vote for the remaining two candidates.
* The 132,786 voters who ranked all three candidates will vote for their first-place candidate and will not vote for their third-place candidate. We will consider a range of possibilities for what percentage of these voters choose to vote for their second-place candidate.
The range of possible outcomes for this election is depicted numerically in Table 7 and graphically in Figure 2. For each candidate, the dark-colored bar in Figure 2 indicates their minimum level of support, based on first-place votes. The light-colored, shaded bars indicate the range of support that is potentially available from second-place votes, color-coded by the range available from each of the other candidates' first-place voters.
This chart shows that, in theory, any of the three candidates could win the election under exactly the right circumstances.
Table 7. Range of Possible Approval Voting Outcomes

| Candidate | Minimum votes | Maximum votes |
|-----------|---------------|---------------|
| Begich    | 54,157        | 135,703       |
| Palin     | 59,055        | 91,040        |
| Peltola   | 75,895        | 95,150        |
Figure 2. Range of Possible Approval Voting Outcomes
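The bounds in Table 7 follow mechanically from the three assumptions above and the counts in Tables 5 and 6; a short sketch of the computation (the variable names are ours):

```python
# Guaranteed approvals: single-candidate ballots, full rankers' first
# choices, and the two-way overvotes from Table 5 (e.g. Begich: 86 + 62).
first_place = {"Begich": 11179 + 27258 + 15572 + 86 + 62,
               "Palin": 21139 + 34117 + 3683 + 86 + 30,
               "Peltola": 23647 + 47429 + 4727 + 62 + 30}
# Potential extra approvals: all second-place rankings from Table 6.
second_place = {"Begich": 34117 + 47429,
                "Palin": 27258 + 4727,
                "Peltola": 15572 + 3683}

for c in first_place:
    print(c, first_place[c], first_place[c] + second_place[c])
# Begich 54157 135703 / Palin 59055 91040 / Peltola 75895 95150
```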
* **How Begich could win:** If a substantial portion of the Peltola and/or Palin voters who ranked Begich second opted to vote for him, then Begich could win easily, regardless of how many votes Palin and Peltola received from second-place rankings. Indeed, even if Begich received _no_ votes from voters who ranked him second to Palin, he would need votes from only 40,994 of the 47,429 voters who ranked him second to Peltola (about 87%) in order to exceed the _maximum_ possible number of votes for either Palin or Peltola.
* **How Peltola could win:** If all voters opted to bullet vote, then the outcome would be the same as for a plurality election and Peltola would win. Alternatively, if most Peltola and Palin voters opted to bullet vote, while most Begich voters opted to vote for their second-choice candidate, the outcome would likely be similar to the final total in the IRV election, and Peltola would likely still win the election.
* **How Palin could win:** If all voters declined to support a second-choice candidate from the opposite party of their first-choice candidate, then Peltola would receive no support from any second-place rankings. In this case, Begich and Palin would receive no votes from Peltola's first-place voters, but they could still receive support from each other's first-place voters. If most voters who ranked Begich second to Palin declined to support Begich, while most voters who ranked Palin second to Begich opted to support Palin, it is theoretically possible that Palin could win the election.
Now, how likely are each of these scenarios? The outcomes hinge on what percentage of voters with each preference profile choose to vote for their second-place candidate, and this is difficult to predict in advance. Approval Voting has a limited track record in real-world political elections, with the best-known Approval Voting elections taking place in nonpartisan primary elections for municipal offices in St. Louis, MO and nonpartisan municipal elections in Fargo, ND since 2020. Results from these elections are available at [https://approval.vote/](https://approval.vote/). Here is a summary:
* In Fargo, Approval Voting was used in 2020 and 2022 to elect two City Commissioners in a single election each year, and in 2022 to elect the Mayor. For the City Commissioner elections, voters approved an average of 2.3 candidates (out of 7 running) in 2020 and an average of 3.1 candidates (out of 15 running) in 2022. For the 2022 Mayoral election, voters approved an average of 1.6 candidates (out of 7 running).
* In St. Louis, Approval Voting was used in 2021 for two-winner primary elections for Mayor, Comptroller, and 16 ward-based Aldermen. Of these, only the Mayoral and 7 of the Alderman elections featured 3 or more candidates. In each of the 6 Alderman elections with exactly 3 candidates, voters approved averages of 1.1 or 1.2 candidates; in the Alderman election with 6 candidates, voters approved an average of 1.4 candidates, and in the Mayoral election with 4 candidates, voters approved an average of 1.6 candidates. Note that in all of these elections, the average number of approvals was _less_ than the number of winning candidates (i.e., two).
While it is certainly not a given that a statewide, partisan election would follow the same pattern as a local, nonpartisan election, it appears that on average, voters generally prefer to support a relatively small number of candidates in an Approval Voting election.
For our hypothetical election, if we assume an extremely simple model where the percentage of voters who choose to vote for their second-choice candidate is the same for all candidate ranking patterns, then the threshold required for Begich to overtake Peltola is about 1.35 approvals per ballot. So in this model, if at least 35% of voters chose to vote for their second-ranked candidate, then Begich would win the election; otherwise Peltola would win. (Palin would never win with
this model, as the scenarios under which she could win would require her to receive many more second-place votes than the other two candidates.) Based on the data from Fargo and St. Louis, it seems unlikely that this threshold would be reached, and therefore most likely that Peltola would still win the election.
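The roughly 35% crossover quoted above is easy to verify; here is a sketch of the uniform model, where every fully-ranked voter approves their second choice with the same probability p (totals taken from the previous sketch):

```python
first_place = {"Begich": 54157, "Palin": 59055, "Peltola": 75895}
second_place = {"Begich": 81546, "Palin": 31985, "Peltola": 19255}

def approval_totals(p):
    """Expected totals if a fraction p of fully-ranked voters also
    approve their second-choice candidate (same p for all patterns)."""
    return {c: first_place[c] + p * second_place[c] for c in first_place}

for p in (0.30, 0.35, 0.40):
    totals = approval_totals(p)
    print(p, max(totals, key=totals.get))
# Peltola leads at p = 0.30; Begich overtakes her just below p = 0.35.
```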
## 5. Possible election outcomes with STAR Voting
In order to model a STAR Voting election based on the cast vote record, we make the following assumptions. We start by assuming that voters will maximize the impact of their votes by giving their favorite candidate(s) 5 stars and their least favorite candidate(s) 0 stars.
* The 178 voters who overvoted by giving two candidates their highest ranking will give each of those two candidates 5 stars and will give the remaining candidate 0 stars. (The 56 voters who gave all three candidates their highest ranking will not be considered, since they expressed no preference among the candidates.)
* The 55,965 voters who ranked only one candidate will give that candidate 5 stars and will give the remaining two candidates 0 stars.
* The 132,786 voters who ranked all three candidates will give their first-place candidate 5 stars and will give their third-place candidate 0 stars. We will consider a range of possibilities for how many stars, on average, these voters choose to give their second-place candidate.
There is one key difference between our models for STAR and Approval Voting: Since voters who ranked all three candidates indicated a preference between their second- and third-place candidates, for STAR Voting we will assume that they will give their second-place candidate _at least_ one star, so as to maintain their preference order for the runoff round. And, while some of these voters might choose to give 5 stars to each of their top two candidates, the runoff round provides a sufficiently strong disincentive that we will assume the _average_ score given to second-ranked candidates in each category is no more than 4 stars.
The range of possible outcomes for this election is depicted numerically in Table 8 and graphically in Figure 3. For each candidate, the dark-colored bar in Figure 3 indicates their minimum level of support, based on 5 stars from each first-place vote and 1 star from each second-place vote. The light-colored, shaded bars indicate the range of support that is potentially available from additional stars for second-place votes, color-coded by the range available from each of the other candidates' first-place voters.
Comparing with Figure 2 shows some important differences between the scoring outcomes for STAR and Approval Voting. In Approval Voting, second-place rankings can contribute anywhere from 0%-100% as much as first-place rankings. But in our STAR Voting model, the assumption that second-place rankings receive between 1 and 4 stars means that this range is reduced to 20%-80%. This 20% floor pays off handsomely for Begich, who received many more second-place rankings than either of the other two candidates.
Table 8. Range of Possible STAR Voting Outcomes

| Candidate | Minimum score | Maximum score |
|-----------|---------------|---------------|
| Begich    | 352,331       | 596,969       |
| Palin     | 327,260       | 423,215       |
| Peltola   | 398,730       | 456,495       |
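The entries of Table 8 are a direct consequence of these scoring assumptions: 5 stars per first-place vote plus between 1 and 4 stars, on average, per second-place vote. A sketch, reusing the count totals from the Approval Voting section:

```python
first_place = {"Begich": 54157, "Palin": 59055, "Peltola": 75895}
second_place = {"Begich": 81546, "Palin": 31985, "Peltola": 19255}

for c in first_place:
    low = 5 * first_place[c] + 1 * second_place[c]   # second choices: 1 star
    high = 5 * first_place[c] + 4 * second_place[c]  # second choices: 4 stars
    print(c, low, high)
# Begich 352331 596969 / Palin 327260 423215 / Peltola 398730 456495
```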
* **How Begich could win:** Since Begich is the Condorcet winner, he would win the election as long as he made it to the runoff. If the Peltola and Palin voters who ranked Begich second (81,546 voters total) gave him an average of at least 1.87 stars instead of the assumed minimum of 1 star, that would give him at least 70,945 additional stars, for a total of 423,276 stars--just above Palin's theoretical _maximum_ of 423,215 stars. Absent a situation in which Begich's second-place voters gave him many fewer stars on average than Palin's second-place voters gave her, Begich would advance to the runoff and win the election regardless of how many stars Palin and Peltola received from second-place voters.
* **How Peltola could win:** Even if all voters gave their second-choice candidate the minimum of 1 star, Peltola would receive the highest score in the first round but would still lose the runoff to Begich. Peltola's only path to win the election would be for most of Begich's second-place voters to give him only 1 star and for a significant fraction of Palin's second-place voters to give her 3 or 4 stars, thereby allowing Palin to defeat Begich in the score round and advance to the runoff, where Peltola would defeat Palin.
As the Condorcet loser, Palin could not win this election under any circumstances.
Now, how likely are each of these scenarios? STAR Voting has an even more limited track record than Approval Voting in real-world political elections. As of this writing, it has only been used for some internal party elections in Oregon, although an effort is underway to introduce a 2024 ballot initiative to adopt STAR Voting statewide in Oregon. More information is available from the Equal Vote Coalition at [https://www.equal.vote/](https://www.equal.vote/).
But even with the uncertainty about how voters might choose to vote in a STAR Voting election, it seems clear that Begich has a much stronger path to victory here than in an Approval Voting election. The only scenario in which Begich might lose would require a significant asymmetry in the way that voters with different preferences choose to score their second-place candidates, with many more Begich voters choosing to score Palin highly than the reverse, and essentially no Peltola voters choosing to score Begich highly.
## 6. Conclusions
When implementing a new and unfamiliar voting method, it is almost impossible to predict in advance how voters might navigate the new procedures. Extensive voter education should be an essential component of implementation, but this is challenging and takes time. Voter behavior is
Figure 3. Range of Possible STAR Voting Outcomes |
2310.00030 | Cosmological observational constraints on the power law $f(Q)$ type
modified gravity theory | In modern cosmology, the curiosity of ultimately understanding the nature of
the dark energy controlling the recent acceleration of the Universe motivates
us to explore its properties by using some novel approaches. In this work, to
explore the properties of dark energy we adopt the modified $f(Q)$ gravity
theory, where the non-metricity scalar $Q$, emerging from Weyl geometry, plays
the dynamical role. For the function $f(Q)$ we adopt the functional form
$f(Q)=Q+ 6\gamma\,H_0^2(Q/Q_0)^n$, where $n,\, \gamma,\, H_0$ and $Q_0$ are
constants. Then, we test our constructed model against the various
observational datasets, such as the Hubble, and the Pantheon+SHOES samples, and
their combined sample, through the Markov Chain Monte Carlo (MCMC) statistical
analysis. We also employ the parameter estimation technique to constrain the
free parameters of the model. In addition, we use the constrained values of the
model parameters to explore a few implications of the cosmological model. A
detailed comparison of the predictions of our model with the $\Lambda$CDM model
is also performed. In particular, we discuss in detail some cosmographic
parameters, like the deceleration, the jerk, and the snap parameters, as well
as the behavior of the dark energy and matter energy densities to see the
evolution of various energy/matter profiles. The $Om$ diagnostics is also
presented to test the dark energy nature of our model, as compared to the
standard $\Lambda$CDM paradigm. Our findings show that the considered version
of the non-metric $f(Q)$ type modified gravity theory, despite some differences
with respect to the $\Lambda$CDM paradigm, can still explain the current
observational results on the cosmological parameters, and provide a convincing
and consistent account for the accelerating expansion of the Universe. | Sanjay Mandal, Sneha Pradhan, P. K. Sahoo, Tiberiu Harko | 2023-09-29T04:56:29Z | http://arxiv.org/abs/2310.00030v2 | # Cosmological observational constraints on the power law \(f(Q)\) type modified gravity theory
###### Abstract
In modern cosmology, the curiosity of ultimately understanding the nature of the dark energy controlling the recent acceleration of the Universe motivates us to explore its properties by using some novel approaches. In this work, to explore the properties of dark energy we adopt the modified \(f(Q)\) gravity theory, where the non-metricity scalar \(Q\), emerging from Weyl geometry, plays the dynamical role. For the function \(f(Q)\) we adopt the functional form \(f(Q)=Q+6\gamma\,H_{0}^{2}(Q/Q_{0})^{n}\), where \(n\), \(\gamma\), \(H_{0}\) and \(Q_{0}\) are constants. Then, we test our constructed model against the various observational datasets, such as the Hubble, and the Pantheon+SHOES samples, and their combined sample, through the Markov Chain Monte Carlo (MCMC) statistical analysis. We also employ the parameter estimation technique to constrain the free parameters of the model. In addition, we use the constrained values of the model parameters to explore a few implications of the cosmological model. A detailed comparison of the predictions of our model with the \(\Lambda\)CDM model is also performed. In particular, we discuss in detail some cosmographic parameters, like the deceleration, the jerk, and the snap parameters, as well as the behavior of the dark energy and matter energy densities to see the evolution of various energy/matter profiles. The \(Om\) diagnostics is also presented to test the dark energy nature of our model, as compared to the standard \(\Lambda\)CDM paradigm. Our findings show that the considered version of the non-metric \(f(Q)\) type modified gravity theory, despite some differences with respect to the \(\Lambda\)CDM paradigm, can still explain the current observational results on the cosmological parameters, and provide a convincing and consistent account for the accelerating expansion of the Universe.
**Keywords:**\(f(Q)\) gravity; dark energy; parameter estimation; cosmography; equation of state parameter.
###### Contents
* I Introduction
* II Brief review of the \(f(Q)\) gravity theory
* III The cosmological model
* III.1 The generalized Friedmann equations
* III.2 The equation of state of the dark energy
* III.3 The generalized Friedmann equations in the redshift space
* IV Observational Data
* IV.1 Cosmic Chronometer (CC) Sample
* IV.2 Type Ia Supernovae Sample
* IV.3 CC + Type Ia Supernovae Sample
* IV.4 Information Criteria and Model Selection Analysis
* V Cosmological applications
* V.1 Cosmographic parameters
* V.1.1 The Hubble parameter
* V.1.2 The deceleration, jerk and snap parameters
* V.2 Dimensionless density parameters
* V.3 \(Om\) Diagnostics
* VI Conclusion
## I Introduction
In present-day cosmology, one of the primary objectives is to explain the accelerating expansion of our Universe, an effect whose existence has been extensively confirmed and investigated over the past two decades [1; 2]. To understand the accelerating phase of the Universe, one must either modify Einstein's General Relativity, or add a new exotic component, called dark energy (DE), to the universe's energy budget. DE is an exotic fluid-type component, having a negative pressure that causes gravity to behave in a repulsive manner at large cosmological scales [3]. The equation-of-state parameter \(\omega(z)\), defined as the ratio of the fluid's pressure to its energy density, is usually employed to characterize the dynamical features of DE. The most straightforward hypothesis to explain the cosmological observations is to assume that dark energy is a cosmological constant, with the parameter of the equation of state given by the redshift-independent \(\omega=-1\). The cosmological constant, together with the assumption of the existence in the Universe of a so-called dark matter component, are the conceptual basis of the \(\Lambda\)CDM cosmological paradigm. Alternative cosmological models that depart from the conventional \(\Lambda\)CDM model, but still predict an accelerating expanding Universe, include braneworld models [4], K-essence, quintessence, and non-minimally coupled scalar fields [5; 6; 7; 8; 9], modified gravity [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20], anisotropic universes [21; 22; 23], interacting dark energy [24; 25; 26], and many others [27; 28; 29; 30; 31; 32; 33; 34].
Based on the equivalence principle, the view of the gravitational force as a manifestation of the curvature of space-time became the dominant paradigm for the understanding of gravity. This assumption implies that the gravitational interaction, and the geometry of the space-time, are completely determined by the nature of the matter fields. The Ricci scalar curvature \(R\) plays a vital role in curved space-time geometry: it is the basic quantity from which standard general relativity was initially built, in a Riemannian geometry where both the torsion and the non-metricity vanish. Although it is well known that Einstein's general relativity provides an outstanding description of local gravitational phenomena, at the level of the Solar System, the theory has been challenged by specific observational evidence coming from the realization that the Universe is accelerating, and from the galactic phenomenology that is usually explained by postulating the existence of dark matter. These observations suggest that for explaining the gravitational dynamics at galactic and extra-galactic scales one should go beyond the standard formalism of general relativity.
The simplest way to construct extensions of general relativity is to include either an additional component in the Einstein-Hilbert Lagrangian, or to modify the structure of the Einstein-Hilbert gravitational Lagrangian (the Ricci scalar) itself. These approaches have led to many important extensions of general relativity, including \(f(R)\) gravity [35], \(f(G)\) gravity [36], \(f(P)\) gravity[37], Horndeski scalar-tensor theories[38] etc. However, from a general differential geometric perspective, by taking into account the affine properties of a manifold, the curvature is not the only geometric object that may be used within a geometrical framework to construct gravitational theories. Torsion and nonmetricity are two other essential geometric objects connected to a metric space, along with the curvature. They can be used to obtain the \(f(T)\) and the \(f(Q)\) gravity theories, respectively.
In the current paper, we are going to describe the current accelerated expansion of the Universe, and the observational data, through a specific modified gravity theory, the symmetric teleparallel gravitation theory, alternatively called \(f(Q)\) gravity. The \(f(Q)\) gravity was first proposed by Nester and Yo [39], and later extended by Jimenez et al. [40]. In \(f(Q)\) gravity the non-metricity \(Q\), originating from the Weyl geometric background, describes the gravitational interaction in a flat geometry, in which the curvature vanishes. \(f(Q)\) gravity was extensively used to investigate the cosmological evolution of the Universe. By considering the \(f(Q)\) Lagrangian of the theory as a polynomial function in the redshift \(z\), Lazkoz et al. [41] obtained a number of important constraints on \(f(Q)\) gravity. This investigation demonstrated that viable \(f(Q)\) models have coefficients comparable to those of the GR model, specifically the \(\Lambda\)CDM model. To investigate if this new formalism offers any workable alternatives to explain the Universe's late-time acceleration, the validity of various models at the background level was investigated. Several observational probes for the analysis have been employed, including the expansion rates of the early-type galaxies, Type Ia supernovae, Quasars, Gamma Ray Bursts, Baryon Acoustic Oscillations, and Cosmic Microwave Background distance priors. It turns out that the novel approach proposed in \(f(Q)\) gravity offers a different perspective on constructing modified, observationally reliable cosmological models.
The exploration of stellar models in the \(f(Q)\) modified gravity theory has been performed in [42], in which observational restrictions in the context of \(f(Q)\) gravity are obtained from the study of compact general relativistic objects. Focusing on a particular model in \(f(Q)\) gravity, Frusciante [43] found that while it is identical to the \(\Lambda\)CDM model at the background level, it exhibits novel and measurably different signatures at the level of linear perturbations. By examining the external and internal
solutions for compact stars, Lin and Zhai [44] investigated the application of \(f(Q)\) gravity to the static spherically symmetric configurations and illustrated the consequences of the \(f(Q)\) gravity theory. Mandal et al.[45] explored the dark energy parameters for the non-linear and power-law \(f(Q)\) models that depict the observable behavior of the cosmos. Jimenez et al.[46] investigated the modified gravity theories based on nonlinear extensions of the nonmetricity scalar, and they examined several interesting baseline cosmologies (including accelerating solutions related to inflation and dark energy), and assessed how cosmic perturbations behaved. Harko et al.[47] considered an extension of \(f(Q)\) gravity, by considered the effects of a non-minimal coupling between geometry and matter. Several cosmological applications of the theory were considered, by obtaining the generalized Friedmann equations (the cosmological evolution equations), and by imposing specific functional forms of the function \(f(Q)\), such as power-law and exponential dependence of the nonminimal couplings. A full theory in which nonmetricity couples to matter, called \(f(Q,T)\) gravity, where \(T\) is the trace of the matter energy-momentum tensor, was introduced and developed in [48] and [49]. Some astrophysical implications of the \(f(Q,T)\) theory were investigated in [50]. The inclusion of the torsion in the formalism of theories with geometry-matter coupling was considered in [51]. In addition, for studying various types of energy restrictions for the investigation of the logarithmic and polynomial functions in the \(f(Q)\) gravity, Mandal et al.[52] used cosmographic quantities to reconstruct the proper structure of the \(f(Q)\) function. The evolution of matter perturbations in the modified \(f(Q)\) gravity was investigated by Khyllep et al. [53], who also considered the power-law structure of the cosmic perturbations.
It is the goal of the present paper to consider a detailed investigation, in the framework of \(f(Q)\) gravity, of a specific cosmological model, obtained by assuming a simple power-law form of the \(f(Q)\) function, \(f(Q)=Q+6\gamma H_{0}^{2}(Q/Q_{0})^{n}\), where \(n\), \(\gamma\) and \(Q_{0}=6H_{0}^{2}\) are constants. After writing down the generalized Friedmann equations, an effective dark energy model can be constructed. As for the parameter of the equation of state of the dark energy, we assume a specific, redshift-dependent form. In order to test the predictions of the model we have adopted several numerical techniques, including MCMC fitting, which allow us to study the observational implications of this modified \(f(Q)\) gravity model, and give us the possibility of constraining the cosmological model parameters using various observational datasets.
This manuscript is organized in the following manner. We start with the presentation of the basic formulation of the \(f(Q)\) gravity in Section II. We present the basic assumptions and ideas of a specific \(f(Q)\) type cosmological model in Section III. Thereafter, in Section IV, we present the different observational samples, the numerical methods, and we present the data analysis outputs. Moreover, we discuss the obtained results in detail. In addition, in Section V, we explore the behavior in our model of various cosmological quantities, like the deceleration parameter, jerk and snap parameters, and the dark energy and dark matter densities, respectively. Finally, we discuss and conclude our results in Section VI.
## II Brief review of the \(f(Q)\) gravity theory
The basic idea of the \(f(Q)\) theory is that gravitational phenomena can be fully described in the Weyl geometry [39], in which the metricity condition is no longer satisfied, and the covariant divergence of the metric tensor is given by
\[\nabla_{\lambda}g_{\mu\nu}=Q_{\lambda\mu\nu}, \tag{1}\]
where \(Q_{\lambda\mu\nu}\) is called the nonmetricity. The scalar nonmetricity, given by
\[Q\equiv-g^{\mu\nu}\left(L^{\alpha}_{\ \beta\nu}L^{\beta}_{\ \mu\alpha}-L^{\alpha}_{\ \beta\alpha}L^{\beta}_{\ \mu\nu}\right), \tag{2}\]
plays a fundamental role in the theory, where \(L_{\ \mu\nu}^{\ \lambda}\) is defined as,
\[L_{\ \mu\nu}^{\ \lambda}=-\frac{1}{2}g^{\lambda\gamma}\left(Q_{\mu\gamma\nu}+Q _{\nu\gamma\mu}-Q_{\gamma\mu\nu}\right). \tag{3}\]
Now, we introduce the action for the \(f(Q)\) gravity theory, given by [40],
\[S=\int\left[\frac{1}{2}f(Q)+\mathcal{L}_{m}\right]\sqrt{-g}d^{4}x, \tag{4}\]
where \(f(Q)\) is a general function of the non-metricity scalar \(Q\), \(g\) represents the determinant of the metric \(g_{\mu\nu}\), and \(\mathcal{L}_{m}\) is the matter Lagrangian density. The non-metricity tensor is given as,
\[Q_{\alpha\mu\nu}=\nabla_{\alpha}g_{\mu\nu}=-L^{\rho}_{\ \alpha\mu}g_{\rho\nu}-L^{\rho}_{\ \alpha\nu}g_{\rho\mu}. \tag{5}\]
The following two equations give the expressions of the non-metricity tensor's two independent traces
\[Q_{\alpha}=Q_{\alpha\ \ \beta}^{\ \ \beta},\qquad\tilde{Q}_{\alpha}=Q^{\beta}_{\ \alpha\beta}, \tag{6}\]
while the deformation term is given by

\[L^{\alpha}_{\ \mu\nu}=\frac{1}{2}Q^{\alpha}_{\ \mu\nu}-Q_{(\mu\ \ \nu)}^{\ \ \alpha}. \tag{7}\]
Moreover, the nonmetricity scalar \(Q\) is obtained as,
\[Q=-g^{\mu\nu}(L^{\alpha}_{\beta\nu}L^{\beta}_{\mu\alpha}-L^{\beta}_{\alpha\beta}L ^{\alpha}_{\mu\nu})=-P^{\alpha\beta\gamma}Q_{\alpha\beta\gamma}. \tag{8}\]
Here, \(P^{\alpha\beta\gamma}\) is the non-metricity conjugate, and is defined as
\[P^{\alpha}_{\ \mu\nu}=\frac{1}{4}\left[-Q^{\alpha}_{\ \mu\nu}+2Q_{(\mu\ \ \nu)}^{\ \ \alpha}+Q^{\alpha}g_{\mu\nu}-\tilde{Q}^{\alpha}g_{\mu\nu}-\delta^{\alpha}_{(\mu}Q_{\nu)}\right]. \tag{9}\]
The field equation of the \(f(Q)\) gravity theory is obtained by varying (4) with respect to \(g_{\mu\nu}\), and it takes the following form:

\[-\frac{2}{\sqrt{-g}}\nabla_{\alpha}\left(\sqrt{-g}f_{Q}P^{\alpha}_{\ \mu\nu}\right)+\frac{1}{2}g_{\mu\nu}f+f_{Q}\left(P^{\alpha\beta}_{\ \ \nu}Q_{\mu\alpha\beta}-2P^{\alpha\beta}_{\ \ \mu}Q_{\alpha\beta\nu}\right)=\kappa T_{\mu\nu}, \tag{10}\]
where \(f_{Q}=\frac{\partial f}{\partial Q}\), and the energy-momentum tensor \(T_{\mu\nu}\) is given by
\[T_{\mu\nu}=-\frac{2}{\sqrt{-g}}\frac{\delta\left(\sqrt{-g}\,\mathcal{L}_{m}\right)}{\delta g^{\mu\nu}}. \tag{12}\]
By varying the action with respect to the affine connection, the following equation can be obtained:
\[\nabla_{\mu}\nabla_{\nu}(\sqrt{-g}f_{Q}P^{\mu\nu}_{\ \ \alpha})=0. \tag{13}\]
Within the framework of \(f(Q)\) gravity, the field equations guarantee the conservation of the energy-momentum tensor, and given the choice of \(f(Q)=Q\), the Einstein equations are retrieved.
## III The cosmological model
The standard Friedmann-Lemaitre-Robertson-Walker line element, which describes our flat, homogeneous, and isotropic Universe, is given by,
\[ds^{2}=-dt^{2}+a^{2}(t)(dx^{2}+dy^{2}+dz^{2}). \tag{14}\]
Here \(t\) is the cosmic time, and \(x\), \(y\), \(z\) denote the Cartesian co-ordinates. Moreover, \(a(t)\) is the cosmic scale factor. The Hubble parameter \(H(t)\) is defined by \(H(t)=\frac{\dot{a}}{a}\), where \(\dot{a}\) denotes the derivative of \(a\) with respect to the cosmic time \(t\). Moreover, we introduce the cosmological redshift \(z\) defined as \(1+z=1/a\).
### The generalized Friedmann equations
For the FLRW geometry we get the non-metricity scalar as \(Q=6H^{2}\). We consider the matter content of the Universe as consisting of a perfect and isotropic fluid, with energy-momentum tensor given by
\[T_{\mu\nu} = (p+\rho)u_{\mu}u_{\nu}+pg_{\mu\nu}, \tag{15}\]
where \(p\) and \(\rho\) are the pressure and the energy density of the fluid, \(u_{\mu}\) is the four velocity vector normalized according to \(u^{\mu}u_{\mu}=-1\).
Now we are considering the splitting of \(f(Q)\) as \(f(Q)=Q+F(Q)\). By considering the FLRW metric, we get two Friedmann equations as [54; 55]
\[3H^{2}=\rho+\frac{F}{2}-QF_{Q}, \tag{16}\]
\[(2QF_{\rm QQ}+F_{Q}+1)\dot{H}+\frac{1}{4}\left(Q+2QF_{Q}-F\right)=-\frac{p}{2}, \tag{17}\]
where \(F_{Q}=\frac{dF}{dQ}\) and \(F_{\rm QQ}=\frac{d^{2}F}{dQ^{2}}\).
In the above equation (16), the energy density \(\rho\) can be written as \(\rho=\rho_{m}+\rho_{r}\), where \(\rho_{m}\) and \(\rho_{r}\) are the energy densities of dark matter and radiation, respectively. Similarly, we can write \(p=p_{r}+p_{m}\). The standard matter distribution satisfies the conservation equation given by,
\[\frac{d\rho}{dt}+3H(1+\omega)\rho=0. \tag{18}\]
In Eq. (18), the equation of state parameter (EoS) for matter, \(\omega\), takes different values for different matter sources, like baryonic matter, and radiation. As for the expression of \(Q\), and its time derivative, they are related to the Hubble parameter by the important relations
\[Q=6H^{2},\dot{Q}=12H\dot{H}. \tag{19}\]
### The equation of state of the dark energy
On the other hand, to describe the features of dark energy, due to the lack of precision of the current data, and our lack of theoretical understanding of dark energy, extracting the value of EoS of dark energy from observational data is particularly difficult. Under these circumstances, one must parameterize \(\omega_{de}\) empirically, usually using two or more free parameters, to probe the dynamical evolution of dark energy. The Chevallier-Polarski-Linder (CPL) model [56] is the most popular and thoroughly studied among all the parametrization forms of dark energy EoS. The simplest form of the CPL model can be written as,
\[\omega_{de}(z)=\omega_{0}+\omega_{a}\frac{z}{1+z}. \tag{20}\]
In the above expression, \(z\) is the redshift, \(\omega_{0}\) denotes the present-day value of EoS \(\omega(z)\), and \(\omega_{a}\) characterizes
its dynamics. The main reason for considering such a parametrization form is to resolve the divergence property of the linear form \(\omega(z)=\omega_{0}+\omega_{a}z\) at high redshifts.
In addition, the CPL parametrization has a number of advantages, as mentioned by Linder [57], including a manageable two-dimensional phase space, well-behaved and bounded behavior for high redshifts, high accuracy in reconstructing numerous scalar field equations of state, a straightforward physical interpretation, etc.
Though it has the above mentioned benefits, there are some drawbacks to the CPL model. The CPL model only properly describes the past expansion history, but cannot describe the future evolution, since \(|\omega_{de}(z)|\) increases and finally diverges as \(z\) approaches \(-1\). The EoS is bound between \(\omega_{0}+\omega_{a}\) and \(\omega_{0}\) from the infinite past to the present.
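For reference, the CPL form of Eq. (20) is trivial to evaluate; in the Python sketch below the numerical values of \(\omega_{0}\) and \(\omega_{a}\) are placeholders, not fitted values.

```python
def w_de(z, w0=-0.9, wa=0.1):
    """CPL equation of state, Eq. (20); w0 and wa are illustrative."""
    return w0 + wa * z / (1 + z)

# w_de equals w0 at z = 0 and approaches w0 + wa as z -> infinity,
# but |w_de| diverges as z -> -1 (the future), as noted above.
print(w_de(0.0), w_de(1.0), w_de(1000.0))
```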
### The generalized Friedmann equations in the redshift space
In general, for isotropic and homogeneous spatially flat FLRW cosmologies in the presence of radiation, non-relativistic matter, and an exotic fluid with an equation of state \(p_{de}=\omega_{de}\,\rho_{de}\), the Friedmann equations (16), (17) becomes
\[3H^{2}=\rho_{r}+\rho_{m}+\rho_{de}, \tag{21}\]
\[2\dot{H}+3H^{2}=-\frac{\rho_{r}}{3}-p_{m}-p_{de}, \tag{22}\]
where \(\rho_{r}\) and \(\rho_{m}\) are the energy densities of the radiation and matter components, \(p_{m}\) is the matter pressure, while \(\rho_{de}\) and \(p_{de}\) are the DE's density and pressure contributions due to the geometry, given by
\[\rho_{de}=\frac{F}{2}-Q\,F_{Q}, \tag{23}\]
\[p_{de}=2\dot{H}(2QF_{QQ}+F_{Q})-\rho_{de}. \tag{24}\]
In the following we assume that the matter pressure, be it baryonic, or dark matter, can be neglected. From Eqs. (21) and (22) we obtain immediately the global conservation equation
\[\frac{d}{dt}\left(\rho_{r}+\rho_{m}+\rho_{de}\right)+3H\left(\frac{4\rho_{r}}{ 3}+\rho_{m}+\rho_{de}+p_{de}\right)=0. \tag{25}\]
When there are no interactions between the three fluids, the energy densities satisfy the following differential equations
\[\dot{\rho}_{r}+4H\rho_{r} = 0, \tag{26}\] \[\dot{\rho}_{m}+3H\rho_{m} = 0,\] (27) \[\dot{\rho}_{de}+3H(1+\omega_{de})\rho_{de} = 0. \tag{28}\]
The dark energy equation of state \(\omega_{de}\) can be written as the function of \(F(Q)\) and its derivatives as
\[\omega_{de}=\frac{p_{de}}{\rho_{de}}=-1+\frac{4\dot{H}(2QF_{QQ}+F_{Q})}{F-2QF_{Q}}. \tag{29}\]
From Eqs. (26) and (27), one can quickly get the evolution of the pressureless matter and of radiation, namely, \(\rho_{m}\propto\frac{1}{a(t)^{3}}\) and \(\rho_{r}\propto\frac{1}{a(t)^{4}}\).
Moreover, by using the relationship between redshift (\(z\)) and the universe scale factor \(a(t)\,\left[a(t)=\frac{1}{1+z}\right]\), we can represent the relationship between the redshift and the cosmic time as,
\[\frac{d}{dt}=\frac{dz}{dt}\frac{d}{dz}=-(1+z)H(z)\frac{d}{dz}. \tag{30}\]
Now, for the present cosmological study of the \(f(Q)\) gravity, we are considering one particular form of \(F(Q)\), with
\[F(Q)=6\gamma\,H_{0}^{2}\left(\frac{Q}{Q_{0}}\right)^{n}, \tag{31}\]
where \(H_{0}\), \(\gamma\), \(n\) and \(Q_{0}\) are constants. The motivation for choosing this form is that the Friedmann equations represent a system of ordinary differential equations, and we can find power-law and exponential types of solutions for these types of equations. Therefore, we have considered the power-law form in our study. With the adopted functional form of \(f(Q)\) we obtain first
\[\rho_{de}=\frac{F}{2}-Q\,F_{Q}=\frac{\alpha Q^{n}}{2}-n\,\alpha\,Q^{n}=\alpha\left(\frac{1}{2}-n\right)Q^{n}=6\gamma\,H_{0}^{2}\left(\frac{1}{2}-n\right)\left(\frac{Q}{Q_{0}}\right)^{n}=6\gamma\,H_{0}^{2}\left(\frac{1}{2}-n\right)\left(\frac{H}{H_{0}}\right)^{2n}, \tag{32}\]
where we have denoted \(\alpha=6\gamma\,H_{0}^{2}/Q_{0}^{n}\), and \(Q_{0}=6H_{0}^{2}\). Then for the derivative of the dark energy we obtain the expression
\[\dot{\rho}_{de}=n\,\alpha\,Q^{n-1}\left(\frac{1}{2}-n\right)\dot{Q}=12n\gamma\,H_{0}^{2}\left(\frac{1}{2}-n\right)\left(\frac{H}{H_{0}}\right)^{2n}\frac{\dot{H}}{H}. \tag{33}\]
We substitute now the expressions of the dark energy, and of its derivative, into the conservation equation (28), together with the CPL parametrization of the parameter of the dark energy equation of state. Hence, by also taking into account the relation between \(H\) and \(Q\), we obtain
\[2n\,\frac{\dot{H}}{H}+3H\left(1+\omega_{de}\right)=0, \tag{34}\]
leading, in the redshift space, to the first order differential equation
\[-2n\left(1+z\right)\frac{dH}{dz}+3H\left(1+\omega_{0}+\omega_{a}\frac{z}{1+z} \right)=0, \tag{35}\]
or
\[-n(1+z)\frac{d}{dz}H^{2}+3\left(1+\omega_{0}+\omega_{a}\frac{z}{1+z}\right)H^ {2}=0, \tag{36}\]
with the general solution given by
\[H^{2}(z)=C_{1}^{2}(1+z)^{\frac{3(1+\omega_{0}+\omega_{a})}{n}}e^{\frac{3\omega_{a}}{n(1+z)}}, \tag{37}\]
where \(C_{1}\) is an arbitrary constant of integration, which we determine so that \(H^{2}(0)=H_{0}^{2}\), giving \(C_{1}^{2}=H_{0}^{2}e^{-3\omega_{a}/n}\). Hence we obtain
\[H^{2}(z)=H_{0}^{2}(1+z)^{\frac{3(1+\omega_{0}+\omega_{a})}{n}}e^{-\frac{3\omega_{a}z}{n(1+z)}}. \tag{38}\]
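As a quick numerical illustration of Eq. (38), the sketch below evaluates \(H(z)\); the parameter values are placeholders chosen for demonstration only, not the best-fit values obtained later from the data analysis.

```python
import math

def hubble(z, H0=70.0, n=0.25, w0=-0.9, wa=0.1):
    """H(z) from Eq. (38); all parameter values here are illustrative."""
    return (H0 * (1 + z) ** (1.5 * (1 + w0 + wa) / n)
            * math.exp(-1.5 * wa * z / (n * (1 + z))))

print([round(hubble(z), 2) for z in (0.0, 0.5, 1.0)])  # hubble(0) == H0
```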
Now using (38) in (32), we obtain for the dark energy density \(\rho_{de}\) the expression
\[\rho_{de}(z)=3\gamma\left(1-2n\right)H_{0}^{2}(1+z)^{3(1+\omega_{0}+\omega_{a })}e^{\frac{-3\omega_{a}z}{(1+z)}}. \tag{39}\]
Alternatively, we can obtain the same result by using the considered equation of state, which gives first
\[\omega_{de}=\frac{p_{de}}{\rho_{de}}=\frac{2\dot{H}(2QF_{QQ}+F_{Q})-\rho_{de}}{\frac{F}{2}-QF_{Q}}=-1+\frac{4\dot{H}(2QF_{QQ}+F_{Q})}{F-2QF_{Q}}. \tag{40}\]
With the help of the CPL parametrization we successively obtain
\[-1-\frac{4n\dot{H}}{Q}=\omega_{0}+\omega_{a}\frac{z}{1+z}, \tag{41}\]
and
\[-\frac{2}{3}n(1+z)\frac{dH}{dz}\frac{1}{H}=-\left[1+\omega_{0}+\omega_{a}\frac {z}{1+z}\right], \tag{42}\]
respectively, with the solution of the above differential equation given again by Eq. (37).
Additionally, the matter density (\(\rho_{m}\)) and radiation density (\(\rho_{r}\)) can be written in terms of redshift function \(z\) as,
\[\rho_{m}\propto(1+z)^{3}\ ;\quad\rho_{r}\propto(1+z)^{4} \tag{43}\]
Consequently, the Friedmann equation (21) reads,
\[\begin{split} 3H^{2}(z)&=\rho_{r0}(1+z)^{4}+\rho_{m0}(1+z)^{3}+3\gamma\left(1-2n\right)H_{0}^{2}(1+z)^{3(1+\omega_{0}+\omega_{a})}e^{\frac{-3\omega_{a}z}{(1+z)}},\\ \frac{H^{2}(z)}{H_{0}^{2}}&=\Omega_{r0}(1+z)^{4}+\Omega_{m0}(1+z)^{3}+\gamma\left(1-2n\right)(1+z)^{3(1+\omega_{0}+\omega_{a})}e^{\frac{-3\omega_{a}z}{(1+z)}}.\end{split} \tag{44}\]
In Eq. (44), the suffix \(0\) denotes the present-day value of the corresponding quantity, and \(H_{0}\) is the current value of the Hubble parameter (at \(z=0\)).
Finally, we are going to introduce the energy density parameters, defined as
\[\Omega_{m}=\frac{\rho_{m}}{3H^{2}},\ \ \Omega_{r}=\frac{\rho_{r}}{3H^{2}},\ \ \Omega_{de}=\frac{\rho_{de}}{3H^{2}} \tag{45}\]
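As a quick numerical illustration of Eqs. (44) and (45), the following minimal Python sketch evaluates the normalized expansion rate and the density parameters; the parameter values used here are placeholders of the same order as the best-fit values obtained later in Table 2, not the actual fits.

```python
import numpy as np

def E2(z, Om0=0.20, Or0=1e-4, w0=-1.0, wa=-0.01, n=-1.0 / 3.0, gamma=0.45):
    """E^2(z) = H^2(z)/H0^2 of the power-law f(Q) model, Eq. (44)."""
    de = gamma * (1.0 - 2.0 * n) * (1.0 + z) ** (3.0 * (1.0 + w0 + wa)) \
        * np.exp(-3.0 * wa * z / (1.0 + z))
    return Or0 * (1.0 + z) ** 4 + Om0 * (1.0 + z) ** 3 + de

# Density parameters of Eq. (45), with rho/(3H^2) rewritten through E^2:
z = np.linspace(0.0, 2.5, 6)
Om = 0.20 * (1.0 + z) ** 3 / E2(z)
Or = 1e-4 * (1.0 + z) ** 4 / E2(z)
Ode = 1.0 - Om - Or  # Omega_m + Omega_r + Omega_de = 1 follows from Eq. (44)
```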
## IV Observational data and methodology

The best-fit values of the model parameters are obtained when the likelihood functions are maximized by using the probability function
\[\mathcal{L}\propto\exp(-\chi^{2}/2), \tag{46}\]
where \(\chi^{2}\) is the pseudo chi-squared function [58]. More details about the \(\chi^{2}\) function for the various data samples are discussed in the following subsections.
### Cosmic Chronometer (CC) Sample
For the Cosmic Chronometer (CC) sample, we use 31 Hubble parameter measurements obtained from the differential age (DA) approach, covering the redshift range \(0.07<z<2.42\). The complete list of this sample is presented in [59]. The chi-square function for the Hubble sample is defined as
\[\chi^{2}_{CC}=\sum_{i=1}^{31}\frac{[H_{i}^{th}(\theta_{s},z_{i})-H_{i}^{obs}(z _{i})]^{2}}{\sigma_{CC}^{2}(z_{i})} \tag{47}\]
where \(H_{i}^{obs}\) denotes the observed value of the Hubble parameter, \(H_{i}^{th}\) denotes its theoretical value, \(\sigma_{CC}(z_{i})\) denotes the standard error in the observed value, and \(\theta_{s}=(H_{0},\Omega_{m0},\,\omega_{0},\omega_{a},n,\gamma)\) is the cosmological background parameter space. In addition, we use in our analysis the _priors_ presented in Table 1.
In our MCMC analysis, we used 100 walkers and 1000 steps to obtain the fitting results. The \(1-\sigma\) and \(2-\sigma\) CL contour plots are presented in Fig. 1, and the numerical results for the CC sample are presented in Table 2. With the mean constraint values of the free parameters, we present the Hubble parameter profile for the CC sample, together with the \(\Lambda\)CDM behavior, in Fig. 2.
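For concreteness, a minimal sketch of this likelihood analysis is given below, combining Eqs. (46) and (47) with flat priors. The prior bounds are hypothetical placeholders (the actual ranges are those of Table 1), the arrays `z_obs`, `H_obs`, `sigma` stand for the 31 CC points of [59], the `E2` helper is the one sketched after Eq. (45), and the use of the `emcee` sampler is our assumption, since the text only specifies an MCMC code with 100 walkers and 1000 steps.

```python
import numpy as np
import emcee

# Hypothetical flat prior bounds; the actual priors are those of Table 1.
BOUNDS = [(60.0, 80.0),   # H0
          (0.0, 1.0),     # Omega_m0
          (-2.0, 0.0),    # omega_0
          (-1.0, 1.0),    # omega_a
          (-1.0, 0.0),    # n
          (0.0, 1.0)]     # gamma

def chi2_cc(theta, z_obs, H_obs, sigma):
    """Eq. (47): chi-square over the 31 cosmic-chronometer points."""
    H0, Om0, w0, wa, n, gamma = theta
    H_th = H0 * np.sqrt(E2(z_obs, Om0=Om0, w0=w0, wa=wa, n=n, gamma=gamma))
    return np.sum((H_th - H_obs) ** 2 / sigma ** 2)

def log_prob(theta, z_obs, H_obs, sigma):
    """Flat priors plus ln L = -chi^2/2, Eq. (46)."""
    if any(not lo < t < hi for t, (lo, hi) in zip(theta, BOUNDS)):
        return -np.inf
    return -0.5 * chi2_cc(theta, z_obs, H_obs, sigma)

# sampler = emcee.EnsembleSampler(100, 6, log_prob, args=(z_obs, H_obs, sigma))
# sampler.run_mcmc(initial_positions, 1000)   # 100 walkers, 1000 steps
```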
### Type Ia Supernovae Sample
Supernovae samples are a powerful indicator for exploring the background geometry and properties of the Universe. In this analysis, we adopt the largest SNe Ia sample published to date, the Pantheon+SHOES sample, which consists of 1701 light curves of 1550 spectroscopically confirmed SNe Ia across 18 different surveys [60]. The Pantheon+SHOES sample significantly increases the number of observations relative to the Pantheon data at low redshifts, and covers the redshift range \(z\in[0.00122,2.26137]\). It is the successor of the Pantheon sample [61]. The chi-square function is defined as
\[\chi^{2}_{SN}=\sum_{i,j=1}^{1701}\Delta\mu_{i}\left(C_{SN}^{-1}\right)_{ij}\Delta\mu_{j}. \tag{48}\]
Here \(C_{SN}\) is the covariance matrix [60], and
\[\Delta\mu_{i}=\mu_{i}^{th}(z_{i},\theta)-\mu_{i}^{obs}\]
is the difference between the theoretical distance modulus \(\mu_{i}^{th}\), calculated from the model with the given parameter space \(\theta\), and the observed distance modulus \(\mu_{i}^{obs}\), extracted from the cosmic observations.
The theoretical distance modulus \(\mu_{i}^{th}\) is defined as
\[\mu_{i}^{th}(z)=m-M=5\log D_{l}(z)+25, \tag{49}\]
where \(m\) and \(M\) are apparent and the absolute magnitudes of a standard candle, respectively. The luminosity distance \(D_{l}(z)\) is defined as
\[D_{l}(z)=(1+z)\int_{0}^{z}\frac{dz^{*}}{H(z^{*})}. \tag{50}\]
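A sketch of Eqs. (49)–(50) in the same vein (reusing the `E2` helper sketched after Eq. (45)) is shown below. Note that Eq. (50) is written in units with \(c=1\), so the speed of light is reinserted here to express \(D_{l}\) in Mpc when \(H\) is in km s\(^{-1}\) Mpc\(^{-1}\); the default `H0` value is an illustrative placeholder.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light in km/s

def mu_th(z, H0=71.5, **model_kw):
    """Theoretical distance modulus of Eq. (49), with D_l from Eq. (50)."""
    integrand = lambda zp: 1.0 / (H0 * np.sqrt(E2(zp, **model_kw)))
    integral, _ = quad(integrand, 0.0, z)
    D_l = (1.0 + z) * C_KM_S * integral   # luminosity distance in Mpc
    return 5.0 * np.log10(D_l) + 25.0
```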
To run the MCMC code, we use the same _priors_, number of walkers, and number of steps as for the CC sample. The \(1-\sigma\) and \(2-\sigma\) CL contour plots are presented in Fig. 3, and the numerical results for the Pantheon+SHOES sample are presented in Table 2. With the mean constraint values of the free parameters, we present the distance modulus profile for the Pantheon+SHOES sample, together with the \(\Lambda\)CDM model, in Fig. 4.
### CC + Type Ia Supernovae Sample
To analyze the CC and Type Ia supernovae samples together, we use the following chi-square function
\[\chi^{2}_{CC+SN}=\chi^{2}_{CC}+\chi^{2}_{SN}. \tag{51}\]
The marginalized constraints on the parameters included in the parameter space \(\theta\) are presented in Fig. 5. The numerical results are presented in Table 2.
### Information Criteria and Model Selection Analysis
This subsection discusses the statistical information criteria and the model selection procedures. For this purpose, we use the Akaike information criterion (AIC) [62] and the Bayesian information criterion (BIC) [63] to compare a set of models against their predictions for the given observational dataset(s).
Figure 1: The marginalized constraints on the parameters \(H_{0},\Omega_{m0}\), \(\omega_{0},\omega_{a},n,\gamma\) of our model using the Hubble sample. The dark orange shaded regions present the \(1-\sigma\) confidence level (CL), and the light orange shaded regions present the \(2-\sigma\) confidence level. The constraint values for the parameters are presented at the \(1-\sigma\) CL.

\begin{table}
\begin{tabular}{c c c c c c c} Model & \(H_{0}\) & \(\Omega_{m0}\) & \(\omega_{0}\) & \(\omega_{a}\) & \(n\) & \(\gamma\) \\ \hline \multicolumn{7}{c}{68\% **limits**} \\ \multicolumn{7}{c}{CC sample} \\ \(\Lambda\)CDM & \(68.80\pm 0.94\) & \(0.318\pm 0.034\) & - & - & - & - \\ Power-law & \(71.59\pm 0.54\) & \(0.292\pm 0.020\) & \(-1.005\pm 0.090\) & \(-0.00996\pm 0.0010\) & \(-0.3612\pm 0.0010\) & \(0.369\pm 0.046\) \\ \multicolumn{7}{c}{Pantheon+SHOES sample} \\ \(\Lambda\)CDM & \(72.33\pm 0.28\) & \(0.383\pm 0.022\) & - & - & - & - \\ Power-law & \(71.733^{+0.085}_{-0.068}\) & \(0.1899\pm 0.0069\) & \(-1.005\pm 0.010\) & \(-0.0100^{+0.0010}_{-0.0011}\) & \(-0.3616\pm 0.0010\) & \(0.4627\pm 0.0063\) \\ \multicolumn{7}{c}{CC+Pantheon+SHOES sample} \\ \(\Lambda\)CDM & \(72.66\pm 0.26\) & \(0.342\pm 0.019\) & - & - & - & - \\ Power-law & \(71.54^{+0.11}_{-0.093}\) & \(0.1971\pm 0.0068\) & \(-1.0284\pm 0.0096\) & \(-0.0181^{+0.011}_{-0.0082}\) & \(-0.343\pm 0.010\) & \(0.4871\pm 0.0098\) \\ \hline \multicolumn{7}{c}{95\% **limits**} \\ \multicolumn{7}{c}{CC sample} \\ \(\Lambda\)CDM & \(68.8^{+1.9}_{-1.8}\) & \(0.318^{+0.068}_{-0.063}\) & - & - & - & - \\ Power-law & \(71.6^{+1.0}_{-1.0}\) & \(0.292^{+0.040}_{-0.040}\) & \(-1.00^{+0.18}_{-0.18}\) & \(-0.00996^{+0.0020}_{-0.0020}\) & \(-0.3612^{+0.0020}_{-0.0020}\) & \(0.369^{+0.094}_{-0.089}\) \\ \multicolumn{7}{c}{Pantheon+SHOES sample} \\ \(\Lambda\)CDM & \(72.33^{+0.55}_{-0.54}\) & \(0.383^{+0.044}_{-0.044}\) & - & - & - & - \\ Power-law & \(71.73^{+0.16}_{-0.19}\) & \(0.190^{+0.013}_{-0.013}\) & \(-1.005^{+0.020}_{-0.019}\) & \(-0.0100^{+0.0022}_{-0.0021}\) & \(-0.3616^{+0.0020}_{-0.0019}\) & \(0.463^{+0.012}_{-0.012}\) \\ \multicolumn{7}{c}{CC+Pantheon+SHOES sample} \\ \(\Lambda\)CDM & \(72.66^{+0.50}_{-0.53}\) & \(0.342^{+0.038}_{-0.036}\) & - & - & - & - \\ Power-law & \(71.54^{+0.19}_{-0.22}\) & \(0.197^{+0.014}_{-0.014}\) & \(-1.028^{+0.020}_{-0.018}\) & \(-0.018^{+0.017}_{-0.018}\) & \(-0.343^{+0.019}_{-0.020}\) & \(0.487^{+0.020}_{-0.020}\) \\ \end{tabular}
\end{table}
Table 2: Constraint values of the parameters \(H_{0}\), \(\Omega_{m0}\), \(\omega_{0}\), \(\omega_{a}\), \(n\), and \(\gamma\) of the \(\Lambda\)CDM and power-law \(f(Q)\) models, at the 68\% and 95\% confidence levels, for the CC, Pantheon+SHOES, and CC+Pantheon+SHOES samples.

On the basis of information theory, the AIC addresses the problem of model adequacy. It is a Kullback-Leibler information estimator with the property of asymptotic unbiasedness. The AIC estimator is given, under the standard assumption of Gaussian errors, by [64, 65]
\[AIC=-2\ln\left(\mathcal{L}_{max}\right)+2k+\frac{2k\left(k+1\right)}{N_{tot}-k-1}, \tag{52}\]
where \(k\) is the number of free parameters in the proposed model, \(\mathcal{L}_{max}\) is the maximum likelihood value of the dataset(s) considered for the analysis, and \(N_{tot}\) is the number of data points. For a large number of data points, the above formula reduces to \(AIC\simeq-2\ln\mathcal{L}_{max}+2k\), which is the modified form of the AIC. The modified AIC criterion is therefore convenient for all the cases [66].
Figure 3: The marginalized constraints on the parameters \(H_{0}\),\(\Omega_{m0}\), \(\omega_{0},\omega_{a},n,\gamma\) of our model using the Pantheon+SHOES sample. The dark blue shaded regions present the \(1-\sigma\) confidence level (CL), and the light blue shaded regions present the \(2-\sigma\) confidence level. The constraint values for the parameters are presented at the \(1-\sigma\) CL.

The BIC is a Bayesian evidence estimator, given by [65; 66; 67],
\[BIC=-2\ln{(\mathcal{L}_{max})}+k\log(N_{tot}). \tag{53}\]
For a given set of comparable models, we aim to rank them according to their fitting quality with respect to the observational dataset. For this we use the standard approach, namely the relative difference between the IC values of the given models,
\[\Delta IC_{model}=IC_{model}-IC_{min}, \tag{54}\]
where \(IC_{min}\) is the minimum IC value in the set of competing models. The \(\Delta IC\) value measures the compatibility and the tension between the models. According to Jeffreys' scale [68], the condition \(\Delta IC\leq 2\) indicates the statistical compatibility of the two models, the model with the smaller IC being the one most favored by the data. The condition \(2<\Delta IC<6\) indicates a mild tension between the two models, while the condition \(\Delta IC\geq 10\) suggests a strong tension. The outputs of these tests are presented in Table 3.
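The information-criteria bookkeeping is simple enough to verify directly; the sketch below reproduces the CC row of Table 3 (up to rounding), with \(k=2\) free parameters for \(\Lambda\)CDM, \(k=6\) for the power-law model, and \(N_{tot}=31\) Hubble points. Note that the AIC values quoted in Table 3 follow the modified form \(-2\ln\mathcal{L}_{max}+2k\).

```python
import numpy as np

def aic_corrected(chi2_min, k, n_tot):
    """Corrected AIC, Eq. (52), with -2 ln L_max = chi2_min."""
    return chi2_min + 2 * k + 2 * k * (k + 1) / (n_tot - k - 1)

def aic_modified(chi2_min, k):
    """Modified AIC for large samples, -2 ln L_max + 2k."""
    return chi2_min + 2 * k

def bic(chi2_min, k, n_tot):
    """BIC, Eq. (53)."""
    return chi2_min + k * np.log(n_tot)

# CC sample of Table 3: (chi2_min, k) for each model, N_tot = 31.
models = {"LCDM": (16.07, 2), "Power-law": (16.06, 6)}
aics = {m: aic_modified(c, k) for m, (c, k) in models.items()}
delta_aic = {m: v - min(aics.values()) for m, v in aics.items()}  # Eq. (54)
print(aics, delta_aic)  # AIC = 20.07 and 28.06, Delta AIC = 0 and 7.99
```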
### Numerical results
In Tables 2 and 3, we have presented the numerical limits of the parameters \(H_{0}\), \(\Omega_{m0}\), \(\omega_{0}\), \(\omega_{a}\), \(n\), \(\gamma\), and of some derived cosmological parameters, at the 68% and 95% confidence levels. The constraint values of the present Hubble parameter are \(71.59\pm 0.54\), \(71.733^{+0.085}_{-0.068}\), and \(71.54^{+0.11}_{-0.093}\) at the 68% CL for the CC, Pantheon+SHOES, and CC+Pantheon+SHOES samples, respectively.
These results are consistent with recent studies (see [69] for a detailed discussion of \(H_{0}\)). Furthermore, the parameters \(\omega_{0}\) and \(\omega_{a}\) play an important role in identifying the nature of the CPL dark energy equation of state (EoS). This EoS reduces to \(\omega_{0}\) at \(z=0\), and the constraint values on it are \(-1.005\pm 0.090\), \(-1.005\pm 0.010\), and \(-1.0284\pm 0.0096\) for the respective data samples. These values are very close to those of the \(\Lambda\)CDM model.
On the other hand, with the constraint values of \(\omega_{0}\) and \(\omega_{a}\) for all the datasets, \(\omega_{CPL}(z)\) shows a phantom-type behaviour, i.e., \(\omega_{CPL}(z)<-1\). From all these outputs, one can see that our findings confirm the existence of the present accelerated expansion of the Universe. In addition, we have presented the \(\chi^{2}_{min}\), the reduced \(\chi^{2}_{min}\), the AIC, BIC, \(\Delta\)AIC and \(\Delta\)BIC values in Table 3. From these results, we can conclude that the power-law \(f(Q)\) model is a good fit to the observational datasets, as compared with the \(\Lambda\)CDM model. However, the information criteria analysis indicates a mild tension between the two models. Our model shows a mild tension compared to \(\Lambda\)CDM because the modified gravity model has more degrees of freedom in its parameter space than \(\Lambda\)CDM. To explore our model further, we discuss some cosmological applications in the following Section.
Figure 4: The blue line represents the distance modulus profile of the power-law \(f(Q)\) model with the constraint values of \(H_{0},\Omega_{m0}\), \(\omega_{0},\omega_{a},n,\gamma\). The blue dots with the green bars represent the Pantheon+SHOES sample, and the black dotted line represents the distance modulus profile of the \(\Lambda\)CDM model.
## V Cosmological applications
In this Section, we discuss some cosmological applications of our theoretical \(f(Q)\) model, and we examine its current dynamical status. In this respect, we investigate the basic cosmographic parameters, the matter distribution profiles, and the dark energy profiles, respectively.
Figure 5: The marginalized constraints on the parameters \(H_{0},\Omega_{m0}\), \(\omega_{0},\omega_{a},n,\gamma\) of our model using the CC+Pantheon+SHOES sample. The dark-shaded regions present the \(1-\sigma\) confidence level (CL), and the light-shaded regions present the \(2-\sigma\) confidence level. The constraint values for the parameters are presented at the \(1-\sigma\) CL.
### Cosmographic parameters
The cosmographic parameters are a mathematical tool based on the cosmic scale factor and its derivatives. Using the behavior of these parameters, one can investigate the present, low-redshift behavior of cosmological models, and predict their future evolution. Therefore, we consider the profiles of the Hubble, deceleration, jerk, and snap parameters to present the dynamical status of our model. The mathematical expressions of these parameters are as follows:
\[q(z)=\Omega_{r}+\frac{1}{2}\Omega_{m}(z)+\frac{1+3\omega_{de}}{2}\Omega_{de}(z), \tag{55}\]
\[j(z)=q(z)(2q(z)+1)+(1+z)q^{\prime}(z), \tag{56}\]
\[s(z)=-(1+z)j^{\prime}(z)-2j(z)-3j(z)q(z). \tag{57}\]
Here, (\({}^{\prime}\)) denotes the first derivative with respect to \(z\).
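A numerical sketch of Eqs. (55)–(57) is given below. To avoid re-deriving \(\omega_{de}(z)\), it starts from the kinematic relation \(q=-1+(1+z)H^{\prime}/H\), which coincides with Eq. (55) at the background level, and approximates the derivatives by central differences; the `E2` helper is the one sketched after Eq. (45).

```python
import numpy as np

def q_of_z(z, h=1e-4):
    """Deceleration parameter via q = -1 + (1+z) H'/H (kinematic form)."""
    H = lambda x: np.sqrt(E2(x))
    dH = (H(z + h) - H(z - h)) / (2.0 * h)
    return -1.0 + (1.0 + z) * dH / H(z)

def j_of_z(z, h=1e-4):
    """Jerk parameter, Eq. (56), with q'(z) by central differences."""
    q = q_of_z(z)
    dq = (q_of_z(z + h) - q_of_z(z - h)) / (2.0 * h)
    return q * (2.0 * q + 1.0) + (1.0 + z) * dq

def s_of_z(z, h=1e-4):
    """Snap parameter, Eq. (57), with j'(z) by central differences."""
    j, q = j_of_z(z), q_of_z(z)
    dj = (j_of_z(z + h) - j_of_z(z - h)) / (2.0 * h)
    return -(1.0 + z) * dj - 2.0 * j - 3.0 * j * q
```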
#### v.1.1 The Hubble parameter
In the previous Section, we have presented the evolution profile of the Hubble parameter with the constraint values of the free parameters. Here, we consider the ratio \(H_{\rm Q}(z)/H_{\Lambda CDM}(z)\) in order to quantify the difference between the two models. In Fig. 6 we plot the redshift dependence of this ratio. At low redshifts, for example at \(z=0.2\), the difference between the two models is of the order of 0.0003%, 7.06%, and 5.58% for the CC, Pantheon+SHOES, and CC+Pantheon+SHOES samples, respectively.
The differences between the models increase at high redshift, so that at \(z=2.0\) they are of the order of 0.003%, 27.21%, and 22.98% for the CC, Pantheon+SHOES, and CC+Pantheon+SHOES samples, respectively.
#### v.1.2 The deceleration, jerk and snap parameters
Furthermore, we have depicted the profiles of the deceleration, jerk, and snap parameters with the constraint values of the free parameters for the various observational datasets in Figs. 7, 8, and 9, respectively.
The deceleration parameter. From the redshift profile of the deceleration parameter, one can clearly see that the evolution of our model started from a decelerated phase and, after going through the matter-dominated era, is currently in an accelerating stage. In addition, we have found that the present values of the deceleration parameter, \(q_{0}=-0.532,\ -0.717,\ -0.744\) for the CC, Pantheon+SHOES, and CC+Pantheon+SHOES samples, respectively, are aligned with the recent observational results [70, 71, 72].
Jerk and snap parameters. The evolution of the jerk and snap parameters for the present model is presented in Figs. 8 and 9, respectively. We have also obtained the parametric plot \(q-j\) in the redshift range \(z\in[-1,2.5]\) in Fig. 10. In addition, we have presented the \(1-\sigma\) CL values of the deceleration, jerk, and snap parameters in Table 4. The present-day value of the jerk parameter is, for all the observational samples, close to the \(\Lambda\)CDM value.
\begin{table}
\begin{tabular}{|c c c c c c c|} \hline \hline Model & \(\chi^{2}_{min}\) & red. \(\chi^{2}\) & AIC & \(\Delta\) AIC & BIC & \(\Delta\) BIC \\ \hline \multicolumn{7}{|c|}{CC} \\ \(\Lambda\)CDM & 16.07 & 0.64 & 20.07 & 0 & 22.93 & 0 \\ Power-law & 16.06 & 0.64 & 28.06 & 7.98 & 36.66 & 13.72 \\ \multicolumn{7}{|c|}{Pantheon+SHOES} \\ \(\Lambda\)CDM & 1696.84 & 1.0 & 1700.84 & 0 & 1719.15 & 0 \\ Power-law & 1683.20 & 0.99 & 1695.20 & 5.63 & 1727.83 & 8.6 \\ \multicolumn{7}{|c|}{CC+Pantheon+SHOES} \\ \(\Lambda\)CDM & 1712.9 & 1.0 & 1716.90 & 0 & 1735.28 & 0 \\ Power-law & 1699.33 & 0.99 & 1711.33 & 5.5 & 1744.07 & 8.79 \\ \hline \hline \end{tabular}
\end{table}
Table 3: The corresponding \(\chi^{2}_{min}\) of the models for each sample and the information criteria AIC, BIC for the examined cosmological models, along with the corresponding differences \(\Delta IC_{model}=IC_{model}-IC_{min}\).
#### v.1.3 Dimensionless density parameters
The energy density sources of our Universe evolve in time and play a major role in characterizing its past, present, and future. Here we present the evolution profiles of the dark energy density and of the matter density in Figs. 11 and 12, respectively. From these Figures, one can observe that matter dominated the energy budget of our Universe at early times, whereas the dark energy density dominates the current phase. Dark energy is also responsible for the present acceleration of the Universe. The present-day values of the dark energy density parameter are \(0.685^{+0.010}_{-0.013}\), \(0.8076^{+0.0037}_{-0.0036}\), and \(0.8064^{+0.0024}_{-0.0023}\), with \(1-\sigma\) errors, for the CC, Pantheon+SHOES, and CC+Pantheon+SHOES samples, respectively. We also present
Figure 8: Evolution of jerk parameter \(j\) as a function of the redshift variable \(z\) for the constraint values of \(H_{0},\Omega_{m0}\), \(\omega_{0},\omega_{a},n\,\gamma\) for the CC, Pantheon+SHOES, and CC+Pantheon+SHOES samples.(Here the profile of the jerk parameter for CC and Pantheon+SHOES samples overlaps each other.)
Figure 10: Parametric plot of \(q=q(j)\) in the redshift range \(z\in[-1,2.5]\) with the constraint values of \(H_{0},\Omega_{m0}\), \(\omega_{0},\omega_{a},n\,\gamma\) for the CC, Pantheon+SHOES, and the CC+Pantheon+SHOES samples. The orange, blue, and cyan color points represent the present value of the pair \((j_{0},q_{0})\) for the respective samples.
Figure 6: Evolution of the ratio \(H_{Q}(z)/H_{\Lambda CDM}(z)\) as a function of the redshift variable \(z\) for the constraint values of \(H_{0},\Omega_{m0}\), \(\omega_{0},\omega_{a},n,\gamma\) for the CC, Pantheon+SHOES, and the CC+Pantheon+SHOES samples.
Figure 7: Evolution of the deceleration parameter as functions of the redshift variable \(z\) for the constraint values of \(H_{0},\Omega_{m0},\omega_{0},\omega_{a},n\,\gamma\) for CC, Pantheon+SHOES, CC+Pantheon+SHOES samples.
Figure 9: Evolution of the snap parameter \(s\) as a function of the redshift variable \(z\) for the constraint values of \(H_{0},\Omega_{m0},\omega_{0},\omega_{a},n\,\gamma\) for the CC, Pantheon+SHOES, and CC+Pantheon+SHOES samples.
the constraint values of the matter density and of the dark energy density in Tables 2 and 4, at the 68% and 95% confidence levels. In addition, the energy densities satisfy the relation \(\Omega_{m}+\Omega_{de}\simeq 1\) during the entire period of their evolution. The dynamical profiles of the two fluids also suggest that dark energy will continue to dominate our Universe in the near future.
#### v.1.4 Om diagnostics
The \(Om\) diagnostic is used to analyze the difference between the standard \(\Lambda\)CDM model and other dark energy models. \(Om\) is more convenient than the statefinder diagnostic [73], as it involves only the Hubble parameter, and hence only the first-order time derivative of the cosmic scale factor \(a(t)\). For a spatially flat Universe, it is defined as
\[Om(x)=\frac{\mathcal{H}(x)^{2}-1}{(1+z)^{3}-1},\;\;x=1+z,\mathcal{H}(x)=H(x)/H_ {0}, \tag{58}\]
where \(z\) is the redshift, and \(H_{0}\) is the present-day value of the Hubble parameter. For the dark energy model with the constant equation of state \(\omega\),
\[\mathcal{H}^{2}(x)=\Omega_{m0}x^{3}+(1-\Omega_{m0})x^{\delta},\;\;\delta=3(1+\omega). \tag{59}\]
Now, we can rewrite \(Om(x)\) as
\[Om(x)=\Omega_{m0}+(1-\Omega_{m0})\frac{x^{\delta}-1}{x^{3}-1}. \tag{60}\]
For the \(\Lambda\)CDM model, we find
\[Om(x)=\Omega_{m0}, \tag{61}\]
whereas \(Om(x)<\Omega_{m0}\) in phantom cosmology with \(\delta<0\), while \(Om(x)>\Omega_{m0}\) in the quintessence models with \(\delta>0\). These results show that: \(Om(x)-\Omega_{m0}=0\), if dark energy is a cosmological constant [73].
In other words, the \(Om\) diagnostic gives us a _null test_ of the cosmological constant. As a consequence, \(\mathcal{H}^{2}(x)\) provides a straight line against \(x^{3}\), with the constant slope \(\Omega_{m0}\), for \(\Lambda\)CDM, a result which can be verified by using equation (59). For other dark energy models \(Om(x)\) is curved, because
\[\frac{d\mathcal{H}^{2}(x)}{dx^{3}}\neq\text{constant}. \tag{62}\]
Furthermore, for \(x_{1}<x_{2}\), \(Om(x_{1},x_{2})\equiv Om(x_{1})-Om(x_{2})=0\) in \(\Lambda\)CDM, whereas \(Om(x_{1},x_{2})<0\) in phantom models, and \(Om(x_{1},x_{2})>0\) in quintessence cosmology. This test helps us with the interpretation of the observational measurements, and also provides a null test of the \(\Lambda\)CDM model. In addition, one can check that \(Om(x)\to 0\) as \(z\rightarrow-1\) for quintessence; that \(Om(x)\) diverges at \(z<0\) in phantom cosmology, suggesting a 'big rip' future singularity; and that \(\Lambda\)CDM approaches the de Sitter spacetime at late times.
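The null test is straightforward to implement; a minimal sketch (again reusing the `E2` helper) is given below. With \(\omega_{0}=-1\), \(\omega_{a}=0\) and the flatness closure \(\Omega_{r0}+\Omega_{m0}+\gamma(1-2n)=1\), the dark energy term of Eq. (44) is constant, and \(Om(x)\) reduces to \(\Omega_{m0}\) up to the small radiation contribution, as in Eq. (61).

```python
import numpy as np

def om_diagnostic(z, **model_kw):
    """Om(x) of Eq. (58), with x = 1 + z and curly H = H/H0 from Eq. (44)."""
    x = 1.0 + z
    return (E2(z, **model_kw) - 1.0) / (x ** 3 - 1.0)

Om0, Or0, n = 0.20, 1e-4, -1.0 / 3.0
gamma_flat = (1.0 - Om0 - Or0) / (1.0 - 2.0 * n)   # closes E2(0) = 1
z = np.array([0.5, 1.0, 2.0])        # avoid z = 0, where Eq. (58) is 0/0
print(om_diagnostic(z, Om0=Om0, Or0=Or0, n=n, gamma=gamma_flat,
                    w0=-1.0, wa=0.0))  # ~Omega_m0: the LCDM null test (61)
```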
We have examined the \(Om\) diagnostic profiles of our \(f(Q)\) model with the constraint values of the parameters, and we have presented the results in Fig. 13. One can observe that, at \(z=0\), \(Om(x_{1},x_{2})<0\), which means that the dark energy candidate of our model shows a phantom-type behavior. However, at late times, \(Om(x)\to 0\) as \(z\rightarrow-1\), and the model has quintessence-like properties.
Figure 11: Profiles of the dark energy density parameter \(\Omega_{de}\) as a function of the redshift variable \(z\) for the constraint values of \(H_{0},\Omega_{m0},\,\omega_{0},\,\omega_{a},n,\gamma\) for the CC, Pantheon+SHOES, and CC+Pantheon+SHOES samples.
Figure 12: Profiles of the matter-energy density parameter \(\Omega_{m}\) as a function of the redshift variable \(z\) for the constraint values of \(H_{0},\Omega_{m0},\,\omega_{0},\,\omega_{a},n,\gamma\) for the CC, Pantheon+SHOES, and CC+Pantheon+SHOES samples.
## VI Conclusion
In the present paper, we have investigated in detail the cosmological properties of a particular \(f(Q)\) gravity model, with the function \(f(Q)\) given by \(f(Q)=Q+6\gamma H_{0}^{2}(Q/Q_{0})^{n}\). The \(f(Q)\) theory is an interesting and fundamental approach to the description of gravitational phenomena, in which the gravitational interaction is fully characterized by the non-metricity \(Q\) of the space-time, defined in a general functional framework. \(f(Q)\) gravity is one important component of the "geometric trinity of gravity", and offers a full and convincing alternative to the curvature description of the gravitational interaction used in standard general relativity, which has been so successful in describing gravity. From a geometric and mathematical point of view, \(f(Q)\) gravity uses the Weylian extension of Riemann geometry, in which one of the fundamental prescriptions of Riemannian geometry, the metricity condition, is no longer valid. The breaking of the metricity condition is thus the source of the gravitational phenomena, with the non-metricity scalar \(Q\) playing a role analogous to that of the Ricci scalar in general relativity. In an action formulation, for \(f(Q)=Q\) we exactly recover standard general relativity. In our study we have restricted our analysis to a specific form of the function \(f(Q)\), in which the deviations from standard general relativity are described by a power-law function of the non-metricity \(Q\). After writing down the field equations of the \(f(Q)\) theory in a general form, we have considered a specific dark energy model, in which the effective dark energy density and its effective pressure, both geometric in their origin, are related by a linear, barotropic-type equation of state, with a redshift-dependent EoS parameter \(\omega_{de}=\omega_{de}(z)\). For \(\omega_{de}\) we have adopted the first-order CPL parameterization, which can be extensively used for the observational testing of cosmological models. Moreover, we have restricted our basic model by imposing the energy conservation of each of the considered components of the Universe: radiation, matter, and dark energy, respectively. This procedure allows the determination of the Hubble function in terms of the three \(f(Q)\) model parameters \(H_{0}\), \(\gamma\), and \(n\). However, for a full comparison with the observational data, one must extend the parameter space by including the two parameters of the CPL equation of state of the dark energy.
To confront the power-law \(f(Q)\) model with observations, several datasets containing cosmological data have been used. In particular, we have analyzed the model with respect to the Cosmic Chronometer (CC) dataset, as well as with the Pantheon+SHOES database. As a first step in our investigation we have performed
Figure 13: Profiles of the \(Om\) diagnostic parameter as a function of \(1+z\) for the constraint values of \(H_{0}\), \(\Omega_{m0}\), \(\omega_{0}\),\(\omega_{a}\), \(n\), \(\gamma\) for the CC, Pantheon+SHOES, and CC+Pantheon+SHOES samples.
\begin{table}
\begin{tabular}{c c c c c} Model & \(q_{0}\) & \(j_{0}\) & \(s_{0}\) & \(\Omega_{de0}\) \\ \hline \multicolumn{5}{c}{CC sample} \\ \(\Lambda\)CDM & \(-0.523\pm 0.0345\) & \(1\pm(<{\cal O}(10^{-16}))\) & \(-0.431\pm 0.1035\) & \(0.682\pm 0.034\) \\ Power-law & \(-0.532^{+0.077}_{-0.070}\) & \(1.001^{+0.298}_{-0.258}\) & \(-0.439^{+0.469}_{-0.278}\) & \(0.685^{+0.010}_{-0.013}\) \\ & & Pantheon+SHOES sample & & \\ \(\Lambda\)CDM & \(-0.4255\pm 0.033\) & \(1\pm(<{\cal O}(10^{-16}))\) & \(-0.7235\pm 0.099\) & \(0.617\pm 0.022\) \\ Power-law & \(-0.717^{+0.017}_{-0.017}\) & \(1.006^{+0.035}_{-0.035}\) & \(0.108^{+0.075}_{-0.071}\) & \(0.8076^{+0.0037}_{-0.0036}\) \\ & & CC+Pantheon+SHOES sample & & \\ \(\Lambda\)CDM & \(-0.487\pm 0.0285\) & \(1\pm(<{\cal O}(10^{-16}))\) & \(-0.539\pm 0.0855\) & \(0.658\pm 0.019\) \\ Power-law & \(-0.744^{+0.015}_{-0.015}\) & \(1.06^{+0.023}_{-0.038}\) & \(0.198^{+0.011}_{-0.413}\) & \(0.8064^{+0.0024}_{-0.0023}\) \\ \end{tabular}
\end{table}
Table 4: Present-day values of the cosmological parameters \(q_{0}\), \(j_{0}\), \(s_{0}\) and \(\Omega_{de0}\) as predicted by the power law \(f(Q)\) model for different data samples with 68% confidence level.
an MCMC analysis of the model, and obtained the optimal values of the model parameters. Then, by using these values, we have considered the general cosmological properties of this particular \(f(Q)\)-type theory. Generally, the MCMC analysis of all three combinations of data sets indicates a value of \(n\) of the order of \(n\approx-0.36\), or, approximately, \(n=-1/3\). Hence, the dependence of the function \(F(Q)\) on \(Q\) is of the form \(F(Q)\propto Q^{-1/3}\), that is, \(F\) decreases with increasing non-metricity. This interesting result raises the problem of explaining this particular value of \(n=-1/3\), obtained here phenomenologically, through a more detailed theoretical approach.
The deviations from standard general relativity are described by the parameter \(\gamma\), which turns out to be important, with values of the order of \(\gamma\approx 0.45\). This indicates a large departure from the Riemannian-geometry-based general relativity (in the absence of a cosmological constant), but clearly shows the possibility of describing dark energy within this type of \(f(Q)\) model. The comparison with the observational data on the Hubble parameter indicates a very good concordance between the \(f(Q)\) model, \(\Lambda\)CDM and the observations up to a redshift of \(z\approx 1\), with some deviations appearing at higher redshifts. The AIC analysis also confirms the existence of a mild tension between the present model and the \(\Lambda\)CDM predictions, but to obtain a definite answer to this question more observational data, spread over a larger redshift range, are necessary. The values of the two free parameters \(\omega_{0}\) and \(\omega_{a}\) of the CPL-type dark energy equation of state indicate that \(\omega_{0}\approx-1\), and hence, at least at small redshifts, the present model mimics a cosmological constant. The correction term \(\omega_{a}\), giving the higher-order redshift corrections, is very small, of the order of \(\omega_{a}\approx-0.01\), indicating that an effective cosmological constant, obtained from the Weyl geometric structure of the theory, gives the best description of the observational data.
We have also performed a detailed investigation of several other cosmological parameters by using the optimal values of the \(f(Q)\) model parameters. Our analysis indicates the presence of several important differences with respect to the \(\Lambda\)CDM model, differences whose relevance may be addressed once the precision and the number of observational data significantly increase.
The \(f(Q)\) theory of gravity can also be extended to include, together with ordinary matter, scalar or other physical fields in the action. The present power-law \(f(Q)\) model may have other possible applications, like, for example, inflation in the presence of both scalar fields and non-metricity, an approach that may lead to a new view on the gravitational, geometrical and cosmological processes that shaped and influenced the dynamics of the very early Universe. Another major topic of research would be the investigation of structure formation in the power-law \(f(Q)\) theory, which could be done with the use of a background cosmological metric, obtained by solving exactly or approximately the cosmological evolution equations. In this case the BAO, SNIa, and CMB shift parameter data could be investigated to obtain important physical and cosmological constraints for the power-law \(f(Q)\) model. This approach may lead to a detailed investigation and analysis of the cosmic structure formation processes, by providing a new perspective on these processes and on the role of Weyl non-metricity. Another direction of research would be to obtain the Newtonian and post-Newtonian approximations of the present power-law \(f(Q)\) gravity, and to find out what constraints the classic Solar System tests impose on the free parameters of the theory, and whether these constraints are consistent with the cosmological observations. The Newtonian and post-Newtonian limits may also prove to be extremely useful in obtaining physical constraints from a large body of astrophysical observations.
To conclude, in our work we have developed a particular version of the \(f(Q)\) theory, with the functional form of \(f\) given by a simple power-law function, and we have proven its consistency with the cosmological observations, as well as its usefulness as a theoretical tool for understanding the accelerating expansion of the Universe. The obtained results also suggest the necessity of studying further extensions and generalizations of this simple \(f(Q)\)-type model. Our results have shown that the present power-law model may represent an interesting geometric alternative to dark energy, going beyond the Riemannian mathematical structure of general relativity, in which the non-metric properties of the space-time may offer the clue to a deeper understanding of the gravitational interaction. In the present study we have proposed some basic theoretical tools and observational/statistical procedures for the investigation of the fundamental geometric aspects of gravity, from a perspective different from the Riemannian one, and of their cosmological applications.
## Acknowledgements
S.M. acknowledges Transilvania University of Brasov for the Transilvania Fellowship for postdoctoral research. SP & PKS acknowledge the National Board for Higher
Mathematics (NBHM) under the Department of Atomic Energy (DAE), Govt. of India for financial support to carry out the Research project No.: 02011/3/2022 NBHM(R.P.)/R & D II/2152 Dt.14.02.2022. The work of TH is supported by a grant of the Romanian Ministry of Education and Research, CNCS-UEFISCDI, project number PN-III-P4-ID-PCE-2020-2255 (PNCDI III).
|
2309.12557 | Triple-View Knowledge Distillation for Semi-Supervised Semantic
Segmentation | To alleviate the expensive human labeling, semi-supervised semantic
segmentation employs a few labeled images and an abundance of unlabeled images
to predict the pixel-level label map with the same size. Previous methods often
adopt co-training using two convolutional networks with the same architecture
but different initialization, which fails to capture the sufficiently diverse
features. This motivates us to use tri-training and develop the triple-view
encoder to utilize the encoders with different architectures to derive diverse
features, and exploit the knowledge distillation skill to learn the
complementary semantics among these encoders. Moreover, existing methods simply
concatenate the features from both encoder and decoder, resulting in redundant
features that require large memory cost. This inspires us to devise a
dual-frequency decoder that selects those important features by projecting the
features from the spatial domain to the frequency domain, where the
dual-frequency channel attention mechanism is introduced to model the feature
importance. Therefore, we propose a Triple-view Knowledge Distillation
framework, termed TriKD, for semi-supervised semantic segmentation, including
the triple-view encoder and the dual-frequency decoder. Extensive experiments
were conducted on two benchmarks, i.e., Pascal VOC 2012 and Cityscapes, whose
results verify the superiority of the proposed method with a good tradeoff
between precision and inference speed. | Ping Li, Junjie Chen, Li Yuan, Xianghua Xu, Mingli Song | 2023-09-22T01:02:21Z | http://arxiv.org/abs/2309.12557v1 | # Triple-View Knowledge Distillation for Semi-Supervised Semantic Segmentation
###### Abstract
To alleviate the expensive human labeling, semi-supervised semantic segmentation employs a few labeled images and an abundance of unlabeled images to predict the pixel-level label map with the same size. Previous methods often adopt co-training using two convolutional networks with the same architecture but different initialization, which fails to capture the sufficiently diverse features. This motivates us to use tri-training and develop the triple-view encoder to utilize the encoders with different architectures to derive diverse features, and exploit the knowledge distillation skill to learn the complementary semantics among these encoders. Moreover, existing methods simply concatenate the features from both encoder and decoder, resulting in redundant features that require large memory cost. This inspires us to devise a dual-frequency decoder that selects those important features by projecting the features from the spatial domain to the frequency domain, where the dual-frequency channel attention mechanism is introduced to model the feature importance. Therefore, we propose a Triple-view Knowledge Distillation framework, termed TriKD, for semi-supervised semantic segmentation, including the triple-view encoder and the dual-frequency decoder. Extensive experiments were conducted on two benchmarks, i.e., Pascal VOC 2012 and Cityscapes, whose results verify the superiority of the proposed method with a good tradeoff between precision and inference speed.
Semantic segmentation, semi-supervised learning, knowledge distillation, channel-wise attention, triple-view encoder.
## I Introduction
Semantic segmentation predicts the pixel-level label for an image, and has many applications, e.g., autonomous driving [1] and scene understanding [2]. While much progress has been achieved by previous fully-supervised methods, they require large human costs for the pixel-level labeling of a huge number of images. This encourages the exploration of semi-supervised semantic segmentation [3], which employs only a few labeled images and plentiful unlabeled images during model training. Existing methods are roughly divided into two categories, namely _self-training_[4] and _consistency regularization_[3]. The former employs the labeled samples to build a teacher network that generates the pseudo-labels of the unlabeled samples, which are added to the training set for learning a student network. It progressively updates the initial model by iteration, and the pseudo-labels with possible noise may result in noisy training samples, thus yielding an inferior student network. By contrast, the latter emphasizes the output consistency of the model under various perturbations, including image-level [5], feature-level [6], and network-level [3, 7] perturbations. This work belongs to the latter and focuses on the network-level perturbation.
Existing network-level methods usually train two models with the same structure but different initialization, which can be regarded as co-training [8]. In particular, the output of one model supervises the other as a pseudo-label, but this cannot guarantee diverse and complementary outputs from the two models, due to their possible inconsistency during training. Moreover, semantic segmentation desires a large receptive field to capture the global context of the image, which is not well satisfied by previous methods [3, 7] that adopt only a Convolutional Neural Network (CNN) [9] as the backbone. Worse still, convolutional networks have inductive biases, i.e., locality and translation equivariance [10, 11], which limit the receptive field of the model and constrain its ability to capture the global context by reflecting long-range pixel dependencies [11].
Therefore, to enhance the feature diversity and enlarge the receptive field, as depicted in Fig. 1, we adopt the tri-training strategy and introduce the triple-view neural networks as the encoder, which consist of a pure ConvNet [12], a pure Vision Transformer (ViT) [10], and a hybrid ConvNet-ViT [13]. Among them, the pure ConvNet reveals the image details by capturing the local spatial structure, facilitating the segmentation of small or tiny objects; the pure ViT models the global semantic context of the image by employing the long-range pixel-level dependency; the hybrid ConvNet-ViT takes advantage of the two complementary structures that encode both the local and global spatial relations of image pixels. Meanwhile, we adopt the knowledge distillation [14] skill to transfer the knowledge from the ConvNet and the ViT to the hybrid ConvNet-ViT by feature learning, i.e., empowering the low-level features with the ability to reflect the local details, and the high-level features to capture the global context of the image. During inference, we only use the hybrid ConvNet-ViT as the encoder, which saves substantial computational resources and allows the model to be easily deployed.

Fig. 1: Motivation of TriKD. Left: two-view encoder with the same backbone; right: triple-view encoder (ours) using three kinds of backbones. The outputs of the former tend to be less diverse than those of the latter.
Moreover, existing semi-supervised semantic segmentation methods often use fully-supervised models [15, 16, 17] for training, but these incur expensive computational overheads. In addition, many of them [4, 17] use U-Net [18] or the Feature Pyramid Network (FPN) [19] as the decoder. For example, U-Net based methods simply concatenate the features from both encoder and decoder, which possibly suffers from feature redundancy that takes up large memory and has an adverse effect on segmentation. This inspires the FPN based methods to apply a \(1\times 1\) 2D convolution on the encoding features to reduce the redundancy by dimensionality reduction. However, such a convolution operation may cause the loss of important spatial information, such as low-level details and high-level semantics, since it treats the features across all channels equally when reducing the feature dimension.
To overcome this shortcoming, we develop the Dual-Frequency (DF) decoder, which models the importance of the encoding features before dimensionality reduction. In particular, the DF decoder projects the features from the spatial domain to the frequency domain by the fast Fourier transform [20], and the feature importance is modeled by computing the channel-wise confidence scores of the dual-frequency encoding features. It utilizes a Gaussian high- or low-pass filter to drop those less informative features. By this means, the model keeps the more contributing features and abandons the inferior ones, making the model more lightweight.
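To make the frequency-domain step concrete, the sketch below splits a feature map into its low- and high-frequency parts with a Gaussian mask; this only illustrates the FFT filtering idea, with a hypothetical bandwidth parameter `sigma`, and omits the channel-wise attention that the DF decoder applies on top of the split.

```python
import torch

def gaussian_lowpass_mask(h, w, sigma):
    """Gaussian low-pass mask on the centered frequency plane; the
    high-pass counterpart is its complement."""
    yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    d2 = (yy - h // 2) ** 2 + (xx - w // 2) ** 2
    return torch.exp(-d2 / (2.0 * sigma ** 2))

def dual_frequency_split(feat, sigma=8.0):
    """Split a (B, C, H, W) feature map into low/high-frequency parts
    via the fast Fourier transform."""
    spec = torch.fft.fftshift(torch.fft.fft2(feat), dim=(-2, -1))
    mask = gaussian_lowpass_mask(feat.shape[-2], feat.shape[-1], sigma)
    low = torch.fft.ifft2(torch.fft.ifftshift(spec * mask, dim=(-2, -1))).real
    return low, feat - low  # low- and high-frequency components

f_low, f_high = dual_frequency_split(torch.randn(2, 64, 32, 32))
```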
Therefore, this work presents a Triple-view Knowledge Distillation (termed **TriKD**) framework for semi-supervised semantic segmentation, as shown in Fig. 2. Specifically, TriKD mainly consists of the triple-view encoder and the dual-frequency decoder.
The main contributions are summarized as follows:
* A triple-view knowledge distillation framework with dual-frequency decoding is developed for semi-supervised semantic segmentation, by considering both local spatial details and the global context of images.
* The triple-view encoder employs pure ConvNet, pure ViT, and hybrid ConvNet-ViT to learn both local and global features. Besides, the knowledge distillation strategy is applied to transfer the complementary semantics from the former two modules to hybrid ConvNet-ViT.
* The dual-frequency channel attention mechanism is used to model the importance of both the low and the high frequency features by using fast Fourier transform with Gaussian low/high pass filter.
* Empirical studies were conducted on two benchmarks, including Pascal VOC 2012 dataset [21] and Cityscapes [22]. Both quantitative and qualitative results have verified the promising segmentation performance of the proposed framework with much less inference time.
The rest of this paper is organized as follows. Section II reviews some closely related works and Section III introduces our semi-supervised semantic segmentation framework. Then, we report the experimental results on two benchmarks to demonstrate the advantage of our method in Section IV. Finally, we conclude this work in Section V.
## II Related Work
Recent years have witnessed remarkable progress in semantic segmentation. Here, we briefly discuss two settings, involving _supervised_ and _semi-supervised_ scenarios.
### _Supervised Semantic Segmentation_
Most supervised semantic segmentation works adopt deep learning models, and the milestone is the Fully-Convolutional Network (FCN) [23], which was applied to semantic segmentation and outperformed the traditional methods. Recently, many variants of FCN have emerged; e.g., Chen _et al._[16, 24] present the Atrous Spatial Pyramid Pooling (ASPP) module to enlarge the receptive field of the convolutional layers; Zhao _et al._[15] devise the pyramid pooling module to segment objects of various sizes. While these methods can capture multi-scale context, it is still hard for them to model global semantics, as convolution layers often reveal only the local spatial patterns of the data. This motivates Wang _et al._[25] to develop the non-local block, which uses the self-attention mechanism [26] to model the long-range context among image pixels.
Due to the inductive bias in CNNs [10, 11], recent works tend to adopt sequence-to-sequence models to capture the long-range dependency in an image. For example, Zheng _et al._[11] employ the vanilla ViT [10] as the encoder to learn multi-level features, which are upsampled progressively with feature aggregation by the decoder; Wang _et al._[27] use the channel-wise cross fusion transformer to model the multi-scale context, which reduces the semantic gap between encoder and decoder. However, the above methods require very expensive human labeling costs, since all pixels of the training images have to be annotated in fully-supervised semantic segmentation; e.g., fine-labeling one image in the Cityscapes [22] database costs 1.5 hours on average.

Fig. 2: Overall framework of the Triple-view Knowledge Distillation (TriKD) framework for semi-supervised semantic segmentation. “CE” denotes cross-entropy loss, “decoder” adopts the dual-frequency strategy.
### _Semi-supervised Semantic Segmentation_
In semi-supervised setting, there are only a few labeled samples and a large number of unlabeled samples for semantic segmentation. Existing methods are generally divided into two groups, including _self-training_ and _consistency regularization_.
**Self-training**. It expands the training set by generating pseudo-labeled samples. In particular, the self-training methods use the labeled samples to train a teacher network, which is employed to yield the pseudo-labels of the unlabeled samples; the pseudo-labeled samples are then added to the training set to learn a student network. For example, Yuan _et al._[28] propose the distribution-specific batch normalization to alleviate the statistical bias problem caused by the large distribution difference incurred by strong data augmentation; Yang _et al._[5] apply strong data augmentations to unlabeled samples to circumvent the overfitting issue of noisy labels, as well as to decouple similar predictions between the teacher and student networks, and perform re-training by giving priority to reliable unlabeled samples with high holistic prediction stability; Wang _et al._[4] argue that every pixel matters to the model, and employ the prediction entropy to divide the pixels into reliable and unreliable sets, both of which are used to train the model; Ke _et al._[29] adopt a three-stage solution to extract pseudo-mask information from unlabeled data and enforce segmentation consistency in a multi-task fashion. However, the self-training methods iteratively learn the model by themselves, and the possibly noisy pseudo-labeled training samples make it difficult to obtain a good student network.
**Consistency regularization**. It applies various perturbations to the model during training and constrains the outputs under different perturbations to be consistent, such that within-class samples are drawn closer and between-class samples are pushed far apart, avoiding overfitting to some degree. Generally, the typical perturbations are categorized into image-level, feature-level, and network-level types. For the image-level type, Zou _et al._[30] apply both strong and weak data augmentation to images and then feed them to the same network, and the output of the weak branch is used to supervise the strong branch, since training the weak branch is more stable; French _et al._[31] adopt the CutMix data augmentation to combine two images into one by a rectangular mask and add them to the training set. For the feature-level type, Ouali _et al._[6] present the Cross-Consistency Training (CCT) strategy, which trains the model on the labeled data with the primary encoder, and then feeds the encoding features with various perturbations to the corresponding decoders, whose outputs are subject to a consistency constraint. For the network-level type, Tarvainen _et al._[32] obtain two augmented images by adding Gaussian noise; one image is directly input to the model, whose parameters are updated by the Exponential Moving Average (EMA) skill, leading to the EMA network; the other image is fed into the EMA network, and its output is treated as the target of the original network output, while the consistency constraint is imposed on the two outputs by the least-squares error loss. In addition, Ke _et al._[7] propose the Guided Collaborative Training (GCT) method, which feeds one image to two networks with the same architecture but different initialization to yield two outputs, one supervising the other. Furthermore, Chen _et al._[3] improve GCT by presenting the Cross Pseudo Supervision (CPS) approach, which adopts the one-supervises-the-other strategy with the cross-entropy loss as the consistency constraint; Fan _et al._[33] train two parallel networks via intersection supervision using high-quality pseudo-labels and union supervision using large-quantity pseudo-labels; Wang _et al._[34] adopt a two-branch co-training framework that enforces two sub-nets to learn distinct features by a discrepancy loss. Nevertheless, many previous methods consider two-view co-training with the same backbone, leading to less diverse features, and fail to select the important features at the decoding stage, leading to large computational costs and lower inference speed.
## III The Proposed Method
This section describes the proposed Triple-view Knowledge Distillation (TriKD) framework as shown in Fig. 2, which consists of triple-view encoder and dual-frequency decoder. Triple-view encoder includes pure ConvNet, pure ViT, and hybrid ConvNet-ViT, where the former two act as teacher networks and the last one is student network for knowledge distillation during training. Dual-frequency decoder adopts the channel attention to select those important features in both the low and the high frequency domain. During inference, only the hybrid ConvNet-ViT is used for encoding features, so as to speed up segmentation.
### _Problem Definition_
The semi-supervised semantic segmentation task gives pixel-level predictions by employing \(N_{l}\) labeled images \(\mathcal{D}_{l}=\{(\mathbf{X}_{i}^{l},\mathbf{Y}_{i}^{l})\}_{i=1}^{N_{l}}\) and \(N_{u}\) unlabeled images \(\mathcal{D}_{u}=\{\mathbf{X}_{j}^{u}\}_{j=1}^{N_{u}}\), where \(N_{l}\ll N_{u}\), and \(\{\mathbf{X}_{i}^{l},\mathbf{X}_{j}^{u}\}\in\mathbb{R}^{H\times W\times 3}\) denote the \(i\)-th labeled and the \(j\)-th unlabeled RGB image, respectively; \(H\) denotes the height and \(W\) the width; \(\mathbf{Y}_{i}^{l}\in\mathbb{R}^{H\times W\times C}\) is the ground-truth label map of the \(i\)-th image, and \(C\) is the class number. For brevity, we omit the subscripts, i.e., \(\{\mathbf{X}^{l},\mathbf{Y}^{l},\mathbf{X}^{u}\}\) denote one labeled image and its label map, and one unlabeled image, respectively.
### _Triple-View Encoder_
To learn diverse and complementary features, we employ the tri-training strategy [35] to train the segmentation model by using three different backbones for feature encoding. In particular, pure ConvNet (ResNet [12]) reveals the local patterns of the data, pure ViT (DeiT [36]) respects the global structure of the data, and hybrid ConvNet-ViT (TinyViT [13]) inherits
the merits of the former two by adopting the knowledge distillation skill. For unlabeled data, we impose the consistency regularization on the predictions from the three networks.
**ConvNet**. Due to the limited receptive field of convolution, it is common to stack multiple convolution layers and pooling layers to enlarge the receptive field. For pure ConvNet, we use ResNet [12] as the backbone, which has four stages with downsampling at each stage. When one image goes through ResNet, we have four feature maps \(\mathbf{F}_{s}^{cnn}\in\mathbb{R}^{H_{s}\times W_{s}\times C_{s}^{cnn}}\), where the subscript \(s\in\{1,2,3,4\}\) denotes the stage, the superscript \(cnn\) denotes the ConvNet, \(H_{s}\times W_{s}\) denotes the resolution of feature map (height\(\times\)width), \(C_{s}^{cnn}\) denotes the channel number. Here, \(\{H_{1},W_{1},C_{1}^{cnn}\}=\{\frac{H}{4},\frac{W}{4},256\}\), \(\{H_{s+1},W_{s+1},C_{s+1}^{cnn}\}=\{\frac{H_{s}}{2},\frac{W_{s}}{2},2\cdot C_{ s}^{cnn}\}\).
**ViT**. Generally, it is crucial to capture the long-range pixel dependency to understand large objects and the global scene in an image. To achieve this, we adopt the light DeiT [36] as the backbone of ViT, which contains multiple self-attention layers to model the global pixel relations. Following [11], one image is divided into many small patches with the size of \(P\times P\times 3\), each of which is linearly projected to a feature embedding vector \(\in\mathbb{R}^{C^{vit}}\). The number of small patches is computed by \(N^{vit}=\frac{H\cdot W}{P^{2}}\), i.e., the length of the patch sequence, and one image has \(N^{vit}\) feature embedding vectors, which compose the patch matrix \(\mathbf{F}_{0}^{vit}\in\mathbb{R}^{N^{vit}\times C^{vit}}\) as the input of ViT. Here, we empirically set \(P\) to 16 and \(C^{vit}\) to 768. The backbone DeiT consists of multiple encoding stages, and each stage contains a series of stacked transformer layers. The transformer layer includes the Multi-head Self-Attention (MSA) [11] module and the Multi-Layer Perceptron (MLP) [37]. At the \(r\)-th stage, it learns the features by computing
\[\hat{\mathbf{F}}_{r}^{vit}=MSA(LN(\mathbf{F}_{r-1}^{vit}))+\mathbf{F}_{r-1}^{ vit}, \tag{1}\]
\[\mathbf{F}_{r}^{vit}=MLP(LN(\hat{\mathbf{F}}_{r}^{vit}))+\hat{\mathbf{F}}_{r}^{ vit}, \tag{2}\]
where \(r\in\{1,2,\cdots,12\}\) indexes the encoding stage, \(LN(\cdot)\) denotes layer normalization, and the feature size keeps the same across stages, i.e., \(\mathbf{F}_{r}^{vit}\in\mathbb{R}^{N^{vit}\times C^{vit}}\).
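The computation in Eqs. (1)-(2) is the standard pre-norm transformer layer. A minimal PyTorch sketch is given below; the layer sizes follow the DeiT-B values quoted in the text (\(C^{vit}=768\), 12 heads), but the class and variable names are illustrative rather than the authors' implementation.

```python
# A minimal sketch of the pre-norm transformer layer in Eqs. (1)-(2).
import torch
import torch.nn as nn

class TransformerLayer(nn.Module):
    def __init__(self, dim: int = 768, num_heads: int = 12, mlp_ratio: int = 4):
        super().__init__()
        self.ln1 = nn.LayerNorm(dim)
        self.msa = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, mlp_ratio * dim), nn.GELU(),
            nn.Linear(mlp_ratio * dim, dim))

    def forward(self, f):                                   # f: (B, N^vit, C^vit)
        h = self.ln1(f)
        f = self.msa(h, h, h, need_weights=False)[0] + f    # Eq. (1)
        f = self.mlp(self.ln2(f)) + f                       # Eq. (2)
        return f
```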
**Hybrid ConvNet-ViT**. To take advantage of both ConvNet and ViT, we adopt TinyViT [13] as the backbone of the hybrid ConvNet-ViT. In particular, it comprises four encoding stages, where the first stage is a convolution layer and the remaining stages are transformer layers. We define the feature map of the first stage as \(\mathbf{F}_{1}^{hyb}\in\mathbb{R}^{H_{1}\times W_{1}\times C_{1}^{hyb}}\), where "hyb" denotes the hybrid ConvNet-ViT and \(C_{1}^{hyb}=64\) denotes the feature channel _a.k.a._ embedding dimension. The feature maps of the remaining three stages are represented by \(\mathbf{F}_{k}^{hyb}\in\mathbb{R}^{L_{k}^{hyb}\times C_{k}^{hyb}}\), where \(k=2,3,4\) indexes the stage, \(C_{k+1}^{hyb}=2\cdot C_{k}^{hyb}\) for \(k=1,2\), \(C_{4}^{hyb}=448\), and \(L_{k}^{hyb}\) denotes the sequence length of the feature embedding. To keep the feature size consistent with that of ResNet, we follow [13] to add a downsampling operation between pairwise stages, and do serialization to satisfy \(L_{2}^{hyb}=H_{2}\cdot W_{2},L_{3}^{hyb}=H_{3}\cdot W_{3},L_{4}^{hyb}=H_{4}\cdot W_{4}\).
### _Knowledge Distillation Scheme_
We adopt the knowledge distillation [14] scheme to transfer the knowledge learned from the larger networks to the smaller one. In particular, we regard the pure ConvNet and the pure ViT as teacher networks while the hybrid ConvNet-ViT is taken as the student network. Here the knowledge denotes the ability to capture local spatial patterns by the convolution network, and that to model the global long-range dependency of pixels by the vision transformer. In other words, we expect the hybrid ConvNet-ViT to learn both the local and the global pixel structures of the image from the teacher networks at the cost of a much smaller model size and lower computations, allowing it to be readily deployed.
As shown in Fig. 2, both the labeled and unlabeled images are fed into the triple-view encoder to yield three feature maps, i.e., the local feature map by the pure ConvNet, the global attention map by the pure ViT, and the distillation feature map by the hybrid ConvNet-ViT. To learn the knowledge from the teacher networks, we impose consistency constraints on the low-level feature maps by the spatial loss (\(\ell_{2}\)-loss) and on the high-level feature maps by the attention loss (Kullback-Leibler divergence), respectively. The former helps the student network reveal the local structure of images while the latter allows the student network to capture semantic features by modeling the global context.
**Low-level distillation**. To capture the local spatial relations of pixels, we conduct the distillation between the low-level layers of the pure ConvNet and those of the student (hybrid) network. In particular, we adopt the convolution operation with a size of \(1\times 1\) to make the channel number of the feature map \(\mathbf{F}_{1}^{hyb}\) from the student network the same as that of the pure ConvNet at the first stage, followed by batch normalization \(BN(\cdot)\) and a nonlinear function, resulting in the low-level distilled feature \(\tilde{\mathbf{F}}_{1}^{hyb}\in\mathbb{R}^{H_{1}\times W_{1}\times C_{1}^{cnn}}\), i.e.,
\[\tilde{\mathbf{F}}_{1}^{hyb}=ReLU(BN(Conv2D(\mathbf{F}_{1}^{hyb}))), \tag{3}\]
where \(Conv2D(\cdot)\) denotes the \(1\times 1\) convolution, and \(ReLU(\cdot)\) is the Rectified Linear Unit activation function. Naturally, the low-level distilled feature is expected to share the local structure of the image encoded by the pure ConvNet. This is achieved by imposing the consistency constraint on the low-level features, i.e., the spatial loss \(\mathcal{L}_{spa}\):
\[\mathcal{L}_{spa}=\frac{1}{H_{1}\cdot W_{1}\cdot C_{1}^{cnn}}\sum_{c=1}^{C_{1} ^{cnn}}\|\mathbf{F}_{1,c}^{cnn}-\tilde{\mathbf{F}}_{1,c}^{hyb}\|_{F}^{2}, \tag{4}\]
where \(\|\cdot\|_{F}\) denotes the Frobenius norm of matrix, and \(c\) indexes the channel feature map \(\{\tilde{\mathbf{F}}_{1,c}^{hyb},\mathbf{F}_{1,c}^{cnn}\}\in\mathbb{R}^{H_{1} \times W_{1}}\). With this constraint, the low-level hybrid layer is able to learn the spatial knowledge from the corresponding ConvNet layer.
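A minimal PyTorch sketch of Eqs. (3)-(4) is given below: the first-stage hybrid feature is projected to the ConvNet channel width by a \(1\times 1\) convolution with BN and ReLU, and matched to the ConvNet feature with a mean-squared (Frobenius) loss. The channel numbers (64 \(\to\) 256) follow the text; the module name is illustrative.

```python
# A minimal sketch of the low-level distillation in Eqs. (3)-(4).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LowLevelDistill(nn.Module):
    def __init__(self, c_hyb: int = 64, c_cnn: int = 256):
        super().__init__()
        self.proj = nn.Sequential(                  # Eq. (3): Conv2D + BN + ReLU
            nn.Conv2d(c_hyb, c_cnn, kernel_size=1),
            nn.BatchNorm2d(c_cnn),
            nn.ReLU(inplace=True))

    def forward(self, f_hyb, f_cnn):                # (B, C, H1, W1) feature maps
        f_tilde = self.proj(f_hyb)
        # Eq. (4); F.mse_loss also averages over the batch dimension.
        return F.mse_loss(f_tilde, f_cnn)
```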
**High-level distillation**. To capture the global relations of pixels, we conduct the distillation between the high-level layers of the pure ViT and those of the student (hybrid) network. In particular, we first compute their attention maps \(\mathbf{A}\) by Multi-head Self-Attention [11] and then impose the consistency constraint on them. Mathematically, the query \(\mathbf{Q}\) and the key \(\mathbf{K}\) values are computed by (subscripts are omitted for brevity)
\[\{\mathbf{Q}^{vit},\mathbf{K}^{vit}\} =\{\mathbf{F}^{vit}\mathbf{W}_{Q}^{vit},\mathbf{F}^{vit}\mathbf{W}_{ K}^{vit}\}\in\mathbb{R}^{N^{vit}\times d^{vit}}, \tag{5}\] \[\{\mathbf{Q}^{hyb},\mathbf{K}^{hyb}\} =\{\mathbf{F}^{hyb}\mathbf{W}_{Q}^{hyb},\mathbf{F}^{hyb}\mathbf{W}_ {K}^{hyb}\}\in\mathbb{R}^{L^{hyb}\times d^{hyb}}, \tag{6}\]
where \(\{\mathbf{W}_{Q}^{vit},\mathbf{W}_{K}^{vit}\}\in\mathbb{R}^{C^{vit}\times d^{vit}}\) and \(\{\mathbf{W}_{Q}^{hyb},\mathbf{W}_{K}^{hyb}\}\in\mathbb{R}^{C^{hyb}\times d^{hyb}}\) are learnable parameters, and the head dimensions are \(\{d^{vit},d^{hyb}\}=\{\frac{C^{vit}}{m^{vit}},\frac{C^{hyb}}{m^{hyb}}\}\), where \(\{m^{vit},m^{hyb}\}=\{12,14\}\) are the head numbers,
and the corresponding attention maps are computed by
\[\mathbf{A}^{vit} =Softmax(\frac{\mathbf{Q}^{vit}(\mathbf{K}^{vit})^{\top}}{\sqrt{ d^{vit}}})\in\mathbb{R}^{N^{vit}\times N^{vit}}, \tag{7}\] \[\mathbf{A}^{hyb} =Softmax(\frac{\mathbf{Q}^{hyb}(\mathbf{K}^{hyb})^{\top}}{\sqrt{ d^{hyb}}})\in\mathbb{R}^{L^{hyb}\times L^{hyb}}, \tag{8}\]
where \(Softmax(\cdot)\) is the softmax function. Hopefully, the high-level distilled feature is expected to model the global structure revealed by pure ViT. This is achieved by imposing the consistency constraint on the high-level features, i.e., the attention loss \(\mathcal{L}_{att}\):
\[\mathcal{L}_{att}=\frac{1}{N^{vit}\cdot N^{vit}}KL(\mathbf{A}^{vit}||Upsample( \mathbf{A}^{hyb})), \tag{9}\]
where \(KL(\cdot||\cdot)\) denotes the Kullback-Leibler divergence loss, and \(Upsample(\cdot)\) denotes bilinear interpolation for making the dimension of \(\mathbf{A}^{hyb}\) be the same as that of \(\mathbf{A}^{vit}\). By this means, the high-level features can well capture the global context of the image pixels by distilling the semantics from the high-level transformer layer of ViT.
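The following sketch illustrates Eqs. (5)-(9) for a single attention head: row-stochastic attention maps are formed for the teacher (ViT) and student (hybrid) tokens, the student map is bilinearly upsampled to the teacher resolution, and the KL loss is applied. The multi-head bookkeeping and the row renormalization after interpolation are simplifying assumptions, and the names are illustrative.

```python
# A minimal single-head sketch of the attention distillation, Eqs. (5)-(9).
import math
import torch
import torch.nn.functional as F

def attention_map(feats, w_q, w_k):
    """feats: (B, L, C); w_q, w_k: (C, d). Returns row-stochastic (B, L, L)."""
    q, k = feats @ w_q, feats @ w_k                        # Eqs. (5)-(6)
    return torch.softmax(q @ k.transpose(1, 2)             # Eqs. (7)-(8)
                         / math.sqrt(w_q.shape[1]), dim=-1)

def attention_distill_loss(a_vit, a_hyb):
    """a_vit: (B, N, N); a_hyb: (B, L, L) with L <= N. Eq. (9)."""
    up = F.interpolate(a_hyb.unsqueeze(1), size=a_vit.shape[-2:],
                       mode="bilinear", align_corners=False).squeeze(1)
    up = up / up.sum(dim=-1, keepdim=True).clamp_min(1e-8)  # keep rows stochastic
    # 'batchmean' differs from the 1/N^2 normalization by a constant factor.
    return F.kl_div(up.clamp_min(1e-8).log(), a_vit, reduction="batchmean")
```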
### _Decoder_
To obtain the predicted label map, existing methods usually employ a convolutional decoder like U-Net [18] or the Feature Pyramid Network (FPN) [19], by either concatenating the features of the encoder and the decoder along the channel dimension or reducing the feature dimension via \(1\times 1\) convolution to abandon the redundancy. Thus, all features are treated equally during the decoding period, which may include some redundant content. This leads to a failure to well capture the structural context of the image, such as the low-level details and the high-level semantics. To address this problem, we model the hierarchical feature importance by developing the dual-frequency channel attention mechanism for an FPN-like decoder based on UPerNet (Unified Perceptual Parsing Network) [38], which we call the dual-frequency decoder for short.
Essentially, the idea of devising the dual-frequency decoder originates from the fact that low-frequency signal describes the main part of an image and high-frequency signal describes its details. According to [39], neural networks tend to learn low-frequency signal while neglecting high-frequency signal, and low-level features involve rich high-frequency signals while high-level features involve plentiful low-frequency signals. Therefore, inspired by [40], we consider the channel importance of hierarchical features by projecting the features from the spatial domain to the frequency domain, and obtain the enhanced features by channel-wise attention mechanism for decoding.
Here are the details of decoder. As shown in Fig. 3, it has four stages and involves Pyramid Pooling Module (PPM) [15], multi-scale feature Fusion module (Fuse), Low-frequency Channel Attention module (LCA), and High-frequency Channel Attention module (HCA). In particular, PPM accepts the features of the last encoding stage, while LCA and HCA are applied to capture the global context and the local details of images before using \(1\times 1\) convolution to obtain the low-dimensional features. The hierarchical feature maps of the four decoding stages are fused by multi-scale feature fusion module, resulting in the fused feature map, which generates the predicted label map after upsampling.
For our triple-view encoder, some feature processing is required before passing through the decoder. Specifically, the four-stage features of the pure ConvNet can be directly used; the features at the last three encoding stages of the hybrid network undergo the deserialization [13]; the features at the third, sixth, ninth, and 12th transformer layers of the pure ViT also undergo the deserialization, i.e., the 2D features are reshaped to 3D ones with a size of \(\frac{H}{P}\times\frac{W}{P}\times C^{vit}\), which are then passed to a \(1\times 1\) Conv and a \(3\times 3\) Conv as well as upsampling.

Fig. 3: Architecture of the dual-frequency decoder. Take the pure ConvNet for example.
Note that we keep the fourth stage that uses PPM to capture the rich context of the high-level feature by the feature map \(\hat{\mathbf{F}}_{4}\in\mathbb{R}^{H_{4}\times W_{4}\times C_{4}}\) (superscripts omitted) and do dimensionality reduction on the features of the remaining stages. As shown in Fig. 3, the low-level features of the first and the second stages, i.e., \(\{\mathbf{F}_{1}^{cnn},\mathbf{F}_{2}^{cnn}\}\), are passed to the HCA module for modeling the local details, while the high-level features of the third stage, i.e., \(\mathbf{F}_{3}^{cnn}\), are passed to the LCA module for modeling the global context. This requires first projecting the features from the spatial domain to the frequency domain using the Fast Fourier Transform (FFT) [20], and then using \(fftshift(\cdot)\) in PyTorch to move the low- or high-frequency signal to the middle, leading to the spectral map. These spectral maps pass through a Gaussian Low-Pass or High-Pass Filter (GLPF or GHPF) [41] to obtain the low-frequency feature map \(\mathbf{F}_{3,low}^{cnn}\) and the high-frequency feature maps \(\{\mathbf{F}_{1,high}^{cnn},\mathbf{F}_{2,high}^{cnn}\}\). Thus, the high-frequency signals in the low-level feature maps keep the image details and the low-frequency signals in the high-level feature map carry the image context as a whole. Thereafter, we apply the Global Average Pooling (GAP) operation to the high-frequency and the low-frequency signals to obtain the corresponding vectors \(\{\mathbf{z}_{1,high}^{cnn}\in\mathbb{R}^{C_{1}^{cnn}},\mathbf{z}_{2,high}^{cnn}\in\mathbb{R}^{C_{2}^{cnn}},\mathbf{z}_{3,low}^{cnn}\in\mathbb{R}^{C_{3}^{cnn}}\}\), for each of which the length equals the channel number. We model the channel importance by applying the \(1\times 1\) 2D convolution and the sigmoid function \(\sigma(\cdot)\) to the above vectors, whose entries act as the importance scores of the different channels. Hence, we can obtain the weighted feature map \(\mathbf{R}^{cnn}\), i.e.,
\[\mathbf{R}^{cnn}=\sigma(Conv2D(\mathbf{z}^{cnn}))\otimes\mathbf{F}^{cnn}\in \mathbb{R}^{H\times W\times C^{cnn}}, \tag{10}\]
where the subscripts are omitted, and the important channels have larger weights than the less important ones in the feature map. Later, the low-level and the high-level feature maps are concatenated with the corresponding weighted feature maps by the skip connection. At last, we use the \(1\times 1\) 2D convolution to reduce the channel dimension, resulting in the low-dimensional feature representations \(\{\hat{\mathbf{F}}_{1}^{cnn}\in\mathbb{R}^{H_{1}\times W_{1}\times\hat{C}_{1}^{cnn}},\hat{\mathbf{F}}_{2}^{cnn}\in\mathbb{R}^{H_{2}\times W_{2}\times\hat{C}_{2}^{cnn}},\hat{\mathbf{F}}_{3}^{cnn}\in\mathbb{R}^{H_{3}\times W_{3}\times\hat{C}_{3}^{cnn}}\}\), where \(\{\hat{C}_{1}^{cnn},\hat{C}_{2}^{cnn},\hat{C}_{3}^{cnn}\}=\{128,256,256\}\). During the multi-scale feature fusion, we apply the \(1\times 1\) 2D convolution to match the dimension of the previous feature map with that of the current one.
Similar procedures are applied for the pure ViT and the student network, where \(\{\hat{C}_{1}^{vit},\hat{C}_{2}^{vit},\hat{C}_{3}^{vit}\}=\{\hat{C}_{1}^{hyb},\hat{C}_{2}^{hyb},\hat{C}_{3}^{hyb}\}=\{48,96,192\}\). In this way, we obtain the predicted label maps \(\{\hat{\mathbf{Y}}^{l,cnn},\hat{\mathbf{Y}}^{l,vit},\hat{\mathbf{Y}}^{l,hyb}\}\in\mathbb{R}^{H\times W\times C}\) and \(\{\hat{\mathbf{Y}}^{u,cnn},\hat{\mathbf{Y}}^{u,vit},\hat{\mathbf{Y}}^{u,hyb}\}\in\mathbb{R}^{H\times W\times C}\) for the labeled images \(\mathbf{X}^{l}\) and the unlabeled images \(\mathbf{X}^{u}\), respectively.
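A minimal sketch of the dual-frequency channel attention is given below: features are moved to the frequency domain with FFT and \(fftshift\), a Gaussian low- or high-pass mask selects the desired band, and the channel weights of Eq. (10) are obtained via GAP, a \(1\times 1\) convolution, and the sigmoid. The Gaussian radius `d0` and the module name are illustrative assumptions, not the exact filter of [41].

```python
# A minimal sketch of the frequency-domain channel attention (LCA/HCA).
import torch
import torch.nn as nn

class FreqChannelAttention(nn.Module):
    def __init__(self, channels: int, d0: float = 0.25, high_pass: bool = False):
        super().__init__()
        self.d0, self.high_pass = d0, high_pass
        self.fc = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, f):                         # f: (B, C, H, W)
        B, C, H, W = f.shape
        spec = torch.fft.fftshift(torch.fft.fft2(f), dim=(-2, -1))
        yy, xx = torch.meshgrid(torch.linspace(-0.5, 0.5, H, device=f.device),
                                torch.linspace(-0.5, 0.5, W, device=f.device),
                                indexing="ij")
        glpf = torch.exp(-(xx ** 2 + yy ** 2) / (2 * self.d0 ** 2))  # GLPF
        mask = (1.0 - glpf) if self.high_pass else glpf              # GHPF = 1 - GLPF
        filtered = (spec * mask).abs()             # magnitude of the filtered band
        z = filtered.mean(dim=(-2, -1), keepdim=True)   # GAP -> (B, C, 1, 1)
        w = torch.sigmoid(self.fc(z))                   # channel importance scores
        return w * f                                    # Eq. (10)
```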
### _Loss Function_
To optimize the semi-supervised segmentation model, we adopt three kinds of losses, including the segmentation loss, the distillation loss, and the Cross Pseudo Supervision (CPS) [3] loss. Among them, the segmentation loss is applied to labeled samples, the CPS loss is applied to unlabeled samples, and the distillation loss is applied to all training samples.
**Segmentation loss**. For labeled images, we adopt the Cross-Entropy (CE) loss to compute the segmentation loss of the predicted label maps and ground-truth ones, i.e.,
\[\{\mathcal{L}_{seg}^{cnn},\mathcal{L}_{seg}^{vit},\mathcal{L}_{seg}^{hyb}\}=-\frac{1}{N_{l}\cdot HW}\Big{\{}\sum_{i=1}^{N_{l}}\mathbf{Y}_{i}^{l}\log\hat{\mathbf{Y}}_{i}^{l,cnn},\sum_{i=1}^{N_{l}}\mathbf{Y}_{i}^{l}\log\hat{\mathbf{Y}}_{i}^{l,vit},\sum_{i=1}^{N_{l}}\mathbf{Y}_{i}^{l}\log\hat{\mathbf{Y}}_{i}^{l,hyb}\Big{\}}, \tag{11}\]
where \(HW=H\cdot W\) and \(\mathcal{L}_{seg}=\mathcal{L}_{seg}^{cnn}+\mathcal{L}_{seg}^{vit}+\mathcal{L}_{ seg}^{hyb}\).
**CPS loss**. For unlabeled images, we adopt the CPS loss to make the predicted label maps consistent across different encoders. In particular, we apply the argmax function to the predicted label map to obtain the one-hot label map of one encoder as the supervision of another encoder, and adopt the CE loss to compute the cross pseudo supervision loss below:
\[\mathcal{L}_{cps}^{cnn}=-\frac{1}{N_{u}\cdot HW}\cdot\sum_{j=1}^{N_{u}}(\hat{ \mathbf{Y}}_{j}^{u,vit}+\hat{\mathbf{Y}}_{j}^{u,hyb})\cdot\log\hat{\mathbf{Y}}_{ j}^{u,cnn},\] \[\mathcal{L}_{cps}^{vit}=-\frac{1}{N_{u}\cdot HW}\cdot\sum_{j=1}^{N_{u }}(\hat{\mathbf{Y}}_{j}^{u,cnn}+\hat{\mathbf{Y}}_{j}^{u,hyb})\cdot\log\hat{ \mathbf{Y}}_{j}^{u,vit},\] \[\mathcal{L}_{cps}^{hyb}=-\frac{1}{N_{u}\cdot HW}\cdot\sum_{j=1}^{N_{u }}(\hat{\mathbf{Y}}_{j}^{u,cnn}+\hat{\mathbf{Y}}_{j}^{u,vit})\cdot\log\hat{ \mathbf{Y}}_{j}^{u,hyb},\]
and \(\mathcal{L}_{cps}=\mathcal{L}_{cps}^{cnn}+\mathcal{L}_{cps}^{vit}+\mathcal{L}_{ cps}^{hyb}\).
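A minimal sketch of the cross pseudo supervision term is given below: each network's argmax prediction is detached into a hard pseudo label that supervises the other two via cross-entropy, following the equations above. The function name and the use of detached labels are illustrative.

```python
# A minimal sketch of the three-way cross pseudo supervision loss.
import torch
import torch.nn.functional as F

def cps_loss(logits_cnn, logits_vit, logits_hyb):
    """Inputs are raw logits of shape (B, C, H, W)."""
    preds = {"cnn": logits_cnn, "vit": logits_vit, "hyb": logits_hyb}
    losses = []
    for name, student_logits in preds.items():
        # hard one-hot pseudo labels from the other two networks
        pseudo = [p.argmax(dim=1).detach()
                  for n, p in preds.items() if n != name]
        losses += [F.cross_entropy(student_logits, y) for y in pseudo]
    return sum(losses)
```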
**Distillation loss**. For all training samples, we use the sum of the spatial loss \(\mathcal{L}_{spa}\) and the attention loss \(\mathcal{L}_{att}\) as the distillation loss, such that the student network inherits the merits of the teacher networks including modeling the locality property by pure ConvNet and the global context by pure ViT, i.e., \(\mathcal{L}_{kd}=\lambda_{1}\mathcal{L}_{spa}+\lambda_{2}\mathcal{L}_{att}\), where the regularization constants are equally set to 0.5.
**Total loss**. To optimize the objective of our Triple-view Knowledge Distillation (TriKD) framework, we compute the total loss as follows:
\[\mathcal{L}=\mathcal{L}_{seg}+\mathcal{L}_{kd}+\lambda\mathcal{L}_{cps}, \tag{12}\]
where the constant \(\lambda\) is empirically set to 0.1.
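Assembling Eq. (12) is then straightforward; a minimal sketch with the weights quoted in the text (\(\lambda_{1}=\lambda_{2}=0.5\), \(\lambda=0.1\)) is given below, where the function name is illustrative.

```python
# A minimal sketch combining the segmentation, distillation, and CPS
# terms into the total objective of Eq. (12); weights follow the text.
def total_loss(l_seg, l_spa, l_att, l_cps,
               lambda1: float = 0.5, lambda2: float = 0.5, lam: float = 0.1):
    l_kd = lambda1 * l_spa + lambda2 * l_att   # distillation loss L_kd
    return l_seg + l_kd + lam * l_cps          # Eq. (12)
```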
## IV Experiment
This section shows extensive experiments of semantic segmentation on two benchmark data sets. All experiments were conducted on a machine with four NVIDIA RTX 3090 graphics cards, and our model was compiled using PyTorch 1.10.0, Python 3.9, and CUDA 11.4.
### _Datasets and Evaluation Metrics_
**Pascal VOC 2012**\({}^{1}\). It has 1464 training images, 1449 validation images, and 1456 test images, which involve 20
object categories and one background class. Following [3, 4], we add the 9118 training images of the Segmentation Boundary Database [42] to augment the training set as "VOCAug".
**Cityscapes**\({}^{2}\)[22]. It focuses on semantic understanding of urban street scenes, and the images were taken in 50 cities over several months during daytime with good/medium weather conditions. Each annotated image is the 20th image from a 30-frame video snippet (1.8s), and there are 5000 annotated images with high-quality dense pixel annotations involving 19 object classes. This dataset has 2975 training images, 500 validation images, and 1525 test images.
Footnote 2: [https://www.cityscapes-dataset.com/](https://www.cityscapes-dataset.com/)
**Evaluation Metrics**. Following [3, 4], we adopt the commonly used metric, i.e., mIoU (mean Intersection over Union), to evaluate the semantic segmentation performance of the compared methods. It is the average IoU score of the predicted and the ground-truth semantic region across all classes. Note that empirical results were obtained on the validation set, and ablation studies were conducted on VOCAug and Cityscapes.
### _Experimental Settings_
**Backbone**. Our TriKD framework adopts the triple-view encoder that employs ResNet101 [12] for the pure ConvNet, DeiT-B [36] for the pure ViT, and TinyViT [13] for the hybrid ConvNet-ViT. To show the precision-speed trade-off, we use two variants of TinyViT with different model sizes, i.e., TinyViT-11M for TriKD and TinyViT-21M for TriKD\({}^{*}\).
**Training Phase**. All encoders are initialized with the weights of models pre-trained on the ImageNet [43] dataset, and the weights of the other layers use the Kaiming initialization [44] in PyTorch. The initial learning rate \(\ell_{r}\) is set to 0.001 for VOC and 0.005 for Cityscapes with the polynomial learning rate decay, where \(\ell_{r}\) is multiplied by \((1-\frac{iter}{iter_{max}})^{power}\) with \(power=0.9\). We adopt the stochastic gradient descent algorithm to update the model parameters; the momentum is set to 0.9, and the weight decay is set to 0.0001 for VOC with 80 epochs and 0.0005 for Cityscapes with 300 epochs. The batch size is 8, which together with the number of epochs determines the maximum iteration. For data augmentation, we adopt random horizontal flipping, random scaling of images by factors of 0.5 to 2, random cropping to \(512\times 512\), and random rotation in \([-10^{\circ},10^{\circ}]\). The pixel values of all images are normalized to \([0,1]\).
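The polynomial decay above can be realized, for instance, with PyTorch's `LambdaLR` scheduler; the short sketch below is illustrative and assumes the scheduler is stepped once per iteration.

```python
# A minimal sketch of the polynomial learning-rate decay described above.
import torch

def poly_lr_lambda(max_iter: int, power: float = 0.9):
    # multiplies the base lr by (1 - iter / max_iter) ** power
    return lambda it: (1.0 - it / max_iter) ** power

# optimizer = torch.optim.SGD(model.parameters(), lr=0.001,
#                             momentum=0.9, weight_decay=1e-4)
# scheduler = torch.optim.lr_scheduler.LambdaLR(
#     optimizer, lr_lambda=poly_lr_lambda(max_iter))
# ... call scheduler.step() once per training iteration.
```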
**Inference Phase**. We normalize the test image and feed it to the hybrid ConvNet-ViT of the trained model, which yields the predicted label map \(\hat{\mathbf{Y}}\) by the corresponding decoder.
### _Compared Methods_
We compare two groups of state-of-the-art semi-supervised semantic segmentation methods, i.e., 1) the _self-training_ group, including SDA (Strong Data Augmentation) [28], ST++ (advanced Self-Training) [5], U\({}^{2}\)PL (Using Unreliable Pseudo-Labels) [4], and TSS (Three-Stage Self-training) [29]; 2) the _consistency regularization_ group, including the image-level perturbation methods PseudoSeg [30] and CutMix [31], the feature-level perturbation method CCT (Cross-Consistency Training) [6], and the network-level perturbation methods MT (Mean Teacher) [32], GCT (Guided Collaborative Training) [7], CPS (Cross Pseudo Supervision) [3], CPCL (Conservative Progressive Collaborative Learning) [33], and CCVC (Conflict-based Cross-View Consistency) [34]. Our TriKD belongs to the latter group and has four versions, including the vanilla TriKD and the supervised TriKD (TriKD\({}_{s}\), S-11M) with TinyViT-11M, as well as the two (TriKD\({}^{*}\), TriKD\({}_{s}^{*}\), S-21M) with TinyViT-21M, where the supervised versions use only labeled samples for training.
* Our method with TinyViT-21M (the bottom row) consistently enjoys the most promising performance in terms of mIoU, compared to the state-of-the-art alternatives across all datasets with varying label ratios. For example, TriKD\({}^{*}\) outperforms the second best CCVC [34] by 2.83% on VOCAug with the label ratio of \(1/4\) in Table II. This demonstrates the advantages of the proposed triple-view knowledge distillation framework.
* Our approach has the best precision-speed trade-off among all methods. While the performance of TriKD matches or surpasses the best candidate, it requires much fewer model parameters and computations. From Table I and Table IV, TriKD has comparable or better segmentation performance with only one-third (18.28M) or half (29.02M) of the model parameters of the most competitive alternative CCVC [34] (62.62M). Meanwhile, the required computations are one-sixth (48.86G) or one-seventh (38.74G) of those of CCVC (296.06G) on VOC [21], and one-sixth (116.88G) or one-eighth (85.48G) of those of CCVC (647.40G) on Cityscapes [22], in terms of FLOPs. Moreover, the inference speed of our method achieves 62\(\sim\)78 fps on VOC and 23\(\sim\)49 fps on Cityscapes, which is 2\(\sim\)3 times faster than that of the best candidate. This verifies that our method strikes a good balance between precision and speed, which allows it to be deployed in highly-demanding environments.
* With the help of plentiful unlabeled data, the segmentation performance is boosted considerably on all the datasets. The largest gains are obtained on VOC [21] in Table I, i.e., from 44.32% to 67% with TinyViT-11M and from 55.22% to 70.29% with TinyViT-21M at the ratio of \(1/16\). This validates that a large number of unlabeled images may provide complementary spatial knowledge to the model training besides the pixel-level semantics of labeled images. In addition, the segmentation performance is continuously improved as the ratio of labeled samples increases from \(1/16\) to \(1/2\).
### _Ablation Studies_
We examine the components of our TriKD framework, including the triple-view encoder, the decoder, the Knowledge Distillation (KD) loss, and the hyper-parameters \(\{\lambda_{1},\lambda_{2},\lambda\}\). The backbone of the student network adopts TinyViT-11M with the label ratio of \(1/2\), and the remaining parameters are the same as in training.
**Triple-view encoder**. We investigate the performance when adopting different backbones for the teacher and student networks. The results are shown in Table V, where "hybrid" denotes the hybrid ConvNet-ViT, and "Params" and "FLOPs" are computed for the inference phase, which only uses the student network. From the table, we observe that when both teacher and student networks employ the same backbone ConvNet (_row_\(1\)), it achieves the second best results at the cost of 3.13 times more parameters and 2.35 times larger FLOPs, and the cost is even worse (i.e., 5.51 times the parameters and 4.83 times the FLOPs) when adopting ViT (_row_\(2\)) as the backbone. When using the hybrid ConvNet-ViT as the student network, ConvNet performs the best when only one teacher network is used (_row_\(4\)), which indicates that convolutional networks distill the knowledge in a better way than ViT. Most importantly, the performance tops all settings when using both ConvNet and ViT as teacher networks (_bottom row_), which validates the superiority of transferring both local and global knowledge to a lightweight student network.
**KD loss**. We examine the effects of the two knowledge distillation losses, i.e., the spatial loss \(\mathcal{L}_{spa}\) and the attention loss \(\mathcal{L}_{att}\), and show the results in Table VI. Compared to the baseline without knowledge distillation, the spatial loss boosts the performance by 0.91% and 0.82% in terms of mIoU on VOCAug [21] and Cityscapes [22], respectively. The attention loss brings smaller gains than the spatial loss, which suggests that the local spatial structure plays a more important role than the global one for semantic segmentation. When considering both the locality property and the global context by introducing the two losses, the performance is upgraded by 1.71% and 1.52% on VOCAug [21] and Cityscapes [22], respectively, which verifies the advantage of our KD loss.
**Decoder**. To show the superiority of our dual-frequency decoder, we record the baseline (_row_\(1\)) that uses UPerNet [38] as the decoder in Table VII. From the table, we see that our decoder slightly outperforms the baseline at a much lower cost, with only 46.0% of the parameters and 17.5% of the FLOPs of the baseline. This validates the merits of the dual-frequency design.
**Loss hyper-parameters**. To examine the contributions of the spatial loss, the attention loss, and the cross pseudo supervision loss, we respectively vary the hyper-parameters \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda\) from 0 to 1 at an interval of 0.1 in Table VIII (when one hyper-parameter changes, the rest are kept fixed). From the results, it can be found that the performance is best when \(\lambda_{1}\) and \(\lambda_{2}\) are both 0.5, i.e., the spatial loss and the attention loss contribute equally to the model; meanwhile, the most promising result is obtained when \(\lambda\) is 0.1 for the cross pseudo supervision loss, which indicates that the weak supervision should not be over-emphasized.
### _Qualitative Results_
We randomly chose some images from the VOCAug [21] and Cityscapes [22] datasets, and visualize their segmentation results by using different colors to mark the semantic categories in Fig. 4 and Fig. 5, respectively. For a better view, we use dashed rectangles to highlight the difference zones. As depicted in the two figures, our TriKD method enjoys the most satisfying performance on the two benchmarks compared to its supervised version and CPS [3]. In particular, it can well discriminate tiny objects such as the vehicle wheel (_bottom row_ in Fig. 4) and the lamp pole (_rows_\(3\&4\) in Fig. 5) where the alternatives produce confusion or cracks. The reason may be that the local spatial patterns are preserved by the student network via the knowledge distillation scheme while the global pixel-level context is considered with the help of rich unlabeled samples.
## V Conclusion
This paper studies the semi-supervised semantic segmentation task from the perspective of knowledge distillation. To make the student network capture both the local and the global context from the teacher networks, we design the triple-view encoder by adopting both a pure ConvNet and a pure ViT as the backbones of the teacher networks. Specifically, the introduced spatial loss and attention loss guarantee that the low-level local spatial relations and the high-level global semantics are distilled to the student network. Meanwhile, we model the feature channel importance by the channel-wise attention mechanism in the frequency domain. During inference, we adopt the lightweight student network and the dual-frequency decoder for segmentation, which allows our approach to achieve the most promising performances on the two benchmarks.

Fig. 4: Visualization on VOCAug [21]. (a) CPS; (b) TriKD\({}_{s}^{*}\); (c) TriKD\({}^{*}\).

Fig. 5: Visualization on Cityscapes [22]. (a) CPS; (b) TriKD\({}_{s}^{*}\); (c) TriKD\({}^{*}\).
|
2309.04321 | Composite scalar bosons masses: Effective potential versus
Bethe-Salpeter approach | Ten years ago the $125$ GeV Higgs resonance was discovered at the LHC [1,2];
whether this boson is a fundamental particle or a particle composed of new strongly
interacting particles is still an open question. If this is a composite boson,
there are still no signals of other possible composite states of this scheme; a
possible solution to this problem was recently discussed in Refs. [30,31], where
it is argued that the Higgs boson can be a composite dilaton [30]. In this work,
considering an effective potential for composite operators, we verify that the
potential responsible for a light composite scalar boson of $O(120)$ GeV behaves
like $\propto \Phi^4$, suggesting that if the Higgs boson is a composite
scalar it may be a composite dilaton. | A. Doff | 2023-09-08T13:35:23Z | http://arxiv.org/abs/2309.04321v1 | # Composite scalar bosons masses: Effective potential versus Bethe-Salpeter approach
###### Abstract
Ten years ago the 125 GeV Higgs resonance was discovered at the LHC [1; 2]; whether this boson is a fundamental particle or a particle composed of new strongly interacting particles is still an open question. If this is a composite boson, there are still no signals of other possible composite states of this scheme; a possible solution to this problem was recently discussed in Refs.[30; 31], where it is argued that the Higgs boson can be a composite dilaton [30]. In this work, considering an effective potential for composite operators, we verify that the potential responsible for a light composite scalar boson of \(O(120)\,GeV\) behaves like \(\propto\Phi^{4}\), suggesting that if the Higgs boson is a composite scalar it may be a composite dilaton.
## I Introduction
The Higgs boson discovery was one of the major particle physics breakthroughs of the last decades. Whether this boson is composed of new strongly interacting particles is still an open question. Composite scalar bosons appear in the context of Technicolor theories (TC) [3; 4; 5], which usually have a composition scale of order \(\Lambda_{C}\geq 1\)TeV. Another possibility, raised a few years ago, is to have a Higgs arising as a composite pseudo-Goldstone boson (PGB) from the strongly interacting sector. In this case the Higgs mass is protected by an approximate global symmetry and is only generated via quantum effects; models based on this approach are usually called composite Higgs models [29].
Technicolor was inspired by QCD to provide a natural and consistent quantum-field-theoretic description of electroweak (EW) symmetry breaking, without elementary scalar fields. Crafting a realistic Technicolor model can be a very precise engineering problem.
Over the years different models have emerged, considering different approaches to obtain a large mass anomalous dimension (\(\gamma\)), which, in summary, can be classified as

(i) Quasi-conformal TC theories [6; 7; 8; 9; 10; 11], where \(\gamma=\gamma(N_{TC},n_{F})\), and it is possible to obtain an almost conformal behavior when the fermions are in the fundamental representation, \(R=F\), by introducing a large number of TC fermions \(n_{F}\). Nonetheless, the cost of such a procedure may be a large S parameter; in a similar way, an almost conformal TC theory can also be obtained when the fermions are in representations larger than the fundamental one [12; 13; 14; 15].
(ii) Methods of lattice gauge theory: as the authors of Refs.[16; 17; 18; 19] have demonstrated, the conformal window for the \(SU(3)\) gauge theory lies in the range \(8<n_{f}<12\), and in this region it is indeed possible to have a slowly running coupling (or \(\beta\approx 0\)).

(iii) Gauged Nambu-Jona-Lasinio models: in this case the existence of an "effective" four-fermion interaction in the TC dynamics [20; 21; 22; 23; 24; 25; 26] could also be responsible for large \(\gamma\) values.
As proposed by Holdom [27], a light composite scalar boson may be generated when the strong interaction theory (or TC) has a large mass anomalous dimension (\(\gamma\)); the discussion of how these scenarios might lead to a light composite scalar boson is presented in Ref.[28]. Assuming these different scenarios, calculations involving effective Higgs Lagrangians have led to different predictions for the masses of composite scalar bosons.
The self-energy of the new fermions (technifermions) responsible for the composite states, which is characterized by a large mass anomalous dimension \(\gamma\), results in mass diagrams whose calculation does not scale with the naive dimensions. In this case, the self-energy \(\Sigma_{\rm F}(p^{2})\) at large momenta is proportional to
\[\Sigma_{\rm F}(p^{2})\propto\frac{\mu_{\rm F}^{3}}{p^{2}}(p^{2}/\mu_{\rm F}^{ 2})^{\gamma/2}. \tag{1}\]
where \(\mu_{\rm F}\sim O(1)TeV\) is the typical composition or dynamical fermion mass scale.
The absence of signals of a large scalar boson sector in the experimental data, as well as a possible explanation of a light Higgs boson, has been beautifully discussed recently in Refs.[30; 31]. In those references it is argued that the Higgs boson, a Gildener-Weinberg dilaton [32], could be a composite dilaton [30].
Recently, considering constituents of same mass, we computed the composite scalar mass using Bethe-Salpeter equations (BSE) [33]. The calculation was performed with the help of an ansatz in the form of Eq.(1) for the constituent self-energy dependent on the mass anomalous dimension, the results obtained indicate how the scalar mass \(M_{S}\) can vary with the mass anomalous dimension(\(\gamma\)) leading to \(M_{S}(\gamma)=A(\gamma)\mu_{\rm F}\). At this point, we must emphasize that there are not in the literature other calculations using the Bethe-Salpeter equations to address the question of a possible composite Higgs boson in Technicolor models. Usually, the solutions of the Bethe-Salpeter equations are used in hadronic physics, as, for example, in the study of meson spectroscopy[34].
In this work, considering effective potential for composite operators[35; 36; 37] we extend the discussion started in Ref.[33], in particular we verify that the potential responsible for a light composite scalar boson, \(M_{S}\sim O(120)GeV\), behaves like \(\propto\Phi^{4}\), indicating the possibility that the Higgs boson may in principle be a composite dilaton as suggested in Refs.[30; 31], and its mass can be smaller than the composition scale as long as we have large anomalous dimensions. Moreover, we determine the mass obtained for the lightest pseudo scalar boson, \(\Pi^{0}\sim\bar{N}N\) and we retrieve the result described in Ref.[38], assuming the Bethe-Salpeter equations.
This paper is organized as follows: In section II we determine the effective potential for composite operators for the constituent self-energy considered in Ref.[33], in the section III we calculate the scalar composite scalar mass from the effective potential. With the help of Bethe-Salpeter equations (BSE), assuming the same conditions employed in the previous section, in the section IV we calculate the scalar boson mass and compare with the previous result. At the end of this section, we determine \(\Pi^{0}\) mass from the BSE equations. The section V contains our conclusions.
## II The effective potential and the self-energy ansatz
### The effective potential for composite operators
The effective potential for composite operators was proposed many years ago by Cornwall, Jackiw and Tomboulis in Ref.[35]; as complementary references we suggest [37; 39], where detailed calculations of the composite scalar masses with this method can be found. The effective action for composite operators \(\overline{\Gamma}(G)\) is a function of the Green functions denoted by \(G=G_{n}\), and is stationary with respect to variations of \(G_{n}\). The effective potential \(V(G)\) is defined by the following equation
\[V(G_{n})\int d^{4}x=-\overline{\Gamma}(G)|_{ti}, \tag{2}\]
where (\(ti:=translation\ invariant\)), and \(V(G)\) can be written in terms of the complete fermion (S) and gauge boson (D) propagators, in the form
\[V(S,D)=-iTr\left(\ln S_{0}^{-1}S-S_{0}^{-1}S+1\right)+iV_{2}(S,D), \tag{3}\]
where in this expression the complete fermion propagator is represented by
\[S^{-1}(p)=p\!\!\!/A(p^{2})-B(p^{2})\,. \tag{4}\]
whereas the free propagator is \(S_{0}=i/\not{p}\), and we shall assume \(A(q^{2})=1\) and \(B(p^{2})=\Sigma(p^{2})\). In these equations, we are not considering contributions to the potential due to gauge and ghost loops; we are interested only in the fermionic bilinear condensation in the scalar channel, and will consider a non-abelian gauge theory, stronger than QCD, whose fermions form the composite scalar boson. In this case, we can consider the approximation where \(D=D_{0}\) stands for the bare gauge boson propagator, which, in the Landau gauge, can be written as
\[g^{\mu\nu}D_{\mu\nu}(p-k)=\frac{3}{(p-k)^{2}}=3G(p-k). \tag{5}\]
In order to keep the above equations in a compact form, we are not writing the Lorentz and \(SU(n)_{TC}\) indexes, as well as momentum integrals. The last term in Eq.(3), \(V_{2}(S,D)\), corresponds to the sum of all two-particle irreducible vacuum diagrams, shown in Fig.(1), that can be analytically represented by
\[iV_{2}(S,D)=-\frac{i}{2}\,Tr(\Gamma S\Gamma SD), \tag{6}\]
where \(\Gamma\) is the fermion proper vertex. An important property of the potential in Eq.(3) is that its minimization with respect to the complete propagators (\(S\) or \(D\)), \(\delta V/\delta(S,D)=0\), reproduces exactly the Schwinger-Dyson equations for the complete \(S\) and \(D\) propagators.
To obtain an effective Lagrangian for the composite scalar boson, it is better to compute the vacuum energy density which is given by the effective potential calculated at minimum subtracted by its perturbative part, which does not contribute to dynamical mass generation
\[\Omega_{V}=V(S,D)-V(S_{0},D_{0}), \tag{7}\]
After replacing Eqs.(3), (4) and (5) in Eq.(7), and now indicating the momentum integrals, we can write
\[\Omega_{V}=i\int\frac{d^{4}p}{(2\pi)^{4}}Tr\left[\ln\left(1-\frac{\Sigma^{2}(p^{2})}{p^{2}}\right)+\frac{\Sigma^{2}(p^{2})}{p^{2}-\Sigma^{2}(p^{2})}+\frac{3C_{2}g^{2}(p^{2})\Sigma^{2}(p^{2})}{p^{2}(p^{2}-\Sigma^{2}(p^{2}))}\,i\int^{p^{2}}\frac{d^{4}k}{(2\pi)^{4}}\frac{\Sigma^{2}(k^{2})}{k^{2}(k^{2}-\Sigma^{2}(k^{2}))}\right], \tag{8}\]
where to obtain the above equation we assumed the angle approximation
\[\frac{g^{2}(p,k)}{(p-k)^{2}}=\frac{g^{2}(p^{2})}{p^{2}}\Theta(p^{2}-k^{2})+ \frac{g^{2}(k^{2})}{k^{2}}\Theta(k^{2}-p^{2})\]
and \(\Theta\) is the Heaviside step function. In the Euclidean space we can write \(\Omega_{V}\) as
\[\Omega_{V}=\Omega_{V_{A}}+\Omega_{V_{B}} \tag{9}\]
where

\[\Omega_{V_{A}}=-\frac{N_{TC}n_{F}}{8\pi^{2}}\int p^{2}dp^{2}\left[\ln\left(1+\frac{\Sigma^{2}(p^{2})}{p^{2}}\right)-\frac{\Sigma^{2}(p^{2})}{p^{2}+\Sigma^{2}(p^{2})}\right], \tag{10}\]

and

\[\Omega_{V_{B}}=+\frac{N_{TC}n_{F}}{8\pi^{2}}\int p^{2}dp^{2}\left[\frac{3C_{2}\alpha_{TC}(p^{2})\Sigma^{2}(p^{2})}{2\pi p^{2}(p^{2}+\Sigma^{2}(p^{2}))}\int^{p^{2}}\frac{dk^{2}\Sigma^{2}(k^{2})}{k^{2}(k^{2}+\Sigma^{2}(k^{2}))}\right]; \tag{11}\]

in these equations, \(N_{TC}\) is the number of technicolors (the techniquarks are in the fundamental representation \(R=F\) of \(SU(N_{TC})\)), and \(n_{F}=n_{F}(R)\) is the number of technifermions (F).

Figure 1: Two-particle irreducible contribution to the vacuum energy.
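For concreteness, both pieces of the vacuum energy can be evaluated numerically by quadrature for any given self-energy profile; the sketch below (in units of \(\mu=1\)) uses an explicit UV cutoff and the MAC value \(C_{2}\alpha\approx\pi/3\) adopted later in the text. The cutoff and the function names are illustrative assumptions.

```python
# A minimal numerical sketch of Eqs. (10)-(11) for a self-energy sigma(p2).
import numpy as np
from scipy.integrate import quad

N_TC, N_F = 3, 5                       # example values adopted in the text
C2_ALPHA = np.pi / 3                   # MAC hypothesis, C2*alpha ~ pi/3

def omega_A(sigma, uv=1e4):
    def integrand(p2):
        s2 = sigma(p2) ** 2
        return p2 * (np.log(1 + s2 / p2) - s2 / (p2 + s2))
    return -N_TC * N_F / (8 * np.pi ** 2) * quad(integrand, 1e-8, uv, limit=200)[0]

def omega_B(sigma, uv=1e4):
    def inner(p2):                      # inner k^2 integral up to p^2
        f = lambda k2: sigma(k2) ** 2 / (k2 * (k2 + sigma(k2) ** 2))
        return quad(f, 1e-8, p2, limit=200)[0]
    def integrand(p2):
        s2 = sigma(p2) ** 2
        return (3 * C2_ALPHA * s2 / (2 * np.pi * (p2 + s2))) * inner(p2)
    return N_TC * N_F / (8 * np.pi ** 2) * quad(integrand, 1e-8, uv, limit=200)[0]

# example: omega_A(lambda p2: 1.0 / (p2 + 1.0))  # kappa -> 0 profile of Eq. (12)
```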
### The self-energy ansatz
In usual calculations involving the Schwinger-Dyson equations, the self-energy \(\Sigma(p^{2})\) is obtained from the numerical solutions for the fermionic propagators. In this work we will assume the ansatz employed in [33], which is a function of the mass anomalous dimension \(\gamma\) and has the form
\[\Sigma(p^{2})=\frac{\mu^{3}}{p^{2}+\mu^{2}}\left(\frac{p^{2}+\mu^{2}}{\mu^{2} }\right)^{\kappa} \tag{12}\]
where \(\kappa=\gamma/2\). In this expression \(\mu=\mu_{F}\) is the dynamically generated mass, and Eq.(12) behaves as \(\mu\) in the infrared region. As \(\kappa\to 0\), Eq.(12) leads to a self-energy roughly equal to \(\mu^{3}/p^{2}\) in the asymptotic region, which is the behavior predicted by a standard operator product expansion (OPE) for the techniquark self-energy \(\Sigma(p^{2})\) with \(\langle\bar{Q}Q\rangle\sim\mu^{3}\).
However, when \(\kappa\to 1\) (or \(\gamma\to 2\)), Eq.(12) can be written in the form
\[\Sigma(p^{2})\approx\mu\left[1+\delta_{1}\ln\left[(p^{2}+\mu^{2})/\mu^{2} \right]\right]^{-\delta_{2}}\,, \tag{13}\]
where, to arrive at this form, \(\delta_{1}\) and \(\delta_{2}\) are obtained from \(\gamma\) when it is expanded as a function of the running coupling \(g^{2}(p^{2})\). In this case, the self-energy stands for an extreme walking gauge theory, where the standard OPE prediction is modified by a large anomalous dimension (\(\gamma\)) and the self-energy behaves as \(\mu\ln^{-\gamma}(p^{2}/\mu^{2})\) asymptotically. Mapping the possible SDE solutions in the full Euclidean space allows us to obtain an equation for the scalar mass \(M_{S}\) as a function of \(\gamma\).
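The limiting behaviors of the ansatz are easy to check numerically; the short sketch below (in units of \(\mu=1\)) verifies the infrared value \(\Sigma(0)=\mu\), the \(\mu^{3}/p^{2}\) falloff for \(\kappa\to 0\), and the slowly decreasing (walking) profile for \(\kappa\to 1\).

```python
# A minimal sketch of the self-energy ansatz in Eq. (12), in units of mu = 1.
import numpy as np

def sigma(p2, kappa, mu=1.0):
    return mu ** 3 / (p2 + mu ** 2) * ((p2 + mu ** 2) / mu ** 2) ** kappa

p2 = np.logspace(0, 6, 7)              # momenta from mu^2 up to 10^6 mu^2
print(sigma(p2, kappa=0.0))            # ~ mu^3 / p^2: standard OPE falloff
print(sigma(p2, kappa=0.98))           # nearly flat: extreme walking regime
print(sigma(0.0, kappa=0.5))           # infrared value Sigma(0) = mu
```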
## III The effective Lagrangian for composite scalar bosons
### The kinetic term of the effective action
In this section, we shall consider the problem of generating one light composite scalar boson in the context of the effective potential, assuming the self-energy ansatz given by Eq.(12). As pointed out recently in Refs.[30; 31], a composite Gildener-Weinberg dilaton should result from an effective potential that, at leading order, is proportional to
\[V(\Phi)_{0}\propto\Phi^{4}, \tag{14}\]
where \(\Phi\) is a composite effective field. A theory that generates a composite Higgs boson, where \(\Phi\propto\bar{Q}Q\) and \(\bar{Q}Q\) corresponds to a bound state of techniquarks Q, should naturally not contain a quadratic term in the effective potential.
The \(\Sigma^{2}(p^{2})/p^{2}\) term in the logarithm in Eq.(10) can be expanded; the contribution of the \(\Sigma^{2}\) term in this equation eventually cancels out, leading to the absence of quadratic terms in the effective potential. This cancellation is a consequence of the fact that \(\Sigma(p^{2})\) obeys the linear homogeneous SDE for the fermion propagator [37; 39].
In order to obtain an expression for the effective Lagrangian of composite scalar bosons from Eq.(12), we will reconsider the approach described in Ref.[36] to determine a complete effective theory, including the kinetic term of the effective action. The fermionic propagator can be described by a fermion bilinear that has the following operator expansion
\[S(x,y)=\langle\Omega|T[\chi(x+\frac{1}{2}y)\psi(x-\frac{1}{2}y)]|\Omega \rangle\stackrel{{\sim}}{{y\to 0}}\ C(y)\phi(x), \tag{15}\]
where in the above equation \(C(y)\) is a \(c\)-number function, and \(\phi(x)\) acts like a dynamical effective scalar field. Taking into account the structure of the real vacuum, where the propagator is a fermion bilinear and \(\Sigma\) depending on two momenta \(\Sigma(p,k)\), we can consider the Fourier transform of above equation and write
\[\Sigma(p,k)\sim\phi(k)\Sigma(p)\,. \tag{16}\]
In the effective action, \(\phi(x)\) can be seen as a variational parameter whose minimum will be indicated by \(\phi\), corresponding to the leading contribution of its expansion around \(k=0\). A more detailed presentation of this approach can be found in Ref.[36]. As commented in Ref.[39], depending on the theory dynamics, i.e. \(\kappa\in[0,1]\) in Eq.(12), when the self-energy decreases slowly with the momentum, which corresponds to the case \(\kappa\to 1\) in Eq.(12), the kinetic term is important for the characterization of the effective Lagrangian.
The kinetic term is given by the polarization diagrams \((\Pi(k^{2},\phi))\) shown in Fig.(2), these diagrams are responsible in the effective Lagrangian for terms of the form
\[\Omega_{K}=\int d^{4}x\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi\,. \tag{17}\]
When the diagrams of Fig.(2) are calculated, it is possible to verify that this term will be multiplied by a non-trivial function \(Z(\kappa)\), which depends on \(N_{TC}\), the number of fermions \(n_{F}\), and also \(\gamma\), once we consider the ansatz for \(\Sigma\) described by Eq.(12). This non-trivial function, which must normalize the composite scalar field \(\phi\), was obtained in Ref.[36] and can be written as
\[(Z(\kappa))^{-1}\approx\frac{N_{TC}n_{F}}{4\pi^{2}}\int dp^{2}\frac{(p^{2})^{ 2}(\Sigma(p^{2})/\mu)^{2}}{(p^{2}+\mu^{2})^{3}}\;\;, \tag{18}\]
the index \(\kappa=\frac{\gamma}{2}\) is the same one appearing in Eq.(12).
### The effective Lagrangian
Once we have characterized the kinetic term in the effective Lagrangian, now we can calculate the expansion for \(\Sigma(p^{2})/p^{2}<<1\) in Eq.(9), and write this equation in terms of the variational field \(\phi\) in the form [37; 39]
\[\tilde{\Omega}\approx\int d^{4}x\left[\frac{1}{2Z^{(\kappa)}}\partial_{\mu} \phi\partial^{\mu}\phi-\lambda_{4(0)}\phi^{4}+\lambda_{6(0)}\phi^{6}-...\right], \tag{19}\]
where now
\[\lambda_{4(0)}=\frac{N_{TC}n_{F}}{8\pi^{2}}\int dz\left(-\frac{3}{4}\frac{f^{4}(z)}{z}-f^{2}(z)\frac{df^{2}(z)}{dz}+\frac{2}{3}\frac{f^{6}(z)}{z^{2}}+\frac{f^{4}(z)}{z^{2}}\frac{df^{2}(z)}{dz}-\frac{5}{8}\frac{f^{8}(z)}{z^{3}}-\frac{f^{6}(z)}{z^{2}}\frac{df^{2}(z)}{dz}+\frac{3}{5}\frac{f^{10}(z)}{z^{4}}+\frac{f^{8}(z)}{z^{3}}\frac{df^{2}(z)}{dz}-\cdots\right),\]
\[\lambda_{6(0)}=-\frac{\lambda_{4(0)}}{\mu^{2}}, \tag{20}\]
with \(z=\frac{p^{2}}{\mu^{2}}\) and \(f(z)=\frac{\Sigma(p^{2})}{\mu}\). The effective coupling constants \(\lambda_{4(0)}\) and \(\lambda_{6(0)}\) are determined from Eqs.(9) and (12).
In Eq.(11), \(C_{2}\) is the Casimir operator for the fermions; we consider \(g^{2}(p^{2})=g^{2}\) and also the MAC hypothesis [40; 41] in this calculation, where \(\frac{g^{2}C_{2}}{4\pi}\approx C_{2}\alpha\approx\frac{\pi}{3}\), leading to
\[\lambda_{4(0)}(\kappa)\approx\frac{N_{TC}n_{F}}{8\pi^{2}} \left[\frac{1}{2}+\frac{3}{16\kappa-16}+\frac{2}{3(7-6\kappa)}+\right.\] \[\left.-\frac{5}{8(10-8\kappa)}+\frac{3}{5(13-10\kappa)}+\right.\] \[\left.-\frac{(\kappa-1)\left(68\kappa^{2}-172\kappa+109\right)}{( 4\kappa-5)(6\kappa-7)(10\kappa-13)}\right]\] \[\lambda_{6(0)}(\kappa)\approx-\frac{1}{\mu^{2}}\lambda_{4(0)}( \kappa). \tag{21}\]
Eq.(19) differs from a conventional scalar field Lagrangian by the kinetic term \(Z(\kappa)\); therefore, the final effective Lagrangian comes out when we normalize the scalar field \(\Phi\) according to \(\Phi\equiv[Z(\kappa)]^{-\frac{1}{2}}\phi\), leading to the normalized effective Lagrangian \(\tilde{\Omega}(\kappa)\)
\[\tilde{\Omega}(\kappa)=\int d^{4}x\left[\frac{1}{2}\partial_{\mu}\Phi\partial ^{\mu}\Phi-\tilde{\lambda}_{4}(\kappa)\Phi^{4}+\tilde{\lambda}_{6}(\kappa) \Phi^{6}-...\right], \tag{22}\]
where, assuming Eq.(12), the normalization function \(Z(\kappa)\) can be determined to be equal to
\[Z(\kappa)\approx\frac{4\pi^{2}}{N_{TC}n_{F}}\left(2-2\kappa\right). \tag{23}\]
The coupling constants \(\lambda_{4(0)}\) and \(\lambda_{6(0)}\) indicated in Eq.(21) were replaced by the respective normalized ones \(\tilde{\lambda}_{4}(\kappa)\equiv Z^{2}(\kappa)\lambda_{4(0)}\) and \(\tilde{\lambda}_{6}(\kappa)\equiv Z^{3}(\kappa)\lambda_{6(0)}\). Therefore, the scalar mass \(M_{S}^{2}(\kappa)\) can be determined from the effective potential Eq.(22) at the minimum from
\[M_{S}^{2}(\kappa)=\left.\frac{\partial^{2}\tilde{\Omega}(\kappa)}{\partial\Phi^{2}}\right|_{\Phi=\Phi_{min}}\approx 2\frac{(\tilde{\lambda}_{4}(\kappa))^{2}}{\tilde{\lambda}_{6}(\kappa)}\approx 2Z(\kappa)\lambda_{4(0)}(\kappa)\mu^{2}. \tag{24}\]
Note that the effective potential depends on the TC fermionic representation \(R\), the product \((N_{TC}\times n_{F}(R))\), and the anomalous dimension \(\gamma=\gamma(N_{TC},n_{F}(R))\). In order to present numerical results as a function of \(\gamma\) (which is also a function of \(N_{TC}\) and \(n_{F}(R)\)), we will simply assume that TC is not so different from QCD and adopt \(N_{TC}=3\) and \(n_{F}=5\)[33], expecting that we can still present the variation of the TC masses with \(\gamma\) when determined from the effective potential. At this point we must point out that
\[M_{S}^{2}(\kappa)/\mu^{2}\approx 2Z(\kappa)\lambda_{4(0)}(\kappa)\propto\frac{N_{TC}\,n_{F}}{N_{TC}\,n_{F}}f(\kappa)\propto f(\kappa),\]
i.e., since the factors of \(N_{TC}\,n_{F}\) cancel, the dependence \(M_{S}^{2}(\gamma,N_{TC},n_{F}(R))=M_{S}^{2}(\gamma)\) results only from the dependence of the ansatz, Eq.(12), on \(\gamma\).
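For illustration, Eqs. (21), (23) and (24) can be combined directly: the \(N_{TC}n_{F}\) factors cancel between \(Z(\kappa)\) and \(\lambda_{4(0)}(\kappa)\), so \(M_{S}^{2}/\mu^{2}\) depends only on \(\kappa\). A minimal numerical sketch is given below; note that the curve shown in Fig. 3 is further normalized so that \(M_{S}=2\mu\) at negligible \(\gamma\), which the raw numbers below do not include.

```python
# A minimal sketch evaluating M_S/mu from Eqs. (21), (23) and (24).
import numpy as np

N_TC, N_F = 3, 5                       # values adopted in the text

def lambda40(k):                       # Eq. (21)
    return (N_TC * N_F / (8 * np.pi ** 2)) * (
        0.5 + 3 / (16 * k - 16) + 2 / (3 * (7 - 6 * k))
        - 5 / (8 * (10 - 8 * k)) + 3 / (5 * (13 - 10 * k))
        - (k - 1) * (68 * k ** 2 - 172 * k + 109)
        / ((4 * k - 5) * (6 * k - 7) * (10 * k - 13)))

def z_norm(k):                         # Eq. (23)
    return 4 * np.pi ** 2 / (N_TC * N_F) * (2 - 2 * k)

def ms_over_mu(k):                     # Eq. (24), in units of mu
    return np.sqrt(2 * z_norm(k) * lambda40(k))

for k in (0.0, 0.25, 0.5):
    print(f"kappa = {k:.2f}:  M_S/mu = {ms_over_mu(k):.3f}")
```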
In usual perturbative QCD we can expect \(\gamma\sim 0\); however, as we commented, large mass anomalous dimensions \(\gamma\) can be obtained in the scenarios listed in (i)-(iii).
An increase of \(n_{F}\) (for example, as the authors of Refs.[16; 17; 18; 19] have demonstrated, the conformal window for the \(SU(3)\) gauge theory lies in the range \(8<n_{f}<12\)), or the consideration of higher TF representations \(R>F\) [12; 13; 14; 15], would lead to the large values of \(\gamma\) needed to produce changes in the TC mass function. In calculations involving Eq.(12), we will consider \(\gamma\) as an adjustable parameter; if the Higgs boson is a composite particle, a realistic TC model would probably be characterized by one of these possible scenarios.

Figure 2: Diagrams contributing to the kinetic term in the effective Lagrangian.
## IV The TC scalar mass: BSE versus effective potential approach
### The \(Su(n)_{tc}\) scalar masses
In this section, we will reconsider the approach employed in Ref.[33] for the characterization of the TC scalar and pseudo scalar masses using the Bethe-Salpeter equations (BSE).
We will assume the BSE equations described in Refs.[42; 43; 44], with the approximations discussed in [33], with the difference that in this work we consider \(G_{\rho\nu}(k-q)\) given by Eq.(5), in order to compare the results obtained with the effective potential.
The Bethe-Salpeter equation can be written as
\[\chi^{tc}(p,q)=-\imath\int\frac{d^{4}k}{(2\pi)^{4}}S(q+\alpha p)K _{\rho\nu}(p,k,q)S(q-\beta p),\] \[K_{\rho\nu}(p,k,q)=\gamma_{\rho}\chi^{tc}(p,k)\gamma_{\nu}G_{ \rho\nu}(k-q), \tag{25}\]
where in the above expression \(\chi^{tc}\) is the technicolor (tc) BS wave function, and \(\alpha\) and \(\beta\) characterize the fractions of momentum carried by the constituents, with \(\alpha+\beta=1\).
In the calculation, we will consider that each constituent carries half of the momentum, i.e. \(\alpha=\beta=1/2\). In Eq.(25) the fermion propagator is given by Eq.(4), and the BSE solution appears as an eigenvalue problem for \(p^{2}=M^{2}\), where \(M\) is the bound state mass.

In Eq.(25) the variables are \(p,q,k\); \(k\) is integrated, and we remain with an equation in \(q\) that will have a solution for \(p^{2}=M^{2}\). To solve this integral equation, for the scalar channel we can project \(\chi^{tc}(p,q)=\chi^{tc}_{S}(p,q)\) into four coupled homogeneous integral equations given by
\[\chi^{tc}_{S}(p,q)=\chi_{S0}+\not{p}\chi_{S1}+\not{q}\chi_{S2}+[\not{p},\not{ q}]\chi_{S3}, \tag{26}\]
which are functions of \(p^{2}\), \(q^{2}\) and \(p\cdot q=pq\cos\theta\). It is possible to expand \(\chi^{tc}(p,q)\) in terms of Tschebyshev polynomials, and these equations can be truncated at a given order determined by the relative size of the next-order functions. A satisfactory solution can be obtained by keeping only some terms, like \(\chi^{(0)}_{S(0,1)},\chi^{(0,1)}_{S(1)},\chi^{(0,1)}_{S(2)}\).
In addition to the above considerations, we will consider constituents of the same mass \(m=m_{a}=m_{b}\), with \(m=\Sigma(x+\frac{1}{4}p^{2})\), where \(x=q^{2}/\mu^{2}\) and \(\mu=\mu_{tc}\) is the characteristic mass scale of the binding forces (TC).

The procedure for determining \(M_{S}(\kappa)\) is the same one described in Ref.[33]. In Fig. 3 we present the behavior obtained for \(M_{S}(\kappa)/\mu_{tc}\) considering the ansatz for \(\Sigma(p^{2})\) given by Eq.(12), assuming Eq.(24) and the corresponding result obtained from Eq.(25). As in Eq.(24), the scalar mass depicted in Fig. 3 is just a function of \(\gamma\), which is an adjustable parameter.
In this figure we normalize our results for \(M_{S}\) in terms of
\[M_{S}=2\mu_{tc}. \tag{27}\]
associated to a negligible \(\gamma\).
The choice of this normalization is based on the result described in Ref.[45], where Delbourgo and Scadron verified analytically with the help of the homogeneous BSE equations, that the sigma meson mass is given by \(m_{\sigma}=2\mu_{dyn}\). We can use this result obtained for QCD to determine, by appropriate rescaling, the behavior of \(M_{S}\) in TC models, where \(\mu_{tc}=1TeV\).
In Fig. 3, the dot-dashed line in orange matches the results obtained from Eq.(24), and the points denoted by (\(\blacklozenge\)) represent the BSE numerical solutions obtained for \(M_{S}(\kappa)/\mu_{tc}\). The line in red corresponds to the ratio \(M_{H}/\mu_{tc}\), for comparison with the \(M_{S}(\kappa)/\mu_{tc}\) results.
In the dot-dashed line in olive we show the fit to the \(M_{S}(\kappa)/\mu_{tc}\) data, with \(R^{2}=0.99997\), which corresponds to
\[S(\kappa)=\frac{M_{S}(\kappa)}{\mu_{tc}}=2-1.3161\kappa-2.65806 \kappa^{2}\] \[+1.38845\kappa^{3}+0.66806\kappa^{4}. \tag{28}\]
The expansion considered in Eq.(19) is not sensitive to the region of low momenta, which is captured by the BS equations, so that at \(\gamma\approx 1\) the curves start to show a different behavior.
The comparison of the behavior exhibited by \(M_{S}(\kappa)/\mu_{F}\), obtained with the different approaches, suggests that the potential responsible for generating the light composite scalar behaves like \(\propto\Phi^{4}\), being characterized by \(\gamma\sim O(1.2-1.8)\), indicating that the composite scalar boson \(\Phi\) seems to behave like a dilaton, as suggested in [30; 31].

Figure 3: Scalar masses \(M_{S}(\kappa)\) considering different approaches. The contextualization of the curves' behavior is described in the text.
### TC pseudo scalar masses
Let us suppose that the TC group is not so different from QCD where there are many pseudo Goldstone bosons (or technipions) resulting from the chiral symmetry breaking of the technicolor theory.
These technipions, besides the ones absorbed by the W and Z gauge bosons, can be classified, for example, according to [47]:
(a) Charged and neutral color singlets:
\[\Pi^{\pm}\sim\bar{U}^{i}D_{i}+\bar{D}^{i}U_{i}-3\bar{N}E\] \[\Pi^{0}\sim\bar{U}^{i}U_{i}+\bar{D}^{i}D_{i}-3(\bar{N}N+\bar{E}E), \tag{29}\]
where \(i\) denotes the number of TC flavours.
(b) Colored triplets:
\[\Pi^{3}\sim\bar{N}U_{i}^{a}+\bar{E}U_{i}^{a}, \tag{30}\]
(c) Colored octets:
\[\Pi^{8}\sim\bar{U}_{i}^{a^{\prime}}U_{i}^{a}+\bar{D}_{i}^{a^{\prime}}D^{a}i, \tag{31}\]
in the above expressions \((a,a^{\prime})\) denote a color index.
The heaviest pseudo Goldstone bosons carry color, since they receive large radiative corrections from QCD, while the others may have only electroweak corrections to their masses.
The lightest technifermion will be the neutral one (N), and the lightest pseudo Goldstone boson is \(\Pi^{0}\sim\bar{N}N\); we assume that such a neutral boson is composed of (N) technifermions. From this point we can determine the \(\Pi^{0}\) mass, \(M_{\Pi^{0}}\), from the BSE equations.
For the pseudo scalar components, the projection of \(\chi^{tc}(p,q)=\chi^{tc}_{P}(p,q)\) is given by
\[\chi^{te}_{P}(p,q)=\gamma_{5}(\chi_{P}+p\!\!\!/\chi_{P1}+q\!\!\!/\chi_{P2}+[p\!\!\!/,q\!\!\!/]\chi_{P3}\,), \tag{32}\]
the components of Eq.(32), \(\chi^{(0,1)}_{P(i)}\) for \(i=0..3\), are listed in Appendix A. Assuming Eqs.(25) and (32), in Fig.4 we present the behavior of \(M_{\Pi^{0}}(\kappa)/\mu_{tc}\) compared to \(M_{S}(\kappa)/\mu_{tc}\), where we again normalize our results in terms of Eq.(27).
In this figure the dot-dashed line in olive corresponds to the fit of \(M_{S}(\kappa)/\mu_{tc}\) given by Eq.(28), and the points, denoted as in Fig.(3), represent the BSE numerical solutions obtained for \(P(\kappa)=M_{\Pi^{0}}(\kappa)/\mu_{tc}\).
The dot-dashed light blue line represents the fit for \(M_{\Pi^{0}}(\kappa)/\mu_{tc}\) data with \(R^{2}=0.999997\), which corresponds to
\[P(\kappa)=M_{\Pi^{0}}(\kappa)/\mu_{tc} = 2-1.88021\kappa+0.0626412\kappa^{2} \tag{33}\] \[-0.792769\kappa^{3}+0.780151\kappa^{4}.\]
Therefore, from Eqs.(28) and (33), we determine the ratio at \(M_{S}(\kappa)=M_{H}\) as
\[\frac{M_{PS}}{M_{S}}=2.47, \tag{34}\]
which represents the following lower bound on the lightest pseudo scalar mass: \(M_{\Pi^{0}}\approx 309\,GeV\). This result confirms the estimate presented in Ref.[38], which corresponds to
\[M_{\Pi^{0}}\approx(200-460)\,GeV, \tag{35}\]
where we had also assumed that such a neutral boson is composed solely of N technifermions.
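As a numerical illustration of how Eqs.(28) and (33) produce this bound, the following sketch (not part of the original analysis; it assumes \(M_{H}=125\,GeV\) and \(\mu_{tc}=1\,TeV\)) solves \(S(\kappa)=M_{H}/\mu_{tc}\) and evaluates the ratio of the two fits; small differences with respect to Eq.(34) come from rounding of the fit coefficients.

```python
# A minimal sketch (not part of the original analysis), assuming M_H = 125 GeV
# and mu_tc = 1 TeV: solve S(kappa) = M_H/mu_tc from Eq. (28), then evaluate
# the pseudoscalar fit of Eq. (33) at the same kappa.
from scipy.optimize import brentq

def S(k):  # Eq. (28): M_S(kappa)/mu_tc
    return 2 - 1.3161*k - 2.65806*k**2 + 1.38845*k**3 + 0.66806*k**4

def P(k):  # Eq. (33): M_Pi0(kappa)/mu_tc
    return 2 - 1.88021*k + 0.0626412*k**2 - 0.792769*k**3 + 0.780151*k**4

mu_tc, M_H = 1000.0, 125.0                         # GeV
k_star = brentq(lambda k: S(k) - M_H / mu_tc, 0.0, 1.0)
print(f"kappa*   = {k_star:.3f}")
print(f"M_PS/M_S = {P(k_star) / S(k_star):.2f}")   # ~2.5, cf. Eq. (34)
print(f"M_Pi0    ~ {P(k_star) * mu_tc:.0f} GeV")   # ~310 GeV, cf. the text
```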
## V Conclusions
In this work, we verified that the effective potential responsible for \(M_{S}\sim O(120)\,GeV\) behaves like \(\propto\Phi^{4}\) at leading order. We also compared two different approaches to obtain \(M_{S}(\gamma)\), i.e. the effective potential for composite operators and the Bethe-Salpeter equations. These results corroborate the hypothesis that, if the Higgs boson is a composite scalar, it may be a composite dilaton, as suggested in Refs.[30; 31].
This result is displayed in Fig.(3): the curves obtained with the different approaches, considering \(G(k-q)\) given by Eq.(5), overlap exactly up to \(\gamma\approx 1\). As commented above, the expansion considered in Eq.(19) is not sensitive to the region of low momenta, which is captured by the BS equations, so that at \(\gamma\approx 1\) the curves start to show different behaviors.
In the last section of this work, we included the determination of the lightest pseudo scalar mass, \(M_{\Pi^{0}}\approx 309\,GeV\), which confirms the estimate presented in Ref.[38]. Charged and colored technifermions will not only have larger masses than the neutral technifermion (N), but will also receive more radiative corrections to their masses, and we can expect
Figure 4: Pseudoscalar masses \(M_{\Pi^{0}}(\kappa)/\mu_{tc}\) (curve in light blue) and the scalar masses depicted in Fig.(3). The behavior of the curves is described in the text.
even larger masses for these colored and charged pseudo scalar bosons.
According to the discussion presented in Ref.[39], we still have other contributions to the effective Lagrangian given by Eq.(19). These are the contributions coming from ordinary massive quarks and leptons that couple to the composite scalar boson \(\Phi\). They are dominated by the heaviest fermion, the top quark, and generate terms of order \(\Phi^{3}\) and \(\Phi^{4}\).
However, as we verified in this work, the \(\Phi^{4}\) contributions to \(\tilde{\Omega}\) from massive fermions can be disregarded; the only exception is the contribution to the \(\Phi^{3}\) term, which is small but introduces some effect in the scalar mass calculation. If the Higgs boson is a composite particle, it is still possible that its constituents are bound by a non-Abelian strong gauge interaction, and we believe that combining different approaches can be useful for characterizing the properties of this possible composite state.
## Appendix A The pseudo scalar components \(\chi^{(0,1)}_{P(i)}\) in the BS equations
Assuming that each constituent, the (N) techniquarks, carries half of the momentum (\(\alpha=\beta=1/2\)), the different components of Eq.(32), \(\chi^{(0,1)}_{P(i)}\) for \(i=0..3\), are listed in the sequence
\[\chi^{(0)}_{P0}(x,p^{2})=3[(x-\frac{1}{4}p^{2}+m^{2})J_{1}]I_{P0}+\Delta\chi^{ (0)}_{P0}\,, \tag{10}\]
where
\[I_{P0}=\frac{2}{3\pi}\int dyy\chi^{(0)}_{P0}K_{1}\,, \tag{11}\]
with \(y=k^{2}/\Lambda^{2}\) and
\[K_{1}(x,y)=\frac{3}{16\pi^{2}}\int d\theta\sin^{2}\theta\,G(x,y,\cos\theta)\,, \tag{12}\]
\[J_{1}=\frac{2}{\pi}\int_{0}^{\pi}d\theta\frac{\sin^{2}\theta}{D(p^{2},q^{2},pq\cos\theta)} \tag{13}\]
and in Eq.(13) we have
\[D(p^{2},q^{2},pq\cos\theta)=\{(q+\frac{1}{2}p)^{2}+m^{2}\}\{(q-\frac{1}{2}p)^{2}+m^{2}\}. \tag{14}\]
Considering the Taylor series expansion of \((q+\frac{1}{2}p)^{2}+m^{2}\) and \((q-\frac{1}{2}p)^{2}+m^{2}\), keeping the first-order derivative terms in \(m\), the function \(J_{1}\) can be written as
\[J_{1}=\frac{2}{c_{1}c_{4}+c_{2}c_{3}}\left[\frac{c_{2}}{D_{1}}+\frac{c_{4}}{D _{2}}+d_{1}\left(\frac{c_{1}}{D_{1}}-\frac{c_{3}}{D_{2}}\right)\right] \tag{15}\]
where, in our approximation with \(\alpha=\beta=1/2\), we obtain
\[c_{1}=c_{3}=x+\frac{1}{4}p^{2}+m^{2}\] \[c_{2}=c_{4}=1+2mm^{\prime}\]
and \(m^{\prime}\) is the derivative of \(m\) with respect to the momentum. In addition, as a consequence of \(\alpha=\beta\)
\[d_{1}=0\]
\[D_{1}=D_{2}=c_{1}+\sqrt{c_{1}^{2}-p^{2}xc_{2}^{2}}.\]
In Eq.(10), the term \(\Delta\chi^{(0)}_{P0}\) stands for corrections to the leading-order results for \(\chi^{(0)}_{P0}\), that correspond to \(\chi^{(0,1)}_{P1}\), \(\chi^{(0,1)}_{P2}\) and \(\chi^{(0,1)}_{P3}\).
With the approximations considered, we have
\[\Delta\chi^{(0)}_{P0}=-\frac{2}{3\pi}mp^{2}J_{1}\int dyy\chi^{(0)}_{P1}(K_{1}+2yK_{3})+\] \[-\frac{4}{9\pi}p^{2}J_{3}\int dyy\chi^{(0)}_{P1}(3K_{1}-4yK_{3})+\] \[+\frac{2}{3\pi}p^{2}(J_{1}-J_{3})h(K_{6},K_{3},x,y)\] \[+\frac{2}{3\pi}\left[(x-\frac{1}{4}p^{2}+m^{2})(J_{1}-4J_{3})p^{2}\right]\] \[\times h(K_{7},K_{1},y), \tag{16}\]
where we define,
\[h(K_{6},K_{3},x,y)=\int dyy\chi^{(0)}_{P3}\left(2\sqrt{xy}K_{6}- \frac{8}{3}xyK_{3}\right)\] \[h(K_{7},K_{1},y)=\int dyy^{2}\chi^{(2)}_{P0}\left(\frac{4}{3}K_{ 7}-K_{1}\right). \tag{17}\]
In the equation above, the lowest order terms \(\chi^{(0)}_{P(1-3)}\) are given by
\[\chi^{(0)}_{P1}=\frac{mJ_{1}}{J_{1}(x-\frac{1}{4}p^{2}+m^{2})}\chi^{(0)}_{P0} \,\ \chi^{(0)}_{P2}=0\,\ \chi^{(0)}_{P3}=\frac{1}{2}\frac{J_{1}}{J_{1}(x-\frac{1}{4}p^{2}+m^{2})} \chi^{(0)}_{P0}. \tag{18}\]
The higher-order term \(\chi^{(2)}_{P0}\) is given by
\[\chi^{(2)}_{P0}=\frac{1}{xp^{2}}\frac{4(x-\frac{1}{4}p^{2}+m^{2})J_{3}-J_{1}}{ J_{1}(x-\frac{1}{4}p^{2}+m^{2})}\chi^{(0)}_{P0}. \tag{19}\]
We are dealing with scalar (S) and pseudo scalar (PS) bosons with equal-mass constituents, and in this case we
have simpler equations compared to Ref.[43], which correspond to
\[J_{2}=0\;\;,\;\;J_{3}=\frac{1}{D_{1}^{2}}. \tag{20}\]
The kernels \(K_{i}(x,y)\), with \(i=3,6,7\), appearing in Eqs.(16) and (17), are given by
\[K_{3}(x,y)=\frac{3}{16\pi^{2}}\int d\theta\frac{\sin^{4}\theta}{x+y-2\sqrt{xy}\cos\theta}G(x,y,\cos\theta), \tag{21}\]
\[K_{6}(x,y)=\frac{3}{16\pi^{2}}\int d\theta\sin^{2}\theta\cos\theta\,G(x,y,\cos\theta), \tag{22}\]
\[K_{7}(x,y)=\frac{3}{16\pi^{2}}\int d\theta\sin^{4}\theta\,G(x,y,\cos\theta). \tag{23}\]
As in the determination of \(M_{S}(\kappa)\), to obtain \(M_{PS}(\kappa)\) we consider \(G_{\rho\nu}\) in the Landau gauge, given by Eq.(5); in addition, we also consider the MAC hypothesis [40; 41]. Therefore, in this equation \(G(x,y,\cos\theta)=\mu^{2}G(k-q)\), with \(G(k-q)\) given by Eq.(5), which leads to
\[\mu^{2}G(k-q)=\frac{\mu^{2}}{k^{2}+q^{2}-2kq\cos\theta}=G(x,y,\cos\theta). \tag{24}\]
###### Acknowledgements.
I would like to thank A. A. Natale for reading the manuscript and for useful discussions. This research was partially supported by the Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq) under the grant 310015/2020-0 (A.D.).
|
2310.00024 | Contrasting Features of Parton Energy Loss in Heavy-ion Collisions at
RHIC and the LHC | Energetic quarks and gluons lose energy as they traverse the hot and dense
medium created in high-energy heavy-ion collisions at the BNL Relativistic
Heavy Ion Collider (RHIC) and the CERN Large Hadron Collider (LHC). The nuclear
modification factor ($R_{AA}$) of leading particles quantifies parton energy
loss in such collisions, with the particle spectrum in $p+p$ collisions as a
reference. Previous $R_{AA}$ measurements at RHIC energies have revealed an
approximately constant trend at high transverse momenta ($p_{T}$), implying a
scenario where parton energy loss, $\Delta p_{T}$, scales proportionally with
$p_{T}$, a feature naively expected from energy loss dynamics in elastic
collisions. In this study, we investigate the LHC $R_{AA}$ measurements which
exhibit a pronounced $p_{T}$ dependence of $R_{AA}$ for various particle
species, and our analysis attributes this behavior to $\Delta p_T$ being
approximately proportional to $\sqrt{p_{T}}$. These distinct features are
consistent with model calculations of dominant radiative energy loss dynamics
at the LHC, in contrast to the dominance of collisional energy loss at RHIC.
Additionally, the linear increase of fractional energy loss with medium density
at different $p_{T}$ magnitudes affirms the previous empirical observation that
the magnitude of the energy loss depends mostly on the initial entropy density,
with no significant path-length dependence. Implications on the dynamical
scenarios of parton energy loss and future experimental investigations will
also be discussed. | Thomas Marshall, Philip Suh, Gang Wang, Huan Zhong Huang | 2023-09-28T22:23:58Z | http://arxiv.org/abs/2310.00024v3 | # Contrasting Features of Parton Energy Loss in Heavy-ion Collisions at RHIC and the LHC
###### Abstract
Energetic quarks and gluons lose energy as they traverse the hot and dense medium created in high-energy heavy-ion collisions at the BNL Relativistic Heavy Ion Collider (RHIC) and the CERN Large Hadron Collider (LHC). The nuclear modification factor (\(R_{AA}\)) of leading particles quantifies parton energy loss in such collisions, with the particle spectrum in \(p+p\) collisions as a reference. Previous \(R_{AA}\) measurements at RHIC energies have revealed an approximately constant trend at high transverse momenta (\(p_{T}\)), implying a scenario where parton energy loss, \(\Delta p_{T}\), scales proportionally with \(p_{T}\), a feature naively expected from energy loss dynamics in elastic collisions. In this study, we investigate the LHC \(R_{AA}\) measurements which exhibit a pronounced \(p_{T}\) dependence of \(R_{AA}\) for various particle species, and our analysis attributes this behavior to \(\Delta p_{T}\) being approximately proportional to \(\sqrt{p_{T}}\). These distinct features are consistent with model calculations of dominant radiative energy loss dynamics at the LHC, in contrast to the dominance of collisional energy loss at RHIC. Additionally, the linear increase of fractional energy loss with medium density at different \(p_{T}\) magnitudes affirms the previous empirical observation that the magnitude of the energy loss depends mostly on the initial entropy density, with no significant path length dependence. Implications on the dynamical scenarios of parton energy loss and future experimental investigations will also be discussed.
**keywords:** heavy-ion collision; nuclear modification factor; parton energy loss
Color opacity stands as a fundamental trait of the hot and dense medium created in heavy-ion collisions at the BNL Relativistic Heavy Ion Collider (RHIC) and the CERN Large Hadron Collider (LHC). As energetic quarks and gluons traverse the medium, they shed energy through elastic scattering [1; 2; 3; 4] and radiation of soft gluons [5; 6; 7]. In the scenario of an infinitely-high-momentum parton, energy loss would predominantly occur through radiative processes. Conversely, in the opposite scenario, collisional energy loss would become the dominant factor. Prior empirical examinations of final-state leading particle spectra and the pertinent nuclear effects, using RHIC data, have revealed the proportionality between parton energy loss (\(\Delta p_{T}\)) and the magnitude of transverse momentum (\(p_{T}\)), supporting the prevalence of collisional energy loss [8]. Given that collision center-of-mass energies (\(\sqrt{s_{NN}}\)) at the LHC significantly surpass those at RHIC by over an order of magnitude, the associated \(p_{T}\) range of generated particles now spans into a realm where radiative energy loss dynamics are expected to assume a more prominent role. Hence, the analysis of LHC data using the same framework as in Ref. [8] is warranted to investigate the potential transition in the dynamics of energy loss from RHIC to the LHC.
Both radiative and collisional energy losses are intricately linked to the path length (\(L\)) and the entropy density of the medium. We approximate the medium entropy density as \(\frac{1}{S}\frac{dN}{dy}\), where \(\frac{dN}{dy}\) represents the experimentally measured particle density per unit rapidity, and \(S\) corresponds to the transverse overlap area of the colliding system, which can be determined using Monte Carlo Glauber calculations [9; 10; 11; 12]. A previous study of RHIC data has unraveled the lack of or minimal dependence of \(\Delta p_{T}\) on \(L\), implying that parton energy loss is predominantly determined by the initial medium density [8]. This feature could arise from the scenario of rapid expansion of the collision system, resulting in a swift decrease in medium entropy density over time. It is of great interest to investigate whether the LHC data corroborate the same characteristics.
In experiments, the nuclear modification factor, \(R_{AA}\), quantifies the suppression or enhancement of particle yields in heavy-ion collisions relative to a nucleon-nucleon (\(NN\)) reference:
\[R_{AA}(p_{T})=\frac{d^{2}N^{AA}/dp_{T}d\eta}{T_{AA}d^{2}\sigma^{NN}/dp_{T}d \eta}, \tag{1}\]
where \(T_{AA}\) accounts for the nuclear collision geometry, and \(\eta\) denotes pseudorapidity. Both STAR [13][14] and PHENIX [15][16] data demonstrate a plateauing of the \(R_{AA}\) spectrum at values much lower than unity in the high-\(p_{T}\) region (\(\gtrsim 5\) GeV/c). Treating the suppression of the nuclear modification factor as a result of empirical loss of transverse momentum from the \(p+p\) spectrum to the nucleus+nucleus spectrum, these flat \(R_{AA}\) curves
were found to indicate a constant fractional \(p_{T}\) shift in the spectrum. From a classical standpoint, this behavior is consistent with elastic collisional energy loss. Higher-\(p_{T}\) particles would lose a proportionally higher amount of momentum through elastic collisions within the medium, resulting in a constant \(\Delta p_{T}/p_{T}\). While this seems to describe the observed RHIC data fairly well, LHC data demonstrate significantly different characteristics.
Figure 1 depicts the published \(p_{T}\) spectra of various final-state particles in \(p\)+\(p\) collisions at (a) 2.76 TeV and (b) 5.02 TeV. Each dataset can be described by a Tsallis distribution [17]:
\[\frac{1}{2\pi p_{T}}\frac{d^{2}N}{dp_{T}d\eta}=A(1+\frac{p_{T}}{p_{0}})^{-n}, \tag{2}\]
where \(A\), \(p_{0}\), and \(n\) are free parameters in the fit. Note that certain datasets have been adjusted by scaling factors compared to their original sources. These scaling factors will be incorporated into the parameter \(A\) and do not affect the relevant physics being investigated.
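To make this fitting step concrete, a minimal Python sketch (not the experiments' analysis code; the data below are synthetic placeholders for a published spectrum) of a least-squares fit of Eq. (2) is:

```python
# A hedged sketch (not the experiments' analysis code) of the Tsallis fit of
# Eq. (2); the "data" below are synthetic placeholders for a p+p spectrum.
import numpy as np
from scipy.optimize import curve_fit

def tsallis(pt, A, p0, n):
    # (1/2/pi/pt) d^2N/(dpT deta) = A (1 + pT/p0)^(-n)
    return A * (1.0 + pt / p0) ** (-n)

pt = np.linspace(1.0, 20.0, 30)                              # GeV/c
data = tsallis(pt, 10.0, 1.0, 6.0)
data *= np.random.default_rng(0).normal(1.0, 0.03, pt.size)  # mock errors

(A_fit, p0_fit, n_fit), _ = curve_fit(tsallis, pt, data, p0=[5.0, 0.5, 5.0])
# p0_fit and n_fit are then held fixed in the R_AA fit of Eq. (4).
```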
Following the procedures outlined in Ref. [8] and treating the suppression empirically as a horizontal shift in the \(p_{T}\) spectrum from \(p\)+\(p\) to \(A\)+\(A\) collisions, we can express \(R_{AA}\) as
\[R_{AA}(p_{T})=\frac{(1+p_{T}^{\prime}/p_{0})^{-n}p_{T}^{\prime}}{(1+p_{T}/p_{0 })^{-n}p_{T}}\left[1+\frac{dS(p_{T})}{dp_{T}}\right] \tag{3}\]
where \(p_{T}^{\prime}\equiv p_{T}+S(p_{T})\), and \(S(p_{T})\) is the magnitude of the shift. Although \(S(p_{T})\propto p_{T}\) adequately describes the RHIC data in the high-\(p_{T}\) region, we start with a more general form in this paper, namely \(S(p_{T})=S_{0}p_{T}{}^{\alpha}\). Then, Eq. (3) becomes
\[R_{AA}(p_{T})=\frac{[1+(p_{T}+S_{0}p_{T}{}^{\alpha})/p_{0}]^{-n}(p_{T}+S_{0}p_{T}{}^{\alpha})}{(1+p_{T}/p_{0})^{-n}p_{T}}\times(1+\alpha S_{0}{p_{T}}^{\alpha-1}). \tag{4}\]
Once we determine \(p_{0}\) and \(n\) for each particle species from the \(p_{T}\) distribution in Fig. 1, we regard them as fixed parameters in Eq. (4), and use this formula to fit the corresponding \(R_{AA}\) data allowing \(S_{0}\) and \(\alpha\) to vary as free parameters.
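A hedged sketch of this procedure (an assumed implementation of Eq. (4); the \(p_{0}\), \(n\), and \(R_{AA}\) values below are toy numbers, not the measured data) is:

```python
# An assumed implementation of the R_AA model of Eq. (4): p0 and n are fixed
# from the Tsallis fit, while S0 and alpha are free; all numbers are toy
# values, not the measured data.
import numpy as np
from scipy.optimize import curve_fit

def raa_model(pt, S0, alpha, p0, n):
    shift = S0 * pt ** alpha                          # S(pT) = S0 pT^alpha
    num = (1.0 + (pt + shift) / p0) ** (-n) * (pt + shift)
    den = (1.0 + pt / p0) ** (-n) * pt
    return num / den * (1.0 + alpha * S0 * pt ** (alpha - 1.0))

p0_fit, n_fit = 1.0, 6.0                              # from the p+p fit
pt = np.linspace(5.0, 50.0, 20)                       # high-pT region, GeV/c
raa = raa_model(pt, 1.2, 0.5, p0_fit, n_fit)          # toy "measurement"

(S0_fit, alpha_fit), _ = curve_fit(
    lambda x, S0, a: raa_model(x, S0, a, p0_fit, n_fit), pt, raa, p0=[1.0, 0.7])
# alpha_fit ~ 0.5 is the LHC-like behavior; fixing alpha = 1 mimics RHIC.
```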
The necessity of introducing the \(\alpha\) parameter is convincingly illustrated in Fig. 2, which shows the \(R_{AA}\) measurements as a function of \(p_{T}\) for charged hadrons in Au+Au collisions at 200 GeV [13] and for charged pions in Pb+Pb collisions at 5.02 TeV [20] for (a) 0-5% and (b) 30-40% centrality ranges. The fit functions adhere to Eq. (4) with \(S_{0}\) serving as the sole free parameter. At \(p_{T}\gtrsim 5\) GeV/\(c\), the flat \(R_{AA}\) patterns at RHIC agree with \(\alpha=1\), whereas the increasing trends at the LHC harmonize with \(\alpha=0.5\). At both collision energies, the flattening and increasing trends initiate at approximately the same \(p_{T}\) value of around 5 GeV/\(c\). This pattern is also evident in the \(R_{AA}\) data for other particle species to be presented. Presumably, below this \(p_{T}\), soft-physics dynamics, including hydrodynamics and coalescence formation, dominate, whereas above \(p_{T}\approx 5\) GeV/\(c\) parton fragmentation starts to dominate particle production, where the parton energy loss picture emerges.
Figure 3 delineates \(R_{AA}(p_{T})\) for charged hadrons in (a) 0-5% and (b) 30-40% Pb+Pb collisions at 2.76 TeV [24], and for charged pions in (c) 0-5% and (d) 30-40% Pb+Pb collisions at 5.02 TeV [20]. All the datasets exhibit upward trends for \(p_{T}\gtrsim 5\) GeV/\(c\). When we apply the same fitting approach and fix \(\alpha\) at 0.5, the resulting fit curves (dashed lines) adequately capture all the data points. When we take \(\alpha\) as a free parameter (solid curve), the extracted \(\alpha\) values are consistent with 0.5 within statistical uncertainties.
We further investigate whether other final-state leading particles also exhibit these features. Figure 4 shows similar rising trends of \(R_{AA}\) at higher \(p_{T}\) for (a) \(\pi^{0}\) and (b) \(\eta\) mesons in 0-10% Pb+Pb at 2.76 TeV [25], for (c) prompt \(J/\psi\) meson in 0-100% Pb+Pb at 5.02 TeV [21], and for (d) prompt \(D^{0}\) meson in 0-10% Pb+Pb at 5.02 TeV [22]. The \(\alpha\) values extracted for \(\pi^{0}\), \(\eta\), and \(J/\psi\) mesons are consistent with 0.5 within the fitted statistical uncertainties. The fits to the prompt \(D^{0}\) data seem to show some tension between the varied and fixed \(\alpha\) values, but the difference is a less-than-\(2\sigma\) effect. The curve
Figure 1: Particle \(p_{T}\) spectra in \(p\)+\(p\) collisions at (a) 2.76 TeV and (b) 5.02 TeV. The 2.76 TeV data (charged particles, \(\pi^{0}\), and \(\eta\)) are from ALICE [18; 19]. The 5.02 TeV results include charged pions from ALICE [20], prompt \(J/\psi\) and prompt \(D^{0}\) mesons from CMS [21; 22], and muons from charm and bottom hadrons from ATLAS [23]. Different scaling factors are applied for better visibility. Fits to the data follow Eq. (2) as discussed in the text.
with the fixed parameter (\(\alpha=0.5\)) does appear to agree with all the data points within the uncertainties. More precise measurements of \(D^{0}\)\(R_{AA}\) are required to better constrain the value of \(\alpha\).
Figure 5 displays \(R_{AA}(p_{T})\) for muons originating from (a) charm and (b) bottom hadrons in 0-10% Pb+Pb collisions at 5.02 TeV. In both cases, the fit curves with \(\alpha\) = 0.5 align with all the data points within uncertainties. The \(\alpha\) values extracted from the free-parameter fits exhibit a slight deviation from 0.5, with less than 1.5\(\sigma\) significance.
To recap, the LHC data analyzed here suggest that, to explain the \(R_{AA}\) measurements for light- and heavy-quark hadrons as a \(p_{T}\) shift in the spectrum from \(p\)+\(p\) collisions, the corresponding \(\Delta p_{T}\) must scale with \(\sqrt{p_{T}}\). This \(p_{T}\) dependence contrasts with the proportionality to \(p_{T}\) previously observed in RHIC data. Our analysis with more recent data is in line with a previous study of LHC \(R_{AA}\) data that determined the \(\alpha\) value to be 0.55 [26]. The \(p_{T}\) dependence of the parton energy loss at the LHC supports theoretical predictions involving energy loss dynamics from medium-induced gluon radiation [27]. The distinct change from \(\alpha=1\) at RHIC to \(\alpha=0.5\) at the LHC suggests a transition in the relative importance of collisional energy loss dynamics to radiative energy loss dynamics.
The transition in the parton energy loss dynamics in the medium might find an explanation in the VNI/BMS parton cascade model calculations [28] that propose a significant reduction in parton collisional energy loss as the medium mass scale increases. The substantially higher \(\sqrt{s_{NN}}\) at the LHC leads to a considerably greater abundance of heavy quarks within the medium compared with RHIC. This increase in the medium mass scale could consequently cause a reduction in collisional energy loss. Another theoretical prediction, using the Monte Carlo pQCD tomographic model, known as CUJET1.0 [29], initially underestimates the growth of \(R_{AA}\) with \(p_{T}\) at LHC energies. In order to describe the LHC data, this model necessitates an adaptation that enhances the relative contribution of radiative energy loss over collisional energy loss at the LHC.
The VNI/BMS calculations for charm quark energy loss at RHIC energies agree with the prevailing collisional energy loss up until the 7-12 GeV/\(c\) region for initial charm quarks, followed by a crossover to the dominance of radiative energy loss and a plateauing effect at higher momenta [30]. Extending this investigation to LHC energies may offer insights into elucidating the subtle upward trend observed in the higher \(p_{T}\) range, especially in relation to the nuanced distinctions between heavy-quark and light-quark behaviors.
The dead-cone effect [31] predicts that gluon radiation is more strongly suppressed for bottom quarks than charm quarks, as the former bears a larger mass-to-energy ratio, leading to a wider dead cone. Recent measurements of heavy-quark meson production in \(p\)+\(p\) col
Figure 2: \(R_{AA}\) as a function of \(p_{T}\) for charged hadrons in Au+Au collisions at 200 GeV (red) [13] and for charged pions in Pb+Pb collisions at 5.02 TeV (blue) [20] for (a) 0–5% and (b) 30–40% centrality ranges. The fit functions follow Eq. (4) with fixed \(\alpha\) values of 1 and 0.5 for RHIC and the LHC data, respectively, using the corresponding \(p_{0}\) and \(n\) values extracted from the Tsallis fits in Fig. 1.
Figure 3: \(R_{AA}(p_{T})\) for charged hadrons in (a) 0–5% and (b) 30–40% Pb+Pb collisions at 2.76 TeV [24], and for charged pions in (c) 0–5% and (d) 30–40% Pb+Pb collisions at 5.02 TeV [20]. The fit functions from Eq. (4) either take \(\alpha\) as a free parameter or fix it at 0.5, using the \(p_{0}\) and \(n\) values extracted from the Tsallis fits in Fig. 1.
lisions by the ALICE experiment [32] reveal heavy-quark fragmentation in the vacuum and provide a direct observation of the dead-cone effect. However, the LHC data of the \(R_{AA}\) trends for muons from charm and bottom decays do not exhibit the anticipated reduced radiative energy loss for bottom quarks. The decay muon measurements can be influenced by various factors, including substantial momentum smearing resulting from decay kinematics and the existence of non-prompt \(c\to\mu\) decays that originate from \(b\) quarks. Another factor to consider is that the dead-cone effect may become less pronounced for very-high-energy quarks represented in the muon measurements. More precise data are needed to elucidate the nature of heavy-quark dynamics in the medium.
We also investigate the relationship between energy loss and path length at LHC energies. Previous examinations of RHIC data have revealed that the deduced fractional energy loss, \(\Delta p_{T}/p_{T}\), is a linear function of medium initial entropy density (quantified by \(\frac{1}{S}\frac{dN}{dy}\)) across different centrality intervals, despite significant variations in the path length for traversing partons [8]. This suggests a weak dependence of energy loss on path length. We apply the same analysis to LHC data, and discover similar outcomes, as shown in Fig. 6. Now that fractional energy loss varies with \(p_{T}\) according to the LHC \(R_{AA}\) data, we plot \(\Delta p_{T}/p_{T}\) as a function of \(\frac{1}{S}\frac{dN}{dy}\) at different \(p_{T}\) values for charged hadrons in Pb+Pb collisions at 2.76 TeV and for charged pions in Pb+Pb collisions at 5.02 TeV. In each case for each \(p_{T}\) regime, a clear linear trend emerges, and the linearity is especially strong for higher \(p_{T}\) scales, where parton fragmentation dominates particle production. These findings support the earlier observation of a weak path length dependence of parton energy loss, even though the medium densities at RHIC and the LHC are very different. As discussed for RHIC data [8], the weak path length dependence of energy loss could arise from the rapid expansion of the medium, where the majority of energy loss occurs before the parton is able to traverse a full path length. Thereby, medium density becomes the dominant factor that determines the energy loss during the rapid expansion. We argue that in such a rapidly expansive medium, the static path length from the initial geometry of colliding nuclei fails to trace the parton energy loss in the medium.
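Since the fitted shift gives \(\Delta p_{T}/p_{T}=S_{0}p_{T}^{\,\alpha-1}\), the fractional loss at a fixed \(p_{T}\) follows directly from the fitted \(S_{0}\) of each centrality class; a small sketch of the linear-trend check (with placeholder values, not the measured \(S_{0}\) or densities) is:

```python
# Sketch (with placeholder values, not the measured S0 or densities) of the
# linear-trend check: Delta pT / pT = S0 * pT**(alpha - 1), evaluated per
# centrality class and regressed against the entropy-density proxy.
import numpy as np

S0 = np.array([1.4, 1.1, 0.8, 0.5])        # fitted S0 per centrality (toy)
density = np.array([9.0, 7.0, 5.0, 3.0])   # (1/S) dN/dy proxy (toy)
alpha = 0.5

for pt in (6.0, 10.0, 20.0):               # GeV/c
    frac = S0 * pt ** (alpha - 1.0)        # fractional momentum loss at pt
    slope, intercept = np.polyfit(density, frac, 1)
    print(f"pT={pt}: slope={slope:.4f}, intercept={intercept:.4f}")
```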
In summary, we present a parton energy loss study showing a significant distinction between RHIC and LHC data when empirically interpreting \(R_{AA}(p_{T})\) as a momentum loss in \(A\)+\(A\) collisions relative to the \(p\)+\(p\) reference. While the RHIC data favor a direct proportionality between the \(p_{T}\) shift and \(p_{T}\) itself, the LHC data suggest a proportionality with \(\sqrt{p_{T}}\). This difference in the \(p_{T}\) dependence signifies the heightened importance of radiative energy loss compared with collisional energy loss within the same transverse momentum range in colliding systems at higher \(\sqrt{s_{NN}}\). Additionally, we find that the magnitude of the parton energy loss at LHC is largely determined by the initial medium entropy density, consistent with previous results at RHIC, indicating a limited path length dependence of parton energy loss, and placing greater emphasis on the initial medium density
Figure 5: \(R_{AA}(p_{T})\) for muons from (a) charm and (b) bottom hadrons in 0–10% Pb+Pb collisions at 5.02 TeV [23]. The fit functions from Eq. (4) either take \(\alpha\) as a free parameter or fix it at 0.5, using the \(p_{0}\) and \(n\) values extracted from the Tsallis fits in Fig. 1.
Figure 4: \(R_{AA}(p_{T})\) for (a) \(\pi^{0}\) and (b) \(\eta\) mesons in 0–10% Pb+Pb collisions at 2.76 TeV [25], for (c) prompt \(J/\psi\) mesons in 0–100% Pb+Pb collisions at 5.02 TeV [21], and for (d) prompt \(D^{0}\) mesons in 0–10% Pb+Pb collisions at 5.02 TeV [22]. The fit functions from Eq. (4) either take \(\alpha\) as a free parameter or fix it at 0.5, using the \(p_{0}\) and \(n\) values extracted from the Tsallis fits in Fig. 1.
for a rapid explosive medium. The distinct parton energy loss dynamics at RHIC and at LHC can be further investigated with high-statistics heavy-quark-tagged jets from the sPHENIX at RHIC, as well as the LHC experiments in future runs.
###### Acknowledgements.
The authors thank Dylan Neff, Jared Reiten, and Anthony Frawley for many fruitful discussions. T. M., P. S., G. W., and H. H. are supported by the U.S. Department of Energy under Grant No. DE-FG02-88ER40424 and by the National Natural Science Foundation of China under Contract No.1835002.
|
2309.12543 | Real-time Batched Distance Computation for Time-Optimal Safe Path
Tracking | In human-robot collaboration, there has been a trade-off relationship between
the speed of collaborative robots and the safety of human workers. In our
previous paper, we introduced a time-optimal path tracking algorithm designed
to maximize speed while ensuring safety for human workers. This algorithm runs
in real-time and provides the safe and fastest control input for every cycle
with respect to ISO standards. However, true optimality has not been achieved
due to inaccurate distance computation resulting from conservative model
simplification. To attain true optimality, we require a method that can compute
distances 1. at many robot configurations to examine along a trajectory 2. in
real-time for online robot control 3. as precisely as possible for optimal
control. In this paper, we propose a batched, fast and precise distance
checking method based on precomputed link-local SDFs. Our method can check
distances for 500 waypoints along a trajectory within less than 1 millisecond
using a GPU at runtime, making it suited for time-critical robotic control.
Additionally, a neural approximation has been proposed to accelerate
preprocessing by a factor of 2. Finally, we experimentally demonstrate that our
method can navigate a 6-DoF robot earlier than a geometric-primitives-based
distance checker in a dynamic and collaborative environment. | Shohei Fujii, Quang-Cuong Pham | 2023-09-21T23:58:16Z | http://arxiv.org/abs/2309.12543v2 | # Real-time Batched Distance Computation for Time-Optimal Safe Path Tracking
###### Abstract
In human-robot collaboration, there has been a trade-off relationship between the speed of collaborative robots and the safety of human workers. In our previous paper, we introduced a time-optimal path tracking algorithm designed to maximize speed while ensuring safety for human workers [1]. This algorithm runs in real-time and provides the safe and fastest control input for every cycle with respect to ISO standards [2]. However, true optimality has not been achieved due to inaccurate distance computation resulting from conservative model simplification. To attain true optimality, we require a method that can compute distances 1. at many robot configurations to examine along a trajectory 2. in real-time for online robot control 3. as precisely as possible for optimal control. In this paper, we propose a batched, fast and precise distance checking method based on precomputed link-local SDFs. Our method can check distances for 500 waypoints along a trajectory within less than 1 millisecond using a GPU at runtime, making it suited for time-critical robotic control. Additionally, a neural approximation has been proposed to accelerate preprocessing by a factor of 2. Finally, we experimentally demonstrate that our method can navigate a 6-DoF robot earlier than a geometric-primitives-based distance checker in a dynamic and collaborative environment.
## I Introduction
Collaborating with robots while ensuring human safety has been a critical challenge, as slowing down the robot operation to mitigate injuries will impede productivity. To maximize the productivity of collaborative robots while guaranteeing the safety, we have proposed time-optimal path tracking algorithm [1] which runs in real-time and provides the safe and fastest control input with respect to _Speed and Separation Monitoring_ in ISO standards [2]. In this path-tracking method, distances between the obstacles and a robot for waypoints along an executing trajectory must be given. Given the distances, the algorithm computes the fastest velocity profile and navigates the robot in a time-optimal manner (Fig. 1). Finally, the control input is sent a robot to follow the derived velocity profile. This whole process must run in every control cycle, which is about 10 ms according to the communication protocol of industrial robots1.
Footnote 1: For example, the control period is 8 ms in the case of the DENSO b-CAP communication protocol: [https://www.denso-wave.com/en/robot/product/function/b-CAP.html](https://www.denso-wave.com/en/robot/product/function/b-CAP.html)
To achieve true optimality in path tracking, the precise distances need to be given. In our previous paper, the robot is simplified with spheres, and the distances between the spheres and voxels are computed with the _hypot_ function using their center positions. Such distance checking with a simplified model can run in real-time. However, the computed distances are smaller than their actual values due to the simplification, which makes the robot's behavior conservative and degrades the productivity of the robot. In contrast, an exact mesh-to-mesh distance checker cannot run in real-time (experimentally, 130 \(\upmu\)s per configuration, 65 \(\mathrm{ms}\) per trajectory with FCL [3]). To the best of our knowledge, no existing distance checker is applicable to real-time safety control.
In this paper, we propose a batched, fast and precise distance checker based on pre-computed link-local Signed Distance Fields (SDFs) to address this issue. Leveraging GPU parallelization for pre-processing of the robot's SDFs, the proposed method is able to check distances for multiple robot configurations within less than 1 ms at runtime. Additionally, a neural approximation of the pre-processing has been proposed, resulting in 2x faster pre-processing. Finally, we experimentally demonstrate that our distance checker navigates a robot faster than a method using a robot modeled with spheres in a dynamic, collaborative environment.
The paper is organized as follows. We survey related work in Section II. Section III presents our parallel distance checking method and some techniques to reduce the pre-processing time by a constant factor, including the neural approximation. In Section IV, we evaluate the performance of the neural approximation and also verify that the approximation does
Fig. 1: Problem Setting Overview: Computing distances in real-time, across multiple robot configurations, with precision. See Section I for more information.
not affect the precision of distance computation. Then, the experimental comparison is shown for the real-time safe path tracking in a collaborative environment. Finally, we discuss the limitations of our approach and conclude with some directions for future work in Section V.
## II Related Work
### _Model-based Distance Computation_
In collision detection and distance computation between 3D models, hierarchical data structures, or 'broad-phase structures', are commonly applied to filter out object pairs that are far away, dramatically accelerating collision/distance queries. Examples of such data structures include the AABB Tree, OCTree and Inner Sphere Tree [3, 4, 5]. However, these data structures are optimized for CPU and lack batch-processing capabilities, hence they do not have sufficient throughput for real-time safety control. For instance, the distance query with an octree for 1000 configurations takes \(39.1\ \mathrm{ms}\) according to [4] (note that this does not include the octree construction time), which is still slow for real-time safety control. Another challenge is that the throughput depends on the positions of robots and obstacles, which makes it difficult to guarantee a constant execution time at the time of deployment of the system. In contrast, our approach does not depend on runtime-varying settings except for the number of configurations.
Simplification of robot/human models with geometric primitives such as spheres and capsules is commonly used for distance computation in motion planning and safe robotic operation [6, 7, 8, 9]. However, evaluating distances for multiple configurations in a batched manner using primitives other than spheres is actually slow. In fact, our preliminary experiment shows that distance computation between a 6 DoF robot simplified with 7 capsules and (only) 3000 points for 500 configurations took about 70 \(\mathrm{ms}\) even on a GPU, which is not applicable to real-time control. This is primarily because the projection of points onto the axes of the capsules requires a time-consuming (batched) matrix multiplication at runtime.
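To make this cost concrete, the following PyTorch sketch of a batched point-to-capsule distance query (an assumed baseline for illustration, not code from any cited work; all shapes and the radius are placeholders) shows the runtime matmul involved:

```python
# A PyTorch sketch of a batched point-to-capsule distance query (an assumed
# baseline for illustration; shapes and the radius are placeholders).
import torch

B, L, P = 500, 7, 3000                  # configurations, capsules, points
pts = torch.rand(P, 3)                  # obstacle points
a = torch.rand(B, L, 3)                 # capsule endpoint A per config/link
b = torch.rand(B, L, 3)                 # capsule endpoint B per config/link
r = 0.08                                # capsule radius [m]

ab = b - a                                            # (B,L,3)
ap = pts[None, None] - a[:, :, None]                  # (B,L,P,3)
t = (ap @ ab[..., None]).squeeze(-1)                  # batched matmul: projection
t = (t / (ab * ab).sum(-1, keepdim=True)).clamp(0, 1)
closest = a[:, :, None] + t[..., None] * ab[:, :, None]
dist = (pts[None, None] - closest).norm(dim=-1) - r   # (B,L,P) distances
```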
### _Signed Distance Fields (SDFs)_
In unknown environments, prior knowledge about obstacles such as their shape, size and position is not always accessible. Therefore, Signed Distance Fields (SDFs) or Unsigned Distance Fields (USDFs) of an _environment_ are often used, because they do not necessarily require prior knowledge of obstacles, and they offer the gradient of the (U)SDFs to push the trajectory away from obstacles [6, 10, 11]. There are a number of methods for SDF construction; some come from the context of SLAM [12, 13, 14] and some from that of machine learning/NeRF [15]. However, most of them assume a _static_ environment and incrementally construct a scene, since SDF reconstruction requires data propagation, which is inherently time-consuming. To the best of our knowledge, no previous work satisfies all the requirements for the safe-control application: 'batched', 'real-time' and 'precise'.
One of the promising works is [16], which builds on [17, 18]. This method computes the SDFs of an environment from an incoming sensory pointcloud in parallel on a GPU using a Parallel Banding Algorithm [19]. The distance data for the voxels occupied by the robot are then retrieved. In their demonstration, they showcase online motion planning of a mobile manipulator platform. However, the computation time of this method depends on the size of the environment due to the data propagation, and it is not suitable, especially for large environments. Most importantly, their reported SDF construction time is \(17.5\pm 0.4\ \mathrm{ms}\) for 5 cm resolution and \(36.2\pm 8.3\ \mathrm{ms}\) for 2.5 cm resolution, which is not fast enough for real-time control (note that the 'SDFs computation time' does not include the time for distance queries).
Another interesting work is ReDSDF [20], a machine-learning-based SDF estimator that employs a neural network which takes query points and poses as inputs and outputs distances for each of them. While its estimation accuracy is sufficient for safety-critical use-cases, its architecture is not suitable for real-time safety control for the following reasons. For the robot's SDF generation: 1. ReDSDF requires retraining of the neural network for any change in the robot, and 2. the neural network needs to be evaluated for each waypoint along a trajectory, repeatedly feeding the same query points; these requirements make ReDSDF unsuitable. For the environment's SDF construction: 1. ReDSDF requires a model trained for each individual obstacle, and 2. each obstacle needs to be tracked in some way; these are not realistic for industrial applications.
Some previous methods employ link-local SDFs of a _robot_, instead of an _environment_, for distance computation in motion planning or collision avoidance [21, 22]. In these approaches, the pointcloud is transformed into every link coordinate frame and the distance is then obtained from the link-local SDFs, resulting in a computational complexity of \(O(DM)\), where \(D\) is the DoF of the robot and \(M\) is the number of points. This computation heavily depends on the number of points, and its batch processing is often not fast enough for real-time robot control. In contrast, our method performs the transformation of the link-local SDFs onto the coordinates of the environment beforehand, eliminating the need for pointcloud transformations (Fig. 2). This leads to faster distance retrieval in the real-time distance computation phase.
## III Batched Robot SDFs Computation
### _Overview_
The pipeline of our parallel distance checking is illustrated in Fig. 3. We consider a \(D\)-DoF robot and examine distances at \(C\) robot configurations (\(\theta_{\mathbf{c}}\in\Theta\)). The environment is discretized into voxels whose extent is \(\mathbf{e_{e}}=(e_{ex},e_{ey},e_{ez})\) and resolution is \(\mathbf{r_{e}}=(r_{ex},r_{ey},r_{ez})\). The total number of voxels of the environment \(V_{e}\) is \(\prod\frac{2\mathbf{e_{e}}}{\mathbf{r_{e}}}\).
At the preprocessing stage, given a robot model, we pre-compute Signed Distance Fields (SDFs) for each link in its local coordinates, which we call _Link SDFs_. We denote by \(\mathbf{e_{r}}=(e_{rx},e_{ry},e_{rz})\) the extent of the Link SDFs and by \(\mathbf{r_{r}}=(r_{rx},r_{ry},r_{rz})\) their resolution. The size of the precomputed Link SDFs, \(2\mathbf{e_{r}}\), must be divisible by the resolution of the voxelized environment \(\mathbf{r_{e}}\) without remainder, for the alignment operation introduced later. The resolution of the Link SDFs is arbitrary, and a fine voxelization is recommended.
Next, given the configurations \(c\), we compute the transformations \(T_{i,c}\) of each link \(i\) by applying parallel forward kinematics. Then, according to \(T_{i,c}\), the Link SDFs are transformed and aligned into the voxels of the environment. We refer to the first operation as "euclidean transformation" and to the second as "alignment". To compute the Robot SDFs for each configuration, the minimum value of the transformed Link SDFs over all links is taken for each voxel. Besides, the obstacles in the environment are voxelized. By extracting the distances at the voxels occupied by the obstacles and taking the minimum value for each link, the distance between the robot and the obstacles can be computed.
At the "euclidean transformation" stage, we shift the transformed Link SDFs by \(\delta t_{i,c}\in(-\frac{r_{e}}{2},\frac{r_{e}}{2})\) since the position of each link will not be exactly at the center of the Link SDFs,. And then, at the "alignment" stage, we translate the transformed SDFs by \(t_{i,k}-\delta t_{i,k}\) and snap them into the environment voxels. The transformation can be computed in the scheme of affine grid transformations [23]2. \(\delta t_{i,c}\) can be computed by following the simple equations:
Footnote 2: Please refer to pytorch's documentation as well: [https://pytorch.org/docs/stable/generated/torch.nn.functional.affine_grid.html](https://pytorch.org/docs/stable/generated/torch.nn.functional.affine_grid.html)
\[T_{Oi,c}=T_{i,c}-(-e_{e}), \tag{1}\]
\[k_{i,c}=\lfloor T_{Oi,c}/r_{e}\rfloor-\lfloor e_{r}/r_{e}\rfloor, \tag{2}\]
\[\delta t_{i,c}=T_{Oi,c}-(k_{i,c}\cdot r_{e}+e_{r}), \tag{3}\]
where \(\mathbf{e_{e}}\) is the 3D extent of the environment, \(\mathbf{r_{e}}\) is the 3D resolution of the environment, and \(\mathbf{e_{r}}\) is the 3D extent of the Link SDFs. The total number of voxels in each transformed Link SDF, \(V_{r}\), is \(\prod\frac{2\mathbf{e_{r}}}{\mathbf{r_{e}}}\).
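A minimal NumPy sketch of this alignment arithmetic (our reading of Eqs. (1)-(3), not the released implementation; the extents below are the values used in Section IV-C, and nearest-voxel rounding is used so that \(\delta t\) lands in the stated range):

```python
# A minimal NumPy sketch of the alignment of Eqs. (1)-(3) (our reading, not
# the released code); nearest-voxel rounding keeps dt within +-r_e/2.
import numpy as np

e_e = np.array([2.0, 2.0, 2.0])      # environment half-extent [m] (assumed)
r_e = np.array([0.04, 0.04, 0.04])   # environment voxel size [m] (Sec. IV-C)
e_r = np.array([1.2, 1.2, 1.2])      # Link-SDF half-extent [m] (Sec. IV-C)

def align(T):
    """Corner voxel index k and sub-voxel shift dt for one link position T."""
    T_O = T - (-e_e)                              # Eq. (1)
    n = np.round(T_O / r_e)                       # nearest voxel of the link
    k = (n - np.round(e_r / r_e)).astype(int)     # Eq. (2): Link-SDF corner
    dt = T_O - (k * r_e + e_r)                    # Eq. (3): residual shift
    return k, dt

k, dt = align(np.array([0.31, -0.52, 0.87]))      # dt in (-r_e/2, r_e/2]
```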
At runtime, to compute distances against obstacles in the environment based on the Robot SDFs, we voxelize the obstacles and extract the values of the Robot SDFs at the voxels occupied by the obstacles. By reducing the extracted values with _min_ for each configuration \(c\), we can obtain the minimum distance between the robot and the obstacles for each \(c\). This process is fast because it only reads data from GPU memory and does not require any calculation.
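In code, this runtime query reduces to an indexed lookup followed by a min-reduction; a hedged PyTorch sketch (assumed shapes and sizes, not the released code) is:

```python
# A hedged PyTorch sketch of the runtime query (assumed shapes/sizes, not the
# released code): a pure indexed lookup followed by a min-reduction.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
C, V = 500, 64**3                     # waypoint configurations, env. voxels
robot_sdf = torch.rand(C, V, device=device)          # stand-in Robot SDFs
occ = torch.randint(0, V, (3000,), device=device)    # obstacle voxel ids

vals = robot_sdf[:, occ]              # (C, 3000): memory reads only, no math
min_dist = vals.min(dim=1).values     # one robot-obstacle distance per c
```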
As an extra bonus, self-collision detection can be done by checking the links "in a predefined and alternating order of checking, paying attention to the robot's kinematics", as in [24], though it is not applied in our experiment, since self-collision is usually examined in the motion-planning phase rather than in the execution phase.
There can be variations in the reduction of the Robot SDFs. For example, if only the binary occupancy information of the robot is needed, one can store a boolean value for each voxel, indicating whether the distance is greater or less than 0. This lowers the memory consumption by 8 times (= sizeof(float) / sizeof(bool)).
### _Techniques for reducing computation time by a constant factor_
We introduce the following techniques to reduce the computation time by a constant factor.
#### III-B1 Euclidean Grid Approximation with a Tiny Neural Network
The computation of the euclidean grid transformation mapping is mathematically a matrix multiplication. Given the center positions of the grids \(p_{j,xyz}\) for \(j\in[0,V_{r})\), where \(V_{r}\) is the number of grids in each Link SDF, and the link transformations \(T_{i,c}\), the euclidean grid transformations \(G_{i,c}\) can be computed as follows:
\[\begin{split} P_{xyz}:=\left(\cdots\quad p_{j,xyz}\quad\cdots \right)\\ \begin{pmatrix}G_{i,c}\\ 1\end{pmatrix}=\begin{pmatrix}R_{i,c}&\frac{\delta t_{i,c}}{e_{r}}\\ 0&1\end{pmatrix}^{-1}\begin{pmatrix}P_{xyz}\\ 1\end{pmatrix}\\ =\begin{pmatrix}R_{i,c}^{T}&-R_{i,c}^{T}&\frac{\delta t_{i,c}}{e_{r}}\\ 0&1\end{pmatrix}\begin{pmatrix}P_{xyz}\\ 1\end{pmatrix}\end{split} \tag{4}\]
Note that the \(p_{j,xyz}\) are normalized to the range \([-1,1]\) by the extent of the Link SDFs, \(\mathbf{e_{r}}\). The number of columns of \(P_{xyz}\) is the number of voxels in the transformed Link SDFs. This batch processing of matrix-matrix multiplication is actually time-consuming, because the general matrix-matrix multiplications of vendor-provided BLAS libraries are optimized for square matrices, and we cannot leverage the full performance of dedicated devices, including GPUs, for tall-and-skinny matrices [25]. Instead, we use a tiny neural network \(f\), composed of two fully-connected layers and one ReLU activation layer, to approximate and simplify this operation:
\[\begin{split}\delta t_{inv\ i,c}=-R_{i,c}^{T}\frac{\delta t_{i,c} }{e_{r}}\\ G_{i,c}=f(R_{i,c})+\delta t_{inv\ i,c}\end{split} \tag{5}\]
\(f\) takes a rotation matrix and outputs the euclidean grid transformations for the specified rotation only. Recent advances in
Fig. 2: A common way to compute distances between a pointcloud and SDFs requires transforming the pointcloud into each link coordinate frame. Instead, we transform and merge the link-local SDFs into the global coordinates in a pre-processing stage, and then evaluate them to obtain distances at runtime.
deep learning provide us with a highly-optimized API for neural approximation 3. Since the original operation is deterministic and robot-model agnostic, we can train the neural network quickly (about 10-20 mins) and reuse the pre-trained model for any type of robot, without any additional training once it is trained. As described in Section IV-B, the maximum error of this approximation is about 1 mm in our setting, which is covered by the discretization error of the SDFs and is therefore negligible. Consequently, the maximum error of the total pipeline with respect to the ground truth is only the discretization error: \(\frac{|r_{e}|}{2}+\frac{|r_{r}|}{2}\).
Footnote 3: See [https://developer.nvidia.com/cudnn](https://developer.nvidia.com/cudnn)
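A sketch of such a network (an assumed PyTorch realization of \(f\) in Eq. (5), with the 32 hidden units mentioned in Section IV-B; \(V_{r}\) below is illustrative):

```python
# An assumed PyTorch realization of the tiny network f in Eq. (5): two
# fully-connected layers with one ReLU (32 hidden units, Sec. IV-B); V_r is
# illustrative.
import torch
import torch.nn as nn

V_r = 8000                                           # grids per Link SDF

f = nn.Sequential(nn.Linear(9, 32), nn.ReLU(), nn.Linear(32, 3 * V_r))

def grid_transform(R, dt_inv):
    """G_{i,c} = f(R_{i,c}) + dt_inv, with dt_inv = -R^T (dt/e_r), Eq. (5)."""
    g = f(R.reshape(-1, 9)).reshape(-1, V_r, 3)      # per-rotation grid coords
    return g + dt_inv[:, None, :]                    # broadcast the translation

R = torch.eye(3).expand(4, 3, 3)                     # a batch of 4 rotations
G = grid_transform(R, torch.zeros(4, 3))             # (4, V_r, 3)
```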
#### III-B2 Grids in a Sphere instead of Cubic Grids
Another small technique to reduce computation time is to compute the transformed SDFs only for the grids in a sphere of radius \(e_{r}+\sqrt{3(\frac{r_{e}}{2})^{2}}\), which inscribes the link. We can roughly reduce the number of grids by a factor of \(\frac{\frac{4}{3}\pi e_{r}^{3}}{(2e_{r})^{3}}=\frac{\pi}{6}\approx 0.53\).
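A small sketch of the corresponding spherical mask (assumed grid sizes; with the half-voxel margin the kept fraction is slightly above \(\pi/6\approx 0.52\)):

```python
# Sketch (assumed grid sizes) of the spherical mask; with the half-voxel
# margin the kept fraction is slightly above pi/6 ~ 0.52.
import numpy as np

e_r, r_e = 1.2, 0.04                           # Link-SDF extent, env. voxel [m]
n = 60                                         # grid points per axis (assumed)
ax = np.linspace(-e_r, e_r, n)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
radius = e_r + np.sqrt(3 * (r_e / 2) ** 2)     # radius from the text
mask = X**2 + Y**2 + Z**2 <= radius**2         # keep only grids in the sphere
print(mask.mean())                             # fraction of the cube retained
```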
## IV Experiments and Results
### _System setup_
All the experiments are done on a single machine equipped with an AMD Ryzen(tm) 9 4900HS CPU and an NVIDIA GeForce RTX(tm) 2060 with Max-Q Design GPU. We use PyTorch and develop custom CUDA kernels for the evaluation.
### _Precision and Speed of Neural Approximation for Euclidean Grid Transformations_
First, we examine the effect of the approximation of the euclidean grid transformations in Fig. 4. In this experiment, we train the tiny neural network with 32 hidden units using an L1 loss and the Adam optimizer with a learning rate of \(1e^{-4}\). We measure the time to compute euclidean grid transformations for 500 configurations of a 6 DoF robot.
We compare the computation speed in Fig. 4a. As a result, the neural approximation of euclidean transformations \(G\) is about 3.2x faster than the deterministic one, and the total SDFs computation becomes about 2x faster.
We also test the approximation error with respect to the ground truth. We use \(10^{7}\) randomly-generated link transformations to examine the maximum error. The result is that the approximation error of the euclidean grid transformations is at most 0.0013. This means that, in the following experiment, considering that the extent of the Link SDFs \(e_{r}\) is set to \(1.2\)\(\mathrm{m}\), the actual error in \(G\) is \(0.0013\times e_{r}=1.56\)\(\mathrm{mm}\), which is far smaller than the grid discretization size (1 \(\mathrm{cm}\)) and therefore negligible.
### _Comparison with a robot in simulation_
Finally, we compare our method with a sphere-based distance checker in simulation (Fig. 5). The robot loops between point A and point B while the experimenter stays in close proximity to the robot and randomly moves his arms beside the robot, impeding the robot's motion. The robot's motion is planned for each trajectory at runtime. The experimenter's
Fig. 3: A pipeline of parallel batched distance checking with pre-computed link-wise signed distance fields (SDFs). The illustration is in 2D for clarity, but the actual computation is in 3D and in a batched manner. See Section III for details.
motion is recorded by a Kinect v2, and we replay the obtained sequence of pointclouds in each experiment at the same timing. The pointcloud is converted into 4 \(\mathrm{cm}\) voxels at runtime. We record the total trajectory execution time for the robot to move back and forth for 6 laps over 10 trials. The protective distance \(d_{prot}\) is set to \(10~{}\mathrm{cm}\) and the extent of the Link SDFs \(\mathbf{e_{r}}\) is set to \(1.2~{}\mathrm{m}\). The resolution of the precomputed Link SDFs is set to 1 \(\mathrm{cm}\). The clearance threshold is set to \(\sqrt{3(4/2)^{2}}+\sqrt{3(1/2)^{2}}\approx 4.3~{}\mathrm{cm}\). Our code is based on OpenRAVE [26].
At runtime, after planning a trajectory between the points using an off-the-shelf RRT-based motion planner in OpenRAVE, and before executing the trajectory, intermediate waypoints are sampled using TOPP-RA's automatic gridpoint suggestion feature [27]. Robot SDFs are then computed with our proposed method for each waypoint configuration in a parallel, batched manner. During trajectory execution, the computed SDFs are used to retrieve the distances between the robot and the obstacles for each waypoint, and the time-optimal safe velocity is computed and applied to the robot at every control cycle based on [1].
To ensure safety, \(\mathbf{e_{r}}\) needs to be large enough to capture an obstacle coming closer to the moving robot. We select the value (1.2 \(\mathrm{m}\)) as follows: given the joint velocity limit \(v_{limit,i}\) and acceleration limit \(a_{limit,i}\) for joint \(i\), the maximum braking time \(t_{brake}\) is computed as \(t_{brake}=\max_{i}v_{limit,i}/a_{limit,i}\), and \(\mathbf{e_{r}}\) is chosen large enough to cover the distance an obstacle can close on the robot within this braking time.
In the experiment of Section IV-C, about 5 GB of GPU memory was consumed at runtime. While this issue can be mitigated by employing multiple GPUs, considering the hardware cost, further work is needed to reduce the memory consumption.
|
2309.06536 | Automatic detection of solar flares observed at 45 GHz by the POEMAS
telescope | Every 11 years, the Sun goes through periods of activity, with the occurrence
of many solar flares and mass ejections, both energetic phenomena of magnetic
origin. Due to its effects on Earth, the study of solar activity is of
paramount importance. POEMAS (Polarization of Millimeter Emission of Solar
Activity) is a system of two telescopes, installed at CASLEO (El Leoncito
Astronomical Complex) in Argentina, which monitors the Sun at two millimeter
wavelengths (corresponding frequencies of 45 and 90 GHz). The objective of this
work is to automatically detect solar flares observed by the polarimeter. First
it is necessary to eliminate the background noise, caused mainly by
instrumental problems, from the light curves of millimeter solar emission. The
methodology used to exclude the noise proposed in this work is to use the
tendency of time series. The subtraction of this model from the light curves
provides the input to automate the detection of solar flares using artificial
intelligence techniques. A Neural Network was trained to recognize patterns and
analyze a dataset in order to identify solar flares. Previously, a total of 30
flares had been visually identified and analyzed in the POEMAS database between
2011/11/22 and 2013/12/10. The methodology presented here confirmed 87% of
these events, moreover the neural network was able to identify at least 9 new
events. As the neural network was trained to detect impulsive events (lasting
less than 5 min), long duration bursts were not automatically detected, nor
were they detected visually due to the background noise of the telescope.
Visual inspection of the POEMAS data, when comparing with microwave data from
the RSTN, allowed the identification of an additional 10 long-duration solar
flares at 45 GHz. We discuss some problems encountered and possible solutions
for future work. | Vanessa Lessa, Adriana Valio | 2023-09-12T19:21:33Z | http://arxiv.org/abs/2309.06536v1 | # Automatic detection of solar flares observed at 45 GHz by the POEMAS telescope
###### Abstract
Every 11 years, the Sun goes through periods of activity, with the occurrence of many solar flares and mass ejections, both energetic phenomena of magnetic origin. Due to its effects on Earth, the study of solar activity is of paramount importance. POEMAS (Polarization of Millimeter Emission of Solar Activity) is a system of two telescopes, installed at CASLEO (El Leoncito Astronomical Complex) in Argentina, which monitors the Sun at two millimeter wavelengths (corresponding frequencies of 45 and 90 GHz). The objective of this work is to automatically detect solar flares observed by the polarimeter. First it is necessary to eliminate the background noise, caused mainly by instrumental problems, from the light curves of millimeter solar emission. The methodology used to exclude the noise proposed in this work is to use the tendency of time series. The subtraction of this model from the light curves provides the input to automate the detection of solar flares using artificial intelligence techniques. A Neural Network was trained to recognize patterns and analyze a dataset in order to identify solar flares. Previously, a total of 30 flares had been visually identified and analyzed in the POEMAS database between 2011/11/22 and 2013/12/10. The methodology presented here confirmed 87% of these events, moreover the neural network was able to identify at least 9 new events. As the neural network was trained to detect impulsive events (lasting less than 5 min), long duration bursts were not automatically detected, nor were they detected visually due to the background noise of the telescope. Visual inspection of the POEMAS data, when comparing with microwave data from the RSTN, allowed the identification of an additional 10 long-duration solar flares at 45 GHz. We discuss some problems encountered and possible solutions for future work.
keywords: Solar Flares, Millimeter Emission, Pattern Recognition, Neural Networks
Footnote †: journal: Artificial Intelligence
## 1 Introduction
The Sun is an active star with a magnetic cycle of about 11 years (Hathaway, 2015). In periods of maximum activity, an increase in the frequency of solar flares and coronal mass ejections can be observed. Both the particles and the magnetic fields thrown into interplanetary space by coronal mass ejections from the Sun may impact the Earth. Geomagnetic storms, disruption of telecommunications signals, GPS malfunctions, and blackouts are some of the disruptions affecting Earth.
Over the years, studies on solar activity and the Sun's behavior have been carried out trying to mitigate these effects on Earth (Pulkkinen, 2007). For example, studies involving active regions, magnetic fields, solar flares, coronal mass ejections, and others are all relevant. Since the emission from solar activity is produced at all wavelengths of the electromagnetic spectrum,
observations at different frequencies are crucial to understanding solar phenomena and the mechanisms involved (Dulk, 1985).
In 1859, English astronomers Richard C. Carrington and Richard Hodgson identified the first solar flare (Tsurutani et al., 2003). The explosion was quite intense, and a flash located in a small region was detected in images of the Sun's visible light. Just 17 hours later, a coronal mass ejection hit the Earth, causing one of the largest magnetic storms ever recorded. If a similar storm were to reach our planet today, it would cause severe communication and electrical energy problems, among others (Phillips, 2014).
Solar phenomena are usually associated with active regions of the solar atmosphere. When a solar flare occurs, a large amount of energy is released (\(10^{28}-10^{32}\) erg), this energy is used in accelerating particles and heating the plasma, which generate radiation across the entire electromagnetic spectrum (from X rays to radio waves) (Mann et al., 2009).
Observations of solar activity, both from ground and space observatories, have generated a large amount of data. Thus it is necessary to apply artificial intelligence techniques to analyze the data in search of the sudden increases in the emission caused by solar flares. Here we have used a neural network to find patterns and identify solar flares automatically.
This work involves the automatic detection of solar flares in the data of the Polarization Emission of Millimeter Activity at the Sun (POEMAS) telescope (Valio et al., 2013). POEMAS is a polarimeter that observed the Sun daily from December 2011 to December 2013 at the rarely explored frequencies of 45 and 90 GHz.
The paper is organized as follows. In Section 2, we describe the data, and in Section 3, the Neural Network methodology. In Section 4, the results of the Neural Network experiments are detailed. Finally, we conclude, in Section 5, and anticipate future research.
## 2 POEMAS Telescope
POEMAS is a system of two telescopes, installed at the CASLEO Observatory (El Leoncito Astronomical Complex), in Argentina. The POEMAS telescope provides solar left and right circular polarization measurements at two millimeter wavelengths (45 and 90 GHz) with a temporal resolution of 10 ms. It operated continuously for two years (Dec 2011 - Dec 2013), observing the full disk of the Sun, and detected several flares. The data collected from
the Sun every day were written to binary files, converted using the Python programming language, and finally written to FITS files.
### Data Acquisition
The antenna temperature data at both left and right circular polarized emission at 45 and 90 GHz are recorded in the daily binary files (TRK extension). Also the azimuth and elevation angles of the Sun are recorded in the file. For the analyzes, we used the light curve resulting from the sum of the two, right (RCP) and left (LCP), circular polarizations of the antenna temperature at 45 GHz. Using the Python programming language version 3.6, the TRK files were converted to FITS files using the following procedures:
* conversion of data from the TRK to a FITS file, keeping the 10 ms configuration of the original file.
* integration of the temporal resolution of the level 0 FITS file from 10 ms to 1 s, using the median of the data within each 1 s interval (a sketch of this step follows the list).
* merger of all level 1 FITS files of the same day into a single new file.
* application of the time-series (trend) analysis to each level 2 FITS file.
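The level-0-to-level-1 integration step can be sketched as follows; a minimal example assuming the 10 ms samples are already held in a NumPy array (the actual file handling of the pipeline is not shown):

```python
import numpy as np

def integrate_to_1s(signal_10ms):
    """Downsample a 10 ms light curve to 1 s resolution using the median
    of each 100-sample (1 s) block, as in the level 0 -> level 1 step."""
    n = (len(signal_10ms) // 100) * 100          # drop an incomplete last block
    blocks = signal_10ms[:n].reshape(-1, 100)    # one row per 1 s interval
    return np.median(blocks, axis=1)

# Example: one hour of synthetic 10 ms data -> 3600 points at 1 s resolution
raw = np.random.normal(1000.0, 5.0, size=360_000)
lc_1s = integrate_to_1s(raw)
```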
The first day of POEMAS observation used in this work was 12/01/2011, while the last day used was 12/10/2013. In this period, we did not have data for 51 days, which resulted in a total of 690 days for analysis.
After converting the POEMAS binary files to Flexible Image Transport System (FITS) files, the data is integrated into the database. Finally, the automatic detection of solar flares is applied using Artificial Intelligence techniques, especially pattern recognition and machine learning (Deep Learning).
### Data calibration
Unfortunately, there is a misalignment of the telescope support structure due to mechanical problems, causing the signal to abruptly decrease during local noon. This decrease of the antenna temperature is clearly seen in Figure 1, especially between 16 and 17 UT. The red curve on the same plot depicts the expected light curve profile of the observations. Due to variations
in the solar emission caused by this telescope misalignment, it is difficult to identify any increase caused by a solar event, except for the most intense ones, which are usually rare.
Therefore, to minimize the daily variation of the telescope measurements at 45 GHz, a time series subtraction of the signal was performed. This time series is the trend component of the antenna temperature of the same polarization on the day under consideration. In Figure 2, we can see the three components of the time series for the observations on 01/27/2012. The top panel of Figure 2 shows the light curve observed by POEMAS, which is the sum of the RCP and LCP polarizations. The second plot from the top is the trend of the signal for different growth and decrease patterns. The third plot presents the seasonality; in this case, we consider an interval of 50 points to analyze the behavior of every 1 s period. The last panel presents the residuals after subtracting the effects of seasonality and trend from the data. The residual fluctuations are attributed to random components.
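A minimal sketch of this decomposition, assuming the `statsmodels` implementation of classical additive decomposition (the paper does not name the library used) and a hypothetical input file:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# lc: 1 s antenna-temperature light curve (RCP + LCP) for one day;
# the file name is hypothetical.
lc = pd.Series(np.loadtxt("poemas_2012_01_27_45ghz.txt"))

# Additive decomposition with a 50-point window, as described above.
# The trend component has NaNs at the edges by construction.
result = seasonal_decompose(lc, model="additive", period=50)
trend, seasonal, resid = result.trend, result.seasonal, result.resid

# Subtract the trend and add 1000 to place the residual signal on the
# same scale as the original light curve (black curve in Figure 3).
detrended = lc - trend + 1000.0
```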
In Figure 3, the original observed data (blue curve), the trend of the time series (red curve), and the result after subtracting the trend from the observed signal (black curve) are shown. A value of 1000 was added to this subtraction residual to place it on the same scale as the original signal. On this day, a solar flare occurred at 18:15 UT, and the impulsive peak can be observed in the original curve (blue). On the same day, between 21:00 UT
Figure 1: Full-day observation of solar energy flux at 45 GHz, left circular polarization (black curve) and attenuated flux (red curve).
Figure 2: Decomposition of the time series of 2012/01/27
and 22:00 UT, there was a drop in signal due to interference in the Earth's atmosphere, probably due to clouds in front of the Sun. These drops in signal due to clouds are significant noise in the signal, and we have disregarded them from the data analysis.
To verify the existence of a solar event, we used data from the Radio Solar Telescope Network (RSTN) operated by the Meteorological Agency of the United States Air Force (Guidice, 1979). This network is composed of 4 radio stations located around the globe. Considering the location of POEMAS in Argentina, we will use data from observatories in Palehua, Hawaii (USA), Sagamore Hill in Massachusetts (USA), and San Vito (Italy). The 3 observatories cover the POEMAS observation window depending on the time of year. However, the antenna with the highest time intersection is in Sagamore Hill, Massachusetts (USA). Data from Sagamore Hill and Palehua stations for 01/27/2012 are shown in Figure 4, in the bottom and top panels, respectively.
The solar flare that happened on this day at 18:15 UT is clearly seen in the RSTN data shown in the two panels of Figure 4. The impulsive phase of
Figure 3: Analysis of the emission observed by POEMAS at 45 GHz all day 2012/01/27
Figure 4: RSTN data for 2012/01/27 - Palehua (upper) and Sagamore Hill (bottom)
this event was also detected in the POEMAS signal at 45 GHz (Figure 3). However, to see the gradual phase of the flare, it is necessary to subtract the daily instrumental variation from the signal due to the telescope's misalignment. This can be done by subtracting the signal observed on the day before (or after) the event, since the instrumental variation does not change significantly over the period of a day or so. However, this is not the procedure performed in this work, due to its requirement of human supervision.
## 3 Neural Network
Based on biological neural networks, that is, on the biological neuron, Artificial Neural Networks (ANNs) are mathematical models that have computational capacity acquired through learning and generalization. This structure attempts to mimic a human brain with connections between neurons (synapses) and input and output signals.
Frank Rosenblatt at the _Cornell Aeronautical Laboratory_ developed the first multi-neuron network of the linear discriminator type and named this network the _perceptron_. A _perceptron_ is a network with neurons arranged in layers. This proposed model learns concepts and can answer with true (1) or false (0). In the early 1960s, Rosenblatt extended his work by publishing several articles and a book (Rosenblatt, 1962; Tappert, 2019).
The resulting 45 GHz emission signal, after subtraction of the trend of the time series (black curve of Figure 3), was input to a Multilayer Perceptron Neural Network (NN). The temporal resolution of the light curve is 1 second, which would generate too many data points for the network. Therefore, the temporal resolution was reduced to 10 seconds, using the median in each 10-second interval to reduce the number of data points. Then a 5-minute sliding window was applied to the resulting signal, extracting chunks every 10 seconds.
To better exemplify the process, Figure 5 shows how time intervals are validated as true signals for input to the neural network. In the example, the supposed event starts at 1:02:00 UT and ends at 1:03:00 UT. The first true interval would be [0:58:00, 1:03:00] UT, the second [0:58:10, 1:03:10] UT, and so forth; a sliding 5 min window thus traverses the final signal every 10 s. The last true interval of this supposed event would be [1:02:00, 1:07:00] UT. If this were the only event in one day, there would be 25 true intervals.
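A sketch of the interval extraction and labeling, assuming the light curve and its timestamps are NumPy arrays and that any overlap with the event marks an interval as true; the function name is illustrative:

```python
import numpy as np

WINDOW = 30   # 5 min at 10 s resolution -> 30 samples per interval
STEP = 1      # slide by one 10 s sample

def extract_intervals(signal, times, event_start, event_end):
    """Slide a 5 min window over the 10 s light curve and label each
    chunk positive if it overlaps the event, as in Figure 5."""
    X, y = [], []
    for i in range(0, len(signal) - WINDOW + 1, STEP):
        X.append(signal[i:i + WINDOW])
        t0, t1 = times[i], times[i + WINDOW - 1]
        y.append(int(t0 <= event_end and t1 >= event_start))
    return np.array(X), np.array(y)
```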
This was the most critical process in evaluating the signal due to the large volume of data. There were a total of 690 days of solar observation by the POEMAS with an average of 10 hours per day. Considering a temporal resolution of 10 s, there are approximately 3600 points in a day; thus resulting in \(2.484\times 10^{6}\) data points.
### Neural Network Training (NNT)
Previously, Hidalgo Ramirez et al. (2019) visually detected and analyzed 30 events observed by POEMAS telescope, which are listed in Table 1. First, we separate these events into two classes: training and classification. The first 14 flares were separated into the classification group. Events 15 through 30 were used for training of the neural network. The model adopted was supervised learning, where we submitted the 228 intervals referring to the 16 events and classified them as positive.
We need a balanced training base, and for that we use the NearMiss data balancing algorithm, an undersampling algorithm that reduces the number of majority-class examples; rather than discarding negative intervals at random, it selects which samples to keep based on their distance to the positive ones (Mani and Zhang, 2003).
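The balancing step could look as follows, assuming the `imbalanced-learn` implementation of NearMiss (the paper cites the algorithm, not a specific library); `X` and `y` are the intervals and labels from the extraction step:

```python
from imblearn.under_sampling import NearMiss

# X: the 5 min intervals (30 features each); y: 0/1 labels.
# NearMiss undersamples the majority (negative) class based on distance
# to minority samples, leaving a balanced 228/228 training set.
nm = NearMiss(version=1)
X_balanced, y_balanced = nm.fit_resample(X, y)
```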
The next step is to define the NN structure after the training base is
Figure 5: Example of interval extraction
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline N & Date & Time (UT) & NN \\ \hline
\hline \end{tabular}
\end{table}
Table 1: Events identified in the work of Hidalgo Ramirez et al. (2019). The last column lists the events identified in this work by the Neural Network (NN).
balanced (228 positives and 228 negative intervals). The input layer has 30 nodes that will receive each interval of 5 minutes with a resolution of 10 seconds. The output layer has only 1 node, which reveals if the prediction for the data input is true or false. Several configuration tests were performed for the middle layer, using one and two layers. The best training results were found using only 1 intermediate layer. Thus, we decided to use one layer and varied the number of neurons to compare the results.
Because we are using supervised learning, when the NNT prediction for a given input is not the expected one, the network must adjust the weights to reduce the error, repeating the process until the error rate reaches 0. The activation function used was ReLU, as it is non-linear and gave better results than Tanh.
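A minimal sketch of such a network, assuming scikit-learn's `MLPClassifier` (the paper does not name the framework used); the 160 hidden neurons correspond to the best configuration reported in Table 2, and `X_test` is a placeholder for the intervals to classify:

```python
from sklearn.neural_network import MLPClassifier

# 30 inputs (one 5 min interval at 10 s resolution) -> 160 hidden -> 1 output
clf = MLPClassifier(hidden_layer_sizes=(160,), activation="relu",
                    max_iter=2000, random_state=0)
clf.fit(X_balanced, y_balanced)

pred = clf.predict(X_test)   # X_test: intervals to classify (hypothetical)
```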
ReLU is the most commonly used activation function when designing neural networks today. It is non-linear and does not activate all neurons at the same time: if the input is negative, it is converted to zero and the neuron is not activated. The Sigmoid function is continuously differentiable and also non-linear, an interesting feature because it means that the output of a layer of sigmoid-activated neurons is non-linear as well; its values lie in the interval [0,1]. The Tanh function is very similar to the Sigmoid, essentially a rescaled version of it, with values in the interval [-1,1] (Burns, 2019).
To assess the quality of training and classification, we consider accuracy as a figure of merit, as it defines the proximity of an experimental result to its actual value. The greater the accuracy, the closer it is to the actual result. For all NNT configurations performed, the training had an accuracy of 100%. That is, all 228 positive (TP) and 228 negative (TN) intervals were correctly classified, with no false positives (FP) and no false negatives (FN).
### Classification
The network efficiency and learning quality depend on its architecture specification, that is, on the neuronal activation function, learning rule, initial values, and training data. We consider a network with 3 layers: one input, one intermediate, and one output. This configuration showed the best performance and results in the training phase. For the primary classification, there were 14 events and 141 actual intervals. The 16 events used in the training were labeled in the classification as negative.
For each experiment, we varied the number of intermediate-layer neurons, starting with 30 and increasing the number in subsequent experiments (see Table 2). In Table 2, we present the results of the 8 experiments. The first column of the Table gives the number of the experiment, the second the number of neurons used in the intermediate layer, the third the number of positive intervals identified by the network as positive, and the fourth the number of positive intervals classified as negative. The fifth column shows the number of negative intervals identified as positive by the NNT. Finally, the sixth column gives the number of negative intervals that are real negatives, and the seventh column the accuracy of each of the experiments.
We started with 33% accuracy in experiment 1, with 30 neurons in the middle layer. In experiments 2, 3, and 4 we had a gradual increase in accuracy, reaching 45%. In experiment 5, with 150 neurons, we had a drop in accuracy to 33%, equivalent to the result of experiment 1. As these results were not worse than those of the earlier experiments, we decided to continue increasing the number of neurons. In experiment 6 we achieved an accuracy of 47% with 160 neurons in the hidden layer. In experiment 7, the accuracy decreased to 40%, and experiment 8 reached the lowest accuracy found, 22%. Experiment 7 had the highest number of true positives but a high number of false positives, so we focused on experiment 6, whose configuration of 160 neurons yielded an accuracy of 47%, the best among all the configurations.
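The accuracy values in Table 2 can be reproduced directly from the confusion counts; for experiment 6, for instance:

```python
# Confusion counts for experiment 6 (Table 2)
TP, FN, FP, TN = 126, 15, 1196951, 1072942
accuracy = (TP + TN) / (TP + FN + FP + TN)
print(f"{accuracy:.0%}")   # -> 47%
```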
To improve the results, we note that each minute has 6 points (10 seconds time resolution) evaluated several times by the neural network. For
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline & \# Neurons & True & False & False & True & **Accuracy** \\ & Hidden layer & Positive & Negative & Positive & Negative & \\ \hline
**Exp. 1** & 30 & 126 & 15 & 1522161 & 747732 & 33\% \\
**Exp. 2** & 60 & 126 & 15 & 1334829 & 935064 & 41\% \\
**Exp. 3** & 90 & 127 & 14 & 1338661 & 931232 & 42\% \\
**Exp. 4** & 120 & 127 & 14 & 1246748 & 1023145 & 45\% \\
**Exp. 5** & 150 & 126 & 15 & 1522170 & 747732 & 33\% \\
**Exp. 6** & 160 & 126 & 15 & 1196951 & 1072942 & 47\% \\
**Exp. 7** & 170 & 131 & 10 & 1355206 & 914687 & 40\% \\
**Exp. 8** & 180 & 113 & 28 & 1764115 & 505778 & 22\% \\ \hline \end{tabular}
\end{table}
Table 2: Results of the Neural Network.
experiment 9, if the network assessed fewer than 3 points within a minute as positive, we would consider that minute as negative. For experiment 10, if fewer than 4 points within one minute are evaluated as positive, this minute is assumed to be negative. The proposed method improved the accuracy of the neural network to 60%, as seen in Table 3. There are still many false positives and a considerable decrease in true positives, from 126 to 13, when comparing experiments 6 and 10. Also, the false negatives increased from 15 to 128 cases.
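A sketch of this per-minute vote, assuming the per-point network decisions are stored as a 0/1 NumPy array:

```python
import numpy as np

def vote_per_minute(point_flags, min_positive=4):
    """Post-processing of experiments 9 and 10: each minute has 6 points
    (10 s resolution); a minute is kept positive only if at least
    `min_positive` of its points were classified positive by the network."""
    n = (len(point_flags) // 6) * 6
    minutes = point_flags[:n].reshape(-1, 6)
    return minutes.sum(axis=1) >= min_positive
```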
Since the results of the neural network presented many false positives, we then compared the neural network results with the RSTN light curves, as well as the POEMAS observations, by visual inspection. First, the daily light curves from December 2011 to December 2013 (years of POEMAS observation) provided by RSTN were checked for events at frequencies \(>4\) GHz (microwaves). When identifying a microwave event in the RSTN data, we check if the Neural Network identified this event at the same day and time in the 45 GHz data.
## 4 Results and discussion
In this work, we analyzed the 45 GHz light curves observed by POEMAS telescopes for a total of 690 days, from December 2011 to December 2013. For the analysis, we considered the sum of the signal involving the two circular polarizations, RCP plus LCP. Moreover, visual inspection was performed on the daily microwave light curves observed by the RSTN telescope network during the same period.
The Neural Network (NN) application detected patterns in the 45 GHz light curve of POEMAS and identified both already known events and new ones. We compared the results of the Network with the work of Hidalgo Ramirez et al. (2019), who identified 30 events, as listed in Table 1. In addition, some events identified in the RSTN microwave data were not detected
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline & \# Positive & True & False & False & True & **Accuracy** \\ & Points & Positive & Negative & Positive & Negative & \\ \hline
**Exp. 9** & \(<3\) & 25 & 116 & 990033 & 1279860 & 56\% \\
**Exp. 10** & \(<4\) & 13 & 128 & 906998 & 1362895 & 60\% \\ \hline \end{tabular}
\end{table}
Table 3: Results of the Neural Network for experiments 9 and 10.
visually in the POEMAS light curves nor by the NN, given their long-duration temporal characteristics. Below we discuss each of these results in more detail.
### Events identified previously and confirmed by the Neural Network
The Neural Network identified 26 of the 30 events from the work of Hidalgo Ramirez et al. (2019); thus the NN was able to retrieve 87% of the events (last column of Table 1). An example of such an event, which occurred on 2013 May 13th, is shown in Figure 6. In the top panel, the POEMAS light curve is shown in blue, whereas the points identified by the NN as positive are depicted in red. In the bottom panel, the microwave light curves of the RSTN data from the Sagamore-Hill telescope (USA) are presented, where the event at 16:03 UT is clearly seen at all frequencies. Thus, the NN correctly identified the event that peaked at approximately 16:03 UT. This was a large event, of GOES X-ray class X2.8, that occurred on the East limb of the Sun.
### Events previously reported and not identified by the Neural Network
In Table 1 there are four events identified visually by Hidalgo Ramirez et al. (2019) but not recognized by the NN, which make up only 13% of the events. The plots in Figure 7 show the light curves of 2012 May 7th observed at 45 GHz by POEMAS and in microwaves by RSTN. The event is clearly identified in the RSTN data at approximately 17:23 UT. This event is not easily recognized in the POEMAS data, nor was it identified by the NN, probably because of a duration longer than 5 min.
### New events identified by the Neural Network
A total of 9 events were identified by the Neural Network but went unnoticed in the visual inspection of Hidalgo Ramirez et al. (2019). These flares are listed in Table 4. One example is the burst that occurred on 2012 July 28th, identified by the NN at approximately 21:00 UT. The time profile of the flare is shown in the middle panel of Figure 8, where the light curve of the previous day was subtracted to eliminate the variation due to the misalignment of the telescope and better show the event.
### New long duration events visually identified
From the visual inspection of the RSTN microwave light curves, we identified 10 events neither reported by Hidalgo Ramirez et al. (2019) nor found by the Neural Network, which are listed in Table 5. An example of such an event is shown in Figure 9 for the 2012 July 12th flare, a long-duration event that lasted for more than 2 hours. The top panel shows the POEMAS 45 GHz light curve in blue, with the light curve of the day before depicted in green. The subtraction of the emission from the previous day is shown in black on the top panel and highlighted in the middle panel of Figure 9. The flare that
Figure 6: Light curves from the event on 2013/05/13 at 16:03 UT. **Upper:** 45 GHz data from POEMAS (blue) and Neural Network result (red) and **Lower:** data from RSTN.
started at approximately 16:00 UT, peaked just before 17 UT and ended after 18:30 UT, was not readily visually identified in the POEMAS light curve. In the bottom panel of the figure, the microwave light curves observed by RSTN clearly show the event at all frequencies, where the temporal profile of the 15 GHz emission closely resembles that of the 45 GHz from POEMAS. The non-identification of this and the other 9 events probably occurred due to their gradual temporal profile, lasting from 30 minutes to more than an hour. We point out that the data input for NNT consisted of 5 min intervals.
Figure 7: Light curves from the event on 2012/05/07 at 17:23 UT. **Upper:** 45 GHz data from POEMAS (blue) and Neural Network result (red) and **Lower:** data from RSTN.
Figure 8: **Top:** POEMAS light curve on 2012 July 28th, with the event at 20:50 UT identified by the NN depicted in red. **Middle:** Highlight of the 45 GHz event after subtraction of the emission from the previous day. **Bottom:** Microwave light curve of the same day observed by RSTN.
Figure 9: Solar flare of 2012/07/12. **Top:** Light curve mission at 45 GHz from POEMAS for the day of the event, 2012/07/12 (blue curve) and the previous day, 2012/07/11 (green curve). The result of the subtraction of the emission on the 12th by the 11th of July 2012 is shown by the black curve (shifted by 900 to fit the scale of the figure). **Middle:** Blow up of the subtracted light curve to better identify the solar flare detected at 45 GHz. **Bottom:** Microwave light curves observed by RSTN (\(1-15\) GHz) for the whole day.
## 5 Summary and conclusions
In this work, we analyzed two years of light curves from POEMAS telescopes at 45 GHz, from December 2011 through December 2013. The main objective was to automatically detect solar events in these light curves. The detection of solar flares in the light curves of POEMAS was hindered due to problems with the telescope's pointing, causing daily variations in the signal. Therefore, it was necessary to apply initial computational techniques to calibrate and reduce the POEMAS data. We created and used a Neural Network (NN) to identify solar flares in the data automatically. The application of this Neural Network was later compared with the microwave emission (\(1-15\)
\begin{table}
\begin{tabular}{|c|c|c|} \hline N & Date & Time (UT) \\ \hline
1 & 2012/01/28 & 11:50 \\
2 & 2012/05/08 & 13:00 \\
3 & 2012/07/28 & 21:00 \\
4 & 2012/09/02 & 18:10 \\
5 & 2012/10/20 & 18:15 \\
6 & 2012/10/21 & 20:00 \\
7 & 2013/05/03 & 16:30 \\
8 & 2013/05/03 & 17:30 \\
9 & 2013/07/02 & 17:50 \\ \hline \end{tabular}
\end{table}
Table 4: Events identified only by the Neural Network
\begin{table}
\begin{tabular}{|c|c|c|} \hline N & Date & Time (UT) \\ \hline
1 & 2012/03/02 & 17:40 \\
2 & 2012/03/03 & 18:00 \\
3 & 2012/03/04 & 11:00 \\
4 & 2012/03/17 & 20:50 \\
5 & 2012/06/06 & 20:00 \\
6 & 2012/06/14 & 13:30-15:00 \\
7 & 2012/07/12 & 16-17:00 \\
8 & 2012/07/27 & 17:15 \\
9 & 2013/07/03 & 20:00 \\
10 & 2013/08/17 & 18:20-19:30 \\ \hline \end{tabular}
\end{table}
Table 5: Long-term events not identified in the work of Hidalgo Ramírez et al. (2019) nor by the Neural Network
GHz) detected by the RSTN radio-telescope network.
The first challenge of this work was the data preparation, due to noise augmented by the misalignment of the telescope. Moreover, clouds obstructed the observation of the Sun, and interference in the Earth's atmosphere even in the absence of clouds, such as increased water vapor or ice crystals, also precluded the detection of the flare signal by creating spurious peaks in the light curves.
Using a Neural Network with supervised learning, we reached an accuracy of 47%. This value is low; however, it is due to the few samples used for training and the intrinsic noise of the POEMAS light curves. The accuracy was later improved to 60% by applying a constraint that reduces the number of false positives. Nevertheless, despite the problems in the data mentioned above, thanks to the NN we confirmed 26 previously known events and identified 9 new events in the light curves of POEMAS.
Comparing the RSTN data for the two years of 2012 and 2013, we visually identified 10 long-term events not previously identified in the POEMAS light curve by visual inspection nor by the NN. As the Neural Network was not supplied with any long-term events for training, it is not capable of detecting this type of event. Thus the Neural Network constructed here can detect only short-term impulsive events, with duration less than 5 minutes.
In summary, with the aid of artificial intelligence, in this work we have identified 35 solar events in the 45 GHz emission from 2012 to 2013 out of a total of 49 bursts, or 71%. If we consider that the NN was not trained to detect events with duration longer than 5 min, then the accuracy of the NN increases to 90%. Of the total of 49 events, 19 solar flares are unprecedented at 45 GHz, not having been identified in previous works that analyzed these data (Hidalgo Ramirez et al., 2019). The statistics of the NN are summarized in Table 6.
The use of artificial intelligence is innovative in
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Event & Total & Old\({}^{*}\) & Neural Network & Table \\ \hline Old\({}^{*}\) & 30 & 30 (100\%) & 26 (87\%) & 1 \\ New & 9 & 0 (0\%) & 9 (100\%) & 4 \\ Long duration & 10 & 0 (0\%) & 0 (0\%) & 5 \\ \hline Total & 49 & 30 (61\%) & 35 (71\%) & \\ \hline \end{tabular}
\end{table}
Table 6: Summary of the events detected, or not, by the Neural Network.
Solar Physics; we can mention the works by Hou et al. (2020), Ishikawa et al. (2021), and Neira et al. (2020), who obtained accuracies of approximately 90%. To leverage the study of solar flares, such techniques must be explored.
The search for solar flares using a Neural Network was a first step toward automating the process. Several challenges were encountered, such as clouds, periodic signal variations, and the non-detection of long-duration events; their solutions will be proposed in future work.
## Acknowledgements
A.V acknowledges partial financial support from FAPESP grant #2013/10559-5. V.L. thanks the fellowship from MackPesquisa funding agency.
|
2309.09605 | Metallicity and Spectral Evolution of WASP-39 b: The Limited Role of
Hydrodynamic Escape | The recent observations on WASP-39 b by JWST have revealed hints of high
metallicity within the atmosphere compared to its host star (Feinstein et al.
2022; Ahrer et al. 2023; Alderson et al. 2023; Rustamkulov et al. 2023; Tsai et
al. 2023). There are various theories on how these high metallic atmospheres
emerge. In this study, we closely investigate the impact of extreme escape in
the form of hydrodynamic escape to see its impact on atmospheric metallicity
and spectral features such as CH$_4$, CO$_2$, and SO$_2$. We perform a grid
simulation, with an adapted version of MESA that includes hydrodynamic escape
(Kubyshkina et al. 2018; 2020), to fully evolve planets with similar masses and
radii to the currently observed WASP-39 b estimates. By making use of
(photo-)chemical kinetics and radiative transfer codes, we evaluate the
transmission spectra at various time intervals throughout the simulation. Our
results indicate that the massive size of WASP-39 b limits the metal
enhancement to a maximum of ~1.23x the initial metallicity. When incorporating
metal drag, this enhancement factor is repressed to an even greater degree,
resulting in an enrichment of at most ~0.4%. As a consequence, when assuming an
initial solar metallicity, metal-enriched spectral features like SO$_2$ are
still missing after ~9 Gyr into the simulation. This paper, thus, demonstrates
that hydrodynamic escape cannot be the primary process behind the high
metallicity observed in the atmosphere of WASP-39 b, suggesting instead that a
metal-enhanced atmosphere was established during its formation. | Amy J. Louca, Yamila Miguel, Daria Kubyshkina | 2023-09-18T09:22:14Z | http://arxiv.org/abs/2309.09605v1 | # Metallicity and Spectral Evolution of WASP-39 b: The Limited Role of Hydrodynamic Escape
###### Abstract
The recent observations on WASP-39 b by JWST have revealed hints of high metallicity within the atmosphere compared to its host star (Feinstein et al. 2022; Ahrer et al. 2023; Alderson et al. 2022; Rustamkulov et al. 2022; Tsai et al. 2023). There are various theories on how these high metallic atmospheres emerge. In this study, we closely investigate the impact of extreme escape in the form of hydrodynamic escape to see its impact on atmospheric metallicity and spectral features such as CH\({}_{4}\), CO\({}_{2}\), and SO\({}_{2}\). We perform a grid simulation, with an adapted version of MESA that includes hydrodynamic escape (Kubyshkina et al. 2018; 2020), to fully evolve planets with similar masses and radii to the currently observed WASP-39 b estimates. By making use of (photo-)chemical kinetics and radiative transfer codes, we evaluate the transmission spectra at various time intervals throughout the simulation. Our results indicate that the massive size of WASP-39 b limits the metal enhancement to a maximum of \(\sim 1.23\)x the initial metallicity. When incorporating metal drag, this enhancement factor is repressed to an even greater degree, resulting in an enrichment of at most \(\sim\)0.4%. As a consequence, when assuming an initial solar metallicity, metal-enriched spectral features like SO\({}_{2}\) are still missing after \(\sim 9\) Gyr into the simulation. This paper, thus, demonstrates that hydrodynamic escape cannot be the primary process behind the high metallicity observed in the atmosphere of WASP-39 b, suggesting instead that a metal-enhanced atmosphere was established during its formation.
planets and satellites: gaseous planets -- planets and satellites: atmospheres -- planets and satellites: physical evolution -- planets and satellites: composition
Footnote †: journal: ApJ
Amy J. Louca, Yamila Miguel, Daria Kubyshkina
## 1 Introduction
With the successful launch of JWST, we are now able to look ever so closely at exoplanet atmospheres. Recent observations of WASP-39 b showed a \(26\sigma\) carbon dioxide detection in the transmission spectrum, a molecule that is thought to appear only in higher metallicity atmospheres. These observations also revealed no sign of methane in the atmosphere and a possible hint of sulfur dioxide (Feinstein et al. 2022; Ahrer et al. 2023; Alderson et al. 2022; Rustamkulov et al. 2022). Both of these findings support the idea of metal enhancement, and the latter even of photo-chemical effects (Tsai et al. 2023), in WASP-39 b's atmosphere. Notably, the host star is thought to have solar-like metallicity (Polanski et al. 2022). However, if the planet had the same metallicity as its host star, it would not be expected to show CO\({}_{2}\) and SO\({}_{2}\) signatures and CH\({}_{4}\) depletion without external processes drastically altering the composition. Thus, the question remains as to what processes could explain these features. |
2309.07628 | Impact of Excitation and Weighting Errors on Performance of Compact OTA
Testing Systems | This paper investigates the impact of complex excitation errors of the
chamber array antenna on the accuracy of the test zone of a random
line-of-sight over-the-air testing setup. First, several combinations of
compact chamber arrays of lengths L and short distances D between the test zone
and the chamber array, which emulate a plane wave impinging at the test zone
are obtained. The chamber array is linear and uniform with 100 antenna
elements, and a linear taper was applied to some of the elements to emulate a
plane wave impinging at the test zone with more compact setups. A subset of L
and D was chosen, providing compact over-the-air test setups that fulfilled the
defined figures of merit, which assess the similarity of the obtained field
distribution to that of a plane wave. The tolerance of the chosen setups to
complex excitation errors of the chamber array was then investigated,
concluding that these errors must be considered when defining appropriate L and
D combinations. Moreover, the performance of the matched filter and
zero-forcing algorithms is evaluated for errors of the device under test array
weighting coefficients. A random line-of-sight over-the-air testing setup with
two arrays was simulated, where one of the arrays emulated the desired signal
and the other emulated the interference, observing that the errors were more
significant at higher signal-to-noise ratios. Additionally, the zero-forcing
algorithm was more sensitive to errors than the matched filter, which was
expected since the accuracy of the former for interference suppression is
critical. | Alejandro Antón Ruiz, Andrés Alayón Glazunov | 2023-09-14T11:48:14Z | http://arxiv.org/abs/2309.07628v1 | # Impact of Excitation and Weighting Errors on Performance of Compact OTA Testing Systems
###### Abstract
This paper investigates the impact of complex excitation errors of the chamber array antenna on the accuracy of the test zone of a random line-of-sight over-the-air testing setup. First, several combinations of compact chamber arrays of lengths \(L\) and short distances \(D\) between the test zone and the chamber array, which emulate a plane wave impinging at the test zone are obtained. The chamber array is linear and uniform with 100 antenna elements, and a linear taper was applied to some of the elements to emulate a plane wave impinging at the test zone with more compact setups. A subset of \(L\) and \(D\) was chosen, providing compact over-the-air test setups that fulfilled the defined figures of merit, which assess the similarity of the obtained field distribution to that of a plane wave. The tolerance of the chosen setups to complex excitation errors of the chamber array was then investigated, concluding that these errors must be considered when defining appropriate \(L\) and \(D\) combinations. Moreover, the performance of the matched filter and zero-forcing algorithms is evaluated for errors of the device under test array weighting coefficients. A random line-of-sight over-the-air testing setup with two arrays was simulated, where one of the arrays emulated the desired signal and the other emulated the interference, observing that the errors were more significant at higher signal-to-noise ratios. Additionally, the zero-forcing algorithm was more sensitive to errors than the matched filter, which was expected since the accuracy of the former for interference suppression is critical.
OTA, automotive, precoding, excitation errors.
## I Introduction
Over-The-Air (OTA) testing has become the standard method for full performance evaluation of wireless devices. It accounts for the antenna characteristics of the Device Under Test (DUT) in an environment that emulates its actual use. Besides testing the communication protocols and the performance of the radio frequency part, it may also consider other sources of error such as using its own power source [1].
OTA is a key enabler of the development of the automotive industry, especially as it is moving towards the integration of an increasing number of sensors, i.e., radars, lidars, cameras, as well as wireless communications and GPS. Radars will be mostly used in the \(76-81\) GHz range since the \(24\) GHz ultra-wideband has been phased out this January [2]. However, many other products still operate in the lower Millimeter Wave (mmWave) bands and below, including Vehicle-to-Everything (V2X) communications, which operate in the sub-6 GHz bands for now. Nevertheless, there is a need for larger data rates than those achieved by sub-6 GHz bands to support applications such as exchanging raw data from sensors in vehicles. This can be achieved by resorting to the mmWave frequencies, such as the already defined FR2 frequency bands [3].
Currently, there are OTA testing systems available for mmWave communications, e.g., [4] and [5]. There are already solutions for mmWave radar testing too [6], including car-mounted ones [7]. However, to the best of the authors' knowledge, and in agreement with [1], there are no available solutions for FR2 communications automotive OTA testing that are feasible in terms of hardware costs and dimensions, so further efforts must be made to devise such solutions.
In this paper we present numerical simulations of an OTA system at \(28\) GHz, corresponding to the center frequency of the 3GPP n257 band, chosen as a representative of FR2. One of the main challenges for automotive OTA, especially for vehicle-in-the-loop testing, is fulfilling the far-field criterion, i.e., the Fraunhofer distance. For a whole car at mmWave, it extends to several km. Clearly, these distances are not feasible in a controlled environment. Thus, there is a need to resort to OTA techniques that, while not physically being in the far-field, can emulate an impinging plane wave at the automobile. Among such techniques, there are compact test ranges, plane wave generators and random line-of-sight (usually denoted in the literature as RLOS, Random-LOS, or Ranlos). We take the approach of the Random-LOS technique [8, 9, 10].
In this work, we first conduct a study of the combined effects of linear chamber array size and the distance between the center of the chamber array and the center of the Test Zone (TZ) for a given TZ size. The performance criterion is a set of Figures of Merit (FoM) that evaluate the similarity of the field emulated in the TZ to a plane wave. The idea is to find the most compact setup, i.e., the smallest array and shortest distance (at least shorter than the Fraunhofer distance) that satisfies the accuracy criteria. We also study the tolerance, in terms of FoM compliance, of a subset of distances and chamber array sizes to random complex excitation errors of the chamber array due to, e.g., manufacturing tolerances and quantization errors. Furthermore, we study the impact of complex errors in the weights of the DUT array on the performance of the Matched Filter (MF) and Zero Forcing (ZF) algorithms. The results show that chamber array excitation errors must be considered when selecting distances and chamber array sizes, and also that DUT weight errors affect ZF significantly more than MF.
## II OTA setup and FoM for TZ quality evaluation
### _OTA setup_
The basic arrangement of the OTA setup is depicted in Fig. 1, which is not to scale. The chamber array is defined as a uniform linear array along the \(x-\)axis, with a fixed number
of Antenna Elements (AEs) \(N_{C}=100\). The AEs are idealized vertically polarized isotropic radiators operating at \(28\) GHz. A linear taper from 0 dB to -6 dB is applied to 25 elements on each side of the array to reduce field fluctuations in the TZ as explained in Section II-C [11]. Thus, the Electric Field (EF) at a given point \(P\), accounting only for the vertical polarization (\(z-\)axis), is computed by the superposition principle as
\[E_{z}=\sum_{i=1}^{N_{C}}t_{c_{i}}E_{0}\frac{e^{-jkr_{i}}}{4\pi r_{i}}=\sum_{i=1 }^{N_{C}}t_{c_{i}}\frac{e^{-jkr_{i}}}{4\pi r_{i}}, \tag{1}\]
where \(E_{0}\) is set to \(1\) for simplicity, \(t_{c_{i}}=10^{t_{c_{i_{dB}}}/20}\) is the tapering coefficient in linear scale of the \(i\)-th AE, \(t_{c_{i_{dB}}}\) being the dB-scale tapering coefficient, and \(r_{i}\) is the distance between the \(i\)-th AE and \(P\). The Inter-Element Spacing (IES) is considered a variable, ranging from \(0.5\lambda\) to \(1.5\lambda\) with a \(0.05\lambda\) step, resulting in a variable chamber array length \(L=(N_{C}-1)IES\). The TZ lies in the \(XY-\)plane and is defined as a circle of radius \(R=(N_{C}-1)IES/4\), where the fixed value \(IES=0.5\lambda\) is used. Hence, \(R=99\lambda/8=13.26\) cm is a quarter of the length of the shortest considered chamber array. The center of the TZ is at a distance \(D\) from the chamber array, along the \(y-\)axis.
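A minimal sketch of the field computation of Eq. (1), in wavelength units and with the taper and TZ sampling described above; the first \(L\) and \(D\) combination of Table I is used as an example:

```python
import numpy as np

LAM = 1.0                               # work in units of wavelength
k = 2 * np.pi / LAM
N_C = 100
ies = 0.5 * LAM                         # IES; varied from 0.5 to 1.5 lambda in the study
x_ae = (np.arange(N_C) - (N_C - 1) / 2) * ies   # chamber array along the x-axis

# Linear taper from 0 dB down to -6 dB over 25 elements on each edge
taper_db = np.zeros(N_C)
taper_db[:25] = np.linspace(-6.0, 0.0, 25)
taper_db[-25:] = np.linspace(0.0, -6.0, 25)
t_c = 10.0 ** (taper_db / 20.0)

def field_at(px, py):
    """E_z of Eq. (1) at point (px, py); the array sits on the line y = 0."""
    r = np.hypot(px - x_ae, py)
    return np.sum(t_c * np.exp(-1j * k * r) / (4 * np.pi * r))

# lambda/8 mesh over the circular TZ of radius R centered at (0, D)
D, R = 286.0 * LAM, 99.0 * LAM / 8.0
ax = np.arange(-R, R + 1e-9, LAM / 8)
X, Y = np.meshgrid(ax, D + ax)
mask = X**2 + (Y - D) ** 2 <= R**2
Ez = np.array([field_at(x, y) for x, y in zip(X[mask], Y[mask])])
```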
### _FoM for the TZ_
The objective of this OTA setup is to generate an EF distribution over the TZ that emulates a plane wave. To assess the accuracy of the plane-wave emulation, several FoM are defined. First, we consider
\[R_{mag}=max(20\log_{10}\left(\left|\mathbf{E}_{z}\right|))-min(20\log_{10} \left(\left|\mathbf{E}_{z}\right|\right)), \tag{2}\]
where \(\mathbf{E}_{z}\) is the EF of every sample belonging to the TZ. \(R_{mag}\) defines the dynamic range of the magnitude of the EF samples in the TZ. These samples come from the nodes of a mesh with equal \(\lambda/8\) interval in both the \(x-\) and \(y-\)axes, so the density of samples is constant over the circular TZ area.
Secondly, we evaluate the standard deviation of the magnitude of the EF samples in the TZ in dB
\[\sigma_{mag}=\sqrt{\frac{\sum_{s=1}^{N}\left(X_{s}-\bar{x}\right)^{2}}{N-1}}, \tag{3}\]
where \(N\) is the number of samples in the TZ, \(X_{s}\) is the EF magnitude in logarithmic units of the \(s\)-th sample, and \(\bar{x}\) is the mean of the EF magnitude over all the TZ samples. This FoM is supported by the 3GPP [12].
Thirdly, we compute the dynamic range of the phase of the EF over the TZ
\[R_{phs_{rows_{n}}}=\max\left(\angle\mathbf{E}_{z_{n}}\right)-\min\left(\angle\mathbf{E}_{z_{n}}\right), \tag{4}\]
\[R_{phs}=\max\big{(}\mathbf{R}_{phs_{rows}}\big{)}, \tag{5}\]
where \(\mathbf{R}_{phs_{rows}}\) contains the phase range over each stripe of TZ samples parallel to the chamber array, \(R_{phs_{rows_{n}}}\) is the phase range of a given parallel stripe, \(\mathbf{E}_{z_{n}}\) contains the EF values of a given parallel stripe, and \(\angle\) denotes the phase value or angle. This FoM may raise a concern due to the periodic nature of the phase. Indeed, if wrapped to the interval \([0^{\circ},360^{\circ}[\), then one could argue that, e.g., if within a row there is a phase value of \(359^{\circ}\) and another value of \(2^{\circ}\), the resulting \(R_{phs_{rows_{n}}}\) would be \(357^{\circ}\). This has been taken into account, so that the correct variation, of \(3^{\circ}\) in this case, is always computed. We limit the variation to \(180^{\circ}\) because that is the maximum phase deviation that can actually occur.
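A sketch of the three FoM computations, including the circular handling of the phase range described above; `rows` is assumed to be a list of per-stripe sample arrays:

```python
import numpy as np

def fom(Ez_samples, rows):
    """R_mag, sigma_mag (dB) over all TZ samples and R_phs (deg), per Eqs. (2)-(5)."""
    mag_db = 20 * np.log10(np.abs(Ez_samples))
    R_mag = mag_db.max() - mag_db.min()
    sigma_mag = np.std(mag_db, ddof=1)          # sample standard deviation, Eq. (3)

    def phase_range(row):
        # Circular range: the smallest arc containing all phases is 360 deg
        # minus the largest gap between sorted phases; capped at 180 deg.
        ph = np.sort(np.angle(row, deg=True) % 360.0)
        gaps = np.diff(np.r_[ph, ph[0] + 360.0])
        return min(360.0 - gaps.max(), 180.0)

    R_phs = max(phase_range(r) for r in rows)
    return R_mag, sigma_mag, R_phs
```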
It is worth noting the ideal values of the FoM. Since the desired EF distribution is that of a plane wave, the magnitude of the EF over the TZ should be the same, so \(R_{mag}\) and \(\sigma_{mag}\) should be 0 dB. Similarly, the phase should be constant along each parallel stripe, so \(R_{phs}=0^{\circ}\). However, a perfect plane wave EF distribution is not achievable due to physical limitations, e.g., finite aperture and a number of sources. Thus, we focus on acceptable FoM values: \(R_{mag}\leq 1\) dB is commonly accepted [13], while \(\sigma_{mag}\leq 0.25\) dB is, according to [12], required, and \(R_{phs}\leq 10^{\circ}\) is often assumed as an acceptable limit [14].
### _L and \(D\) satisfying the FoM limits_
Having defined the FoM and their acceptable values, we investigate the \(L\) and \(D\) combinations that fulfill them for the considered OTA setup. This study is extended to stricter acceptable values, paving the way for the study described in Section III, where the use of these stricter values is justified. The study consists of varying the IES between \(0.5\lambda\) and \(1.5\lambda\), with a \(0.05\lambda\) step, resulting in the variation of \(L\), and varying \(D\) from \(40\) to \(2450\lambda\). The maximum \(D\) value corresponds to roughly half the shortest Fraunhofer distance, i.e., the one for the shortest considered chamber array, which has an IES of \(0.5\lambda\). The FoM values are computed for each \(L\) and \(D\) combination and evaluated against the FoM acceptable values.
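The sweep itself can be sketched as follows, reusing the `fom` helper from the previous sketch; `sample_tz` is a hypothetical helper returning the \(\lambda/8\) TZ samples and their parallel stripes, and the \(D\) step is an assumption (the paper does not state it):

```python
import numpy as np

N_C = 100
valid = []
for ies in np.arange(0.5, 1.5001, 0.05):             # IES in wavelengths
    L = (N_C - 1) * ies
    for D in np.arange(40.0, 2450.1, 10.0):          # 10-lambda step is an assumption
        Ez, rows = sample_tz(ies, D)                 # hypothetical: lambda/8 TZ mesh
        R_mag, sigma_mag, R_phs = fom(Ez, rows)
        if R_mag <= 1.0 and sigma_mag <= 0.25 and R_phs <= 10.0:
            valid.append((L, D))
```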
The goal is to find an OTA test setup which is as compact as possible in terms of antenna size \(L\) and chamber dimensions, roughly defined by \(D\). The considered values of \(D\) are significantly lower than the shortest Fraunhofer distance. It is worthwhile to note that no optimization has been carried out, so the chosen combinations of \(L\) and \(D\) can be further improved, e.g., by the use of more advanced tapering techniques and by applying actual computational optimization techniques like the ones used in [15, 16].
As shown in [11], linear tapering is an effective technique to reduce EF variations. Indeed, the \(L\) and \(D\) values fulfilling the
Fig. 1: Initial OTA setup
FoM limits were significantly lower than without tapering. The results from this study are shown in Fig. 2. Note that each point marked in yellow corresponds to one of the 5 chosen \(L\) and \(D\) combinations for Sections III and IV. From these results, it can be observed that, due to the highly non-linear nature of the aggregation of the EF generated by these 100 sources, while still being in the near-field, there are discontinuities in the \(L\) and \(D\) combinations. I.e., for a given value of \(L\), intuitively one would think that if a value of \(D\) fulfills the FoM limits, then a larger value of \(D\) should fulfill them too, but that is not generally the case. On the other hand, there is some continuity in the ratios of \(L\) and \(D\) that fulfill the FoM limits, forming a series of somewhat continuous "curves" that fulfill the FoM. Finally, if we focus only on the most compact possible setups, i.e., lowest \(L\) and \(D\) combinations, marked in yellow, it can be observed that there is a trade-off between \(L\) and \(D\), so a smaller chamber array requires a larger distance, and vice versa.
## III Study of chamber array excitation errors
### _Error model_
A study of excitation errors of the chamber array can be found, e.g., in [14, 17]. The study conducted in this paper aims at quantifying the chamber array excitation error that can be tolerated by the selected \(L\) and \(D\) combinations. The error model in this paper is different from the ones presented in the references above, thus the results are not directly compared. In [14] two normally distributed random variables with the same standard deviation were used, which is good for simplicity. Nevertheless, the error is made proportional to the weighting coefficient, which adds complexity to the analysis. In [17], separated amplitude and phase error normally distributed random variables are assumed, each with its own standard deviation, which is impractical for the study conducted in this paper, since this would make this study two-dimensional, unnecessarily increasing the complexity.
For the sake of simplicity, the error model in this paper comprises a normally distributed complex random variable given by
\[\epsilon_{ch_{i}}=\mathcal{N}\left(0,\sigma_{ch}\right)+j\mathcal{N}\left(0, \sigma_{ch}\right), \tag{6}\]
where \(\epsilon_{ch_{i}}\) is the excitation error of the \(i\)-th element of the chamber array, so a different realization of the excitation error is used for each of the AEs of the chamber array, and \(\sigma_{ch}\) is the standard deviation of the excitation error. For similarity with [14], the standard deviation will be increased in dB-scale \(\sigma_{ch_{dB}}\). Hence,
\[\sigma_{ch}=10^{\sigma_{ch_{dB}}/20}-1. \tag{7}\]
Therefore, the EF expression at a given point \(P\), is now computed as
\[E_{z}=\sum_{i=1}^{N_{C}}\left(1+\epsilon_{ch_{i}}\right)t_{c_{i}}\frac{e^{-jkr _{i}}}{4\pi r_{i}}. \tag{8}\]
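A sketch of one error realization per Eqs. (6)-(8), applied to the taper coefficients `t_c` of the earlier field sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigma_from_db(sigma_db):
    """Eq. (7): linear standard deviation from the dB-scale one."""
    return 10 ** (sigma_db / 20) - 1

def perturbed_taper(t_c, sigma_db):
    """Apply one realization of the complex excitation error of Eq. (6)
    to every chamber-array element, as in Eq. (8)."""
    s = sigma_from_db(sigma_db)
    eps = rng.normal(0, s, t_c.size) + 1j * rng.normal(0, s, t_c.size)
    return (1 + eps) * t_c
```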
### _Simulation results_
Monte-Carlo simulations were conducted for each selected combination of \(L\) and \(D\) shown in Fig. 2 (b) and (c), where \(\sigma_{ch_{dB}}\) was progressively increased with a \(0.01\) dB step. The FoM were computed at each step and compared against the limits, i.e., \(\sigma_{mag}\leq 0.25\) dB, \(R_{mag}\leq 1\) dB, and \(R_{phs}\leq 10^{\circ}\). When any of the FoM does not comply with these values, the value of \(\sigma_{ch_{dB}}\) of the previous iteration is stored. The reason for not using the \(L\) and \(D\) combinations from Fig. 2 (a) is that some of the best ones in terms of the compactness of the setup did not even tolerate a \(\sigma_{ch_{dB}}\) value of \(0.01\) dB, so more restrictive FoM values were used in Fig. 2 (b) and (c) to ensure some headroom for excitation errors. The results of this study are shown in Table I, where each of the five points marked in Fig. 2 (b) and (c) corresponds, in order, to one of its rows. It can be observed that not all the \(L\) and \(D\) combinations from Fig. 2 (b) tolerate the same amount of standard deviation of the excitation error, even though all of them, in the absence of excitation error, fulfill the same level of FoM. Additionally, the fifth \(L\) and \(D\) combination is, as one could intuitively expect, the one with the most resilience to this error, due to its larger headroom in FoM levels.
## IV Study of DUT weight errors
### _Setup_
In this study, we consider a setup similar to the one shown in Fig. 1. Here, a second chamber array is used to emulate an interferer, and is thus denoted as Interferer Array (IA). By design, it is identical to the Main Array (MA) and is placed at its side, as can be seen from Fig. 3. The chosen \(L\) and \(D\) define the minimum angle \(\alpha_{min}\). A generic DUT array antenna is placed in the TZ, parallel to the main chamber array. The DUT consists of \(49\) vertically polarized idealized isotropic AEs. The considered \(L\) and \(D\) combinations are the ones from Section II, shown in Fig. 2 and Table I. Additional placements of the IA are considered by increasing \(\alpha_{min}\) by 15\({}^{\circ}\) for each \(L\) and \(D\) combination.
### _Method_
In this section, we aim at analyzing the impact of weight errors at the DUT on the performance of the evaluated precoding algorithms. For this purpose, we assume that the MA and the IA emulate two user terminals, while the DUT plays the role of a base station. They could also represent two access points communicating with the onboard (on a vehicle) communications unit. Furthermore, in order to assess the impact of weight errors, we measure the uplink sum rate. To do this, the channel matrix \(\mathbf{H}\) is first computed according to (1), obtaining the \(E_{z}\) values at each AE of the DUT, from the MA and the IA. After that, the weights are computed for both MF and ZF, according to:
\[\mathbf{W}_{MF}=\mathbf{H}^{\dagger}, \tag{9}\]
\[\mathbf{W}_{ZF}=\mathbf{H}^{\dagger}(\mathbf{H}\mathbf{H}^{\dagger})^{-1}. \tag{10}\]
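With \(\mathbf{H}\) as a \(2\times 49\) matrix (one row per chamber array, one column per DUT element), Eqs. (9) and (10) translate directly to NumPy, reading \(\dagger\) as the Hermitian transpose:

```python
import numpy as np

def mf_weights(H):
    """Matched filter combiner, Eq. (9): W = H^dagger (49 x 2)."""
    return H.conj().T

def zf_weights(H):
    """Zero-forcing combiner, Eq. (10): W = H^dagger (H H^dagger)^-1."""
    return H.conj().T @ np.linalg.inv(H @ H.conj().T)
```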
After this, \(\mathbf{W}\) is distorted by an error statistically distributed as the one already presented in Section III-A. However, the standard deviation is now denoted \(\sigma_{DUT_{dB}}\) and varied from \(0\) to \(2\) dB, with a \(0.1\) dB step. The other variable is the SNR, which is evaluated at \(-10\), \(0\), \(10\) and \(20\) dB. The Signal to Interference plus Noise Ratio (SINR) for the MA and the IA is computed for each combination of \(\sigma_{DUT_{dB}}\) and SNR and,
Fig. 3: MF and ZF performance study setup
Fig. 5: ZF average sum rate as a function of \(\sigma_{DUT_{dB}}\), for different SNR levels. \(L\) and \(D\) from first row of Table I: \(L=133.65\lambda\), \(D=286\lambda\).
Fig. 4: MF average sum rate as a function of \(\sigma_{DUT_{dB}}\), for different SNR levels and for both positions of the IA (\(\alpha_{min}\) and \(\alpha_{min}+15^{\circ}\)). (a) \(L\) and \(D\) from first row of Table I: \(L=133.65\lambda\), \(D=286\lambda\). (b) \(L\) and \(D\) from fifth row of Table I: \(L=69.3\lambda\), \(D=591\lambda\).
after that, the sum rate is computed according to
\[SR=\sum_{u=1}^{2}\log_{2}{(1+SINR_{u})}, \tag{11}\]
where the subscript \(u\) refers to the chamber arrays (MA and IA). The same procedure was repeated for all the iterations of the Monte-Carlo simulations, with the corresponding averaging of the sum rate afterward. This is repeated for all the \(L\) and \(D\) combinations, as well as the different IA placements.
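To make the procedure concrete, the following minimal Python sketch evaluates Eqs. (9)-(11) for a randomly drawn channel. The random \(\mathbf{H}\), the purely log-normal magnitude distortion of the weights, and the noise normalization are simplifying assumptions made here for illustration; the actual study uses the channel of (1) and the error statistics of Section III-A.

```python
import numpy as np

# Minimal numerical sketch of Eqs. (9)-(11): 2 chamber arrays x 49 DUT elements.
rng = np.random.default_rng(1)
H = (rng.standard_normal((2, 49)) + 1j * rng.standard_normal((2, 49))) / np.sqrt(2)

W_mf = H.conj().T                                   # Eq. (9): matched filter
W_zf = H.conj().T @ np.linalg.inv(H @ H.conj().T)   # Eq. (10): zero forcing

def avg_sum_rate(W, sigma_dut_db, snr_db, n_iter=500):
    rates = []
    noise = 10.0 ** (-snr_db / 10.0)                # unit-power users assumed
    for _ in range(n_iter):
        # log-normal magnitude distortion of the DUT weights (assumed model)
        err = 10.0 ** (rng.normal(0.0, sigma_dut_db, W.shape) / 20.0)
        G = H @ (W * err)                           # effective 2x2 channel
        sr = sum(
            np.log2(1 + abs(G[u, u]) ** 2 / (abs(G[u, 1 - u]) ** 2 + noise))
            for u in range(2)                       # Eq. (11) over MA and IA
        )
        rates.append(sr)
    return np.mean(rates)

print(avg_sum_rate(W_zf, sigma_dut_db=0.5, snr_db=10))
```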
### _Results_
As expected, the performance of MF and ZF is impacted by the weighting errors of the DUT array. However, the impact depends on the SNR and, as can be seen in Fig. 4 and Fig. 5, is much larger for ZF than for MF. The results for the different \(L\) and \(D\) combinations differ only in some cases, and only for MF; no relevant changes are found for ZF. The two considered angles, \(\alpha_{min}\) and \(\alpha_{min}+15^{\circ}\), follow the same trend: relevant differences appear only for some of the \(L\) and \(D\) combinations, and only for MF. Therefore, to illustrate the case with the largest differences, the MF results for the \(L\) and \(D\) combinations corresponding to the first and fifth rows of Table I are shown in Fig. 4 (a) and Fig. 4 (b), respectively, for both considered angles. For ZF, due to the similarity of all results, i.e., for all \(L\) and \(D\) combinations and both considered angles, only the \(L\) and \(D\) combination of the first row of Table I, and only for the \(\alpha_{min}\) angle, is presented in Fig. 5.
In Fig. 4, it can be seen that the impact of the MF weighting errors on the sum rate is only relevant at large SNRs. Additionally, the \(L\) and \(D\) combination of Fig. 4 (a) shows that the performance of MF is very similar for both positions of the IA, whereas for the \(L\) and \(D\) combination of Fig. 4 (b), the performance of MF suffers more.
In Fig. 5, the impact of ZF weighting errors on the sum rate is relevant at all SNRs, with similar behaviour, although higher SNRs are affected more. In any case, the weighting error impact is larger for ZF than for MF across the board.
## V Conclusion
In this paper, we have shown that the errors in the chamber array excitation weights may affect the feasible size of the chamber array and the distance between the test zone center and the chamber array center, i.e., the size of the testing facility. It was concluded that not all feasible combinations of these two parameters are equally resilient to the chamber array errors. The impact of weighting-coefficient errors at the DUT array was also shown in terms of sum rate for the matched filter and zero-forcing precoding algorithms, concluding that zero-forcing is, in general, much more sensitive to such errors than the matched filter, and that the impact of weighting errors increases with the signal-to-noise ratio. We hope that the current paper paves the way toward future over-the-air testing solutions, especially for automotive applications. Testing solutions using FR2 frequencies for communications will benefit from the presented findings when aiming for an over-the-air testing solution that is as compact as possible while considering different sources of error in the design process.
## Acknowledgment
The work of Alejandro Anton was conducted within the ITN-5VC project, which is supported by the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 955629.
|
2309.16043 | Te Vacancy-Driven Anomalous Transport in ZrTe$_5$ and HfTe$_5$ | In the search for experimental signatures of quantum anomalies, the layered
Dirac materials ZrTe$_{5}$ and HfTe$_{5}$ have received much attention for
potentially hosting a chiral anomaly. These materials exhibit a negative
longitudinal magnetoresistance (NLMR) that is taken as a signature of broken
chiral symmetry. The anomalous transport properties of ZrTe$_{5}$ and
HfTe$_{5}$ are known to strongly correlate with the presence of Te vacancies,
prompting questions as to the microscopic mechanism driving the NLMR. In this
work, the effect of Te vacancies on the electronic structure of ZrTe$_{5}$ and
HfTe$_{5}$ is investigated via first-principles calculations to garner insight
into how they may modulate the transport properties of these materials. While
Te vacancies act as a source of effective compressive strain, they also produce
local changes to the electronic structure that cannot be explained simply as
volume effects. The reorganization of the electronic structure near the Fermi
energy indicates that Te vacancies can rationalize both spectroscopic and
transport measurements that have remained elusive in prior first-principles
studies. These results show that Te vacancies contribute, at least in part, to
the anomalous transport properties of ZrTe$_{5}$ and HfTe$_{5}$ and offer a
path towards understanding the possibility of a chiral anomaly in these
materials. | Elizabeth A. Peterson, Christopher Lane, Jian-Xin Zhu | 2023-09-27T21:54:36Z | http://arxiv.org/abs/2309.16043v2 | # Te Vacancy-Driven Anomalous Transport in ZrTe\({}_{5}\) and HfTe\({}_{5}\)
###### Abstract
In the search for experimental signatures of quantum anomalies, the layered Dirac materials ZrTe\({}_{5}\) and HfTe\({}_{5}\) have received much attention for potentially hosting a chiral anomaly. These materials exhibit a negative longitudinal magnetoresistance (NLMR) that is taken as a signature of broken chiral symmetry. The anomalous transport properties of ZrTe\({}_{5}\) and HfTe\({}_{5}\) are known to strongly correlate with the presence of Te vacancies, prompting questions as to the microscopic mechanism driving the NLMR. In this work, the effect of Te vacancies on the electronic structure of ZrTe\({}_{5}\) and HfTe\({}_{5}\) is investigated via first-principles calculations to garner insight into how they may modulate the transport properties of these materials. While Te vacancies act as a source of effective compressive strain, they also produce local changes to the electronic structure that cannot be explained simply as volume effects. The reorganization of the electronic structure near the Fermi energy indicates that Te vacancies can rationalize both spectroscopic and transport measurements that have remained elusive in prior first-principles studies. These results show that Te vacancies contribute, at least in part, to the anomalous transport properties of ZrTe\({}_{5}\) and HfTe\({}_{5}\) and offer a path towards understanding the possibility of a chiral anomaly in these materials.
## I Introduction
Theoretical predictions of quantum anomalies, symmetry breaking that occurs when moving from classical field theories to quantum field theories, originated in the fields of particle and cosmological physics [1; 2; 3; 4]. As experimental verification of predicted quantum anomalies at the relevant length and energy scales of particle and cosmological physics is generally challenging or simply intractable, the possibility of observing quantum anomalies in experimentally accessible condensed matter systems has engendered a great deal of excitement. In the case of three-dimensional (3D) Dirac and Weyl semimetals, a chiral anomaly is characterized by an imbalance in right- and left-handed chiral fermions in the presence of an applied magnetic field. The resulting chiral current may be induced by application of parallel electric and magnetic fields producing an anomalous contribution to the conductivity [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17]. This manifests as a negative longitudinal magnetoresistance (NLMR). In recent years the layered 3D Dirac materials ZrTe\({}_{5}\) and HfTe\({}_{5}\) have garnered attention as potential material platforms that host a chiral anomaly [18; 12; 13]. These materials exhibit a number of anomalous transport properties, including a Lifshitz transition as a function of temperature and a negative longitudinal magnetoresistance [18; 12; 17; 18; 19; 20; 21; 22; 23]. This NLMR is suggested to be evidence of the chiral anomaly in these materials [18; 12; 13]. However, NLMR is not sufficient to prove the presence of a chiral anomaly, as it may arise from alternative sources such as current jetting or defects [24; 10; 25]. Lacking a robust microscopic description of the source of the anomalous transport properties in ZrTe\({}_{5}\) and HfTe\({}_{5}\), it is as yet unclear if these materials exhibit a genuine chiral anomaly. Further, there is widespread controversy in the characterization of the topological nature of these materials; different experiments report different topological phases, including topological insulating and Dirac semimetallic states [18; 19; 20; 26; 27; 28; 29; 30; 31; 23].
The sample dependence of the topological [32; 33; 34; 31; 32; 33; 31; 34] and transport [35; 36; 23] properties of ZrTe\({}_{5}\) and HfTe\({}_{5}\) are well documented and are suspected to depend, at least in part, on sample volume. Topological phase transitions from a strong topological insulator, to a Dirac semimetal, to a weak topological insulator are theoretically predicted to occur with increasing volume [32; 33; 34; 31; 32]. The Lifshitz transition (experimentally characterized by a change from p-type to n-type carriers and from metallic to semiconducting to metallic character with decreasing temperature as measured through Hall and longitudinal conductivity, respectively) that is widely featured in the literature on ZrTe\({}_{5}\) and HfTe\({}_{5}\) does not occur in all samples [35; 36; 23]. Puzzlingly, samples that are highly stoichiometric with a 1:5 ratio of Zr or Hf to Te generally do not exhibit strong signals of anomalous transport behavior; in fact, Te deficiency appears to be critical to the observation of the Lifshitz transition [35; 23]. Recent experimental and theoretical results using angle-resolved photoemission spectroscopy (ARPES) and first-principles calculations rationalize the observed sample dependence of topological phases by reporting that Te vacancies are a source of chemical pressure, or internal strain, altering the volume of ZrTe\({}_{5}\) and HfTe\({}_{5}\)[23].
However, outstanding discrepancies between low temperature spectroscopic and transport measurements of ZrTe\({}_{5}\) and HfTe\({}_{5}\) suggest that the role of Te vacancies is more complex. Below the Lifshitz transition temperature, Hall conductivity measurements indicate that the primary carriers in these materials are electrons, suggesting the conduction band must be populated [13; 20; 35]. While most reported ARPES measurements show that as temperature decreases the Fermi energy shifts downward into the valence band at the \(\Gamma\) point, suggesting the primary carriers are holes [12; 29; 30], one ARPES study observed that as temperature decreases the Fermi energy shifts upward into the conduction band in the neighborhood of the \(\Gamma\) point, suggesting the primary carriers are electrons, consistent with Hall conductivity measurements [33]. This study went on to measure the full Brillouin zone (BZ), identifying electron pockets far from the BZ center. Electronic structure calculations on pristine ZrTe\({}_{5}\) and HfTe\({}_{5}\) do not predict any portion of the conduction band dipping below the Fermi energy, an apparent contradiction. Moreover, ARPES measurements generally suggest the Dirac point is centered at the \(\Gamma\) point, while electronic structure calculations of the pristine materials generally place the Dirac point along the \(\Gamma\)-Y high-symmetry line (or \(\Gamma\)-Z depending on convention) [12; 29; 30; 33; 35].
The observation that the Lifshitz transition and NLMR occur only when Te vacancies are present strongly indicates that Te vacancies serve as more than just a source of effective strain in altering the electronic structure and transport properties of ZrTe\({}_{5}\) and HfTe\({}_{5}\). However, the subtle effect of Te vacancies on the electronic structure of these materials is poorly understood. In this work first-principles density functional theory (DFT) calculations of the electronic structure of ZrTe\({}_{5}\) and HfTe\({}_{5}\) in the presence of Te vacancies offer insight into the role that Te vacancies play in modulating the electronic structure of these materials. These calculations reveal that while Te vacancies do indeed serve as a source of chemical pressure, or effective strain, they also significantly modify the electronic structure of ZrTe\({}_{5}\) and HfTe\({}_{5}\); understanding these modifications is critical to elucidating whether the transport properties of these materials are a signature of the presence of a chiral anomaly.
## II Electronic Structure Calculations
\begin{table}
\begin{tabular}{||l|c|c|c|c||} \hline & Source & a (Å) & b (Å) & c (Å) \\ \hline \hline ZrTe\({}_{5}\) & ICSD & 3.987 & 14.530 & 13.724 \\ & DFT & 4.026 & 14.793 & 13.636 \\ \hline \hline HfTe\({}_{5}\) & ICSD & 3.968 & 14.455 & 13.691 \\ & DFT & 4.000 & 14.711 & 13.596 \\ \hline \end{tabular}
\end{table}
Table 1: Lattice constants of the experimental crystal structures of ZrTe\({}_{5}\) and HfTe\({}_{5}\) from the ICSD [37; 38] and the fully geometrically relaxed crystal structures calculated using DFT with Grimme-D3 dispersion corrections.
Figure 1: (a) The conventional crystal structure of HfTe\({}_{5}\) characterized by Hf ion (teal) polyhedron centers each coordinated to 8 Te ions (yellow). Layers of HfTe\({}_{5}\) polyhedra are formed by Hf-Te bonded chains along the **a** direction and Te-Te bonded chains along the **c** directions. Layers are stacked along the **b** direction held together by vdW dispersion forces. (b) The isostructural conventional crystal structure of ZrTe\({}_{5}\) with Zr ions in purple and Te ions in yellow. The three symmetrically distinct Te ion sites are indicated. (c) Orthorhombic Brillouin zone with high-symmetry points used in the band structure calculations shown in Figure 2-4 marked.
ZrTe\({}_{5}\) and HfTe\({}_{5}\) are layered materials with one-dimensional chains of HfTe\({}_{8}\) (or ZrTe\({}_{8}\)) polyhedra along the **a** axis connected by Te-Te bonds along the **c** axis. These **a**-**c** planes are bound by vdW dispersion forces along the **b** axis, as shown in Fig. 1(a). The lattice constants of ZrTe\({}_{5}\) and HfTe\({}_{5}\) are similar, as shown in Table 1. They have three symmetrically distinct Te sites, as shown in Fig. 1(b). Prior experiments suggest that Te vacancies form readily on sites 2 and 3 in ZrTe\({}_{5}\)[35]; however, the calculated formation energies of Te vacancies on each site indicate that a vacancy on site 1 is the most favorable, in agreement with recent work [23]. The DFT calculated formation energies for Te vacancies on each site are all relatively high, ranging from 77-98 meV/atom, significantly higher than the average thermal energy \(k_{b}T\) (\(\sim\) 26 meV) at room temperature, as shown in Table 2. Nonetheless, under high temperature or non-equilibrium synthesis conditions a finite population of Te vacancies would still be expected to form. Notably, the relative formation energy differences between different vacancy sites are on the order of 10-20 meV/atom, meaning that a mixture of vacancy types should be present even though site 1 is the most stable.
### Band Structures
The band structures of pristine ZrTe\({}_{5}\) and HfTe\({}_{5}\) are both characterized by a Dirac point at the Fermi energy that opens into a narrow band gap between the \(\Gamma\) and Y high-symmetry points due to spin-orbit coupling (SOC) as shown in Fig. 2(a,e). The PBE+SOC calculated indirect band gaps of pristine ZrTe\({}_{5}\) and HfTe\({}_{5}\) are 27 meV and 24 meV respectively. When Te vacancies are introduced, the electronic structure calculated with SOC looks very similar for both ZrTe\({}_{5}\) and HfTe\({}_{5}\). This is likely a reflection of the very close similarity in lattice parameters between the two isostructural materials.
The PBE+SOC band structure for Te vacancies at site 1 (Fig. 2(b,f)) is quite similar to the pristine cases except with (i) the direct band gap at the Dirac point shifted from being along the \(\Gamma\)-Y high-symmetry line to being exactly at the \(\Gamma\) point and (ii) a decrease of the conduction band (CB) energy below the Fermi level along the Z-U and R-T high-symmetry lines. In ZrTe\({}_{5}\), the Fermi energy still lies within the Dirac point gap while in HfTe\({}_{5}\) the Fermi energy cuts through the valence band (VB) near the Dirac point at \(\Gamma\). Te vacancies at site 2 (Fig. 2(c,g)) shift the Fermi energy below the VB top at the Dirac point and also shift the CB below the Fermi energy in several more regions of the Brillouin zone (BZ). Te vacancies at site 3 (Fig. 2(d,h)) shift the CB and VB in opposite directions both towards the Fermi energy.
In all cases, Te vacancies make the band structures of ZrTe\({}_{5}\) and HfTe\({}_{5}\) metallic, although least dramatically for Te vacancies at site 1, the most favorable Te vacancy. The volume reduction caused by Te vacancies at site 2 is the largest (\(>\) 1%), followed by site 3, while the volume reduction caused by Te vacancies at site 1 is much less dramatic (\(<\) 0.1%) (see Table 2 for details). This explains the trend in metallicity across the band structures of each type of Te vacancy, with more effective compressive strain resulting in a more metallic band structure.
Interestingly, at \(\Gamma\), the direct band gap is larger in the presence of Te vacancies than it is for the pristine crystal structures. This is consistent with recent band structure calculations of HfTe\({}_{5}\) under compressive strain [23].
The band gap at the Dirac point appears in our PBE+SOC band structures both when ZrTe\({}_{5}\) and HfTe\({}_{5}\) are pristine and when they host Te vacancies. As noted above, though, while the band gap for the pristine case is along the \(\Gamma\)-Y high-symmetry line, the band gap shifts to being exactly at the \(\Gamma\) point when Te vacancies are present. Recent first-principles calculations on HfTe\({}_{5}\) under compressive strain do not report this qualitative change, indicating this is not merely a volume effect [23]. Experimental ARPES studies consistently find the Dirac point centered at \(\Gamma\)[29; 30; 33], contrary to DFT calculations of the pristine crystal structures. The suspected omnipresence of Te vacancies of variable concentration may offer an explanation for the experimental observation that the band gap is located at the \(\Gamma\) point.
\begin{table}
\begin{tabular}{||c|c|c|c|c|c|c||} \hline & V\({}_{\text{Te}}\) Site & E\({}_{f}\) (meV/atom) & \(\Delta\)V (\%) & \(\Delta\)a (\%) & \(\Delta\)b (\%) & \(\Delta\)c (\%) \\ \hline \hline & 1 & 83.1 & -0.14 & -0.10 & 0.00 & -0.03 \\ ZrTe\({}_{5}\) & 2 & 88.2 & -1.42 & -1.01 & -0.07 & -0.35 \\ & 3 & 97.6 & -1.02 & -0.19 & 0.00 & -0.83 \\ \hline \hline & 1 & 77.0 & -0.08 & -0.12 & +0.06 & -0.02 \\ HfTe\({}_{5}\) & 2 & 83.5 & -1.42 & -1.09 & -0.03 & -0.31 \\ & 3 & 94.8 & -1.10 & -0.27 & +0.11 & -0.93 \\ \hline \end{tabular}
\end{table}
Table 2: Defect formation energies and changes in the volume and lattice constants for ZrTe\({}_{5}\) and HfTe\({}_{5}\) with Te vacancies at each symmetrically distinct site. Changes in the volume and lattice constants are for the fully relaxed defect structure and reported relative to the volume and lattice constants of the fully relaxed pristine crystal structure.
### Site Projected Densities of States
To better understand the source of the energy shifts and reordering of the bands near the Fermi energy, the site-projected density of states is calculated for ions in regions near the Te vacancy site and ions in regions far from the Te vacancy site. Fig. 3(a-d) shows the band structures of pristine ZrTe\({}_{5}\) and ZrTe\({}_{5}\) with Te vacancies plotted with orbital contributions from each element indicated by purple and yellow dots for Zr and Te respectively. The band gap at the Dirac point, circled in orange, is consistently characterized by a mix of contributions from Zr and Te orbitals. The conduction bands that shift below the Fermi energy are predominantly of Te character. One notable exception is a flat band of Zr character that lies right at the Fermi energy along the U-R high-symmetry line for Te vacancies at site 3, which would be expected to have important implications for the transport properties of this case. Nearly identical observations can be made for the band structures of HfTe\({}_{5}\) as shown in Fig. 4(a-d).

Figure 2: The DFT band structures near the Fermi energy of ZrTe\({}_{5}\)(a-d) and HfTe\({}_{5}\)(e-h) in the fully relaxed geometries for the pristine crystal structure (a,e) and the cases of a neutral Te vacancy in Te site 1 (b,f), Te site 2 (c,g), and Te site 3 (d,h). The band structures calculated both with (solid lines) and without (dashed lines) spin-orbit coupling (SOC) are plotted to illustrate how SOC affects the band structure around the Dirac point near the Fermi energy.

Figure 3: The DFT+SOC band structures and densities of states for ZrTe\({}_{5}\) in the fully relaxed geometries for the pristine crystal structure (a,e) and a neutral Te vacancy in Te site 1 (b,f), Te site 2 (c,g), and Te site 3 (d,h). Species projected orbital contributions are plotted for Zr (purple) and Te (yellow). The Dirac point is circled in each band structure to illustrate shifts in its location relative to the Fermi energy and changes in gap size and dispersion. The density of states for Te vacancies on each symmetrically distinct site are plotted for the Zr polyhedron where the Te vacancy is located (Near V\({}_{Te}\), solid red line) and the Zr polyhedron farthest from the Te vacancy (Far from V\({}_{Te}\), black dashed line).
The local site projected density of states for the Zr and Te ions in the Zr polyhedron furthest away from the Te vacancy (Far from V\({}_{\rm Te}\)) strongly resembles the density of states of pristine ZrTe\({}_{5}\) (as shown in Fig. 3(e-h)), with the exception of a notable shift in the position of the band gap for Te vacancies on site 2. Conversely, the local site projected density of states for the ions in the Zr polyhedron where the Te vacancy is located (Near V\({}_{\rm Te}\)) indicates that orbitals near the Te vacancy are the primary source of in-gap states. A similar conclusion is drawn from the site projected densities of states both near and far from the Te vacancies for HfTe\({}_{5}\) (Fig. 4(e-h)).

Figure 4: Same as Fig. 3 for HfTe\({}_{5}\). The DFT+SOC band structures and densities of states for HfTe\({}_{5}\) in the fully relaxed geometries for the pristine crystal structure (a,e) and a neutral Te vacancy in Te site 1 (b,f), Te site 2 (c,g), and Te site 3 (d,h). Species projected orbital contributions are plotted for Hf (teal) and Te (yellow). The Dirac point is circled in each band structure to illustrate shifts in its location relative to the Fermi energy and changes in gap size and dispersion. The density of states for Te vacancies on each symmetrically distinct site are plotted for the Hf polyhedron where the Te vacancy is located (Near V\({}_{Te}\), solid red line) and the Hf polyhedron farthest from the Te vacancy (Far From V\({}_{Te}\), black dashed line).
## III Discussion
The electronic structure calculations are of critical importance in rationalizing the experimentally observed low temperature transport and spectroscopic properties of ZrTe\({}_{5}\) and HfTe\({}_{5}\), as these DFT calculations correspond to zero temperature. Near the \(\Gamma\) point in the electronic structure, the Fermi energy either remains in the band gap at the Dirac point or cuts through the valence band (as in the case of Te vacancies at site 2). This is consistent with most of the experimental ARPES observations near the \(\Gamma\) point [12; 29; 30]. However, when considering the entire Brillouin zone, it becomes clear that there is significant population of the conduction bands throughout regions of the BZ that are further away from \(\Gamma\) when Te vacancies are present. This is consistent with more comprehensive ARPES measurements that observed electron pockets along the Y-S high-symmetry line [33]. Combined, these calculations offer an explanation for the seemingly conflicting results of ARPES measurements and Hall conductivity measurements.
These results support and help rationalize the observation that Te vacancies contribute to the anomalous transport properties of ZrTe\({}_{5}\) and HfTe\({}_{5}\). While Te vacancies do serve as a source of chemical pressure, or effective strain, by reducing the volume, this does not account for the notable differences in the local site projected density of states in proximity to Te vacancy sites. The local density of states of regions far from the Te vacancies capture the effects of volume modulation, including shifts of the Fermi energy relative to the Dirac point and changes in the size of the band gap at the Dirac point. The local density of states of regions near the Te vacancies capture the effects of the Te vacancies themselves, most notably the introduction of additional states at the Fermi level that fundamentally alter the band structure and drive a transition to a metallic state.
## IV Conclusions
In this work, first-principles DFT electronic structure calculations of the layered Dirac materials ZrTe\({}_{5}\) and HfTe\({}_{5}\) with Te vacancies offer insight into the sample dependence of their experimentally observed spectroscopic and anomalous transport properties. Consistent with most low temperature ARPES measurements, Te vacancies are shown to promote either semiconducting or hole-doped behavior at the Dirac point, which is shifted to the \(\Gamma\) point. They further promote occupation of the conduction band by shifting segments of it below the Fermi energy, which rationalizes the n-type conductivity at low temperatures consistently reported by Hall measurements. Different combinations of Te vacancies at different concentrations are expected to produce a spectrum of electronic structure reorganizations, accounting for sample-dependent reports of gapped and metallic behavior. Te vacancies affect the electronic structure and corresponding transport properties of ZrTe\({}_{5}\) and HfTe\({}_{5}\) both as a general source of effective strain and, most importantly, by introducing additional states near the Fermi energy localized in the vicinity of the Te vacancy sites. The qualitative character of the Dirac point in the DFT+SOC calculations is relatively robust to the introduction of Te vacancies, changing only by shifting towards the \(\Gamma\) point and exhibiting a larger band gap. This suggests that the proposed mechanism of a chiral anomaly in ZrTe\({}_{5}\) and HfTe\({}_{5}\), deriving from the topological nature of Dirac materials, may still hold, though not via the simple picture of chiral fermions originating strictly at the Dirac point of the pristine structures. As the anomalous transport properties that suggest a chiral anomaly are tied to the presence of Te vacancies, further analysis of the topological character of the shifted Dirac point and the newly occupied conduction bands will reveal their significance in potentially hosting a quantum anomaly in experimentally accessible condensed matter systems.
## Calculation details
First-principles calculations are performed using density functional theory (DFT) with a plane-wave basis and projector augmented wave (PAW) pseudopotentials [39] as implemented in the Vienna _ab initio_ simulation package (VASP) [40; 41]. Calculations are performed in the generalized gradient approximation (GGA) as implemented by Perdew, Burke, and Ernzerhof (PBE) [42] with additional vdW dispersion forces approximately accounted for via the Grimme-D3 method [43]. The crystal structures of bulk ZrTe\({}_{5}\) and HfTe\({}_{5}\) are relaxed using a 600 eV energy cutoff and 20x8x8 \(\Gamma\)-centered k-mesh until forces are converged to \(<\) 1 meV/A. This method results in good agreement of the a and c lattice constants (Table 1). The out-of-plane b lattice constants are only slightly overestimated by 1.8% and 1.7% for ZrTe\({}_{5}\) and HfTe\({}_{5}\) respectively. The band structure is calculated both with and without spin-orbit coupling (SOC) using a 500 eV energy cutoff. The density of states is calculated with SOC.
Te vacancy calculations are performed on 2x1x1 supercells of bulk ZrTe\({}_{5}\) and HfTe\({}_{5}\) with a single Te ion removed, a 2.5% Te vacancy concentration. Vacancies are introduced at each of the three symmetrically distinct Te sites. Geometry optimization is performed both for fixed lattice parameters with only internal coordinates allowed to relax as well as full crystal structure relaxation. For the supercell calculations, an energy cut-off of 500 eV and a 10x8x8 \(\Gamma\)-centered k-mesh are used. The band structure is calculated for the fully relaxed crystal structures, to capture volume effects, both with and without spin-orbit coupling (SOC) and the density of states is calculated with SOC.
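As an illustration of the defect-supercell construction, the following minimal sketch uses the pymatgen package; the input file name and the particular Te site emptied are hypothetical, not the actual inputs used in this work.

```python
from pymatgen.core import Structure

# A minimal sketch of the defect-supercell construction described above,
# assuming a POSCAR of the relaxed conventional cell is available locally.
cell = Structure.from_file("ZrTe5_POSCAR")      # hypothetical input file
cell.make_supercell([2, 1, 1])                  # 2x1x1 supercell of the bulk cell

# Remove a single Te ion: one of 40 Te atoms, i.e. a 2.5% vacancy concentration.
te_sites = [i for i, site in enumerate(cell) if site.specie.symbol == "Te"]
cell.remove_sites([te_sites[0]])                # pick one symmetrically distinct site

cell.to(filename="POSCAR_VTe", fmt="poscar")    # starting point for the VASP relaxation
```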
The defect formation energy is calculated using
\[E_{f}=E_{tot}[\mathrm{V_{Te}}]-E_{tot}[\mathrm{pristine}]-\sum_{i}n_{i}\mu_{i} \tag{1}\]
where \(E_{tot}[\mathrm{V_{Te}}]\) and \(E_{tot}[\mathrm{pristine}]\) are the DFT calculated total energies of the crystal structure with a Te vacancy and the pristine crystal structure. \(n_{i}\) is the number of atoms of species \(i\) added or removed (in this case one Te atom) and \(\mu_{i}\) is the chemical potential of that species, here taken from a DFT calculation of bulk Te in the trigonal P3\({}_{2}\)21 space group as tabulated in the ICSD [44]. The standard additional terms to correct for charge effects are neglected because only neutral Te vacancies are considered [45].
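A minimal worked example of Eq. (1) for a neutral Te vacancy is given below; all energy values are illustrative placeholders rather than results of the present calculations.

```python
# Worked example of Eq. (1). The total energies and chemical potential are
# hypothetical placeholders; n_Te = -1 because one Te atom is removed.
E_tot_vac = -405.20       # E_tot[V_Te], eV (hypothetical)
E_tot_pristine = -410.05  # E_tot[pristine], eV (hypothetical)
mu_Te = -3.14             # DFT energy per atom of trigonal bulk Te (hypothetical)
n_Te = -1                 # one Te atom removed from the supercell
E_f = E_tot_vac - E_tot_pristine - n_Te * mu_Te
print(f"E_f = {E_f:.2f} eV per defect")
```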
## Acknowledgements
This work was supported by the U.S. DOE NNSA under Contract No. 89233218CNA000001. It was supported by the LANL LDRD Program, and in part by the Center for Integrated Nanotechnologies, an Office of Science User Facility operated by the U.S. Department of Energy (DOE) Office of Science, in partnership with the LANL Institutional Computing Program for computational resources. Additional computations were performed at the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231 using NERSC award ERCAP0020494.
|
2310.20492 | Log-based Anomaly Detection of Enterprise Software: An Empirical Study | Most enterprise applications use logging as a mechanism to diagnose
anomalies, which could help with reducing system downtime. Anomaly detection
using software execution logs has been explored in several prior studies, using
both classical and deep neural network-based machine learning models. In recent
years, the research has largely focused in using variations of sequence-based
deep neural networks (e.g., Long-Short Term Memory and Transformer-based
models) for log-based anomaly detection on open-source data. However, they have
not been applied in industrial datasets, as often. In addition, the studied
open-source datasets are typically very large in size with logging statements
that do not change much over time, which may not be the case with a dataset
from an industrial service that is relatively new. In this paper, we evaluate
several state-of-the-art anomaly detection models on an industrial dataset from
our research partner, which is much smaller and loosely structured than most
large scale open-source benchmark datasets. Results show that while all models
are capable of detecting anomalies, certain models are better suited for
less-structured datasets. We also see that model effectiveness changes when a
common data leak associated with a random train-test split in some prior work
is removed. A qualitative study of the defects' characteristics identified by
the developers on the industrial dataset further shows strengths and weaknesses
of the models in detecting different types of anomalies. Finally, we explore
the effect of limited training data by gradually increasing the training set
size, to evaluate if the model effectiveness does depend on the training set
size. | Nadun Wijesinghe, Hadi Hemmati | 2023-10-31T14:32:08Z | http://arxiv.org/abs/2310.20492v1 | # Log-based Anomaly Detection of Enterprise Software: An Empirical Study
###### Abstract
Most enterprise applications use logging as a mechanism to diagnose anomalies, which could help with reducing system downtime. Anomaly detection using software execution logs has been explored in several prior studies, using both classical and deep neural network-based machine learning models. In recent years, the research has largely focused in using variations of sequence-based deep neural networks (e.g., Long-Short Term Memory and Transformer-based models) for log-based anomaly detection on open-source data. However, they have not been applied in industrial datasets, as often. In addition, the studied open-source datasets are typically very large in size with logging statements that do not change much over time, which may not be the case with a dataset from an industrial service that is relatively new. In this paper, we evaluate several state-of-the-art anomaly detection models on an industrial dataset from our research partner, which is much smaller and loosely structured than most large scale open-source benchmark datasets. Results show that while all models are capable of detecting anomalies, certain models are better suited for less-structured datasets. We also see that model effectiveness changes when a common data leak associated with a random train-test split in some prior work is removed. A qualitative study of the defects' characteristics identified by the developers on the industrial dataset further shows strengths and weaknesses of the models in detecting different types of anomalies. Finally, we explore the effect of limited training data by gradually increasing the training set size, to evaluate if the model effectiveness does depend on the training set size.
Anomaly detection; Deep learning; Log mining; Software engineering;
## 1 Introduction
Software in the present day consists of many interconnected systems that rely on each other to handle requests. Recording run-time logs is a common practice in such systems [1], and the logs are frequently the main source for debugging [2]. Logs are routinely written by an application to a central location, often at various logging priority levels (e.g., error, warning, info etc.). As logging is done in real-time, logs provide an insight into the current system health and how it has failed. This points to the natural next step: by monitoring the system logs in real-time, we can quickly and accurately detect when system failures occur, reducing system downtime.
Large-scale industrial applications often have a large amount of logs being printed every second [3], making manual inspection of those logs in real-time a difficult task. This is further compounded by different types of errors, and errors that require inspecting the entire event sequence, as opposed to a single log line. Servers also handle multiple requests in parallel, resulting in data from different requests being logged at the same time. All those issues, combined with the sheer volume of logs printed by large-scale complex applications, have rendered real-time manual inspection unfeasible. Therefore, an automated approach is required to shift through all the data and correctly detect anomalies.
A common automated method used in the industry to detect anomalies using logs is to utilize deterministic rules. By creating a set of regular expressions or search terms, it is possible to search through the log sequences to find potentially anomalous events. The search terms could include words such as "error" or "failure". While this does help with detecting the more obvious anomalies, it is not accurate: those search terms could easily appear in a non-anomalous sequence, or they could indicate an error that is self-corrected and does not result in a more severe anomaly. In addition, some anomalies could be the result of an incorrect sequence, which may appear benevolent on the surface. For example, if a file is opened but not closed, the log sequence may not show any errors: it would simply end before the closing of the file. Such anomalies cannot be detected by the use of a deterministic algorithm. In order to evaluate whether a sequence is anomalous or not, the entire sequence needs to be considered, not just the current log event. To evaluate its effects, this type of deterministic algorithm was implemented on our industrial partner's logs, and the results were less than satisfactory: the F1-score, one of the metrics defined in section 3-D, was 40% below that of the lowest performing state-of-the-art anomaly detection model.
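For reference, the deterministic baseline amounts to a keyword matcher of the following form; the exact rule set applied to the industrial logs is not reproduced here.

```python
import re

# A sketch of the deterministic keyword baseline described above.
ALARM = re.compile(r"\b(error|fail(ed|ure)?|exception|fatal)\b", re.IGNORECASE)

def is_anomalous(sequence):
    """Flag a log sequence if any line matches an alarm keyword."""
    return any(ALARM.search(line) for line in sequence)

print(is_anomalous(["Opening file x", "ERROR: timeout while reading x"]))  # True
```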
Therefore, in this paper, our goal is to study the effectiveness of state of the art log anomaly detection techniques. Our data comes from our industrial partner's microservice hosted on AWS Lambda, whose logs are collected from AWS Cloud-Watch.
There have been several studies done on evaluating automatic detection of anomalies using execution logs in the past, but most these studies are on large open-source data. We identify some differences between the characteristics of such studies and our interest in this paper (industrial system with limited logs), as follows:
1. **Data size**: Open-source log collections have large amounts of data, usually over 10 million lines of logs [4]. A recently developed microservice in a small company, for example, may only have logs in the range of thousands, often produced by the underlying platform during testing. If this microservice is deemed sensitive to the business, the anomaly detection model will need to be trained (or fine-tuned) and evaluated on the limited dataset, before being deployed to production. Therefore, the evaluation mechanism of the models stated above using large open-source datasets, and the corresponding findings, may not hold in many industrial settings.
2. **Data uniformity**: As most open-source log collections are retrieved from well-established industrial applications, they are often uniform in nature. This means the logs do not change much over time due to modifications of source code logging statements. In a relatively new industrial application, the logging statements would be updated frequently to add missing data, and would undergo additions/removals as well. In addition, the logs may lack structure, and have more of a natural language free format of logging information. These aspects are less studied in most of the automated log anomaly detection methods.
To fill this gap (evaluation of log anomaly detection for small industrial datasets), in this paper, we have selected several state of the art models and trained them on our datasets.
Replication package: The source code for analyzing data is publicly available at [https://zenodo.org/record/7553290](https://zenodo.org/record/7553290). This includes the open-source dataset as well. Please note that the industrial dataset has not been included due to confidentiality reasons.
## 2 Background
Log anomaly detection is generally performed in 3 steps: Pre-processing, Model training and Prediction. As logs are written by developers, they are often in free-form natural language format, and the pre-processing step includes converting them into a more structured form, which can then be fed into a model for training. Common pre-processing steps include:
1. **Log cleaning**: A single log line often comprises of several elements, such as timestamp, log level and log content. During this step, the log content is extracted and added as a property of an object. This object may also contain the log level and timestamp, based on the model requirements.
2. **Template mining**: A log comprises two distinct parts: a constant and a variable. The constant part, often also called _log template or log key_, has the overall structure of the log statement, as well as words that do not change for each statement. The variable part includes the parameters of the log, which may change for each log statement of its kind. For example, a log message such as _"Received block 3587508140051953248 of size 67108864"_ can be split into the log key _"Received block * of size *"_ and the parameter values _[3587508140051953248, 67108864]_. Popular log template miners include Spell [5], which uses a longest common sub-sequence based approach, and Drain [6], which uses a parse tree with a fixed depth during the log group search (a minimal parsing sketch follows this list).
3. **Sequence generation**: Since multiple requests can be handled by a system at a given moment, log messages corresponding to a specific request would be scattered among other log statements. During this step, logs corresponding to each request/event is grouped together, by using a field that denotes the request/event that log belongs to. The field name differs between systems: Hadoop Distributed File System (HDFS) for instance uses _block ID_, while AWS CloudWatch uses _request ID_.
4. **Vectorization (optional)**: While log sequences can be directly used to train a model, several papers go a step further to vectorize the log keys. During this step, each word of the log key is converted to a vector (by using a method such as Word2Vec) and then aggregated over the entire log. Additional steps would include semantic information integration, such as replacement of synonyms by using a lexical database and domain knowledge. The final result is either a one-dimensional or 2-dimensional array for each log key.
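As a concrete example of the template mining step, the following minimal sketch uses the drain3 package with its default configuration; the exact result fields may vary between package versions.

```python
from drain3 import TemplateMiner

# Minimal Drain-based template mining sketch (default drain3 configuration).
miner = TemplateMiner()
logs = [
    "Received block 3587508140051953248 of size 67108864",
    "Received block 9212264480425680329 of size 67108864",
]
for line in logs:
    result = miner.add_log_message(line)
    # both lines should map to the same cluster, with the numeric parts masked
    print(result["cluster_id"], result["template_mined"])
```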
The type of model varies between papers, but can be divided into 2 broad categories: classical techniques and deep learning techniques.
### Classical Techniques
Classical techniques for anomaly detection have been in use for over a decade. Statistical methods, such as statistical workflow execution [7], state machine-based modeling [8] and Hidden Markov Models [9] have been used in several papers. Frequent pattern mining, also called invariant mining, has also been used with promising results [10][11][12].
Classical machine learning based approaches have also been used in several papers, with varying degrees of success; these cast anomaly detection as a standard machine learning problem. One possible method is to treat it as a clustering problem, where normal log events should fall in one cluster and anomalous events should lie far away from the normal cluster (anomalies may themselves belong to either a single cluster or multiple clusters). LogCluster used a similar mechanism, and used the centroid of each cluster to depict the log sequence [13]. This log sequence could then be used to identify the underlying root cause. Log3C used a method called Cascading Clustering to group log sequences, and a linear regression model to find issues related to deterioration of system KPIs [14]. Another approach is to use classification methods to categorize input log sequences into normal or anomalous sequences, using supervised learning. Logistic Regression was used in one paper on several open-source datasets, using event count vectors [15]. Liang [16] used a Support Vector Machine (SVM) based model and a K-Nearest Neighbour (KNN) model on vectorized logs, using several features such as event counts. Another paper proposed an SVM based model that used empirical properties of logs, such as frequency and periodicity, to detect anomalies [17].
A different approach was taken in another experiment, where anomaly detection was modeled as a dimensionality reduction problem [18]. This was done by transforming the data to rely on limited dimensions (as opposed to the high dimensionality of the original dataset). Anomaly detection was then performed by identifying logs at a distance higher than a specified threshold. An important aspect to note here is that parameter value vectors were used in addition to log event counts, as parameter value vectors are often unused in machine learning models. Graphical methods have also been used in this area of research, with promising results. A control flow graph (CFG) mining method was used to detect sequence and distribution anomalies on synthetic traces and log datasets [19]. Another paper employed statistical inference methods to infer dependencies among log events, by using a 3 step algorithm [20]. NLP based log parsing methods were evaluated in a different paper, by using different n-gram models and hashing [21]. Modelling was done via Bisecting K-Means and Latent Dirichlet Allocation (LDA), followed by Random Forest for classification.
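As an illustration of the dimensionality-reduction approach of [18], the following sketch applies PCA to synthetic event-count vectors and flags sequences whose residual energy outside the principal subspace exceeds a threshold; the data and threshold choice are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

# Anomaly detection via dimensionality reduction on event-count vectors:
# sequences with a large squared prediction error (SPE) outside the
# principal subspace are flagged as anomalous.
X = np.random.default_rng(2).poisson(5.0, size=(200, 20)).astype(float)  # event counts
pca = PCA(n_components=5).fit(X)
residual = X - pca.inverse_transform(pca.transform(X))
spe = (residual ** 2).sum(axis=1)            # squared prediction error per sequence
threshold = np.percentile(spe, 99)           # e.g., a high quantile of training SPE
anomalies = np.where(spe > threshold)[0]
print(len(anomalies), "sequences flagged")
```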
While classical methods have shown some performance, they have recently been out-shined by models based on deep neural networks.
### Deep Learning Techniques
In deep learning techniques, the log templates/vectors of the training set are fed into a neural network, and validated with a separate set of data. Most models use Long Short-Term Memory (LSTM), which is a type of Recurrent Neural Network (RNN), with promising results.
#### 2.2.1 LSTM Based Techniques
LogRobust is one such model [3]. LogRobust used Drain [6] to mine for log templates, followed by a textual preprocessing stage to assist with word vectorization. The words were then vectorized using FastText [22], then aggregated using TF-IDF of each word, so that a single vector represented a single log event. The goal behind semantic aggregation was to reduce noise in the data, which can occur from incorrect log template parsing and continuously evolving logs. The TF-IDF aggregation ensured that the effect of evolving log statements and incorrect log parsing was reduced, resulting in a more uniform vector for each variation of log event. An attention-based bidirectional LSTM was then trained using the semantic vectors, with injected instabilities in the logs. The model results were compared to classical methods, and showed promising results. Vinayakumar [23] proposed a similar model, which used a stacked-LSTM (created by adding recurrent LSTM layers on top of existing LSTM layers) and was trained with both normal and anomalous logs. Experiments were then performed to optimize the hyper-parameters of the model using the CDMC2016 dataset. Wang [24] compared performances of different natural language processing feature extraction methods, namely Word2Vec and TF-IDF. The extracted features were fed into an LSTM to perform anomaly detection, and compared against Gradient Boosting Decision Tree (GBDT) and Naive Bayes methods. Zhao [25] proposed a tokenization method based on ASCII values of log characters. Each log event was considered a sentence, and converted to a string of ASCII values normalized to start with 0. This was then used to train an LSTM model. Another paper explored the temporal-spatial information in microservices, in the form of logs (temporal) and query traces (spatial) [26]. By combining the data and training an LSTM model, they were able to segregate anomalies from normal data and detect failures.
Several other models in this category are evaluated in this paper (DeepLog [27], LogAnomaly [28] and LogBERT [29]), and are explained in detail in the next section. They were selected due to their high metrics, recentness, and readily-available source code.
#### 2.2.2 Other Techniques
While RNNs have been the most widely-used type of neural network, there has been some research on other types of neural networks as well. Liu [30] proposed a model based on Gated Recurrent Unit (GRU) networks, another type of RNN, combined with a Support Vector Data Description (SVDD) model. PCA was initially applied on the dataset to reduce dimensionality, followed by the GRU-SVDD model. This was evaluated on the classic KDD Cup99 datasets. Lu [31] used a Convolutional Neural Network (CNN) for anomaly detection. After parsing log events into log templates, a custom trainable matrix called Logkey2Vec was used to map each log key into a vector. The CNN was then trained using the vectorized logs, from the HDFS dataset. Another model, LogGAN, was proposed as an LSTM-based Generative Adversarial Network (GAN) [32]. By using a generator and a discriminator, LogGAN was able to analyze the distribution of the training set and create artificial data points. This was then used to mitigate data imbalance between normal and anomalous datasets.
Failure prediction and diagnosis are two other avenues of log-based reliability engineering, but are out of scope for this paper, and therefore have not been explored in detail.
Among these papers, only a limited set have applied models to industrial datasets. To our knowledge, this is the first time these state-of-the-art models (DeepLog, LogAnomaly and LogBERT) have been evaluated on an industrial dataset with limitations on data size and uniformity.
## 3 Methodology
### Objectives and RQs
The main objective of this study is to evaluate the effectiveness of current state-of-the-art anomaly detection models on an industrial dataset, to determine which would be the best candidate to be potentially deployed to production. To address the this objective, we explore the following research questions in this paper:
**RQ1:** How effective are the current log anomaly detection models in detecting failures, in an industrial dataset?
The goal of this research question is to explore the effectiveness of each model applied to an industrial dataset. We compare and contrast our findings with reported results on a well-known open-source dataset in this domain (the HDFS dataset).
**RQ2:** Does the type of train-test splitting affect the reported effectiveness of the models?
There has been a recent observation in the literature of a data leak during the random sampling process [33] in many similar studies. The issue is that using a random split for generating train/test sets means future logs can be used to predict past logs, resulting in incorrectly reported metrics. In this research question we explore the effect of using a time-based split instead of a random split on both the industrial dataset and the HDFS dataset.
**RQ3:** How successful are the models in detecting different types of failures?
In RQ3, we conduct a qualitative analysis of different types of anomalies in the industrial dataset. The goal is to explore whether the failures can be categorized into types (identified by our industrial partner), and to see if certain models are better at predicting certain types of failures.
**RQ4:** How does the size of training set affect the model effectiveness?
One of the driving factors of this research is the impact of limited data on training anomaly detection models. In this research question, we aim to determine whether model effectiveness can be improved by increasing the training set size.
### Datasets
The models are evaluated on two datasets:
1. **Industrial dataset**: A collection of logs from an industrial application. This application is a microservice hosted on AWS Lambda, which deals with incoming requests from a user application. The logs themselves have been collected from AWS CloudWatch. Both CloudWatch and Lambda are part of the Amazon Web Services cloud platform [34]. As part of the larger AWS features, CloudWatch has built-in support for log correlation, visualization and monitoring. A set of sample log templates can be found in figure 1 (raw logs have not been shown here due to confidentiality reasons).
2. **Open-source dataset**: the HDFS dataset, from the Logpai repository [4]. The HDFS dataset has been widely used to benchmark anomaly detection models, and comes with over 10 million lines of logs that have been labeled. A selected set of log templates from this dataset can be found in figure 2.
The industrial microservice is a small-to-medium scale user-facing application, and is assumed to be dealing with approximately a hundred user requests per day in production. The endpoint itself has been in use for over 2 years, and has recently undergone changes to allow for containerization. Due to this, it was decided to retrieve the more recent logs, in order to ensure stale data are not used to train the models.
Anomalies on this application are not monitored. Therefore, the time to detect an anomaly could range between several minutes or hours, depending on the severity of the anomaly (which could raise alarms in a related but different application) and based on how often a developer may manually check the logs. In the event of an anomaly, the system could be unresponsive for some time, or it could be returning failure responses to the systems that call it. Those anomalies may require restarting the service, or deploying a code change to fix the underlying issue. Therefore, an anomaly detection model could have a significant impact on reducing the application downtime.
Table 1 outlines the properties of the two datasets. As can be seen from the table, the industrial dataset is much smaller than the HDFS dataset, which is in line with the characteristic mentioned earlier regarding dataset size. The template count, on the other hand, is much higher in the industrial dataset, despite its small size. This shows that it is loosely structured compared to the HDFS dataset, with the logs appearing in a more natural language free-form format. The time duration is also longer for the industrial dataset, which shows that, as a new microservice, a longer time duration is required to gather logs (since it does not receive requests as often).
### Models
For all models, the logs undergo a preprocessing stage, where data such as timestamps and large objects are removed from the logs. Afterwards, Drain3 [6] is used to mine for log templates (as explained in Section 2), and log sequences are generated for each sequence ID. Then the logs are split into 3 parts: training set, normal test set and anomaly test set. A sliding window method is then used to generate history sequences. After training the model with the given training set, the two test sets are used to evaluate model performance. It is important to note that the training set consists of only **normal logs**. This is because anomalous logs are rare to find in the real world, and there is a high data skew towards normal data points in any industrial dataset. This is followed in all the models evaluated in this study.

\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline
**Dataset** & **Number of logs** & **Template count** & **Time duration** \\ \hline Industrial microservice & 170,566 & 142 & 2 months \\ HDFS & 11,175,629 & 53 & 39 hours \\ \hline \end{tabular}
\end{table}
Table 1: Dataset properties

Figure 1: Sample logs from the industrial microservice

Figure 2: Sample logs from the open-source HDFS dataset
A diagram of the experiment workflow can be found in figure 3.
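The sliding window step mentioned above can be summarized as follows; the window size and log key IDs are illustrative.

```python
# Minimal sketch of sliding-window history generation: each window of h
# consecutive log keys is paired with the key that follows it.
def sliding_windows(key_sequence, h=10):
    pairs = []
    for i in range(len(key_sequence) - h):
        pairs.append((key_sequence[i:i + h], key_sequence[i + h]))
    return pairs

seq = [5, 5, 7, 11, 9, 11, 9, 11, 26, 23, 23, 11]   # illustrative key IDs
for history, target in sliding_windows(seq, h=4)[:2]:
    print(history, "->", target)
```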
In this paper, we compare the following state-of-the-art models published in past papers:
* DeepLog [27]
* LogAnomaly [28]
* LogBERT [29]
* Baseline LSTM Model
These models are selected because (1) they have showed great results in their original paper, (2) they are relatively recent (all have been published in the last 6 years), and (3) their source-code is readily available and is replicable. In the following sub-section, we explain them in details.
#### 3.3.1 DeepLog
DeepLog [27] is an LSTM model for anomaly detection, but unlike most models, DeepLog considers both the log template as well as the parameter values of the log event as inputs. DeepLog consists of two main models: a log key anomaly detection model and a parameter value anomaly detection model. The log key sequence is used to train the log key anomaly detection model, which uses an LSTM. This is treated as a multi-class classification problem, where each distinct log key is assigned a class. By using the history of recent log keys (named _window size_), the model outputs the probability distribution over all log keys for the next log. If the actual log is included in the top \(k\) candidates, it is treated as non-anomalous. If it is not included, then it is classified as an anomaly. The parameter value anomaly detection model uses a separate LSTM, and is trained with parameter vectors, which includes variable parts of the log as well as the time difference between the current and previous log event. The model outputs a predicted parameter value vector, which is then compared with the one from the actual log event. Error is calculated as the difference between predicted and actual, and compared against previously generated Gaussian error distribution. If it is within a pre-specified confidence interval, it is classified as normal, and as an anomaly otherwise.
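DeepLog's top-\(k\) decision rule can be sketched as follows; the probability vector below is a placeholder standing in for the output of the trained LSTM.

```python
import numpy as np

# Sketch of DeepLog's top-k rule: the next log event is normal if the
# observed key is among the k most probable keys predicted by the model.
def is_anomaly(probs, observed_key, k=9):
    top_k = np.argsort(probs)[::-1][:k]
    return observed_key not in top_k

probs = np.array([0.02, 0.40, 0.25, 0.13, 0.10, 0.05, 0.03, 0.01, 0.005, 0.005])
print(is_anomaly(probs, observed_key=1, k=3))  # False: key 1 is the most probable
```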
In addition to anomaly detection, DeepLog also includes a workflow creation aspect. By using the anomaly detection model and a density clustering approach, DeepLog can recreate the flow that resulted in a failure, allowing developers to investigate root causes of failures more effectively. While this is not part of anomaly detection itself, it provides a significant enhancement for failure diagnosis.
#### 3.3.2 LogAnomaly
LogAnomaly [28] is another model proposed within the last few years for anomaly detection. It considers anomalies of 2 types: sequential anomalies and quantitative anomalies. Sequential anomalies occur when a log sequence deviates from normal patterns, which are learned during the training phase as training data only consists of normal logs. This is a common method for anomaly detection, which is also used by several other models described earlier. Where LogAnomaly differs, however, is when it comes to the second anomaly type: Quantitative anomalies occur when linear relationships are broken between log sequences. For example, a normal sequence may have one log statement for opening a file and another for closing it. This is a 1:1 relationship between two logs, which means the number of file opening logs should equal the number of file closing ones. In a test sequence, if a file opens but does not close, this would constitute an anomaly, and would be detected as the linear relationship between the two logs has now been violated.
LogAnomaly starts off by using Frequent Template Tree (FT-Tree) [35] to parse logs and extract templates. The log templates are then encoded by using a mechanism named _Template2Vec_. In _Template2Vec_, first a set of synonyms and antonyms is created using the lexical database WordNet and domain knowledge. Then the distributed lexical-contrast embedding (dLCE) model is used to create word vectors. Finally, template vectors (for each log template) are calculated by taking the weighted average of each word vector in the log template.
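The last step of _Template2Vec_ reduces to a weighted average; a minimal sketch follows (the word vectors stand in for the dLCE embeddings, and the per-word weights are an assumption of this illustration):

```python
import numpy as np


def template_vector(template_words, word_vectors, word_weights):
    """Weighted average of the word vectors of a log template.

    word_vectors: dict mapping word -> np.ndarray embedding
    word_weights: dict mapping word -> float weight
    """
    weighted = sum(word_weights[w] * word_vectors[w] for w in template_words)
    return weighted / sum(word_weights[w] for w in template_words)
```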
Once the template vectors have been calculated, they are fed into an LSTM model for training, which is then used to detect sequential anomalies. For quantitative anomaly detection, the model counts the occurrences of the different templates present in the history. During the detection phase, the model predicts the next log based on its sequential history and the quantitative relationships learned during the training phase. This outputs a vector of probabilities for the next log, with one probability for each log template. If the actual log is observed in the top \(k\) candidates, it is classified as normal. Otherwise, it is flagged as an anomaly.
LogAnomaly also does template approximation on new logs unseen during the training phase. This is done by extracting a temporary template using FT-Tree, calculating a template
Figure 3: High level illustration of the log anomaly detection process.
vector and matching it to an existing one based on similarity. The reasoning is that the majority of "new" templates are minor variants of existing ones, which have occurred due to small updates to a logging statement.
#### 3.3.3 LogBERT
A more recent deep learning model is LogBERT [29], which uses a transformer architecture based on BERT (Bidirectional Encoder Representations from Transformers). During preprocessing, log templates are first mined from log sequences. The template sequence is then represented as the summation of a log key embedding (a randomly generated matrix) and a position embedding (generated using a sinusoidal function). Then a transformer encoder with multiple layers of transformers is used to learn relationships in a log template sequence.
The model itself is trained by using two self-supervised training tasks: Masked Log Key Prediction (MLKP) and Volume of Hypersphere Minimization (VHM). In MLKP, LogBERT is trained to predict several masked log keys in a sequence. This is done by replacing a set of random log keys in a sequence with a _[MASK]_ token and having the model predict the masked log keys. This gives the model contextual knowledge of log sequences. The underlying idea is that if normal and anomalous log sequences are sufficiently different, this contextual knowledge can be used to differentiate anomalous from normal sequences, since the model is only trained on normal logs. In VHM, a hypersphere enclosing the normal data is created. Similar to Deep SVDD, the goal is to minimize the volume of this hypersphere. The motivation is that normal logs should be similar to each other in the embedding space, while anomalous ones should lie as far from the hypersphere as possible. The final objective function is the sum of the two task losses, with a hyper-parameter \(\alpha\) that adjusts the weight of the VHM term.
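Schematically, the joint objective could look as follows (a sketch in PyTorch; `center` denotes the hypersphere centre, and the tensor shapes are assumptions of this illustration):

```python
import torch
import torch.nn.functional as F


def logbert_loss(masked_logits, masked_targets, seq_embeddings, center, alpha):
    """Joint objective: masked log key prediction (cross-entropy over the
    masked positions, logits of shape (N, num_keys), targets of shape (N,))
    plus the mean squared distance of the sequence embeddings (B, d) to the
    hypersphere centre (d,), weighted by alpha."""
    mlkp = F.cross_entropy(masked_logits, masked_targets)
    vhm = ((seq_embeddings - center) ** 2).sum(dim=1).mean()
    return mlkp + alpha * vhm
```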
The model's final output is a vector of probabilities for the next log key, similar to DeepLog and LogAnomaly above. A candidate set of size \(g\) is formed from the log keys with the highest probabilities. If the actual log key is in this set, the log key is considered normal, and an anomaly otherwise. If a log key sequence contains more than \(r\) anomalous log keys, the sequence itself is considered anomalous.
#### 3.3.4 Baseline LSTM Model
To establish a baseline, we use a basic LSTM model. This model, similar to the models above, only uses normal logs for training. The LSTM model itself is built from 4 LSTM layers of 100 neurons each, each followed by a dropout layer. The final layer is a fully connected Dense layer with \(X\) neurons, where \(X\) corresponds to the number of unique log templates, with a softmax activation that outputs a list of probabilities for the next log, showing the most likely candidates. Anomaly detection is done by checking whether the next log is in the top \(k\) candidates. If it is within the candidates list, the sequence is classified as normal; otherwise, it is classified as anomalous.
The anomaly detection mechanism is identical to that of the state-of-the-art models, which is intentional. This allows us to test whether the additional logic implemented in the state-of-the-art models yields better predictions than a pure LSTM model.
The model was trained using the Adam optimizer with categorical cross-entropy as the loss function. The model hyper-parameters were selected using a simple grid-search approach. The model details can be found in table 2.
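A sketch of this architecture in Keras is shown below (it follows the prose description; the dropout rate is an assumption, and the additional Dense layer suggested by Table 2 is omitted here):

```python
import tensorflow as tf


def build_baseline_lstm(window_size, num_templates, dropout_rate=0.2):
    """4 stacked LSTM layers of 100 units, each followed by dropout,
    topped by a softmax over all log templates."""
    inputs = tf.keras.Input(shape=(window_size, 1))
    x = inputs
    for i in range(4):
        # Only the last LSTM layer collapses the time dimension.
        x = tf.keras.layers.LSTM(100, return_sequences=(i < 3))(x)
        x = tf.keras.layers.Dropout(dropout_rate)(x)
    outputs = tf.keras.layers.Dense(num_templates, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy")
    return model
```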
### Evaluation Metrics
Since anomaly detection is considered a type of classification problem, we use the F1 score, precision, and recall to evaluate model performance. All values are calculated from the confusion matrix (the counts of true positives, true negatives, false positives and false negatives). We excluded the accuracy metric, calculated as the proportion of correct predictions out of all predictions, due to the skew of the data: most of the logs in any dataset are normal logs, with anomalous logs being only a fraction of the total. Therefore, a model can predict all logs as normal and still end up with a high degree of accuracy.
**Precision** measures the relevancy of the predicted results. It is calculated as the proportion of true positives out of all detected positives. A high degree of precision indicates a stable model that does not raise many unnecessary alarms.
**Recall** is calculated as the proportion of detected anomalies out of all anomalies. A model with high recall means it does not miss many anomalies.
**F1-Score** is the harmonic mean of precision and recall. The "1" in F1 indicates that precision and recall are weighted equally. The F1-score is often used in anomaly detection as the single criterion to evaluate a model.
In this paper, we will be using the F1-score as the primary metric for evaluating the models. Recall will be used as the secondary metric, since it shows the ability of the model to detect actual anomalies.
\[Precision=\frac{TP}{TP+FP} \tag{1}\]
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Layer type** & **Output shape** & **Parameters count** \\ \hline LSTM & (3, 100) & 40800 \\ Dropout & (3, 100) & 0 \\ LSTM & (3, 100) & 80400 \\ Dropout & (3, 100) & 0 \\ LSTM & (3, 100) & 80400 \\ Dropout & (3, 100) & 0 \\ LSTM & (3, 100) & 80400 \\ Dropout & (3, 100) & 0 \\ Dense & (170) & 17170 \\ Dense & (170) & 29070 \\ \hline \end{tabular}
\end{table}
Table 2: Baseline LSTM model parameters
\[Recall=\frac{TP}{TP+FN} \tag{2}\]
\[F1\text{-}Score=2\cdot\frac{Precision\cdot Recall}{Precision+Recall} \tag{3}\]
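Equations (1)-(3) translate directly into code, for example:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall and F1-score from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```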
### Experiment Setup
AWS SageMaker was used as the machine-learning platform to conduct the experiments. AWS SageMaker is a cloud-based machine learning platform that allows the creation, testing and deployment of ML models [36]. It also facilitates connectivity with other AWS services, such as CloudWatch. The industrial dataset was imported from AWS CloudWatch, and the open-source HDFS dataset was downloaded from its GitHub repository.
All experiments were run on a high-performance AWS SageMaker Notebook Instance with a high virtual CPU count and RAM, using Jupyter Notebook. Details regarding the SageMaker Notebook Instance can be found in Table 3.
## 4 Empirical Study
### RQs design and results
#### 4.1.1 RQ1: How effective are the current log anomaly detection models in detecting failures in an industrial dataset?

**Design.** To answer this research question, all four selected models were trained on both the industrial dataset and the HDFS dataset. As the state-of-the-art models all used random sampling in their original papers, we used the same split type for consistency. In the original papers of the selected models, a relatively small portion is used to train the model (in the case of DeepLog, 4855 logs were used for training, which is less than 1% of the dataset). In order to keep a valid amount of logs for testing, we opted for K-fold validation with 5 folds, with one fold used for training and the rest for testing. The results were then calculated as the mean over all folds. Our focus in this paper is to examine the effect of having a small dataset for training, which could be even smaller than the amount of data we had for the industrial service. Therefore, using only one fold for training gives us a better idea of how each of the models performs when there is no abundance of data. For consistency, the training size was kept the same for the HDFS dataset. As the HDFS dataset is quite large, this allowed us to use a significant amount of it for testing.
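This split can be realised by inverting the roles of a standard K-fold iterator, so that each single fold serves as the training set (a sketch using scikit-learn):

```python
from sklearn.model_selection import KFold


def one_fold_train_splits(sequences, n_splits=5, seed=0):
    """Yield (train_idx, test_idx) pairs in which a single fold is used
    for training and the remaining folds for testing."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for rest_idx, fold_idx in kf.split(sequences):
        yield fold_idx, rest_idx  # invert the usual train/test roles
```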
**Results.** The models' effectiveness was measured using the metrics outlined in the earlier section. The model performance on the industrial dataset is shown in figure 4 and on the HDFS dataset in figure 5.
For the industrial dataset, LogAnomaly has the best performance by far, with all other models significantly behind (the F1-score of LogAnomaly is 70.3%, which is over 12% higher than the F1-scores of the other models). Even though the baseline method achieves perfect recall, this is undermined by its very low precision, which results in the lowest F1-score of all the models.
In contrast, when using the HDFS dataset, LogBERT outperforms all other models in both F1-score (87.5%) and recall (78.7%). The F1-score of LogBERT is over 30% higher than that of the other models, while its recall is over 31% higher. While LogAnomaly is slightly higher in terms of precision, its low recall results in an overall lower effectiveness.
It should be noted that the baseline LSTM has the lowest performance out of all the models. This shows that a simple LSTM is not capable of inferring the patterns of log templates; additional logic is required.
Figure 4: Model effectiveness on the industrial microservice dataset using random split. Each bar reports the corresponding metric’s average over 5 folds.
Figure 5: Model effectiveness on the HDFS dataset using random split. Each bar reports the corresponding metric’s average over 5 folds.
\begin{table}
\begin{tabular}{|l|c|} \hline
**Parameter name** & **Value** \\ \hline Instance type & ml.m5.4xlarge \\ Platform & Amazon Linux 2 \\ vCPU & 16 \\ Memory & 64 GB \\ \hline \end{tabular}
\end{table}
Table 3: AWS SageMaker Notebook Instance properties
The effectiveness of the models on the two datasets can be explained by the metrics and the nature of the datasets themselves. Based on table 1, it can be seen that while being smaller, the industrial dataset has a higher number of log templates, showing it is largely unstructured. Comparatively, the HDFS dataset has a much lower template count, even though it is significantly larger. LogAnomaly is shown to be capable of inferring relationships between logs with less structure, resulting in its high performance on the industrial dataset. LogBERT, on the other hand, needs a highly structured dataset to be of proper use. Even though only a very small amount of the HDFS dataset (0.01% of the entire dataset) was used for training, LogBERT can easily infer relationships even with a small training size, as long as the data itself is structured.
Comparing the mechanisms of the three state-of-the-art models, DeepLog and LogAnomaly both use LSTM networks and differ only in a few aspects. DeepLog makes use of parameter value vectors in addition to the log key anomaly detection LSTM, with the parameter values having their own models for each log key. LogAnomaly uses vectorization along with template approximation, and an added quantitative anomaly detection model. This vectorization step likely reduces noise in the logs, since minor updates to logging statements result in similar semantic vectors. It can also be hypothesized that template approximation helps with an unstructured dataset, as any templates that were not in the training set can be assigned a value. This could potentially be one of the reasons as to why LogAnomaly has a higher performance on the industrial dataset. On the other hand, having a large number of templates likely hinders the Masked Log Key Prediction (MLKP) task for LogBERT, since there can be more than one candidate for the missing [MASK] token. This likely results in mis-classifications during this step, reducing its effectiveness on loosely structured datasets.
The performance of DeepLog and LogAnomaly on the HDFS dataset is much lower than the metrics quoted in their original papers, showing that the reduced training size severely impacted those models. The performance of LogBERT, however, is much closer to the metrics quoted in its original paper, showing its ability to work even with a much smaller dataset.
**RQ1 Summary:** The experiment results from three state-of-the-art models and the baseline LSTM model show that LogAnomaly works best with the industrial dataset, which is much less structured. LogBERT works best with the HDFS dataset, showing it requires a structured dataset to work effectively. The baseline LSTM has the weakest performance out of all the models.
#### 4.1.2 RQ2: Does the type of train-test splitting affect the reported effectiveness of the models?
**Design.** According to a recent study [33], data leakage during random sampling makes the effectiveness of models seem higher than it actually is. Effectively, when doing random sampling, future logs are used as part of the training set, resulting in past logs being predicted using future logs. To explore this effect, we used a time-based split to train the models, while keeping the training size the same. This ensured that only past logs are used to predict future logs, which in turn helped us explore the realistic use-case of anomaly detection models: in practice, only past logs can be used to predict anomalies.
Figure 6 shows the train and test sets created by using the time-series split. Note that by keeping the train size constant, only 4 folds could be created with this type of split; the last fold cannot be used for training, as there are no logs after it.
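One possible realisation of this split, assuming chronologically ordered sequences and keeping the training window equal to one fold, is sketched below:

```python
def time_based_splits(sequences, n_folds=4):
    """Yield (train, test) slices so that only logs older than the training
    window are used to predict newer ones; the train size stays constant."""
    fold_size = len(sequences) // (n_folds + 1)
    for i in range(n_folds):
        start = i * fold_size
        train = sequences[start:start + fold_size]
        test = sequences[start + fold_size:]
        yield train, test
```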
**Results.** The model performance on the industrial dataset using a time-based split is shown in figure 7 and on the HDFS dataset in figure 8.
Similar to the results from the random split, LogAnomaly has the highest effectiveness with an F1-score of 77.2%, but this time it is closely followed by DeepLog with an F1-score of 77.0%. They both have a similar recall as well, at approximately 84.6%. LogBERT's effectiveness has decreased significantly from the earlier results, showing that the data leak plays an important role in increasing its performance. Interestingly, DeepLog and LogAnomaly have increased in effectiveness. This is likely due to the fact that even with a time-based split, the amount of information conveyed through a single fold is sufficient for these models to be trained.
In the HDFS dataset, LogBERT outperforms the other models,
Figure 6: Splitting data based on time-series. Note that the dark-colored segments refer to the training set, while the light-colored segments are the test set.
Figure 7: Model effectiveness on the industrial microservice dataset using time-based split. Each bar reports the corresponding metric’s average over 4 folds.
similar to RQ1, with an F1-score of 85.5%. DeepLog has the highest recall (at 100%), but is undermined by its low precision, resulting in the lowest F1-score alongside the baseline LSTM model (at 41.5%). The effectiveness of all models has decreased from RQ1, showing that with a large dataset the data leak plays a bigger role in improving model performance than with a smaller dataset. Similar to the earlier research question, we can see that LogBERT works best with a well-structured dataset, while LogAnomaly is more performant on the dataset with less structure.
In all of the above experiments, the baseline LSTM model has shown the weakest performance. Due to this, we have omitted it from the subsequent research questions, and focus only on the 3 state-of-the-art models.
#### 4.1.3 RQ3: How successful are the models in detecting different types of failures?
**Design.** In the previous research questions, several existing anomaly detection models were evaluated on different datasets. In this section, a novel evaluation is performed on model effectiveness when it comes to detecting specific types of errors. This aligns with the goal of selecting a suitable anomaly detection model for the application under study, since it gives us insights into how each model handles each type of error.
In this question, we perform a qualitative analysis of the 3 state-of-the-art methods and the errors they detect in the industrial microservice dataset. Through discussions with domain experts, we identified 4 anomaly types, which can be classified into 2 categories, listed in table 4. Note that the anomaly types have been enumerated instead of being given descriptive names, for confidentiality reasons.
A brief description of each of the anomaly categories is provided below:
* **Request error**: Occurs when the request made to the server is invalid. This could be due to various reasons, such as a feature not being available in a region, invalid request payload etc.
* **Redis error**: Redis [37] is an open-source data structure store that is used as a cache for the servers. Errors of this type often stem from timeouts, which occur due to bandwidth limits, too many requests and high CPU/memory usage [38].
To explore the effectiveness of each model in detecting the different types of anomalies, we trained the models using a 5-fold time-series split, and examined whether each error type was detected by the model. Since the models were trained using only normal logs, we used the entire set of anomalous sequences for testing.
**Results.** Figure 9 shows the ability of each of the 3 models to detect the 4 anomaly types (note that the detection rate for type 1, 2 and 3 anomalies for LogBERT is zero).
In the earlier research question, DeepLog and LogAnomaly had the best results on the industrial dataset with the time-series split, which is shown again by the results above. Both DeepLog and LogAnomaly are capable of fully detecting type 1, 2 and 3 anomalies, and have a comparatively higher detection rate on the type 4 anomalies as well. LogBERT, however, can only detect type 4 anomalies, and even there its performance is poor compared to the other two models.
\begin{table}
\begin{tabular}{|c|l|c|} \hline
**Anomaly type** & **Anomaly category** & **Av. seq. length** \\ \hline
1 & Request error & 163.4 \\
2 & Request error & 170.25 \\
3 & Redis error & 109 \\
4 & Redis error & 7 \\ \hline \end{tabular}
\end{table}
Table 4: Anomaly types found in the industrial microservice dataset
Figure 8: Model effectiveness on the HDFS dataset using time-based split. Each bar reports the corresponding metric’s average over 4 folds.
Figure 9: Model anomaly detection rates
Comparing the log sequence lengths in table 4, it seems LogBERT only partially works with very short sequences (type 4 has an average sequence length of 7, compared to the other types, whose average lengths are over 100). Even though DeepLog has the same recall as LogAnomaly (at 84.6%), it should be noted that its precision is slightly lower than that of LogAnomaly, showing LogAnomaly to be the overall best model in this scenario.
**RQ3 Summary:** While LogBERT is capable of detecting short-sequence anomalies, it does not work with longer sequences. DeepLog and LogAnomaly both work with long-sequence anomalies and, compared to LogBERT, have better performance with short-sequence anomalies as well.
#### 4.1.4 RQ4: How does the size of the training set affect model effectiveness?

**Design.** One of the driving factors in our research has been the lack of rich training data during the early development stages of most systems under test. Due to this, we have opted for small training sizes, with a split type comparable to that of papers based on open-source data. In this research question, we explore whether it is possible to improve the effectiveness of the models by increasing the size of the training set. This gives us an insight into which of the parameters plays a bigger role in model performance: data size or data uniformity.
The experiment was performed using a time-series split, with increasing train sizes from 20% up to 80%, in 20% increments.
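Schematically (with `evaluate` standing for training a model on the prefix and scoring it on the remainder):

```python
def f1_vs_train_size(sequences, evaluate, fractions=(0.2, 0.4, 0.6, 0.8)):
    """Train on a chronological prefix of increasing size and record the
    F1-score returned by the supplied `evaluate` callable."""
    results = {}
    for frac in fractions:
        cut = int(len(sequences) * frac)
        results[frac] = evaluate(sequences[:cut], sequences[cut:])
    return results
```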
**Results.** The results are shown in figure 10. Here, we can see that LogBERT has the poorest performance, with an F1-score below 60% that does not increase with a growing training size. This further shows that LogBERT is not efficient at inferring patterns in a loosely-structured dataset. DeepLog's performance does improve with an increasing training size, peaking at 40% and then dropping, likely due to overfitting. LogAnomaly also improves with increasing training size, achieving a perfect F1-score of 100% at a training size of 60%. This, too, then drops at a training size of 80%, also possibly due to overfitting.
Overall, LogAnomaly works best when the training size is limited, and improves comparatively more as the training size is increased. Having a better-structured dataset may help LogBERT achieve a better score, but we have not tested this in this experiment.
**RQ4 Summary:** Increasing the training size using the industrial microservice data helps improve performance of DeepLog and LogAnomaly, which decreases after reaching a peak, showing effects of overfitting. LogBERT does not get much better with an increased training size, showing it does not work effectively with a loosely-structured dataset.
### Threats to Validity
**Internal validity.** Internal validity refers to unforeseen factors that may influence the outcome of the experiments. One such aspect is the hyper-parameters used for the models under test, as different hyper-parameter values can affect the performance of the models. For consistency, we have used the default values for each of the models in this study, barring several configuration values that needed to be set to allow for the shorter sequences of logs from the industrial microservice. The parameters for the HDFS experiments, however, have not been changed from their default values. This should allow for a direct comparison with earlier papers using the open-source datasets. Further hyper-parameter tuning of the models on the industrial dataset, however, may improve their performance.
**External validity.** The external threats to validity concern the ability to generalize the results of the study. There are two major external threats to validity in this study. First is the limited dataset, as the experiments were conducted on a single microservice. Applying these models to further services would increase the generality of the research. Second is the limited number of faults in the industrial dataset. As is the case with most stable industrial applications, the number of anomalous sequences is very limited compared to the number of normal sequences. We believe, however, that this makes the problem representative of real-world conditions, and improves the applicability of this experiment to other stable applications.
## 5 Conclusion and Future Work
In this paper, we have applied several anomaly detection models to an industrial dataset, with real-world limitations on dataset size and log data uniformity. The results suggest that the LogAnomaly model works best on less structured datasets, such as our industrial dataset. A qualitative analysis of the anomaly types identified by the experts showed that LogAnomaly and DeepLog are both effective at detecting different types of anomalies with short and long sequences, while LogBERT struggles even with short-sequence ones. Exploring the effect of the training size shows that LogAnomaly
Figure 10: F1-score dependency on training size
and DeepLog do achieve better results with a bigger training set, but over-sized training sets result in over-fitting and reduced effectiveness. In conclusion, LogAnomaly was identified as the overall best-performing model in our case study.
Future work can be pursued along several avenues. First, the model hyper-parameters can be tuned to work better with the industrial dataset, possibly improving performance. Second, more models, including classical models such as SVM and LogClustering, can be applied to the dataset, providing another set of baselines to compare against. State-of-the-art models such as LogRobust [3], which has built-in vectorization, would also be good candidates for dealing with loosely-structured logs. Another possible avenue of future work is vectorization using a natural language model such as a Generative Pre-trained Transformer (GPT). With this, the template mining step would become optional, and the log events could be directly vectorized. This would help with continuously evolving log messages, as well as noise within the logs. Another possible extension is to evaluate the models on more industrial datasets, which may come with their own limitations. This would shed more light on the applicability of those models to more real-world systems.
|
2303.18035 | On isometries of twin buildings | A twin building consists of two buildings that are twinned by a codistance
function. We prove that the local structure of a twin building uniquely
determines the two buildings up to isomorphism. This has been known for twin
buildings satisfying a technical condition (co). | Sebastian Bischof, Anton Chosson, Bernhard Mühlherr | 2023-03-31T13:10:52Z | http://arxiv.org/abs/2303.18035v1 | # On isometries of twin buildings
Sebastian _Bischof\({}^{1}\)_
email: [email protected]
Anton _Chosson\({}^{2}\)_
email: [email protected]
Bernhard _Mühlherr\({}^{1}\)_
email: [email protected]
\({}^{1}\) Mathematisches Institut, Arndtstrasse 2, 35392 Giessen, Germany
\({}^{2}\) IUT d'Orsay, Université Paris-Sud, F-91405 Orsay Cedex, France
## 1 Introduction
Twin buildings were introduced by Ronan and Tits in the late 1980s. Their definition was motivated by the theory of Kac-Moody groups over fields. Each such group acts naturally on a pair of buildings and the action preserves an opposition relation between the chambers of the two buildings. This opposition relation shares many important properties with the opposition relation on the chambers of a spherical building. Thus, twin buildings appear to be natural generalisations of spherical buildings with infinite Weyl groups.
One of the most celebrated results in the theory of abstract buildings is Tits' classification of irreducible spherical buildings of rank at least 3 in [19]. The decisive step in this classification is the proof of a local-to-global result for spherical buildings. In his survey paper [19] Tits proves several results that are inspired by his strategy in the spherical case and he discusses several obstacles for obtaining a similar local-to-global result for twin buildings. A first observation in this discussion is that the local-to-global principle seems to be valid only for 2-spherical twin buildings. But even in this case the question about the validity of the local-to-global principle remained open. Based on Tits' contributions in [19] the local-to-global principle was proved in [20] for 2-spherical twin buildings that satisfy an additional assumption, called Condition (co). In [20] Condition (co) is discussed in some detail and it turns out that it is rather mild. On the other hand, it follows from that discussion that there are affine twin buildings of type \(\widetilde{C}_{2}\) that do not satisfy Condition (co).
The question whether the local-to-global principle for 2-spherical buildings holds without Condition (co) is still open at present. The main result of this paper is a contribution to the local-to-global principle without assuming any additional condition. It was proved independently by A.C. in [15] and B.M. in [21] but never published. In the present article we follow the basic strategy of these references. However, several contributions to the theory of twin buildings that have been made in the meantime provided various improvements of the arguments and exposition. Our motivation to publish the paper at this point is provided by the fact that it can be used to prove the local-to-global principle for 2-spherical twin buildings
under a weaker assumption than Condition (co). This yields in particular the local-to-global principle for all affine twin buildings of rank at least \(3\) and in particular for those which do not satisfy Condition (co). This will be published in a subsequent paper. Thus, the present paper should be seen as the first in a series of two papers in which we intend to improve the main result of [10].
### The main result
In order to give the precise statement of the main result it is convenient to fix some notation.
Let \((W,S)\) be a Coxeter system. We call \((W,S)\)\(2\)_-spherical_ if \(st\) has finite order for all \(s,t\in S\).
A _building of type \((W,S)\)_ is a pair \(\Delta=(\mathcal{C},\delta)\) consisting of a non-empty set \(\mathcal{C}\) and a mapping \(\delta:\mathcal{C}\times\mathcal{C}\longrightarrow W\) (see Section 2 for the precise definition). The elements of \(\mathcal{C}\) are called the _chambers_ of \(\Delta\) and the mapping \(\delta\) is called the _Weyl-distance_. For \(J\subseteq S\) and \(c\in\mathcal{C}\), the set \(R_{J}(c):=\{d\in\mathcal{C}\mid\delta(c,d)\in\langle J\rangle\}\) is called the _\(J\)-residue_ of \(c\) and for \(s\in S\) the set \(\mathcal{P}_{s}(c):=R_{\{s\}}(c)\) is called the _\(s\)-panel_ of \(c\). The set
\[E_{2}(c):=\bigcup_{J\subseteq S,|J|\leq 2}R_{J}(c)\]
is called the _foundation of \(\Delta\) at \(c\)_. The building \(\Delta\) is said to be _thick_ if \(|\mathcal{P}_{s}(c)|\geq 3\) for all \((s,c)\in S\times\mathcal{C}\).
A _twin building of type \((W,S)\)_ is a triple \(\Delta=(\Delta_{+},\Delta_{-},\delta_{*})\) consisting of two buildings \(\Delta_{+}=(\mathcal{C}_{+},\delta_{+})\) and \(\Delta_{-}=(\mathcal{C}_{-},\delta_{-})\) of type \((W,S)\) and a _codistance function_ (or _twinning_)
\[\delta_{*}:(\mathcal{C}_{+}\times\mathcal{C}_{-})\cup(\mathcal{C}_{-}\times \mathcal{C}_{+})\longrightarrow W\]
and we refer to Section 3 for the precise definition. For a chamber \(c\in\mathcal{C}_{+}\) (resp. \(c\in\mathcal{C}_{-}\)) the set \(E_{2}(c)\) denotes its foundation of \(\Delta_{+}\) (resp. \(\Delta_{-}\)) and \(\Delta\) is _thick_ if \(\Delta_{+}\) and \(\Delta_{-}\) are thick. Two chambers \(c_{+}\in\mathcal{C}_{+}\) and \(c_{-}\in\mathcal{C}_{-}\) are said to be _opposite in \(\Delta\)_ if \(\delta_{*}(c_{+},c_{-})=1_{W}\).
Let \(\Delta=((\mathcal{C}_{+},\delta_{+}),(\mathcal{C}_{-},\delta_{-}),\delta_{*})\) and \(\Delta^{\prime}=((\mathcal{C}_{+}^{\prime},\delta_{+}^{\prime}),(\mathcal{C}_{ -}^{\prime},\delta_{-}^{\prime}),\delta_{*}^{\prime})\) be twin buildings of type \((W,S)\) and let \(\mathcal{X}\subseteq\mathcal{C}_{+}\cup\mathcal{C}_{-},\mathcal{X}^{\prime} \subseteq\mathcal{C}_{+}^{\prime}\cup\mathcal{C}_{-}^{\prime}\) be sets of chambers of \(\Delta\) and \(\Delta^{\prime}\). An _isometry from \(\mathcal{X}\) to \(\mathcal{X}^{\prime}\)_ is a bijection from \(\mathcal{X}\) onto \(\mathcal{X}^{\prime}\) which preserves signs and the Weyl-distance (resp. codistance) for each pair \((x,y)\in\mathcal{X}^{2}\).
We are now in the position to give the precise statement of our main result.
**Main result:** Let \((W,S)\) be a \(2\)-spherical Coxeter system and let \(\Delta=((\mathcal{C}_{+},\delta_{+}),(\mathcal{C}_{-},\delta_{-}),\delta_{*})\) and \(\Delta^{\prime}=((\mathcal{C}_{+}^{\prime},\delta_{+}^{\prime}),(\mathcal{C}_{ -}^{\prime},\delta_{-}^{\prime}),\delta_{*}^{\prime})\) be thick twin buildings of type \((W,S)\). Let \(c_{+}\in\mathcal{C}_{+},c_{-}\in\mathcal{C}_{-}\) be opposite chambers in \(\Delta\) and let \(c_{+}^{\prime}\in\mathcal{C}_{+}^{\prime},c_{-}^{\prime}\in\mathcal{C}_{-}^{\prime}\) be opposite chambers in \(\Delta^{\prime}\).
Then each isometry
\[\varphi:E_{2}(c_{+})\cup\{c_{-}\}\to E_{2}(c_{+}^{\prime})\cup\{c_{-}^{\prime}\}\]
extends to an isometry
\[\psi:\mathcal{C}_{+}\cup E_{2}(c_{-})\to\mathcal{C}_{+}^{\prime}\cup E_{2}(c_ {-}^{\prime}).\]
Several remarks on the main result of this paper are in order.
1. Note that our main result does not assert the uniqueness of the extension \(\psi\). At present, the uniqueness of \(\psi\) is an open question that is most relevant for a possible proof of the local-to-global principle. Indeed, the key observation in [10] was that the extension \(\psi\) is unique if \(\Delta\) satisfies Condition (co).
2. A slightly weaker version of our main result was stated by Tits in the early 1990s (as Théorème 1 in [14] and as Theorem 2 in [15]) and an outline of a proof is given in both references. However, as pointed out in Paragraph 2.8 of [15], one of the claims made in the outline remains unclear. That our main result holds for twin buildings satisfying Condition (co) was verified by Ronan (see Theorem (7.5) in [11]).
3. The proof of the main result combines an idea of Tits given in the outline mentioned in the previous remark with a technique that he used in [15]. More concretely, for a chamber \(c\) and an apartment containing \(c\) in a twin building one can define two retraction mappings. We call them \(\pi\)- and \(\omega\)-retractions. The outline in [14] and [15] uses \(\pi\)-retractions, while \(\omega\)-retractions are used in [15] for the proof of the local-to-global principle for spherical buildings. The key observation in this paper is that the main result can be proved by using them both.
## 2 Preliminaries
### Coxeter system
Let \(S\) be a set. A _Coxeter matrix_ over \(S\) is a matrix \(M=(m_{st})_{s,t\in S}\), whose entries are in \(\mathbb{N}\cup\{\infty\}\) such that \(m_{ss}=1\) for all \(s\in S\) and \(m_{st}=m_{ts}\geq 2\) for all \(s\neq t\in S\). For \(J\subseteq S\) we set \(M_{J}:=(m_{st})_{s,t\in J}\). The _Coxeter diagram_ corresponding to \(M\) is the labeled graph \((S,E(S))\), where \(E(S)=\{\{s,t\}\mid m_{st}>2\}\) and where each edge \(\{s,t\}\) is labeled by \(m_{st}\) for all \(s,t\in S\). As the Coxeter matrix and the corresponding Coxeter diagram carry the same information we do not distinguish between them formally. We call the Coxeter diagram _irreducible_, if the underlying graph is connected, and we call it \(2\)_-spherical_, if \(m_{st}<\infty\) for all \(s,t\in S\). The _rank_ of a Coxeter diagram is the cardinality of the set of its vertices.
Let \(M=(m_{st})_{s,t\in S}\) be a Coxeter matrix over a set \(S\). A _Coxeter system of type \(M\)_ is a pair \((W,S)\) consisting of a group \(W\) and a set \(S\subseteq W\) of generators of \(W\) such that the set \(S\) and the relations \((st)^{m_{st}}\) for all \(s,t\in S\) constitute a presentation of \(W\).
Let \((W,S)\) be a Coxeter system of type \(M\). The pair \((\langle J\rangle,J)\) is a Coxeter system of type \(M_{J}\) (cf. [11, Ch. IV, SS1 Theoreme 2]). For an element \(w\in W\) we put \(\ell(w):=\min\{k\in\mathbb{N}_{0}\mid\exists s_{1},\ldots,s_{k}\in S:w=s_{1} \cdots s_{k}\}\). The number \(\ell(w)\) is called the _length_ of \(w\). We call \(J\subseteq S\)_spherical_ if \(\langle J\rangle\) is finite. Given a spherical subset \(J\) of \(S\), there exists a unique element of maximal length in \(\langle J\rangle\), which we denote by \(r_{J}\) (cf. [1, Corollary 2.19]); moreover, \(r_{J}\) is an involution.
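For instance (a standard example, included here for illustration): if \(J=\{s,t\}\) with \(m_{st}=3\), then \(\langle J\rangle\) is dihedral of order \(6\) and \(r_{J}=sts=tst\) with \(\ell(r_{J})=3\); in general, for spherical \(J=\{s,t\}\) the longest element \(r_{J}\) is the alternating word in \(s\) and \(t\) of length \(m_{st}\).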
**(2.1) Convention.** For the rest of this paper let \(S\) be a set, let \(M\) be a Coxeter matrix over \(S\) and let \((W,S)\) be a Coxeter system of type \(M\).
### Buildings
A _building of type \((W,S)\)_ is a pair \(\Delta=(\mathcal{C},\delta)\) where \(\mathcal{C}\) is a non-empty set and where \(\delta:\mathcal{C}\times\mathcal{C}\to W\) is a _distance function_ satisfying the following axioms, where \(x,y\in\mathcal{C}\) and \(w=\delta(x,y)\):
1. \(w=1_{W}\) if and only if \(x=y\);
2. if \(z\in\mathcal{C}\) satisfies \(s:=\delta(y,z)\in S\), then \(\delta(x,z)\in\{w,ws\}\), and if, furthermore, \(\ell(ws)=\ell(w)+1\), then \(\delta(x,z)=ws\);
3. if \(s\in S\), there exists \(z\in\mathcal{C}\) such that \(\delta(y,z)=s\) and \(\delta(x,z)=ws\).
Let \(\Delta=(\mathcal{C},\delta)\) be a building of type \((W,S)\). The _rank_ of \(\Delta\) is the rank of the underlying Coxeter system. The elements of \(\mathcal{C}\) are called _chambers_. Given \(s\in S\) and \(x,y\in\mathcal{C}\), then
\(x\) is called _\(s\)-adjacent_ to \(y\), if \(\delta(x,y)\in\langle s\rangle\). The chambers \(x,y\) are called _adjacent_, if they are \(s\)-adjacent for some \(s\in S\). A _gallery_ joining \(x\) and \(y\) is a sequence \((x=x_{0},\ldots,x_{k}=y)\) such that \(x_{l-1}\) and \(x_{l}\) are adjacent for any \(1\leq l\leq k\); the number \(k\) is called the _length_ of the gallery.
Given a subset \(J\subseteq S\) and \(x\in\mathcal{C}\), the _\(J\)-residue_ of \(x\) is the set \(R_{J}(x):=\{y\in\mathcal{C}\mid\delta(x,y)\in\langle J\rangle\}\). Each \(J\)-residue is a building of type \((\langle J\rangle,J)\) with the distance function induced by \(\delta\) (cf. [1, Corollary 5.30]). A _residue_ is a subset \(R\) of \(\mathcal{C}\) such that there exists \(J\subseteq S\) and \(x\in\mathcal{C}\) with \(R=R_{J}(x)\). Since the subset \(J\) is uniquely determined by \(R\), the set \(J\) is called the _type_ of \(R\) and the _rank_ of \(R\) is defined to be the cardinality of \(J\). A residue is called _spherical_ if its type is a spherical subset of \(S\). A _panel_ is a residue of rank \(1\). An _\(s\)-panel_ is a panel of type \(\{s\}\) for \(s\in S\). The building \(\Delta\) is called _thick_, if each panel of \(\Delta\) contains at least three chambers.
Given \(x\in\mathcal{C}\) and \(k\in\mathbb{N}_{0}\), the set \(E_{k}(x)\) denotes the union of all residues of rank at most \(k\) containing \(x\). It is a fact that the set \(E_{k}(x)\) determines the chamber \(x\) uniquely if \(k<|S|\).
Given \(x\in\mathcal{C}\) and a \(J\)-residue \(R\subseteq\mathcal{C}\), then there exists a unique chamber \(z\in R\) such that \(\ell(\delta(x,y))=\ell(\delta(x,z))+\ell(\delta(z,y))\) for any \(y\in R\) (cf. [1, Proposition 5.34]). The chamber \(z\) is called the _projection of \(x\) onto \(R\)_ and is denoted by \(\operatorname{proj}_{R}x\). Moreover, if \(z=\operatorname{proj}_{R}x\) we have \(\delta(x,y)=\delta(x,z)\delta(z,y)\) for each \(y\in R\).
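For illustration (a standard computation, not part of the original text): in the thin building with chamber set \(W\) and distance \(\delta(x,y)=x^{-1}y\), the projection of a chamber \(x\) onto the \(J\)-residue \(R=y\langle J\rangle\) is \(\operatorname{proj}_{R}x=xw_{1}\), where \(w_{1}\) is the unique element of minimal length in the coset \(x^{-1}y\langle J\rangle\); indeed, \(\delta(x,y^{\prime})=w_{1}\,\delta(xw_{1},y^{\prime})\) with \(\ell(\delta(x,y^{\prime}))=\ell(w_{1})+\ell(\delta(xw_{1},y^{\prime}))\) for every \(y^{\prime}\in R\).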
A subset \(\Sigma\subseteq\mathcal{C}\) is called _convex_ if \(\operatorname{proj}_{P}c\in\Sigma\) for every \(c\in\Sigma\) and every panel \(P\subseteq\mathcal{C}\) which meets \(\Sigma\). A subset \(\Sigma\subseteq\mathcal{C}\) is called _thin_ if \(P\cap\Sigma\) contains exactly two chambers for every panel \(P\subseteq\mathcal{C}\) which meets \(\Sigma\). An _apartment_ is a non-empty subset \(\Sigma\subseteq\mathcal{C}\) which is convex and thin. It is a basic fact that for an apartment \(\Sigma\) the map \(\sigma_{c}:\Sigma\to W,x\mapsto\delta(c,x)\) is a bijection for any \(c\in\Sigma\).
### Chamber systems
Let \(I\) be a set. A _chamber system_ over \(I\) is a pair \(\mathbf{C}=(\mathcal{C},(\sim_{i})_{i\in I})\) where \(\mathcal{C}\) is a non-empty set whose elements are called _chambers_ and where \(\sim_{i}\) is an equivalence relation on the set of chambers for each \(i\in I\). Given \(i\in I\) and \(c,d\in\mathcal{C}\), then \(c\) is called _\(i\)-adjacent_ to \(d\) if \(c\sim_{i}d\). The chambers \(c,d\) are called _adjacent_ if they are \(i\)-adjacent for some \(i\in I\).
A _gallery_ in \(\mathbf{C}\) is a sequence \((c_{0},\ldots,c_{k})\) such that \(c_{\mu}\in\mathcal{C}\) for all \(0\leq\mu\leq k\) and such that \(c_{\mu-1}\) is adjacent to \(c_{\mu}\) for all \(1\leq\mu\leq k\). The number \(k\) is called the _length_ of the gallery. Given a gallery \(G=(c_{0},\ldots,c_{k})\), then we put \(\beta(G):=c_{0}\) and \(\varepsilon(G):=c_{k}\). If \(G\) is a gallery and if \(c,d\in\mathcal{C}\) such that \(c=\beta(G),d=\varepsilon(G)\), then we say that \(G\) is a _gallery from \(c\) to \(d\)_ or \(G\)_ joins \(c\) and \(d\). The chamber system \(\mathbf{C}\) is said to be _connected_, if for any two chambers there exists a gallery joining them. A gallery \(G\) will be called _closed_ if \(\beta(G)=\varepsilon(G)\).
Given a gallery \(G=(c_{0},\ldots,c_{k})\) then \(G^{-1}\) denotes the gallery \((c_{k},\ldots,c_{0})\) and if \(H=(c_{0}^{\prime},\ldots,c_{l}^{\prime})\) is a gallery such that \(\varepsilon(G)=\beta(H)\), then \(GH\) denotes the gallery \((c_{0},\ldots,c_{k}=c_{0}^{\prime},\ldots,c_{l}^{\prime})\).
Let \(J\) be a subset of \(I\). A _\(J\)-gallery_ is a gallery \((c_{0},\ldots,c_{k})\) such that for each \(1\leq\mu\leq k\) there exists an index \(j\in J\) with \(c_{\mu-1}\sim_{j}c_{\mu}\). Given two chambers \(c,d\), then we say that \(c\) is _\(J\)-equivalent_ with \(d\) if there exists a \(J\)-gallery joining \(c\) and \(d\) and we write \(c\sim_{J}d\) in this case. Given a chamber \(c\) and a subset \(J\) of \(I\) then the set \(R_{J}(c):=\{d\in\mathcal{C}\mid c\sim_{J}d\}\) is called the _\(J\)-residue_ of \(c\).
Let \(\Delta=(\mathcal{C},\delta)\) be a building of type \((W,S)\). Then we define the chamber system \(\mathbf{C}(\Delta)\) as follows: The set of chambers is identified with \(\mathcal{C}\) and two chambers \(x,y\) are defined to be \(s\)-adjacent if \(\delta(x,y)\in\langle s\rangle\).
### Homotopy of galleries and simple connectedness
In the context of chamber systems there is the notion of \(m\)-homotopy and \(m\)-simple connectedness for each \(m\in\mathbb{N}\). In this paper we are only concerned with the case \(m=2\). Therefore our definitions are always to be understood as the specialisation of the general theory to the case \(m=2\).
Let \(\mathbf{C}=(\mathcal{C},(\sim_{i})_{i\in I})\) be a chamber system over a set \(I\). Two galleries \(G\) and \(H\) are said to be _elementary homotopic_ if there exist two galleries \(X,Y\) and two \(J\)-galleries \(G_{0},H_{0}\) for some \(J\subseteq I\) of cardinality at most \(2\) such that \(G=XG_{0}Y\) and \(H=XH_{0}Y\). Two galleries \(G,H\) are said to be _homotopic_ if there exists a finite sequence \(G_{0},\ldots,G_{l}\) of galleries such that \(G_{0}=G,G_{l}=H\) and such that \(G_{\mu-1}\) is elementary homotopic to \(G_{\mu}\) for all \(1\leq\mu\leq l\).
If two galleries \(G,H\) are homotopic, then it follows by definition that \(\beta(G)=\beta(H)\) and \(\varepsilon(G)=\varepsilon(H)\). A closed gallery \(G\) is said to be _null-homotopic_ if it is homotopic to the gallery (\(\beta(G)\)). The chamber system \((\mathcal{C},(\sim_{i})_{i\in I})\) is called _simply connected_ if it is connected and if each closed gallery is null-homotopic.
Let \(\mathcal{X}\subseteq\mathcal{C}\) and let \(\mathbf{X}=(\mathcal{X},(\sim_{i})_{i\in I})\) be the chamber system obtained by restricting the equivalence relations \(\sim_{i}\) to \(\mathcal{X}\). The subset \(\mathcal{X}\) will be called _simply connected_ if the chamber system \(\mathbf{X}\) is simply connected.
**(2.1) Proposition**.: _Let \(\Delta\) be a building of type \((W,S)\). Then the chamber system \(\mathbf{C}(\Delta)\) is simply connected._
Proof.: This is [10, (4.3) Theorem].
## 3 Twin buildings
### Definitions and Notations
Let \(\Delta_{+}=(\mathcal{C}_{+},\delta_{+}),\Delta_{-}=(\mathcal{C}_{-},\delta_{-})\) be two buildings of the same type \((W,S)\). A _codistance_ (or a _twinning_) between \(\Delta_{+}\) and \(\Delta_{-}\) is a mapping \(\delta_{*}:(\mathcal{C}_{+}\times\mathcal{C}_{-})\cup(\mathcal{C}_{-}\times \mathcal{C}_{+})\to W\) satisfying the following axioms, where \(\varepsilon\in\{+,-\},x\in\mathcal{C}_{\varepsilon},y\in\mathcal{C}_{-\varepsilon}\) and \(w=\delta_{*}(x,y)\):
1. \(\delta_{*}(y,x)=w^{-1}\);
2. if \(z\in\mathcal{C}_{-\varepsilon}\) is such that \(s:=\delta_{-\varepsilon}(y,z)\in S\) and \(\ell(ws)=\ell(w)-1\), then \(\delta_{*}(x,z)=ws\);
3. if \(s\in S\), there exists \(z\in\mathcal{C}_{-\varepsilon}\) such that \(\delta_{-\varepsilon}(y,z)=s\) and \(\delta_{*}(x,z)=ws\).
A _twin building of type \((W,S)\)_ is a triple \(\Delta=(\Delta_{+},\Delta_{-},\delta_{*})\) where \(\Delta_{+},\Delta_{-}\) are buildings of type \((W,S)\) and where \(\delta_{*}\) is a twinning between \(\Delta_{+}\) and \(\Delta_{-}\).
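A basic example (included for illustration) is the _thin_ twin building associated with \((W,S)\): take \(\mathcal{C}_{+}=\mathcal{C}_{-}=W\) with \(\delta_{\pm}(x,y)=x^{-1}y\) and define \(\delta_{*}(x,y):=x^{-1}y\) for all \(x,y\in W\); the axioms (Tw1)-(Tw3) are readily verified, and two chambers are opposite precisely when they coincide as elements of \(W\).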
**(3.1) Convention**.: For the rest of this paper let \(\Delta=(\Delta_{+},\Delta_{-},\delta_{*})\) be a twin building of type \((W,S)\) where \(\Delta_{+}=(\mathcal{C}_{+},\delta_{+})\) and \(\Delta_{-}=(\mathcal{C}_{-},\delta_{-})\).
We put \(\mathcal{C}:=\mathcal{C}_{+}\cup\mathcal{C}_{-}\) and define the distance function \(\delta:\mathcal{C}\times\mathcal{C}\to W\) by setting \(\delta(x,y):=\delta_{+}(x,y)\) (resp. \(\delta_{-}(x,y),\delta_{*}(x,y)\)) if \(x,y\in\mathcal{C}_{+}\) (resp. \(x,y\in\mathcal{C}_{-},(x,y)\in\mathcal{C}_{\varepsilon}\times\mathcal{C}_{-\varepsilon}\) for some \(\varepsilon\in\{+,-\}\)).
Given \(x,y\in\mathcal{C}\) then we put \(\ell(x,y):=\ell(\delta(x,y))\). If \(\varepsilon\in\{+,-\}\) and \(x,y\in\mathcal{C}_{\varepsilon}\), then we put \(\ell_{\varepsilon}(x,y):=\ell(\delta_{\varepsilon}(x,y))\) and for \((x,y)\in\mathcal{C}_{\varepsilon}\times\mathcal{C}_{-\varepsilon}\) we put \(\ell_{*}(x,y):=\ell(\delta_{*}(x,y))\).
Let \(\varepsilon\in\{+,-\}\). For \(x\in\mathcal{C}_{\varepsilon}\) we put \(x^{op}:=\{y\in\mathcal{C}_{-\varepsilon}\mid\delta_{*}(x,y)=1_{W}\}\). It is a direct consequence of (Tw1) that \(y\in x^{op}\) if and only if \(x\in y^{op}\) for any pair \((x,y)\in\mathcal{C}_{\varepsilon}\times\mathcal{C}_{-\varepsilon}\). If \(y\in x^{op}\) then we say that \(y\) is _opposite_ to \(x\) or that \((x,y)\)_is a pair of opposite chambers_.
Let \(\overline{\mathcal{C}}:=\{(c_{+},c_{-})\in\mathcal{C}_{+}\times\mathcal{C}_{-}\mid \delta_{*}(c_{+},c_{-})=1_{W}\}\). Then \(\big{(}\overline{\mathcal{C}},(\sim_{s})_{s\in S}\big{)}\) is a chamber system, where \((c_{+},c_{-})\in\overline{\mathcal{C}}\) is \(s\)-adjacent (\(s\in S\)) to \((d_{+},d_{-})\in\overline{\mathcal{C}}\) if \(c_{\varepsilon}\) is \(s\)-adjacent to \(d_{\varepsilon}\) in \(\mathbf{C}(\Delta_{\varepsilon})\) for each \(\varepsilon\in\{+,-\}\). We denote this chamber system by \(\operatorname{\mathrm{Opp}}(\Delta)\). For \(\overline{c}:=(c_{+},c_{-})\in\overline{\mathcal{C}}\) we define \(E_{2}(\overline{c}):=E_{2}(c_{+})\cup E_{2}(c_{-})\).
A _residue_ (resp. _panel_) of \(\Delta\) is a residue (resp. panel) of \(\Delta_{+}\) or \(\Delta_{-}\); given a residue \(R\subseteq\mathcal{C}\) then we define its type and rank as before. Two residues \(R,T\subseteq\mathcal{C}\) are called _opposite_ if they have the same type and if there exists a pair of opposite chambers \((x,y)\) such that \(x\in R,y\in T\).
Let \(\varepsilon\in\{+,-\}\), let \(J\) be a spherical subset of \(S\) and let \(R\) be a \(J\)-residue of \(\Delta_{\varepsilon}\). Given a chamber \(x\in\mathcal{C}_{-\varepsilon}\) then there exists a unique chamber \(z\in R\) such that \(\ell_{*}(x,y)=\ell_{*}(x,z)-\ell_{\varepsilon}(z,y)\) for any chamber \(y\in R\) (cf. [1, Lemma 5.149]). The chamber \(z\) is called the _projection of \(x\) onto \(R\)_; it will be denoted by \(\operatorname{\mathrm{proj}}_{R}x\). Moreover, if \(z=\operatorname{\mathrm{proj}}_{R}x\) we have \(\delta_{*}(x,y)=\delta_{*}(x,z)\delta_{\varepsilon}(z,y)\) for each \(y\in R\).
Let \(\Sigma_{+}\subseteq\mathcal{C}_{+}\) and \(\Sigma_{-}\subseteq\mathcal{C}_{-}\) be apartments of \(\Delta_{+}\) and \(\Delta_{-}\), respectively. Then the set \(\Sigma=\Sigma_{+}\cup\Sigma_{-}\) is called _twin apartment_ if \(|x^{op}\cap\Sigma|=1\) for each \(x\in\Sigma\). If \((x,y)\) is a pair of opposite chambers, then there exists a unique twin apartment containing \(x\) and \(y\). We will denote it by \(A(x,y)\) and for \(\varepsilon\in\{+,-\}\) we put \(A_{\varepsilon}(x,y):=A(x,y)\cap\mathcal{C}_{\varepsilon}\). It is a fact that \(A(x,y)=\{z\in\mathcal{C}\mid\delta(x,z)=\delta(z,y)\}\) (cf. Proposition 5.179 in [1]).
**(3.2) Lemma**.: _Let \(\Sigma\subseteq\mathcal{C}\) be a twin apartment, let \(x\in\Sigma\) and let \(R\) be a spherical residue of \(\Delta\) which meets \(\Sigma\). Then \(\operatorname{\mathrm{proj}}_{R}x\in\Sigma\)._
Proof.: This is [1, Lemma 5.173 (6)].
### Pairs of opposite spherical residues
Throughout this subsection we assume that \(R\subseteq\mathcal{C}_{+},T\subseteq\mathcal{C}_{-}\) are opposite residues and that the type \(J\) of \(R\) and \(T\) is spherical.
**(3.1) Lemma**.: _For each \(x\in R\) there exists \(y\in T\) such that \(x\) and \(y\) are opposite and we have \(\delta_{*}(u,v)\in\langle J\rangle\) for all \((u,v)\in R\times T\)._
Proof.: These are immediate consequences of [1, Lemma 5.139 (1)].
**(3.2) Lemma**.: _Let \((x,y)\in R\times T\). Then the following are equivalent:_
1. \(\operatorname{\mathrm{proj}}_{T}x=y\)_;_
2. \(\delta_{*}(x,y)=r_{J}\)_;_
3. \(\operatorname{\mathrm{proj}}_{R}y=x\)_._
Proof.: Suppose \(y=\operatorname{\mathrm{proj}}_{T}x\) and let \(z\in T\) be such that \(\delta_{-}(y,z)=r_{J}\). Then \(\ell_{*}(x,z)=\ell_{*}(x,y)-\ell(r_{J})\) and hence \(\ell_{*}(x,y)\geq\ell(r_{J})\). As \(\delta_{*}(x,y)\in\langle J\rangle\) by the previous Lemma, the claim follows.
Suppose now that \(\delta_{*}(x,y)=r_{J}\) and let \(z:=\operatorname{\mathrm{proj}}_{T}x\). Since \(\ell_{*}(x,z)\geq\ell_{*}(x,y)=\ell(r_{J})\) and \(\delta_{*}(x,z)\in\langle J\rangle\), it follows that \(\delta_{*}(x,z)=r_{J}\). Now \(\ell(r_{J})=\ell_{*}(x,y)=\ell_{*}(x,z)-\ell_{-}(z,y)=\ell(r_{J})-\ell_{-}(z,y)\) which implies \(z=y\).
We have shown that \((i)\) and \((ii)\) are equivalent; the equivalence of \((ii)\) and \((iii)\) follows by symmetry and we are done.
**(3.3) Lemma**.: _The mappings \(\operatorname{\mathrm{proj}}_{R}^{T}:T\to R,x\mapsto\operatorname{\mathrm{proj}} _{R}x\) and \(\operatorname{\mathrm{proj}}_{T}^{R}:R\to T,x\mapsto\operatorname{\mathrm{proj}} _{T}x\) are bijections inverse to each other._
Proof.: This is Proposition (4.3) in [10].
### A technical result
In this paragraph we prove a technical result which will be needed in the proof of Theorem (6.1).
**(3.1) Lemma**.: _Let \(c\in\mathcal{C}_{-\varepsilon},x\in\mathcal{C}_{\varepsilon}\) be two opposite chambers and let \((x=d_{0},d_{1},\ldots,d_{k},d_{k+1}=d)\) be a gallery such that \(\ell_{*}(c,d_{i})=i\) for each \(0\leq i\leq k\) and \(\ell_{*}(c,d)\leq k\). Then there exist chambers \(x^{\prime},z\in\mathcal{C}_{\varepsilon}\) such that \(x^{\prime}\in c^{op}\), \(\delta_{*}(c,z)=\delta(x,z)=\delta(x^{\prime},z)\) and \(\ell(x^{\prime},d)<k+1\)._
Proof.: We put \(w:=\delta_{\varepsilon}(x,d_{k})\) and remark that our assumption implies \(w=\delta_{*}(c,d_{k})\). Furthermore we put \(s:=\delta_{\varepsilon}(d_{k},d)\) and let \(P\) denote the \(s\)-panel containing \(d_{k}\) and \(d\). By our assumptions we have \(\delta_{\varepsilon}(x,d)\in\{w,ws\}\). We have two cases:
\(\ell(ws)=\ell(w)-1\): As \(\delta_{*}(c,d_{k})=w\) it follows that \(d_{k}=\operatorname{proj}_{P}c\) and \(\delta_{*}(c,d)=ws\). Let \(x^{\prime}\in\mathcal{C}_{\varepsilon}\) be a chamber such that \(\delta_{\varepsilon}(x^{\prime},d)=ws\). Then we have \(x^{\prime}\in c^{op}\) and \(\delta_{\varepsilon}(x^{\prime},d_{k})=w=\delta_{\varepsilon}(x,d_{k})= \delta_{*}(c,d_{k})\) and \(\ell_{\varepsilon}(x^{\prime},d)=k-1<k+1\). Thus the assertion follows by setting \(z:=d_{k}\).
\(\ell(ws)=\ell(w)+1\): We put \(z:=\operatorname{proj}_{P}c\). As \(\ell_{*}(c,d)\leq k\) it follows that \(z\neq d\) and \(\delta_{*}(c,d)=w\). Let \(x^{\prime}\in\mathcal{C}_{\varepsilon}\) be a chamber such that \(\delta_{\varepsilon}(x^{\prime},d)=w\). Then \(\delta_{\varepsilon}(x^{\prime},z)=ws=\delta_{\varepsilon}(x,z)=\delta_{*}(c,z)\), and \(x^{\prime}\in c^{op}\) and the assertion follows.
**(3.2) Lemma**.: _Let \(\varepsilon\in\{+,-\},c\in\mathcal{C}_{-\varepsilon}\) and let \(x,y\in c^{op}\). Then there exist \(k\in\mathbb{N}\), a sequence \(x=x_{0},\ldots,x_{k}=y\) of chambers in \(c^{op}\) and a sequence \(z_{1},\ldots,z_{k}\) of chambers in \(\mathcal{C}_{\varepsilon}\) such that \(\delta_{*}(c,z_{\lambda})=\delta_{\varepsilon}(x_{\lambda-1},z_{\lambda})=\delta_{\varepsilon}(x_{\lambda},z_{\lambda})\) for each \(1\leq\lambda\leq k\)._
Proof.: Let \((x=d_{0},\ldots,d_{m}=y)\) be a minimal gallery joining \(x\) and \(y\). We will prove the assertion by induction on \(m:=\ell_{\varepsilon}(x,y)\). Setting \(z:=x=y\) the assertion is trivial for \(m=0\) and we may assume that \(m>0\).
Let \(k:=\max\{0\leq i\leq m\mid\ell_{*}(c,d_{i})=i\}\) and put \(d:=d_{k+1}\). By the previous lemma there are chambers \(x^{\prime},z\in\mathcal{C}_{\varepsilon}\) such that \(x^{\prime}\in c^{op}\), \(\delta_{\varepsilon}(x,z)=\delta_{\varepsilon}(x^{\prime},z)=\delta_{*}(c,z)\) and \(\ell_{\varepsilon}(x^{\prime},d)\leq k\). It follows \(\ell_{\varepsilon}(x^{\prime},y)<m\) and we may apply induction to \(x^{\prime}\) and \(y\) in order to obtain the desired sequences \(x=x_{0},x_{1}=x^{\prime},\ldots,x_{k}=y\) and \(z_{1}=z,z_{2},\ldots,z_{k}\) of chambers.
## 4 Isometries
Let \((W,S)\) be \(2\)-spherical and of rank at least \(3\). Let \(\Delta\) be thick and let \(\Delta^{\prime}=(\Delta^{\prime}_{+},\Delta^{\prime}_{-},\delta^{\prime}_{*})\) be a thick twin building of type \((W,S)\). We define \(\mathcal{C}^{\prime},\Delta^{\prime}_{+},\Delta^{\prime}_{-},\delta^{\prime}, \ell^{\prime}\) as in the case of \(\Delta\).
### Definition and basic facts about isometries
Let \(\mathcal{X}\subseteq\mathcal{C},\mathcal{X}^{\prime}\subseteq\mathcal{C}^{\prime}\). A mapping \(\varphi:\mathcal{X}\to\mathcal{X}^{\prime}\) is called _isometry_ if the following conditions are satisfied:
* (Iso1) The mapping \(\varphi\) is bijective.
* (Iso2) For \(\varepsilon\in\{+,-\}\) we have \(\varphi(\mathcal{X}\cap\mathcal{C}_{\varepsilon})\subseteq\mathcal{C}^{\prime}_{\varepsilon}\).
* (Iso3) If \(x,y\in\mathcal{X}\) then \(\delta^{\prime}(\varphi(x),\varphi(y))=\delta(x,y)\).
Given \(\mathcal{X}\subseteq\mathcal{C},\mathcal{X}^{\prime}\subseteq\mathcal{C}^{\prime}\), an isometry \(\varphi:\mathcal{X}\to\mathcal{X}^{\prime}\) and \((y,y^{\prime})\in\mathcal{C}\times\mathcal{C}^{\prime}\), then the pair \((y,y^{\prime})\) will be called _\(\varphi\)-admissible_ if the mapping \(y\mapsto y^{\prime}\) extends \(\varphi\) to an isometry from \(\mathcal{X}\cup\{y\}\) onto \(\mathcal{X}^{\prime}\cup\{y^{\prime}\}\). In particular, \((x,\varphi(x))\) is \(\varphi\)-admissible for any \(x\in\mathcal{X}\). For \(x,y\in\mathcal{X}\) with \((x,y)\in\overline{\mathcal{C}}\) we define \(\varphi((x,y)):=(\varphi(x),\varphi(y))\). Since the buildings have rank at least three, it is a fact that for \((x,x^{\prime})\in\mathcal{C}\times\mathcal{C}^{\prime}\) and an isometry \(\varphi:E_{2}(x)\to E_{2}(x^{\prime})\), we have \(\varphi(x)=x^{\prime}\).
**(4.1) Lemma**.: _Let \(\mathcal{S},\mathcal{X}\subseteq\mathcal{C},\mathcal{S}^{\prime},\mathcal{X}^{ \prime}\subseteq\mathcal{C}^{\prime}\) be such that \(\mathcal{S}\cap\mathcal{X}=\emptyset\) and \(\mathcal{S}^{\prime}\cap\mathcal{X}^{\prime}=\emptyset\). Let \(\varphi:\mathcal{S}\to\mathcal{S}^{\prime}\) and \(\psi:\mathcal{X}\to\mathcal{X}^{\prime}\) be two isometries such that \((z,\psi(z))\) is \(\varphi\)-admissible for any \(z\in\mathcal{X}\). Then the mapping_
\[\varphi\cup\psi:\mathcal{S}\cup\mathcal{X}\to\mathcal{S}^{\prime}\cup \mathcal{X}^{\prime},x\mapsto\begin{cases}\varphi(x)&\text{if }x\in\mathcal{S},\\ \psi(x)&\text{if }x\in\mathcal{X}.\end{cases}\]
_is an isometry._
Proof.: Let \(\Phi:=\varphi\cup\psi\). Clearly, \(\Phi\) is a bijection satisfying (Iso2). It suffices to show that \(\delta(x,y)=\delta^{\prime}(\Phi(x),\Phi(y))\) for any \(x\in\mathcal{S},y\in\mathcal{X}\). Let \(x\in\mathcal{S}\) and \(y\in\mathcal{X}\). Then we have \(\delta^{\prime}(\Phi(x),\Phi(y))=\delta^{\prime}(\varphi(x),\psi(y))=\delta(x,y)\), because \((y,\psi(y))\) is \(\varphi\)-admissible. This proves the claim.
**(4.2) Lemma**.: _Let \(J\) be a spherical subset of \(S\), let \(R\subseteq\mathcal{C},R^{\prime}\subseteq\mathcal{C}^{\prime}\) be \(J\)-residues, let \(\varphi:R\to R^{\prime}\) be an isometry, and let \((x,x^{\prime})\) be a \(\varphi\)-admissible pair. Then \(\varphi(\operatorname{proj}_{R}x)=\operatorname{proj}_{R^{\prime}}x^{\prime}\)._
Proof.: This is Lemma (4.4) of [10].
**(4.3) Lemma**.: _Let \(J\) be a spherical subset of \(S\), let \(R_{+},R_{-}\subseteq\mathcal{C}\) (resp. \(R^{\prime}_{+},R^{\prime}_{-}\subseteq\mathcal{C}^{\prime}\)) be opposite \(J\)-residues in \(\Delta\) (resp. \(\Delta^{\prime}\)), let \(\varphi:R_{+}\cup R_{-}\to R^{\prime}_{+}\cup R^{\prime}_{-}\) be an isometry and let \(\varepsilon\in\{+,-\}\). Then \(\varphi(x)=\operatorname{proj}_{R^{\prime}_{\varepsilon}}\varphi(\operatorname {proj}_{R_{-\varepsilon}}x)\) for each \(x\in R_{\varepsilon}\)._
Proof.: This is a consequence of the previous Lemma and Lemma (3.3).
**(4.4) Lemma**.: _Let \(x\in\mathcal{C},x^{\prime}\in\mathcal{C}^{\prime}\), let \(\Sigma\subseteq\mathcal{C}\) be an apartment containing \(x\) and let \(\varphi,\psi:E_{2}(x)\to E_{2}(x^{\prime})\) be two isometries which agree on \(E_{1}(x)\). If they also agree on \(\Sigma\cap E_{2}(x)\), then we have \(\varphi=\psi\)._
Proof.: For each subset \(J\) of \(S\) of cardinality \(2\) we denote the restriction of \(\varphi\) (resp. \(\psi\)) on \(R_{J}(x)\) by \(\varphi_{J}\) (resp. \(\psi_{J}\)).
Let \(J\subseteq S\) be of cardinality \(2\) and let \(\Sigma\) be as in the statement. Then \(\varphi_{J}\) and \(\psi_{J}\) agree on \(\Sigma\cap R_{J}(x)\) which is an apartment of \(R_{J}(x)\). The claim follows from Theorem 4.1.1 in [17].
**(4.5) Lemma**.: _Let \(\varphi_{+}:\mathcal{C}_{+}\to\mathcal{C}^{\prime}_{+}\) be a map and let \((\varphi_{x}:E_{2}(x)\to E_{2}(\varphi_{+}(x)))_{x\in\mathcal{C}_{+}}\) be a family of isometries such that \(\varphi_{x}\) and \(\varphi_{y}\) agree on \(E_{2}(x)\cap E_{2}(y)\) whenever \(x,y\in\mathcal{C}_{+}\) are adjacent. Then \(\varphi_{+}\) is an isometry and \(\varphi_{x}\) is the restriction of \(\varphi_{+}\) on \(E_{2}(x)\) for each \(x\in\mathcal{C}_{+}\)._
Proof.: Let \(x,y\in\mathcal{C}_{+}\) be such that \(y\in E_{2}(x)\), then we can find a gallery \((x=x_{0},\ldots,x_{k}=y)\) in a rank \(2\) residue containing \(x\) and \(y\). It follows that \(y\in E_{2}(x_{\lambda})\) for each \(0\leq\lambda\leq k\) and using induction one obtains \(\varphi_{x}(y)=\varphi_{y}(y)=\varphi_{+}(y)\). This shows that \(\varphi_{x}\) coincides with the restriction of \(\varphi_{+}\) on \(E_{2}(x)\).
Now we will show that \(\varphi_{+}\) is surjective. Let \(y^{\prime}\in\mathcal{C}^{\prime}_{+}\). Let \(x\in\mathcal{C}_{+}\) and let \(x^{\prime}:=\varphi_{+}(x)\). As \(\varphi_{x}:E_{2}(x)\to E_{2}(x^{\prime})\) is an isometry, it follows that \(E_{2}(x^{\prime})\subseteq\varphi_{+}(\mathcal{C}_{+})\). By induction on the length of a minimal gallery joining \(x^{\prime}\) and \(y^{\prime}\) in \(\mathcal{C}^{\prime}_{+}\) it follows that \(y^{\prime}\in\varphi_{+}(\mathcal{C}_{+})\) and hence the surjectivity of \(\varphi_{+}\).
The restrictions of \(\varphi_{+}\) to the rank \(2\) residues being isometries, it follows that \(\varphi_{+}:\mathbf{C}(\Delta_{+})\to\mathbf{C}(\Delta_{+}^{\prime})\) is a \(2\)-covering. Now the injectivity of \(\varphi_{+}\) follows from Proposition (2.1).
**(4.6) Lemma**.: _Let \(\varphi_{+}:\mathcal{C}_{+}\to\mathcal{C}^{\prime}_{+}\) be an isometry, let \((x,x^{\prime})\in\mathcal{C}_{-}\times\mathcal{C}^{\prime}_{-}\) and suppose that \(\varphi_{+}(x^{op})\subseteq(x^{\prime})^{op}\). Then \((x,x^{\prime})\) is a \(\varphi_{+}\)-admissible pair._
Proof.: This is Lemma (7.4) in [10].
### Main results on local extensions of isometries
In this subsection we let \(\overline{c}:=(c_{+},c_{-})\in\overline{C},\overline{c}^{\prime}:=(c^{\prime}_{+}, c^{\prime}_{-})\in\overline{C}^{\prime}\).
**(4.1) Proposition**.: _Let \(\varphi:E_{2}(c_{+})\cup\{c_{-}\}\to E_{2}(c^{\prime}_{+})\cup\{c^{\prime}_{-}\}\) be an isometry. Then \(\varphi\) extends uniquely to an isometry from \(E_{2}(c_{+})\cup E_{2}(c_{-})\) onto \(E_{2}(c^{\prime}_{+})\cup E_{2}(c^{\prime}_{-})\)._
Proof.: For a proof see Proposition (6.2) of [10].
**(4.2) Proposition**.: _Let \(\overline{d}\in\overline{C}\) such that \(\overline{c}\) is adjacent to \(\overline{d}\) in \(\operatorname{\mathrm{Opp}}(\Delta)\) and let \(\varphi:E_{2}(\overline{c})\to E_{2}(\overline{c}^{\prime})\) be an isometry. Then there exists a unique isometry \(\psi:E_{2}(\overline{d})\to E_{2}(\varphi(\overline{d}))\) such that \(\varphi\) and \(\psi\) agree on the intersection of their domains._
Proof.: This is Proposition (6.4) of [10].
**(4.3) Theorem**.: _Let \(J\) be a subset of \(S\) of cardinality at most \(2\) and let \(R_{\pm}:=R_{J}(c_{\pm})\). Let \(\overline{R}:=(R_{+}\times R_{-})\cap\overline{C}\) and let \(\varphi:E_{2}(\overline{c})\to E_{2}(\overline{c}^{\prime})\) be an isometry. Then there exists a unique system of isometries \((\varphi_{\overline{x}}\colon E_{2}(\overline{x})\to E_{2}(\varphi(\overline{x})))_{\overline{x}\in\overline{R}}\) such that the following is satisfied:_
1. \(\varphi_{\overline{c}}=\varphi\)_;_
2. _If_ \(\overline{x},\overline{y}\in\overline{R}\) _are adjacent in_ \(\operatorname{\mathrm{Opp}}(\Delta)\)_, then_ \(\varphi_{\overline{x}}\) _and_ \(\varphi_{\overline{y}}\) _agree on the intersection of their domains._
Proof.: This is a consequence of Proposition (6.6) and Corollary (6.7) in [10].
### Using \(\operatorname{\mathrm{Opp}}(\Delta)\) to extend isometries
Let \(\overline{c}\in\overline{C},\overline{c}^{\prime}\in\overline{C}^{\prime}, \varphi:E_{2}(\overline{c})\to E_{2}(\overline{c}^{\prime})\) be an isometry and let \(\overline{G}=(\overline{c}=\overline{x}_{0},\ldots,\overline{x}_{k}= \overline{d})\) be a gallery in \(\operatorname{\mathrm{Opp}}(\Delta)\). Then - by Proposition (4.2) - we obtain recursively a unique chamber \(\overline{d}_{\varphi,\overline{G}}\) and a unique isometry \(\varphi_{\overline{d},\overline{G}}:E_{2}(\overline{d})\to E_{2}(\overline{d }_{\varphi,\overline{G}})\).
**(4.1) Lemma**.: _The following hold:_
1. _Given any gallery_ \(\overline{G}\) _starting at_ \(\overline{c}\)_, then_ \(\overline{c}^{\prime}_{\varphi,\overline{G}\,\overline{G}^{-1}}=\overline{c} ^{\prime}\) _and_ \(\varphi_{\overline{c},\overline{G}\,\overline{G}^{-1}}=\varphi\)_._
2. _Given any closed gallery_ \(\overline{G}\) _in a rank_ \(2\) _residue of_ \(\overline{c}\)_, then_ \(\overline{c}^{\prime}_{\varphi,\overline{G}}=\overline{c}^{\prime}\) _and_ \(\varphi_{\overline{c},\overline{G}}=\varphi\)_._
3. _If two galleries_ \(\overline{G},\overline{H}\) _joining_ \(\overline{c}\) _and_ \(\overline{d}\) _are homotopic, then_ \(\overline{d}^{\prime}_{\varphi,\overline{G}}=\overline{d}^{\prime}_{\varphi, \overline{H}}\) _and_ \(\varphi_{\overline{d},\overline{G}}=\varphi_{\overline{d},\overline{H}}\)_._
Proof.: Part \((i)\) follows from the uniqueness assertion in Proposition (4.2); part \((ii)\) follows from Theorem (4.3), and part \((iii)\) is a consequence of \((i)\) and \((ii)\).
**(4.2) Proposition**.: _Let \(\overline{X}\subset\overline{C}\) be simply connected and suppose that \(\overline{c}\in\overline{X}\). Then there exists a mapping \(\overline{\varphi}:\overline{X}\to\overline{C}^{\prime}\) and a system of isometries \((\varphi_{\overline{x}}\colon E_{2}(\overline{x})\to E_{2}(\overline{\varphi}(\overline{x})))_{\overline{x}\in\overline{X}}\) such that \(\varphi_{\overline{c}}=\varphi\) and such that \(\varphi_{\overline{x}}\) and \(\varphi_{\overline{y}}\) agree on the intersection of their domains for any two adjacent chambers \(\overline{x},\overline{y}\in\overline{X}\). The mapping \(\overline{\varphi}\) and the family of isometries \(\varphi_{\overline{x}}\) are uniquely determined by these properties._
Proof.: As \(\overline{X}\) is simply connected it is connected by definition. Given \(\overline{x}\in\overline{X}\) we obtain for each gallery \(\overline{G}\) joining \(\overline{c}\) and \(\overline{x}\) a unique chamber \(\overline{x}^{\prime}_{\varphi,\overline{G}}\) and an isometry \(\varphi_{\overline{x},\overline{G}}:E_{2}(\overline{x})\to E_{2}(\overline{x}^{\prime}_{\varphi,\overline{G}})\). It follows by part \((iii)\) of the previous Lemma that \(\overline{x}^{\prime}_{\varphi,\overline{G}}=\overline{x}^{\prime}_{\varphi,\overline{H}}\) for any two galleries \(\overline{G},\overline{H}\) from \(\overline{c}\) to \(\overline{x}\), because \(\overline{X}\) is simply connected. Thus we obtain a mapping \(\overline{\varphi}\) and a system of isometries \((\varphi_{\overline{x}})_{\overline{x}\in\overline{X}}\).
Let \(\overline{x},\overline{y}\in\overline{X}\) be adjacent. By considering a gallery joining \(\overline{c}\) and \(\overline{x}\) which passes through \(\overline{y}\), it follows by construction that \(\varphi_{\overline{x}}\) and \(\varphi_{\overline{y}}\) agree on the intersection of their domains.
The uniqueness of \(\overline{\varphi}\) and \((\varphi_{\overline{x}})_{\overline{x}\in\overline{X}}\) follows from the uniqueness assertion of Proposition (4.2) and an obvious induction.
## 5 Retractions
**(5.1) Convention**.: For the rest of this paper let \((W,S)\) be \(2\)-spherical and of rank at least \(3\). Furthermore, let \(\Delta\) be thick and let \(\Delta^{\prime}=(\Delta^{\prime}_{+},\Delta^{\prime}_{-},\delta^{\prime}_{*})\) be a thick twin building of type \((W,S)\). We define \(\mathcal{C}^{\prime},\mathcal{C}^{\prime}_{+},\mathcal{C}^{\prime}_{-},\delta^{\prime},\ell^{\prime}\) as in the case of \(\Delta\).
### \(\pi\)-retractions
Let \(c\in\mathcal{C}_{-}\), let \(\Sigma\subseteq\mathcal{C}_{-}\) be an apartment of \(\Delta_{-}\) containing \(c\) and put \(\gamma:=(c,\Sigma)\). Then we define the mapping \(\pi_{\gamma}:\mathcal{C}_{+}\to\Sigma\) via \(\delta_{-}(c,\pi_{\gamma}(x))=\delta_{*}(c,x)\) for all \(x\in\mathcal{C}_{+}\) and we put \(\Pi_{\gamma}:=\{(x,\pi_{\gamma}(x))\mid x\in\mathcal{C}_{+}\}\).
**(5.1) Lemma**.: _Let \(\gamma=(c,\Sigma)\) be as above, then the following hold:_
1. \(\pi_{\gamma}\) _preserves_ \(s\)_-adjacency._
2. _The chamber_ \(\pi_{\gamma}(x)\) _is opposite to_ \(x\) _for each chamber_ \(x\in\mathcal{C}_{+}\)_._
3. _Given_ \(x\in\mathcal{C}_{+}\)_, then_ \(c\in A(x,\pi_{\gamma}(x))\)_._
Proof.: The first two assertions are proved in Lemma (7.1) in [10]. For the third assertion we notice that \(\delta_{-}(c,\pi_{\gamma}(x))=\delta_{*}(c,x)\) by definition and hence the claim follows.
**(5.2) Lemma**.: _Let \(\gamma=(c,\Sigma)\) be as above, then the mapping \(\mathcal{C}_{+}\to\Pi_{\gamma},x\mapsto(x,\pi_{\gamma}(x))\) is an \(s\)-adjacency preserving bijection. In particular, \(\Pi_{\gamma}\) is a simply connected subset of \(\overline{\mathcal{C}}\)._
Proof.: The first statement is immediate from Lemma (5.1). The second follows from Proposition (2.1).
### \(\omega\)-retractions
Let \(\overline{c}:=(c_{+},c_{-})\) be a pair of opposite chambers and let \(\Sigma=A_{-}(c_{+},c_{-})\). Then we define the mapping \(\omega_{\overline{c}}:\mathcal{C}_{+}\to\Sigma\) via \(\delta_{-}(c_{-},\omega_{\overline{c}}(x))=\delta_{+}(c_{+},x)\) for all \(x\in\mathcal{C}_{+}\). Furthermore we set \(\Omega_{\overline{c}}:=\{(x,\omega_{\overline{c}}(x))\mid x\in\mathcal{C}_{+}\}\). A gallery \((\overline{x}=\overline{x}_{0},\ldots,\overline{x}_{k}=\overline{y})\) in \(\operatorname{Opp}(\Delta)\) will be called \(\omega\)_-gallery_ if there exists a chamber \(\overline{c}\in\overline{\mathcal{C}}\) such that \(\overline{x}_{\lambda}\in\Omega_{\overline{c}}\) for each \(0\leq\lambda\leq k\).
**(5.1) Lemma**.: _Let \(\overline{c}\in\overline{\mathcal{C}}\). Then the following hold:_
1. \(\omega_{\overline{c}}\) _preserves_ \(s\)_-adjacency._
2. _The chamber_ \(\omega_{\overline{c}}(x)\) _is opposite to_ \(x\) _for each chamber_ \(x\in\mathcal{C}_{+}\)_._
3. _Given_ \(x\in\mathcal{C}_{+}\)_, then_ \(c_{+}\in A(x,\omega_{\overline{c}}(x))\)_._
Proof.: Let \(x,y\in\mathcal{C}_{+}\) and \(s\in S\) such that \(x,y\) are \(s\)-adjacent, and let \(w\in W\) such that \(\delta_{+}(c_{+},x)=w\). Then \(\delta_{+}(c_{+},y)\in\{w,ws\}\) by (Bu2). If \(\delta_{+}(c_{+},y)=w\) then \(\delta_{-}(c_{-},\omega_{\overline{c}}(x))=\delta_{-}(c_{-},\omega_{\overline{c }}(y))\). Since \(c_{-},\omega_{\overline{c}}(x),\omega_{\overline{c}}(y)\in\Sigma\) we obtain \(\omega_{\overline{c}}(x)=\omega_{\overline{c}}(y)\). Now we assume that \(\delta_{+}(c_{+},y)=ws\). Let \(P\) be the \(s\)-panel containing \(\omega_{\overline{c}}(y)\). Since \(\omega_{\overline{c}}(y)\in P\cap\Sigma\) we obtain
\(|P\cap\Sigma|=2\) because any apartment is thin. Let \(\omega_{\overline{c}}(y)\neq y^{\prime}\in P\cap\Sigma\). Using (Bu2) we obtain \(\delta_{-}(c_{-},y^{\prime})\in\{w,ws\}\). Since \(c_{-},y^{\prime},\omega_{\overline{c}}(x)\in\Sigma\) and \(\delta_{-}(c_{-},y^{\prime})=\delta_{-}(c_{-},\omega_{\overline{c}}(x))\), we obtain \(y^{\prime}=\omega_{\overline{c}}(x)\) as above. Thus \(\omega_{\overline{c}}\) preserves \(s\)-adjacency.
Let \(x\in\mathcal{C}_{+}\) and \(w\in W\) such that \(\delta_{+}(x,c_{+})=w\). Then \(\delta_{-}(c_{-},\omega_{\overline{c}}(x))=\delta_{+}(c_{+},x)=w^{-1}\). Since \(\omega_{\overline{c}}(x)\in A(c_{+},c_{-})\) we have \(\delta_{*}(\omega_{\overline{c}}(x),c_{+})=\delta_{-}(\omega_{\overline{c}}(x ),c_{-})=w\). Now we have \(\delta_{*}(\omega_{\overline{c}}(x),x)=ww^{-1}=1_{W}\) by Lemma 5.140 of [1] and the claim follows.
Let \(x\in\mathcal{C}_{+}\). Since \(\omega_{\overline{c}}(x)\in A(c_{+},c_{-})\) we obtain \(\delta_{*}(c_{+},\omega_{\overline{c}}(x))=\delta_{-}(c_{-},\omega_{\overline{c}}(x))\). Furthermore, we have \(\delta_{+}(c_{+},x)=\delta_{-}(c_{-},\omega_{\overline{c}}(x))\). Combining these two facts we obtain \(c_{+}\in A(x,\omega_{\overline{c}}(x))\) as required.
**(5.2) Lemma**.: _Let \(P\) be an \(s\)-panel in \(\Delta_{+}\), let \(x,y\in P\) be such that \(\ell_{+}(c_{+},y)=\ell_{+}(c_{+},x)+1\) and let \(Q\) denote the \(s\)-panel of \(\Delta_{-}\) containing \(\omega_{\overline{c}}(x)\) and \(\omega_{\overline{c}}(y)\). Then the following hold:_
1. \(\operatorname{proj}_{P}c_{+}=x\)_;_
2. \(\operatorname{proj}_{Q}c_{+}=\omega_{\overline{c}}(y)\)_;_
3. \(\operatorname{proj}_{P}\omega_{\overline{c}}(y)=x\)_;_
4. \(\operatorname{proj}_{Q}x=\omega_{\overline{c}}(y)\)_._
Proof.: Part \((i)\) follows from \(\ell_{+}(c_{+},y)=\ell_{+}(c_{+},x)+1\). Since \(c_{+}\in A(y,\omega_{\overline{c}}(y))\cap A(x,\omega_{\overline{c}}(x))\) we have \(\ell_{*}(c_{+},\omega_{\overline{c}}(y))=\ell_{+}(c_{+},y)=\ell_{+}(c_{+},x)+ 1=\ell_{*}(c_{+},\omega_{\overline{c}}(x))+1\) which yields part \((ii)\). To prove part \((iii)\) we use the fact that \(c_{+}\in A(y,\omega_{\overline{c}}(y))\). As \(\operatorname{proj}_{P}c_{+}=x\), it follows by Lemma (3.2), that \(x\in A(y,\omega_{\overline{c}}(y))\). Applying Lemma (3.2) again we obtain that \(\operatorname{proj}_{P}\omega_{\overline{c}}(y)\in\{x,y\}\), since \(A_{+}(y,\omega_{\overline{c}}(y))\) is thin. As \(\ell_{*}(y,\omega_{\overline{c}}(y))=0\) we have \(\operatorname{proj}_{P}\omega_{\overline{c}}(y)=x\) as claimed. Part \((iv)\) follows now from part \((iii)\) and Lemma (3.3).
**(5.3) Lemma**.: _The mapping \(\mathcal{C}_{+}\to\Omega_{\overline{c}}\colon x\mapsto(x,\omega_{\overline{c}}(x))\) is an \(s\)-adjacency preserving bijection between \(\mathcal{C}_{+}\) and \(\Omega_{\overline{c}}\). In particular, \(\Omega_{\overline{c}}\) is a simply connected subset of \(\overline{\mathcal{C}}\)._
Proof.: The first statement is immediate from Lemma (5.1). The second follows from Proposition (2.1).
**(5.4) Lemma**.: _Let \(c\in\mathcal{C}_{-}\), let \(\Sigma\) be an apartment of \(\Delta_{-}\) containing \(c\) and let \(\gamma:=(c,\Sigma)\). Let \(x,y\in c^{\text{op}}\) and suppose that there exists a chamber \(z\in\mathcal{C}_{+}\) such that \(\delta_{+}(x,z)=\delta_{*}(c,z)=\delta_{+}(y,z)\). Then there exists an \(\omega\)-gallery joining \((x,c)\) and \((y,c)\) in \(\Pi_{\gamma}\cap\Omega_{(z,\pi_{\gamma}(z))}\)._
Proof.: We put \(\overline{z}:=(z,\pi_{\gamma}(z))\). Then we obtain that \(\omega_{\overline{z}}(z)=\pi_{\gamma}(z),\omega_{\overline{z}}(x)=\pi_{\gamma}(x)=c\) and \(\delta_{-}(\omega_{\overline{z}}(x),\omega_{\overline{z}}(z))=\delta_{+}(x,z)=\delta_{-}(\pi_{\gamma}(x),\pi_{\gamma}(z))\). Since \(\pi_{\gamma}\) and \(\omega_{\overline{z}}\) preserve \(s\)-adjacency by \((\pi 1)\) and \((\omega 1)\), it follows that they map any chamber on a minimal gallery joining \(x\) and \(z\) to a chamber on a minimal gallery joining \(\pi_{\gamma}(x)=\omega_{\overline{z}}(x)\) to \(\pi_{\gamma}(z)=\omega_{\overline{z}}(z)\). Thus we obtain \(\pi_{\gamma}(v)=\omega_{\overline{z}}(v)\) for each chamber \(v\) on a minimal gallery joining \(x\) and \(z\). The same is true for \(y\) instead of \(x\), and we obtain \(\pi_{\gamma}(u)=\omega_{\overline{z}}(u)\) for each chamber \(u\) on a minimal gallery joining \(y\) and \(z\). This yields the claim.
## 6 Constructing an isometry
We recall that the set \(S\) has at least three elements. In this section let \(\overline{c}:=(c_{+},c_{-})\in\overline{\mathcal{C}},\overline{c}^{\prime}:=(c^{\prime}_{+},c^{\prime}_{-})\in\overline{\mathcal{C}^{\prime}}\) and let \(\varphi:E_{2}(\overline{c})\to E_{2}(\overline{c}^{\prime})\) be an isometry. We set \(\Sigma:=A_{-}(c_{+},c_{-}),\Sigma^{\prime}:=A_{-}(c^{\prime}_{+},c^{\prime}_{-})\) and denote the unique isometry from \(\Sigma\) onto \(\Sigma^{\prime}\) extending the mapping \(c_{-}\mapsto c^{\prime}_{-}\) by \(\alpha\). We set \(\omega:=\omega_{\overline{c}},\omega^{\prime}:=\omega_{\overline{c}^{\prime}}\) and \(\Omega:=\Omega_{\overline{c}}\). For \(x\in\mathcal{C}_{+}\) we put \(\overline{x}:=(x,\omega(x))\).
By Lemma (5.3) and Proposition (4.2) we get a mapping \(\overline{\varphi}:\Omega\to\overline{\mathcal{C}^{\prime}}\) and a system of isometries \((\varphi_{\overline{x}}:E_{2}(\overline{x})\to E_{2}(\overline{\varphi}( \overline{x})))_{x\in\mathcal{C}_{+}}\) such that
1. \(\varphi_{\overline{c}}=\varphi\);
2. \(\varphi_{\overline{x}}\) and \(\varphi_{\overline{y}}\) coincide on the intersection of their domains whenever \(x,y\) are adjacent.
Furthermore, we define the mapping \(\varphi_{+}:\mathcal{C}_{+}\to\mathcal{C}^{\prime}_{+},x\mapsto\varphi_{\overline{x}}(x)\) and denote the restriction of \(\varphi_{\overline{x}}\) on \(E_{2}(x)\) by \(\varphi_{x}\).
**(6.1) Lemma**.: _The mapping \(\varphi_{+}\) is an isometry from \(\mathcal{C}_{+}\) to \(\mathcal{C}^{\prime}_{+}\) and \(\varphi_{x}\) is the restriction of \(\varphi_{+}\) on \(E_{2}(x)\) for each \(x\in\mathcal{C}_{+}\)._
Proof.: Given two adjacent chambers \(x,y\in\mathcal{C}_{+}\), then \(\overline{x}\) and \(\overline{y}\) are adjacent. By property \((ii)\) above it follows that \(\varphi_{x}\) and \(\varphi_{y}\) coincide on \(E_{2}(x)\cap E_{2}(y)\). This shows that \(\varphi_{+}\) and \((\varphi_{x})_{x\in\mathcal{C}_{+}}\) satisfy the conditions of Lemma (4.5) and we are done.
**(6.2) Lemma**.: _Let \(P\subseteq\mathcal{C}_{+},P^{\prime}\subseteq\mathcal{C}^{\prime}_{+}\) be panels of \(\Delta_{+}\) and \(\Delta^{\prime}_{+}\) having the same type and let \(x:=\operatorname{proj}_{P}c_{+},x^{\prime}:=\operatorname{proj}_{P^{\prime}} c_{+}\). Suppose that \(\delta_{+}(c_{+},x)=\delta^{\prime}_{+}(c^{\prime}_{+},x^{\prime})\) and let \(\psi:E_{2}((x,\omega(x)))\to E_{2}((x^{\prime},\omega^{\prime}(x^{\prime})))\) be an isometry. Given \(y\in P\), then \(\psi(\omega(y))=\omega^{\prime}(\psi(y))\)._
Proof.: If \(x=y\) there is nothing to prove, so we may assume that \(x\neq y\). As \(x=\operatorname{proj}_{P}c_{+}\) we have \(\ell_{+}(c_{+},y)=\ell_{+}(c_{+},x)+1\). We put \(y^{\prime}:=\psi(y)\). We have \(y^{\prime}\in P^{\prime}\) and \(x^{\prime}\neq y^{\prime}\) because \(\psi\) is an isometry. As \(x^{\prime}=\operatorname{proj}_{P^{\prime}}c^{\prime}_{+}\) it follows \(\ell^{\prime}_{+}(c^{\prime}_{+},y^{\prime})=\ell^{\prime}_{+}(c^{\prime}_{+}, x^{\prime})+1\). Let \(Q\) (resp. \(Q^{\prime}\)) denote the panel containing \(\omega(x)\) (resp. \(\omega^{\prime}(x^{\prime})\)) opposite to \(P\) (resp. \(P^{\prime}\)). By Lemma (5.2) and Lemma (4.3) it follows that \(\psi(\omega(y))=\operatorname{proj}_{Q^{\prime}}\psi(\operatorname{proj}_{P} \omega(y))=\operatorname{proj}_{Q^{\prime}}\psi(x)=\operatorname{proj}_{Q^{ \prime}}x^{\prime}=\omega^{\prime}(y^{\prime})\) which yields the claim.
**(6.3) Proposition**.: _Let \(x,y\in\mathcal{C}_{+}\) be such that \(\omega(x)=\omega(y)\), then the restrictions of \(\varphi_{\overline{x}}\) and \(\varphi_{\overline{y}}\) on \(E_{2}(\omega(x))\) coincide._
Proof.: Using Lemma (6.2) it follows by induction on \(\ell_{+}(c_{+},u)\) that \(\varphi_{\overline{u}}(\omega(u))=\omega^{\prime}(\varphi_{+}(u))\) for each \(u\in\mathcal{C}_{+}\). As \(\varphi_{+}\) is an isometry mapping \(c_{+}\) onto \(c^{\prime}_{+}\) (cf. Lemma (6.1)) it follows that \(\alpha(\omega(u))=\omega^{\prime}(\varphi_{+}(u))\) for each \(u\in\mathcal{C}_{+}\).
Let \(u\in\mathcal{C}_{+}\) and let \(z\in E_{2}(\omega(u))\cap\Sigma\). Then there exists \(v\in E_{2}(u)\) with \(z=\omega(v)\). Let \((u=x_{0},\ldots,x_{k}=v)\) be a gallery in a rank 2 residue joining \(u\) and \(v\). It follows that \(v\in E_{2}(x_{\lambda})\) and hence \(z\in E_{2}(\omega(x_{\lambda}))\) for each \(0\leq\lambda\leq k\). Using property \((ii)\) of the system \((\varphi_{\overline{u}})_{u\in\mathcal{C}_{+}}\), it follows by induction on \(k\) that \(\varphi_{\overline{u}}(z)=\varphi_{\overline{v}}(z)\). Combining this with the previous considerations we obtain \(\varphi_{\overline{u}}(z)=\alpha(z)\) for each \(z\in E_{2}(\omega(u))\cap\Sigma\).
We complete the proof of the proposition by induction on \(\ell_{+}(c_{+},x)=\ell_{+}(c_{+},y)\). If \(\ell_{+}(c_{+},x)=0\) then \(x=c_{+}=y\) and there is nothing to prove. Let \(\ell_{+}(c_{+},x)>0\), then there exists \(s\in S\) such that \(\ell(\delta_{+}(c_{+},x)s)=\ell_{+}(c_{+},x)-1\). Let \(P_{x},P_{y}\) denote the \(s\)-panels containing \(x\) and \(y\), respectively, and put \(x_{1}:=\operatorname{proj}_{P_{x}}c_{+},y_{1}:=\operatorname{proj}_{P_{y}}c_{+}\). Then \(\ell_{+}(c_{+},x_{1})=\ell_{+}(c_{+},x)-1\) and we obtain \(\omega(x_{1})=\omega(y_{1})\). Using property \((ii)\) of the system \((\varphi_{\overline{x}})_{x\in\mathcal{C}_{+}}\) and the induction assumption we obtain \(\varphi_{\overline{x}}(z)=\varphi_{\overline{x}_{1}}(z)=\varphi_{\overline{y}_{1 }}(z)=\varphi_{\overline{y}}(z)\) for each \(z\in E_{1}(\omega(x))\). By the previous considerations we have that \(\varphi_{\overline{x}}\) and \(\varphi_{\overline{y}}\) agree on \(E_{2}(\omega(x))\cap\Sigma\) and therefore the claim follows from Lemma (4.4).
A consequence of Proposition (6.3) is the following corollary which will be needed in the next subsection.
**(6.4) Corollary**.: _Let \(\overline{x}:=(x,z),\overline{y}:=(y,z)\in\overline{\mathcal{C}}\), let \(\overline{x}^{\prime}\in\overline{\mathcal{C}^{\prime}}\) and let \(\psi:E_{2}(\overline{x})\to E_{2}(\overline{x}^{\prime})\) be an isometry. Let \(\overline{G}\) be an \(\omega\)-gallery joining \(\overline{x}\) and \(\overline{y}\) in \(\operatorname{Opp}(\Delta)\). Then \(\psi_{\overline{y},\overline{G}}\) and \(\psi\) coincide on \(E_{2}(z)\)._
Proof.: Since \(\omega(x)=z=\omega(y)\), the claim follows from Proposition (6.3) and the definition of \(\psi_{\overline{y},\overline{G}}\).
### Proof of the main theorem
**(6.1) Theorem**.: _Let \(\overline{c}:=(c_{+},c_{-})\in\overline{\mathcal{C}},\overline{c}^{\prime}:=(c_{+ }^{\prime},c_{-}^{\prime})\in\overline{\mathcal{C}}^{\prime}\). Then every isometry \(\varphi:E_{2}(c_{+})\cup\{c_{-}\}\to E_{2}(c_{+}^{\prime})\cup\{c_{-}^{\prime}\}\) extends to an isometry from \(\mathcal{C}_{+}\cup E_{2}(c_{-})\) onto \(\mathcal{C}_{+}^{\prime}\cup E_{2}(c_{-}^{\prime})\)._
Proof.: By Proposition (4.1) the isometry \(\varphi\) extends to an isometry from \(E_{2}(\overline{c})\) onto \(E_{2}(\overline{c}^{\prime})\). We choose an apartment \(\Sigma\subseteq\mathcal{C}_{-}\) containing \(c_{-}\) and set \(\pi:=\pi_{(c_{-},\Sigma)},\Pi:=\Pi_{(c_{-},\Sigma)}\). For \(x\in\mathcal{C}_{+}\) we put \(\overline{x}:=(x,\pi(x))\). By Lemma (5.2) and Proposition (4.2) we obtain a mapping \(\overline{\varphi}:\Pi\to\overline{\mathcal{C}}^{\prime}\) and a system of isometries \((\varphi_{\overline{x}}:E_{2}(\overline{x})\to E_{2}(\overline{\varphi}( \overline{x})))_{x\in\mathcal{C}_{+}}\) with the properties \((i)\) and \((ii)\) of the previous subsection. We define the mapping \(\varphi_{+}:\mathcal{C}_{+}\to\mathcal{C}_{+}^{\prime},x\mapsto\varphi_{ \overline{x}}(x)\) and we denote the restriction of \(\varphi_{\overline{x}}\) on \(E_{2}(x)\) by \(\varphi_{x}\). Then \(\varphi_{+}\) is an isometry from \(\mathcal{C}_{+}\) onto \(\mathcal{C}_{+}^{\prime}\) and \(\varphi_{+},\varphi_{x}\) agree on \(E_{2}(x)\) for each \(x\in\mathcal{C}_{+}\) by Lemma (6.1).
Let \(x,y\in c_{-}^{op}\). By Lemma (3.2) there exist \(k\in\mathbb{N}\), a sequence \(x_{0}:=x,\ldots,x_{k}:=y\) of chambers in \(c_{-}^{op}\) and a sequence \(z_{1},\ldots,z_{k}\) of chambers in \(\mathcal{C}_{+}\) such that \(\delta_{*}(c_{-},z_{\lambda})=\delta_{+}(x_{\lambda-1},z_{\lambda})=\delta_{+}(x_{\lambda},z_{\lambda})\) for each \(1\leq\lambda\leq k\). By Lemma (5.4) there exists for any \(1\leq\lambda\leq k\) an \(\omega\)-gallery joining \((x_{\lambda-1},c_{-})\) and \((x_{\lambda},c_{-})\) in \(\Pi\cap\Omega_{(z_{\lambda},\pi(z_{\lambda}))}\). Now we obtain that \(\varphi_{\overline{x}},\varphi_{\overline{y}}\) agree on \(E_{2}(c_{-})\) by Corollary (6.4). We let \(\varphi_{-}:E_{2}(c_{-})\to E_{2}(c_{-}^{\prime}),z\mapsto\varphi_{\overline{x}}(z)\) denote this common restriction for some \(x\in c_{-}^{op}\), and for \(z\in E_{2}(c_{-})\) we put \(z^{\prime}:=\varphi_{-}(z)\).
We now want to show that \(\varphi_{+}(z^{op})\subseteq(z^{\prime})^{op}\) for each \(z\in E_{2}(c_{-})\). Let \(v\in z^{op}\); then there exists \(x\in c_{-}^{op}\) such that \(v\in E_{2}(x)\) by Lemma (3.1). Since \(\varphi_{+}(v)=\varphi_{x}(v)\), \(\varphi_{x}=\varphi_{\overline{x}}|_{E_{2}(x)}\), and since \(\varphi_{\overline{x}}\) is an isometry from \(E_{2}(\overline{x})\) onto \(E_{2}(\overline{\varphi}(\overline{x}))\) whose restriction on \(E_{2}(c_{-})\) is \(\varphi_{-}\), it follows that \(\varphi_{+}(v)\in(z^{\prime})^{op}\).
By Lemma (4.6) the pair \((z,z^{\prime})=(z,\varphi_{-}(z))\) is \(\varphi_{+}\)-admissible. Applying Lemma (4.1) to the isometries \(\varphi_{+}\) and \(\varphi_{-}\) we obtain an isometry \(\varphi_{+}\cup\varphi_{-}\) as required.
|
2309.03328 | Heat Current Properties of a Rotor Chain Type Model with
Next-Nearest-Neighbor Interactions | In this article, to study the heat flow behavior, we perform analytical
investigations in a rotor chain type model (involving inner stochastic noises)
with next and next-nearest-neighbor interactions. It is known in the literature
that the chain rotor model with long range interactions presents an insulating
phase for the heat conductivity. But we show, in contrast with such a behavior,
that the addition of a next-nearest-neighbor potential increases the thermal
conductivity, at least in the low temperature regime, indicating that the
insulating property is a genuine long range interaction effect. We still
establish, now by numerical computations, the existence of a thermal
rectification in systems with graded structures. | Humberto C. F. Lemos, Emmanuel Pereira | 2023-09-06T19:18:18Z | http://arxiv.org/abs/2309.03328v1 | # Heat Current Properties of a Rotor Chain Type Model with Next-Nearest-Neighbor Interactions
###### Abstract
In this article, to study the heat flow behavior, we perform analytical investigations in a rotor chain type model (involving inner stochastic noises) with next and next-nearest-neighbor interactions. It is known in the literature that the chain rotor model with long range interactions presents an insulating phase for the heat conductivity. But we show, in contrast with such a behavior, that the addition of a next-nearest-neighbor potential increases the thermal conductivity, at least in the low temperature regime, indicating that the insulating property is a genuine long range interaction effect. We still establish, now by numerical computations, the existence of a thermal rectification in systems with graded structures.
## I Introduction
A central question in nonequilibrium statistical physics is the derivation of the macroscopic currents and their properties from the underlying microscopic models. As an example, one challenging problem that drew much attention a few decades ago was the onset of Fourier law from first principles. Fourier law states that the heat current is proportional to the gradient of temperature, i.e., to the difference of the temperatures at the ends of the system divided by its length. In a seminal work, Rieder, Lebowitz, and Lieb [1] found an anomalous heat conductivity for a chain of harmonic oscillators driven by Hamiltonian equations of motion and submitted to different temperatures at the boundaries of the chain: the heat conductivity grows linearly with the system size; in other words, Fourier law does not hold, and the heat current is proportional to the temperature difference only. In Ref.[2], Bolsterli, Rich, and Visscher found a normal heat conductivity (Fourier law holds) for the harmonic chain when it is under the influence of thermal reservoirs all along the chain. The temperature for the boundaries of the chain can be freely chosen, but for the inner sites, the temperatures are determined by the self-consistency condition (SC), which means that there is no net heat flow between an inner site and its linked reservoir in the steady state. With this setup, the authors showed that Fourier law holds for this model. A few decades later, the same chain of harmonic oscillators was revisited [3], and the question was revived. The main change is that the authors studied a \(d\)-dimensional system of oscillators, with \(d\geq 1\). Again, all the sites of the chain are under the influence of their own thermal reservoirs under SC, but now the heat baths are modeled by white noises, and so the microscopic dynamics is given by a large number of coupled stochastic ordinary differential equations. This paper triggered an avalanche of works on the subject, many of them numerical, trying to understand the necessary and/or sufficient conditions for the onset of the Fourier law. As an example, among many other microscopic models studied since then, in Refs.[4; 5] the authors numerically studied the rotor model with nearest-neighbor (NN) nonlinear bounded interaction, finding that Fourier law holds for this one-dimensional anharmonic chain with conserved momentum, which was thought to be forbidden [6]. One of us has analytically studied a type of rotor model [7], and found a sort of "phase transition": Fourier law holds only in the high-temperature regime.
Although this approach was not able to close the question of the onset of Fourier law, the intensive study of the heat flow in one-dimensional chains led to a deeper understanding of the subject, which allowed as a byproduct the theoretical proposal of a thermal diode [8]: a device which conducts heat preferably in one direction, and presents a new phenomenon called thermal rectification. Again we saw a boom of works on this subject, the majority of them studied numerically, and many of them built by coupling two different chains in different regimes of heat conduction, no matter if they present normal (Fourier law) or ballistic thermal conductivity. Trying to elucidate the conditions for the onset of thermal rectification, first, it is straightforward that the system must be inhomogeneous, but that is not sufficient: in Ref.[9] we proved the absence of thermal rectification in classical Hamiltonian harmonic chains, for any
distribution for the masses along the chain, so some kind of anharmonicity is a necessary condition. In Ref.[10], one of us established sufficient conditions for thermal rectification in general graded materials.
Recently, the rotor model was revisited in Ref.[11]: the authors studied the rotor model with long-range (LR) attractive couplings, and they found that Fourier law holds only for sufficiently short-range interactions. In the LR regime, they found that an insulator behavior emerges, a very interesting and counter-intuitive effect. Motivated by this result, in the present paper we investigate a type of one-dimensional rotor model, but now we go beyond the NN interaction between the particles of the chain - actually, we set up our model with a general range for the interparticle interaction potential, and we recall our analytical approach to evaluate the heat flux in section II. Using tools from stochastic calculus [12], we construct an integral formalism to evaluate the heat flow given any temperatures at the boundaries of the chain. Later, for technical reasons, we consider only a low-temperature regime for our perturbative analysis. It is worth recalling that a similar perturbative approach was proven to be rigorous in Ref.[13]. In section III, we use this recently built integral formalism to evaluate the heat flow for some cases. We start by recapping previously known results, to check the correctness of our approach. Then we turn our attention to our model: we analytically study the linearly graded mass chain with next-nearest-neighbor (NNN) interparticle interaction. That is, we avoid the huge difficulty of the analytical investigation of the rotor chain with LR interactions, but take one step in such a direction by considering a NNN potential. It is worth recalling that the investigation of the heat flow in a model with NNN interactions is interesting by itself, see, e.g., Ref.[14]. The NN interaction coupling is always positive, while the NNN interaction coupling can be either positive or negative. Loosely speaking, it is as if we always have an attractive NN interaction between the particles, while the NNN interaction can be either attractive or repulsive. One of our goals is to find out if this model presents thermal rectification, but we also aim to investigate if a repulsive-like NNN interaction would hinder the heat flow, inspired by Ref.[11]: as we said before, they found an insulator behavior for LR attractive couplings, and this result deserves further investigation. Our analytical results show that such an NNN interaction, no matter if it is attractive or repulsive, only increases the heat flow, so the insulator regime of the rotor must be a genuine LR effect, at least in the low-temperature regime. Further, we implement numerical calculations to evaluate the heat flux for our NNN-interaction model, and we show that, for a graded mass chain, our system presents thermal rectification.
The rest of this paper is organized as follows. In section II we present the model and the approach used, and derive some analytical expressions for the heat flow. In section III we describe the main results. In section IV we give our concluding remarks, and the Appendix is devoted to some technical notes.
## II Model
Let us introduce our model. We consider a chain of \(N\) oscillators given by the Hamiltonian
\[\mathcal{H}=\sum_{j=1}^{N}\left[\frac{p_{j}^{2}}{2m_{j}}+U^{(1)}(q_{j})+\frac {1}{2}\sum_{\begin{subarray}{c}1\leq l\leq N;\\ l\neq j\end{subarray}}U^{(2)}(q_{j}-q_{l})\right], \tag{1}\]
where \(q_{j}\) and \(p_{j}\) give us, respectively, the position and momentum of the \(j\)-th particle of the chain, \(m_{j}\) is the particle mass, and each particle is pinned to its equilibrium position \(q_{j}=0\) by a harmonic interaction \(U^{(1)}(q_{j})=M_{j}q_{j}^{2}/2\), henceforth named the on-site potential. The particles interact with each other by a bounded anharmonic interparticle potential
\[U^{(2)}(q_{j}-q_{l})=\lambda_{j,l}[1-\cos(\kappa(q_{j}-q_{l}))], \tag{2}\]
where \(\lambda_{j,l}\) is the coupling strength, and \(\kappa\) is a parameter usually taken as 1 in the other studies of the rotor model. In other words, we study heat flux on a version of a well-known rotor model [7]. Definition (2) above is quite general, but in this work, we take only symmetric interaction coupling \(\lambda_{j,l}=\lambda_{l,j}\). It is worth noticing that the Hamiltonian (1) poses no restriction on the range of the interparticle interaction, and we can both study nearest-neighbor (NN) or long-range (LR) models, among others. The dynamics is given by Hamilton equations of motion coupled to stochastic white noises which mimic the contact of the system with thermal reservoirs (at least for the noise at the boundaries, details ahead), namely
\[dq_{j}=\frac{\partial\mathcal{H}}{\partial p_{j}}\,dt=\frac{p_{j}}{m_{j}}\,dt, \tag{3a}\] \[dp_{j}=-\frac{\partial\mathcal{H}}{\partial q_{j}}\,dt-\zeta_{j}p_{j}dt+\gamma_{j}^{1/2}dB_{j}=-M_{j}q_{j}dt-\sum_{l\neq j}U^{\prime(2)}_{j,l}\,dt-\zeta_{j}p_{j}dt+\gamma_{j}^{1/2}dB_{j}, \tag{3b}\]
where prime denotes the derivative with respect to \(q_{j}\), viz.
\[U^{\prime(2)}(q_{j}-q_{l})=\lambda_{j,l}\,\kappa\sin(\kappa(q_{j}-q_{l}))=U^{ \prime(2)}_{j,l}, \tag{4}\]
where the last equality above is just a definition of the shortcut notation \(U^{\prime(2)}_{j,l}\). In Eq. (3b), each \(dB_{j}\) is a zero-mean independent Wiener process, i.e.
\[\langle dB_{j}(t)\rangle=0,\quad\langle dB_{j}(t)dB_{j^{\prime}}(t^{\prime}) \rangle=\delta_{j,j^{\prime}}\delta(t-t^{\prime})dt, \tag{5}\]
for any given sites \(j,j^{\prime}\) of the chain and times \(t,t^{\prime}>0\). We also have \(\gamma_{j}=2m_{j}\zeta_{j}T_{j}\), where \(\zeta_{j}\) is heat bath coupling constant for \(j\)-th site, and \(T_{j}\) is the temperature of the \(j\)-th heat bath.
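To make the setup concrete, the following is a minimal Euler-Maruyama sketch of the dynamics (3) for a chain with NN and NNN couplings. All numerical values (chain size, couplings, time step, temperature profile) are illustrative assumptions of ours, not values from this work, and the inner temperatures would still have to be tuned by the self-consistency condition discussed below.

```python
# Minimal Euler-Maruyama integration of Eqs. (3a)-(3b); every parameter value
# here is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

N, dt, steps = 16, 1e-3, 100_000
kappa = 1.0
m = np.ones(N)                      # masses m_j
M = np.ones(N)                      # on-site couplings M_j
zeta = np.ones(N)                   # bath couplings zeta_j
T = np.linspace(0.2, 0.1, N)        # bath temperatures T_j (inner ones: self-consistency)

# symmetric coupling matrix lambda_{j,l}: NN strength 0.1, NNN strength 0.05
lam = np.zeros((N, N))
for j in range(N - 1):
    lam[j, j + 1] = lam[j + 1, j] = 0.1
for j in range(N - 2):
    lam[j, j + 2] = lam[j + 2, j] = 0.05

gamma = 2.0 * m * zeta * T          # noise strengths gamma_j = 2 m_j zeta_j T_j
q, p = np.zeros(N), np.zeros(N)

for _ in range(steps):
    # interparticle force on site j: -sum_{l != j} lambda_{j,l} kappa sin(kappa (q_j - q_l))
    dq_rel = q[:, None] - q[None, :]
    F2 = -(lam * kappa * np.sin(kappa * dq_rel)).sum(axis=1)
    dB = rng.normal(0.0, np.sqrt(dt), N)          # Wiener increments
    q_new = q + (p / m) * dt
    p_new = p + (-M * q + F2 - zeta * p) * dt + np.sqrt(gamma) * dB
    q, p = q_new, p_new
```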
From now on, for the sake of understanding, we recall the main steps of our approach. Further details can be found in previous works [7; 9; 15]. Symmetrically defining the energy \(\mathcal{H}_{j}\) for the \(j\)-th particle as \(\mathcal{H}=\sum_{j}\mathcal{H}_{j}\), we get
\[\mathcal{H}_{j}=\frac{p_{j}^{2}}{2m_{j}}+\frac{1}{2}\,M_{j}q_{j}^{2}+\frac{1}{ 2}\sum_{l\neq j}U^{(2)}(q_{j}-q_{l}). \tag{6}\]
Using mathematical tools from Ito stochastic calculus [12], we can obtain
\[\left\langle\frac{d\mathcal{H}_{j}}{dt}\right\rangle=\langle\mathcal{F}_{ \to j}\rangle-\langle\mathcal{F}_{j\rightarrow}\rangle+\langle R_{j}\rangle\,, \tag{7}\]
where \(\langle\cdot\rangle\) denotes expectation with respect to white noise distribution, and
\[R_{j} =\zeta_{j}\left(T_{j}-\frac{p_{j}^{2}}{m_{j}}\right)\,, \tag{8a}\] \[\mathcal{F}_{\to j} =\frac{1}{2}\sum_{l<j}U^{\prime(2)}(q_{l}-q_{j})\left(\frac{p_{j} }{m_{j}}+\frac{p_{l}}{m_{l}}\right)\,,\] (8b) \[\mathcal{F}_{j\rightarrow} =\frac{1}{2}\sum_{l>j}U^{\prime(2)}(q_{j}-q_{l})\left(\frac{p_{j} }{m_{j}}+\frac{p_{l}}{m_{l}}\right)\,. \tag{8c}\]
In detail, \(R_{j}\) tells us about the average energy exchange between the \(j\)-th site and its thermal reservoir, while \(\mathcal{F}_{\to j}\) (\(\mathcal{F}_{j\rightarrow}\)) gives us the energy flux from (to) the \(l\)-th sites to (from) the \(j\)-th site; in other words, the heat flux inside the chain.
We aim to study the heat flux in the nonequilibrium stationary state (NESS), so we take \(T_{1}\neq T_{N}\) for the temperatures at the boundaries of the chain. For the inner sites, \(T_{j}\) will be given by the self-consistency condition, which means that in the NESS there will be, on average, no energy exchange between the \(j\)-th site of the chain and its bath, i.e., \(\langle R_{j}\rangle=0\). In other words, the inner stochastic reservoirs are not real thermal baths; they only represent some phonon scattering process given by interactions not directly present in the Hamiltonian. Since the NESS is characterized by a stationary energy flux, we have
\[\left\langle\frac{d\mathcal{H}_{j}}{dt}\right\rangle=0, \tag{9}\]
and therefore \(\langle\mathcal{F}_{\to j}\rangle=\langle\mathcal{F}_{j\rightarrow}\rangle\), for any \(2\leq j\leq N-1\). In other words, if, for example, we have \(T_{1}>T_{N}\), the thermal reservoir connected to the left site injects energy into the chain, the energy flows through it, and it leaves at the right boundary. Hence, to know the heat flux in the NESS, we must evaluate \(\langle\mathcal{F}_{\to j}\rangle\) or \(\langle\mathcal{F}_{j\rightarrow}\rangle\) for any inner site \(j\).
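Continuing the illustrative sketch above, both NESS quantities can be estimated as long-time averages along the trajectory: the flux \(\mathcal{F}_{j\rightarrow}\) of Eq. (8c), and the bath residual \(R_{j}\) of Eq. (8a), whose average must vanish at the inner sites once the \(T_{j}\) are tuned self-consistently. The snippet reuses the variables defined in the previous sketch.

```python
# Instantaneous estimators for Eqs. (8a) and (8c); reuses N, lam, kappa, m,
# zeta, T, q, p from the simulation sketch above.
def flux_out(j, q, p):
    """F_{j->}: energy flowing from site j towards the sites l > j, Eq. (8c)."""
    f = 0.0
    for l in range(j + 1, N):
        f += 0.5 * lam[j, l] * kappa * np.sin(kappa * (q[j] - q[l])) \
             * (p[j] / m[j] + p[l] / m[l])
    return f

def bath_residual(j, p):
    """R_j = zeta_j (T_j - p_j^2 / m_j), Eq. (8a); <R_j> = 0 at inner sites in the NESS."""
    return zeta[j] * (T[j] - p[j] ** 2 / m[j])
```

In practice one would accumulate both quantities along the trajectory, average, and iterate on the inner \(T_{j}\) until \(\langle R_{j}\rangle\) is numerically zero.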
Aiming to solve the stochastic ODEs (3), we now define the phase space vector \(\varphi=(q,p)^{\dagger}\in\mathbb{R}^{2N}\), i.e., \(\varphi_{j}=q_{j}\) and \(\varphi_{j+N}=p_{j}\), for any \(1\leq j\leq N\). We rewrite the dynamics (3) as
\[d\varphi=-A\varphi dt-\mathcal{U}^{\prime}(\varphi)dt+\sigma dB, \tag{10}\]
where \(A\) and \(\sigma\) are \(2N\times 2N\) matrices respectively given by
\[A=\begin{pmatrix}0&-m^{-1}\\ M&\zeta\end{pmatrix},\quad\sigma=\begin{pmatrix}0&0\\ 0&\sqrt{2m\zeta T}\end{pmatrix}. \tag{11}\]
In the equation above, both matrices are written in four \(N\times N\) blocks and, despite the redundant notation, \(m\) denotes the diagonal matrix of the masses, \(m_{j,l}=m_{j}\delta_{j,l}\); the same holds for the \(N\times N\) diagonal matrices \(M\), \(\zeta\) and \(T\). The nonlinear term \(\mathcal{U}^{\prime}\) in Eq. (10) collects the derivatives of \(U^{(2)}\) with respect to \(q\) - note that \(\mathcal{U}^{\prime}\) is nonzero only for indices \(j>N\). Also, again using a redundant notation, \(dB\) is a \(2N\)-vector whose components are \(dB_{j}=0\), and \(dB_{j+N}\) is the white noise acting on the \(j\)-th site of the chain - see Eq. (3b) - for any \(1\leq j\leq N\). To obtain the heat flux in the NESS, we fix any site \(\alpha\) in the bulk of the chain and evaluate \(\left\langle\mathcal{F}_{\alpha\rightarrow}\right\rangle\) given by Eq. (8c), which will be defined below as
\[\left\langle\Omega(\varphi)\right\rangle=\lim_{t\rightarrow\infty}\left\langle\mathcal{F}_{\alpha\rightarrow}(\varphi(t))\right\rangle=\lim_{t\rightarrow\infty}\frac{1}{2}\sum_{\beta>\alpha}\lambda_{\alpha,\beta}\,\kappa\left\langle\sin\left(\kappa\big{(}\varphi_{\alpha}(t)-\varphi_{\beta}(t)\big{)}\right)\left(\frac{\varphi_{\alpha+N}(t)}{m_{\alpha}}+\frac{\varphi_{\beta+N}(t)}{m_{\beta}}\right)\right\rangle, \tag{12}\]
where we have used \(U^{\prime(2)}\) given by Eq.(4). We emphasize that the average of \(\Omega(\varphi)\) defined above gives us the heat flux on NESS, and our main goal is to evaluate it. But as we can see from Eq.(3), the equations of motion for this system are a set of \(2N\) first-order coupled nonlinear stochastic ODEs, and to find a solution for such a set of equations is a really hard, if not impossible, task. We then proceed as follows: first, we find the solution for a simplified process denoted as \(\phi\), which is related to the complete one, named \(\varphi\). This easier problem is obtained by taking interparticle coupling as identically zero, i.e. \(\lambda_{j,l}=0\). So now we have \(2N\) linear decoupled stochastic ODEs, written as
\[d\phi=-A\phi\,dt+\sigma dB. \tag{13}\]
The solution of Eq. (13) is the well-known Ornstein-Uhlenbeck process
\[\phi(t)=e^{-tA}\phi(0)+\int_{0}^{t}e^{-(t-s)A}\sigma dB(s). \tag{14}\]
Defining \(\left\langle\cdot\right\rangle_{0}\) as the average over noise realizations for the simplified process (13), we have
\[\left\langle\phi(t)\right\rangle_{0}=e^{-tA}\left\langle\phi(0)\right\rangle_{ 0},\]
where we have used an important property from Ito stochastic calculus that guarantees that
\[\left\langle\int_{S}^{T}\psi(s)dB(s)\right\rangle_{0}=0,\]
for a class of well-behaved functions \(\psi\); details in Ref.[12]. Since \(A\) is a stable matrix [16], we have \(e^{-tA}\phi(0)\to 0\) as \(t\rightarrow+\infty\), for any given initial condition, so without loss of generality we take \(\phi(0)=0\). Then \(\phi\) is a zero-mean Gaussian process, whose covariance is
\[\left\langle\phi(t)\phi^{\dagger}(t^{\prime})\right\rangle_{0}=\mathcal{C}(t,t ^{\prime}), \tag{15}\]
where
\[\mathcal{C}(t,t^{\prime})=\begin{cases}e^{-(t-t^{\prime})A}\mathcal{C}(t^{ \prime},t^{\prime})&,\text{ if }t\geq t^{\prime}\\ \mathcal{C}(t,t)e^{-(t^{\prime}-t)A^{\dagger}}&,\text{ if }t\leq t^{\prime}, \end{cases} \tag{16}\]
with
\[\mathcal{C}(t,t)=\int_{0}^{t}ds\,e^{-sA}\sigma^{2}e^{-sA^{\dagger}}. \tag{17}\]
From a straightforward computation, it follows that, for a single site \(j\)
\[e^{-tA_{(j)}}=e^{-\frac{\zeta_{j}}{2}t}\left(\cosh(\rho_{j}t)I_{2}+\frac{ \sinh(\rho_{j}t)}{\rho_{j}}\,B_{(j)}\right), \tag{18}\]
where \(\rho_{j}=[(\zeta_{j}/2)^{2}-M_{j}/m_{j}]^{1/2}\) and \(A_{(j)}\) is the \(2\times 2\) matrix related to \(A\) for a single site \(j\), \(I_{2}\) is the identity matrix and
\[B_{(j)}=\begin{pmatrix}\frac{\zeta_{j}}{2}&m_{j}^{-1}\\ -M_{j}&-\frac{\zeta_{j}}{2}\end{pmatrix}.\]
Evaluating Eq. (17) for \(t\rightarrow+\infty\), we get the NESS covariance for the isolated process \(\phi\)
\[C=\int_{0}^{\infty}ds\,e^{-sA}\sigma^{2}e^{-sA^{\dagger}}=\begin{pmatrix}M^{-1}T& 0\\ 0&mT\end{pmatrix}, \tag{19}\]
and we can see that the covariance \(C\) is a diagonal matrix for the simplified process \(\phi\). As a final remark for the covariance, if \(t\) and \(t^{\prime}\) are sufficiently large, we can approximate Eq. (16) as
\[\mathcal{C}(t,t^{\prime})=\begin{cases}e^{-(t-t^{\prime})A}C+\mathcal{O} \left(e^{-(t+t^{\prime})\zeta}\right)&,\text{ if }t\geq t^{\prime}\\ Ce^{-(t^{\prime}-t)A^{\dagger}}+\mathcal{O}\left(e^{-(t+t^{\prime})\zeta} \right)&,\text{ if }t\leq t^{\prime}.\end{cases} \tag{20}\]
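Equation (19) can also be verified numerically: differentiating Eq. (17) shows that the NESS covariance of the uncoupled process (13) solves the Lyapunov equation \(AC+CA^{\dagger}=\sigma^{2}\). The sketch below, with arbitrary illustrative parameter values, checks the block-diagonal form \(C=\mathrm{diag}(M^{-1}T,\,mT)\).

```python
# Numerical check of Eq. (19): the NESS covariance of the process (13) solves
# A C + C A^T = sigma^2, with A and sigma^2 built as in Eq. (11).
# Parameter values are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n = 4
m_d = np.diag([1.0, 1.2, 1.4, 1.6])       # masses
M_d = np.eye(n)                           # on-site couplings
z_d = 0.5 * np.eye(n)                     # bath couplings
T_d = np.diag([0.20, 0.17, 0.13, 0.10])   # temperatures

Z = np.zeros((n, n))
A = np.block([[Z, -np.linalg.inv(m_d)], [M_d, z_d]])
sigma2 = np.block([[Z, Z], [Z, 2.0 * m_d @ z_d @ T_d]])

C = solve_continuous_lyapunov(A, sigma2)   # solves A C + C A^T = sigma^2
C_expected = np.block([[np.linalg.inv(M_d) @ T_d, Z], [Z, m_d @ T_d]])
assert np.allclose(C, C_expected)
```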
To recover the effects of the anharmonic interparticle potential \(U^{(2)}\) on the system, we use the Girsanov theorem [12], which says that, to evaluate the average of any quantity \(f\) that depends on the complete process \(\varphi\), we can compute the average of the same quantity \(f\) depending on the simplified process \(\phi\), corrected by a factor \(Z(t)\)
\[\left\langle f(\varphi(t))\right\rangle=\left\langle f(\phi(t))Z(t)\right\rangle _{0},\]
which is given by
\[Z(t)=\exp\left(\int_{0}^{t}u\cdot dB(s)-\frac{1}{2}\int_{0}^{t}\|u\|^{2}ds \right)\,, \tag{21}\]
where \(u\in\mathbb{R}^{2N}\) is related to the difference between complete and simplified processes. Namely, for any index \(1\leq j\leq N\), we have
\[u_{j} = 0 \tag{22}\] \[\gamma_{j}^{1/2}u_{j+N} = \sum_{l\neq j}U_{j,l}^{\prime(2)}=\sum_{l\neq j}\lambda_{j,l} \kappa\sin(\kappa(\phi_{j}-\phi_{l})).\]
After some tedious but straightforward calculations, we find
\[Z(t)=\exp\left[-\Delta F(\phi(t))-\int_{0}^{t}W(\phi(s))\,ds\right]\,, \tag{23}\]
where \(\Delta F(\phi(t))=F(\phi(t))-F(\phi(0))\), with
\[F(\phi(t))=\sum_{j}\frac{1}{2\zeta_{j}m_{j}T_{j}}\left(\sum_{l\neq j}\lambda_{j,l}\kappa\sin\left(\kappa\big{(}\phi_{j}(t)-\phi_{l}(t)\big{)}\right)\right)\phi_{j+N}(t), \tag{24}\]
and \(W(\phi(s))=W_{1}(\phi(s))+W_{2}(\phi(s))+W_{3}(\phi(s))+W_{4}(\phi(s))\), with
\[W_{1}(\phi(s)) =\sum_{j}\sum_{l\neq j}\frac{\lambda_{j,l}\kappa M_{j}\phi_{j}(s) }{2\zeta_{j}m_{j}T_{j}}\,\sin\left(\kappa\big{(}\phi_{j}(s)-\phi_{l}(s)\big{)}\right) \tag{25a}\] \[W_{2}(\phi(s)) =\sum_{j}\sum_{l\neq j}\frac{\lambda_{j,l}\kappa\zeta_{j}\phi_{j +N}(s)}{2\zeta_{j}m_{j}T_{j}}\,\sin\left(\kappa\big{(}\phi_{j}(s)-\phi_{l}(s) \big{)}\right),\] (25b) \[W_{3}(\phi(s)) =-\sum_{j}\sum_{l\neq j}\frac{\lambda_{j,l}\kappa^{2}\phi_{j+N}( s)}{2\zeta_{j}m_{j}T_{j}}\,\cos\left(\kappa\big{(}\phi_{j}(s)-\phi_{l}(s) \big{)}\right)\,\,\left(\frac{\phi_{j+N}(s)}{m_{j}}-\frac{\phi_{l+N}(s)}{m_{l} }\right),\] (25c) \[W_{4}(\phi(s)) =\sum_{j}\sum_{l,l^{\prime}\neq j}\frac{\lambda_{j,l}\lambda_{j,l^ {\prime}}\kappa^{2}}{4\zeta_{j}m_{j}T_{j}}\,\sin\left(\kappa\big{(}\phi_{j}(s) -\phi_{l}(s)\big{)}\right)\sin\left(\kappa\big{(}\phi_{j}(s)-\phi_{l^{\prime}}( s)\big{)}\right). \tag{25d}\]
We now develop a perturbative approach for our calculations, taking the nonlinear coupling \(\lambda\) as a small perturbative parameter. We note from Eq. (25d) that \(W_{4}\) depends on \(\lambda^{2}\), and so this term will be dropped in a first-order expansion. A first-order expansion in \(\lambda\) gives us
\[\left\langle\Omega(\varphi)\right\rangle=\frac{\left\langle\Omega(\phi)e^{-\Delta F-\int W\,ds}\right\rangle_{0}}{\left\langle e^{-\Delta F-\int W\,ds}\right\rangle_{0}}=\left\langle\Omega(\phi)\right\rangle_{0}-\left\langle\Omega(\phi);\Delta F\right\rangle_{0}-\left\langle\Omega(\phi);\int W\,ds\right\rangle_{0}+\mathcal{O}\left(\lambda^{3}\right), \tag{26}\]
where the semicolon means truncated expectation value given by
\[\left\langle f;g\right\rangle=\left\langle fg\right\rangle-\left\langle f\right\rangle \left\langle g\right\rangle.\]
We recall our definition (12) and emphasize that all averages above must be taken in the limit \(t\rightarrow\infty\). It may be confusing to see in Eq. (26) an expression up to order \(\mathcal{O}\left(\lambda^{3}\right)\): although we have taken only first-order terms in our perturbative parameter, we already have a \(\lambda\) in the definition of \(\Omega\), as one can see in Eq. (12). This will become clear after we evaluate the first term of the expression - see, e.g., Eq. (29).
To obtain the heat flux we must now evaluate each term in Eq. (26). It is easy to see that \(\left\langle\Omega(\phi)\right\rangle_{0}=0\). Indeed, Eq. (12) shows that it depends on \(C_{k,k^{\prime}+N}=0\), as we can see in Eq. (19). For a similar reason we get \(\left\langle\Omega(\phi)F(\phi(0))\right\rangle_{0}=0\). As an example of a non-vanishing average, we show the main steps of the evaluation of
\[\left\langle\Omega;F_{t}\right\rangle_{0} := \lim_{t\rightarrow+\infty}\left\langle\Omega(\phi(t));F(\phi(t))\right\rangle_{0}=\lim_{t\rightarrow+\infty}\sum_{\beta>\alpha;j;l\neq j}\frac{\lambda_{\alpha,\beta}\lambda_{j,l}\,\kappa}{4\zeta_{j}m_{j}T_{j}}\times \tag{27}\] \[\times\left\langle\sin\left(\kappa\big{(}\phi_{\alpha}(t)-\phi_{\beta}(t)\big{)}\right)\left(\frac{\phi_{\alpha+N}(t)}{m_{\alpha}}+\frac{\phi_{\beta+N}(t)}{m_{\beta}}\right);\sin\left(\kappa\big{(}\phi_{j}(t)-\phi_{l}(t)\big{)}\right)\!\phi_{j+N}(t)\right\rangle_{0}.\]
To deal with such expressions, we write the sine functions as complex exponentials, i.e., \(\sin(\kappa(\phi_{\alpha}-\phi_{\beta}))=(e^{+i\kappa(\phi_{\alpha}-\phi_{\beta})}-e^{-i\kappa(\phi_{\alpha}-\phi_{\beta})})/2i\). And since our average is over a Gaussian measure, we use the following approach to evaluate such quantities. We note that
\[\left\langle\cdot\right\rangle_{0}=\mathcal{N}^{-1}\int\cdot\,e^{-\frac{1}{2}(\phi,\mathcal{C}^{-1}\phi)}\,d\phi=\mathcal{N}^{-1}\int\cdot\,e^{-\frac{1}{2}(\phi,\mathcal{C}^{-1}\phi)}e^{i\kappa(h,\phi)}\,d\phi\,\bigg{|}_{h=0}=G(h)\bigg{|}_{h=0},\]
where \(\mathcal{N}\) is a normalization factor, and \((\phi,\mathcal{C}^{-1}\phi)\) is the canonical inner product on \(\mathbb{R}^{2N}\). In the last equation, we have defined an auxiliary function \(G(h)\), where \(h\in\mathbb{R}^{2N}\) is an arbitrary vector which, for the quantity above, is taken as zero after we evaluate the integral. This procedure can also help us to evaluate other quantities, for example
\[\left\langle\phi_{j+N}(t)e^{+i\phi_{\alpha}(t)}\right\rangle_{0}=\frac{1}{i \kappa}\frac{\partial}{\partial h_{j+N}}G(h)\bigg{|}_{h_{\alpha}=1}=\mathcal{ C}_{\alpha,j+N}(t,t)\,e^{-\frac{1}{2}\mathcal{C}_{\alpha,\alpha}(t,t)}, \tag{28}\]
where \(h_{\alpha}=1\) is taken after evaluating the derivative, to keep a remaining \(\phi_{\alpha}\) in the imaginary exponential; all other components of the vector \(h\) are taken as zero. By properly choosing the derivatives and the non-zero components, we can show that
\[\left\langle\Omega;F_{t}\right\rangle_{0} = \sum_{\beta>\alpha}\sum_{l\neq\alpha}\frac{\lambda_{\alpha,\beta} \lambda_{\alpha,l}}{8\zeta_{\alpha}m_{\alpha}\kappa}e^{-\frac{1}{2}\left(C_{ \beta,\beta}+C_{l,l}\right)}\left(e^{-\left(C_{\beta,l}+2C_{\alpha,\alpha} \right)}-e^{+C_{\beta,l}}\right)+ \tag{29}\] \[+\sum_{\beta>\alpha}\sum_{l\neq\beta}\frac{\lambda_{\alpha,\beta} \lambda_{\beta,l}}{8\zeta_{\beta}m_{\beta}\kappa}e^{-\frac{1}{2}\left(C_{ \alpha,\alpha}+C_{l,l}\right)}\left(e^{+C_{\alpha,l}}-e^{-\left(C_{\alpha,l}+2 C_{\beta,\beta}\right)}\right).\]
Equation (29) can be evaluated for any regime of temperatures, but it does not tell us much in this form. From now on, we develop an approach for studying the heat flux in a low-temperature regime, i.e., when \(T_{j}\) is small for every site of the chain. Here, a small temperature means that \(T_{j}<1\); we give more details in Appendix A ahead. We can see from equations (15)-(19) that the covariance \(\mathcal{C}\) is proportional to the temperature, so from the leading term of the Taylor series for the exponentials in Eq. (29) we get
\[-\left\langle\Omega;F_{t}\right\rangle_{0}=\sum_{\beta>\alpha}\frac{\lambda_{ \alpha,\beta}^{2}}{4\kappa}\left[\left(\frac{T_{\alpha}}{M_{\alpha}}+\frac{T_{ \beta}}{M_{\beta}}\right)\left(\frac{1}{\zeta_{\beta}m_{\beta}}-\frac{1}{ \zeta_{\alpha}m_{\alpha}}\right)\right]+\sum_{\beta>\alpha}\sum_{l\neq\alpha, \beta}\frac{\lambda_{\alpha,\beta}}{4\kappa}\left[\frac{\lambda_{\beta,l}}{ \zeta_{\beta}m_{\beta}}\frac{T_{\beta}}{M_{\beta}}-\frac{\lambda_{\alpha,l}}{ \zeta_{\alpha}m_{\alpha}}\frac{T_{\alpha}}{M_{\alpha}}\right]. \tag{30}\]
A first glance at Eq. (30) may be deceptive and lead someone to believe that we have a first-order result in the covariance \(\mathcal{C}\), but a further look at Eq. (27) shows us that we had a \(T_{j}^{-1}\) from the start. So actually our leading term is of order \(\mathcal{O}\left(\mathcal{C}^{2}\right)\), and it remains the leading term as we use the same approach to handle the remaining terms. For example, for \(W_{1}\) given in (25a), we have
\[-\left\langle\Omega;W_{1}\right\rangle_{0} = -\lim_{t\rightarrow+\infty}\left\langle\Omega(\phi(t));\int_{0}^{t}W _{1}(\phi(s))\,ds\right\rangle_{0}=\] \[= -\frac{1}{2}\lim_{t\rightarrow+\infty}\sum_{\beta>\alpha}\sum_{j} \sum_{l\neq j}\frac{\lambda_{\alpha,\beta}\lambda_{j,l}\kappa M_{j}}{2\zeta_{j} m_{j}T_{j}}\times\] \[\times\int_{0}^{t}ds\left\langle\sin\Big{(}\phi_{\alpha}(t)-\phi_ {\beta}(t)\Big{)}\bigg{(}\frac{\phi_{\alpha+N}(t)}{m_{\alpha}}+\frac{\phi_{ \beta+N}(t)}{m_{\beta}}\bigg{)};\sin\Big{(}\phi_{j}(s)-\phi_{l}(s)\Big{)}\phi_ {j}(s)\right\rangle_{0}.\]
Calculations are extensive from now on. We again use the auxiliary function \(G(h)\) approach, as we did in (28), but now second-order derivatives in \(h\) appear. This raises many terms, but they are all of the form
\[\lim_{t\rightarrow+\infty}\sum_{\beta,j,l}\frac{\lambda_{\alpha,\beta} \lambda_{j,l}M_{j}}{8\zeta_{j}m_{\alpha}m_{j}\kappa T_{j}}\int_{0}^{t}ds\ \mathcal{C}_{\alpha+N,j}(t,s)e^{-\frac{1}{2}(h,\mathcal{C}h)}\bigg{|}_{h_{1}- h_{2}},\]
or like
\[\lim_{t\rightarrow+\infty}\sum_{\beta,j,l}\frac{\lambda_{\alpha,\beta} \lambda_{j,l}M_{j}}{8\zeta_{j}m_{\alpha}m_{j}\kappa T_{j}}\int_{0}^{t}ds\ \mathcal{C}_{\alpha+N,j}(t,s)\mathcal{C}_{\alpha,j}(t,s)e^{-\frac{1}{2}(h, \mathcal{C}h)}\bigg{|}_{h_{1}+h_{2}},\]
where \(h_{1}\) or \(h_{2}\) refer to the sign choices coming from the complex exponentials that define the sine functions. Namely, for \(h_{1}\) we take \(h_{\alpha}=+1\), \(h_{\beta}=-1\), \(h_{j}=+1\) and \(h_{l}=-1\), while for \(h_{2}\) we only change to \(h_{j}=-1\) and \(h_{l}=+1\). To deal with those integrals in \(ds\), we use the approximation presented in Eq. (20) and calculate them analytically. The calculations are tedious but straightforward, and after them we obtain
\[-\left\langle\Omega;W_{1}\right\rangle_{0} = \sum_{\beta>\alpha}\left(\frac{\lambda_{\alpha,\beta}^{2}}{4m_{ \alpha}M_{\alpha}\kappa}\,\frac{T_{\alpha}}{\zeta_{\alpha}}-\frac{\lambda_{ \alpha,\beta}\lambda_{\beta,\alpha}}{4m_{\beta}M_{\beta}\kappa}\,\frac{T_{ \beta}}{\zeta_{\beta}}\right)+\sum_{\beta>\alpha}\sum_{l\neq\alpha,\beta} \left(\frac{\lambda_{\alpha,\beta}\lambda_{\alpha,l}}{4m_{\alpha}M_{\alpha} \kappa}\,\frac{T_{\alpha}}{\zeta_{\alpha}}-\frac{\lambda_{\alpha,\beta} \lambda_{\beta,l}}{4m_{\beta}M_{\beta}\kappa}\,\frac{T_{\beta}}{\zeta_{\beta} }\right)+\] \[+ \sum_{\beta>\alpha}\frac{\lambda_{\alpha,\beta}}{4m_{\alpha}m_{ \beta}\kappa}\,\frac{\zeta_{\alpha}+\zeta_{\beta}}{D_{\alpha,\beta}}\left( \lambda_{\beta,\alpha}T_{\alpha}-\lambda_{\alpha,\beta}T_{\beta}\right)+\sum_{ \beta>\alpha}\frac{\lambda_{\alpha,\beta}}{4m_{\alpha}m_{\beta}\kappa D_{ \alpha,\beta}}\,\left(\frac{M_{\alpha}}{m_{\alpha}}-\frac{M_{\beta}}{m_{\beta }}\right)\left(\frac{\lambda_{\beta,\alpha}T_{\alpha}}{\zeta_{\beta}}+\frac{ \lambda_{\alpha,\beta}T_{\beta}}{\zeta_{\alpha}D_{\alpha,\beta}}\right)+\] \[+ \sum_{\beta>\alpha}\left(\frac{\lambda_{\alpha,\beta}^{2}M_{\alpha }}{4m_{\alpha}^{2}M_{\beta}\kappa}\,\frac{\zeta_{\beta}(\zeta_{\alpha}+\zeta_ {\beta})}{\zeta_{\alpha}D_{\alpha,\beta}}\,T_{\beta}-\frac{\lambda_{\alpha, \beta}\lambda_{\beta,\alpha}M_{\beta}}{4m_{\beta}^{2}M_{\alpha}\kappa}\, \frac{\zeta_{\alpha}(\zeta_{\alpha}+\zeta_{\beta})}{\zeta_{\beta}D_{\alpha, \beta}}\,T_{\alpha}\right)+\] \[+ \sum_{\beta>\alpha}\left(\frac{\lambda_{\alpha,\beta}^{2}M_{\alpha }}{4m_{\alpha}^{2}M_{\beta}\kappa}\,\left(\frac{M_{\alpha}}{m_{\alpha}}-\frac{ M_{\beta}}{m_{\beta}}\right)\frac{T_{\beta}}{\zeta_{\alpha}D_{\alpha,\beta}}+\frac{ \lambda_{\alpha,\beta}\lambda_{\beta,\alpha}M_{\beta}}{4m_{\beta}^{2}M_{\alpha }\kappa}\,\left(\frac{M_{\alpha}}{m_{\alpha}}-\frac{M_{\beta}}{m_{\beta}} \right)\frac{T_{\alpha}}{\zeta_{\beta}D_{\alpha,\beta}}\right),\]
where
\[D_{\alpha,\beta}=(\zeta_{\alpha}+\zeta_{\beta})\left(\zeta_{\beta}\frac{M_{ \alpha}}{m_{\alpha}}+\zeta_{\alpha}\frac{M_{\beta}}{m_{\beta}}\right)+\left( \frac{M_{\alpha}}{m_{\alpha}}-\frac{M_{\beta}}{m_{\beta}}\right)^{2}. \tag{32}\]
Following the same approach for the remaining terms, we get
\[-\left\langle\Omega;W_{2}\right\rangle_{0}=\sum_{\beta>\alpha}\frac{\lambda_{ \alpha,\beta}}{2m_{\alpha}m_{\beta}\kappa}\,\frac{\zeta_{\alpha}+\zeta_{\beta} }{D_{\alpha,\beta}}\,\big{(}\lambda_{\beta,\alpha}T_{\alpha}-\lambda_{\alpha, \beta}T_{\beta}\big{)}, \tag{33}\]
and
\[-\left\langle\Omega;W_{3}\right\rangle_{0}=-\sum_{\beta>\alpha}\frac{\lambda_{ \alpha,\beta}}{2m_{\alpha}m_{\beta}\kappa}\left(\frac{M_{\alpha}}{m_{\alpha}}- \frac{M_{\beta}}{m_{\beta}}\right)\frac{1}{D_{\alpha,\beta}}\left(\frac{ \lambda_{\beta,\alpha}T_{\alpha}}{\zeta_{\beta}}+\frac{\lambda_{\alpha,\beta}T_{ \beta}}{\zeta_{\alpha}}\right) \tag{34}\]
Summarizing, adding each term (30)-(34) for the flux (26), we get
\[\mathcal{F}_{\alpha\rightarrow}=-\left\langle\Omega;F_{t}\right\rangle_{0}- \left\langle\Omega;W_{1}\right\rangle_{0}-\left\langle\Omega;W_{2}\right\rangle_{0}- \left\langle\Omega;W_{3}\right\rangle_{0}. \tag{35}\]
It is not worth obtaining a closed expression for (35) right now; it is better to do so for each case in the next section.
## III Results
### Short reminder of previous results
We start by checking results (30)-(34) on previously studied models. Initially, we consider the homogeneous chain, i.e., the mass \(m_{j}=m\), the on-site harmonic potential \(M_{j}=M\), and the bath coupling to the chain \(\zeta_{j}=\zeta\) are the same for every site \(1\leq j\leq N\). We also take only homogeneous NN interactions given by
\[\lambda_{j,l}=\begin{cases}\lambda>0&\text{, if }|j-l|=1,\\ 0&\text{, otherwise.}\end{cases}\]
In such a case, for any \(1\leq\alpha\leq N-1\), after evaluating all contributions we obtain
\[\mathcal{F}=\mathcal{F}_{\alpha\to\alpha+1}=\frac{\lambda^{2}}{2\zeta mM} \left(T_{\alpha}-T_{\alpha+1}\right)\!, \tag{36}\]
where the notation \(\mathcal{F}_{\alpha\to\alpha+1}\) emphasizes that heat only flows from the site \(\alpha\) to its nearest neighbor, while \(\mathcal{F}\) reminds us that the flux does not depend on which site \(\alpha\) we evaluate it at. This is the same result obtained in Ref. [15]. We now use Eq. (36) to recall the next steps: since the heat flux is the same all along the chain, we can add \(\mathcal{F}_{\alpha\to\alpha+1}\) for \(1\leq\alpha\leq N-1\), and noticing that we get a telescoping sum on the right-hand side (RHS) of Eq. (36), we have
\[\left(N-1\right)\mathcal{F}=\frac{\lambda^{2}}{2\zeta mM}\left(T_{1}-T_{N} \right)\!,\]
and so Fourier law holds for this model, with a thermal conductivity
\[\kappa=\frac{\lambda^{2}}{2\zeta mM}\,, \tag{37}\]
that does not depend on temperature.
Now we briefly recall results for another previously studied model related to this first one, namely the NN "almost" homogeneous chain, where the coupling \(\zeta_{j}\) between site \(j\) and its heat bath may change arbitrarily along the chain. We get
\[\mathcal{F}=\mathcal{F}_{\alpha\to\alpha+1}=\frac{\lambda^{2}}{mM}\frac{T_{ \alpha}-T_{\alpha+1}}{\zeta_{\alpha}+\zeta_{\alpha+1}}, \tag{38}\]
for any \(\alpha\) in the bulk of the chain, in agreement with the result obtained in Ref. [7]. Since the system is in a NESS, the flux \(\mathcal{F}\) must be the same all along the chain, so we can mimic the previous approach to find
\[\mathcal{F}=\frac{\lambda^{2}}{mM}\left[\sum_{1\leq j\leq N-1}\left(\zeta_{j} +\zeta_{j+1}\right)\right]^{-1}(T_{1}-T_{N}). \tag{39}\]
### Main results for NNN rotor model
We now turn to our main object of study, the rotor model. It is an almost homogeneous chain, but with NNN interaction: again we take \(m_{j}=m\) and \(M_{j}=M\) for every site, and we start with arbitrary heat bath-site couplings \(\zeta_{j}\). Concerning the anharmonic interparticle interaction, we now study the NNN model, i.e.
\[\lambda_{j,l}=\begin{cases}\lambda>0&\text{, if }|j-l|=1,\\ \nu&\text{, if }|j-l|=2,\\ 0&\text{, otherwise.}\end{cases} \tag{40}\]
For the sake of the perturbative calculations performed in (26), \(\nu\) will be taken of the same order as \(\lambda\), but we stress that \(\nu\) may be positive or negative, in contrast with the always positive NN coupling \(\lambda\). To elucidate this point, let us show an intermediary step and evaluate, for example, (30) using the values for this NNN model. We get
\[-\left\langle\Omega;F_{t}\right\rangle_{0} = \frac{\lambda^{2}}{4mM}\left[\left(\frac{1}{\zeta_{\alpha+1}}- \frac{2}{\zeta_{\alpha}}\right)T_{\alpha}+\left(\frac{2}{\zeta_{\alpha+1}}- \frac{1}{\zeta_{\alpha}}\right)T_{\alpha+1}\right]+ \tag{41}\] \[+\frac{\lambda\nu}{2mM}\left[-\frac{2}{\zeta_{\alpha}}T_{\alpha}+ \frac{1}{\zeta_{\alpha+1}}T_{\alpha+1}+\frac{1}{\zeta_{\alpha+2}}T_{\alpha+2} \right]+\] \[+\frac{\nu^{2}}{4mM}\left[\left(\frac{1}{\zeta_{\alpha+2}}-\frac{ 2}{\zeta_{\alpha}}\right)T_{\alpha}+\left(\frac{2}{\zeta_{\alpha+2}}-\frac{1}{ \zeta_{\alpha}}\right)T_{\alpha+2}\right].\]
The first term on the RHS of Eq. (41) above is proportional to \(\lambda^{2}\) and is due to the NN interaction only - as we can see inside the brackets, it depends only on \(\alpha\) and \(\alpha+1\). The second term is proportional to \(\lambda\nu\) and is due to the \(\alpha\)-th site interacting with both its nearest and next-nearest neighbors. The third and last term is proportional to \(\nu^{2}\) and depends only on the \(\alpha\)-th site's interaction with its NNN, the \((\alpha+2)\)-th site.
If we take \(\nu=0\), only the first term above is non-vanishing, and we obviously recover the NN model. However, as we turn on the NNN interaction, we could have different behaviors depending on whether \(\nu\) is positive or negative. If \(\nu>0\), the second and third terms have the same sign as the first one [18], so we only increase its contribution to the thermal conductivity \(\kappa\). However, if \(\nu<0\), the third term still increases \(\kappa\), but the second term could decrease it. In essence, a negative NNN interaction could inhibit heat flow. Up to this point, this discussion refers only to the contribution of (30) to the heat flow; we still must evaluate (31)-(34), but we claim that the same behavior holds. In summary, after the calculations, the heat flux can be written as
\[\mathcal{F}_{\alpha\rightarrow}=\lambda^{2}c_{\lambda^{2}}(T)+\lambda\nu c_{ \lambda\nu}(T)+\nu^{2}c_{\nu^{2}}(T),\]
where the coefficients \(c_{\lambda^{2}}\), \(c_{\lambda\nu}\) and \(c_{\nu^{2}}\) all have the same sign, so again the \(\lambda\nu\) term could decrease the intensity of the thermal conductivity for \(\nu<0\). Nevertheless, when we evaluate all contributions (30)-(34) to the heat flux, we get
\[\mathcal{F}_{\alpha\rightarrow}=\frac{\lambda^{2}}{mM}\frac{T_{\alpha}-T_{ \alpha+1}}{\zeta_{\alpha}+\zeta_{\alpha+1}}+\frac{\nu^{2}}{mM}\frac{T_{\alpha} -T_{\alpha+2}}{\zeta_{\alpha}+\zeta_{\alpha+2}}\,. \tag{42}\]
In other words, we have \(c_{\lambda\nu}=0\), and the thermal conductivity on the NESS can only increase, even with a negative interaction between next-nearest neighbors. This result contrasts with the one obtained in [11], where an insulator regime was observed for the rotor model with LR interactions. Our result suggests that this insulator regime must be a genuine LR effect. As an illustration, from the equations above we obtain an expression for the heat flow in terms of the temperatures at the ends. Indeed, summing up the equations (we take \(\zeta_{\alpha}=\zeta\) and consider \(N\) even)
\[\mathcal{F}_{1\rightarrow} = \frac{\lambda^{2}}{mM}\frac{T_{1}-T_{2}}{2\zeta}+\frac{\nu^{2}}{mM }\frac{T_{1}-T_{3}}{2\zeta}\] \[\mathcal{F}_{2\rightarrow} = \frac{\lambda^{2}}{mM}\frac{T_{2}-T_{3}}{2\zeta}+\frac{\nu^{2}}{ mM}\frac{T_{2}-T_{4}}{2\zeta}\] \[\mathcal{F}_{3\rightarrow} = \frac{\lambda^{2}}{mM}\frac{T_{3}-T_{4}}{2\zeta}+\frac{\nu^{2}}{ mM}\frac{T_{3}-T_{5}}{2\zeta}\] \[= \cdots\] \[\mathcal{F}_{N-2\rightarrow} = \frac{\lambda^{2}}{mM}\frac{T_{N-2}-T_{N-1}}{2\zeta}+\frac{\nu^{2 }}{mM}\frac{T_{N-2}-T_{N}}{2\zeta}\] \[\mathcal{F}_{N-1\rightarrow} = \frac{\lambda^{2}}{mM}\frac{T_{N-1}-T_{N}}{2\zeta},\]
considering \(\mathcal{F}_{\alpha\rightarrow}=\mathcal{F}\), we obtain
\[(N-1)\mathcal{F}=\frac{\lambda^{2}}{mM2\zeta}(T_{1}-T_{N})+\frac{\nu^{2}}{mM2 \zeta}(T_{1}-T_{N-1})+\frac{\nu^{2}}{mM2\zeta}(T_{2}-T_{N}),\]
that is,
\[(N-1)\mathcal{F}\approx\frac{\lambda^{2}+2\nu^{2}}{mM2\zeta}(T_{1}-T_{N}).\]
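As a quick sanity check, the identity above can be verified numerically for an arbitrary temperature profile. The short Python sketch below is an illustrative aid, not part of the original derivation; the linear profile is a hypothetical choice. It sums the per-site fluxes of Eq. (42) with uniform \(\zeta\) and compares them with the boundary-term expression:

```
import numpy as np

def site_fluxes(T, lam, nu, m, M, zeta):
    # Per-site fluxes of Eq. (42) for uniform bath coupling zeta_j = zeta.
    N = len(T)
    a = lam**2 / (2 * m * M * zeta)   # NN prefactor
    b = nu**2 / (2 * m * M * zeta)    # NNN prefactor
    F = np.zeros(N - 1)
    for alpha in range(N - 1):
        F[alpha] = a * (T[alpha] - T[alpha + 1])
        if alpha < N - 2:             # site N-1 has no next-nearest neighbor
            F[alpha] += b * (T[alpha] - T[alpha + 2])
    return F

N, lam, nu, m, M, zeta = 16, 1.0, -0.11, 1.0, 1.0, 1.0
T = np.linspace(0.2, 0.1, N)          # hypothetical temperature profile
lhs = site_fluxes(T, lam, nu, m, M, zeta).sum()
rhs = (lam**2 * (T[0] - T[-1]) + nu**2 * (T[0] - T[-2])
       + nu**2 * (T[1] - T[-1])) / (2 * m * M * zeta)
print(np.isclose(lhs, rhs))           # True: only boundary terms survive
```

The NN contributions telescope to a pure \(T_{1}-T_{N}\) term, while the NNN contributions leave the two extra boundary terms shown above.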
We now aim to investigate thermal rectification for the NNN interaction model. A necessary ingredient for (possible) thermal rectification is that the chain must have some asymmetry, so we set a linearly graded mass chain; on the other hand, we simplify calculations by taking \(\zeta_{j}=\zeta>0\) for all sites of the chain. If we take, without loss of generality, \(m_{1}>m_{N}\), then we have \(m_{j}=[(N-j)m_{1}+(j-1)m_{N}]/(N-1)\), for any \(1\leq j\leq N\). Analytical evaluations are too hard for a graded mass, so we compute the flux (35) numerically. We emphasize that we are not performing computer simulations to find the dynamical evolution of this system from scratch; rather, we have first developed a perturbative analytical approach to evaluate the heat flux (35). For a case of anharmonic interaction with inner noises and unbounded potential, such a perturbative approach was rigorously proven to be convergent in previous works [17]. We start from this point to numerically obtain the heat flux. The following ten parameters must be given as inputs: the size of the chain \(N\), the heat bath-site coupling \(\zeta>0\), the on-site pinning \(M>0\), the NN and NNN interaction strengths, respectively \(\lambda>0\) and \(\nu\in\mathbb{R}\), the factor \(\kappa\), the masses \(m_{1}>m_{N}\), and the temperatures \(T_{1}\) and \(T_{N}\) at the boundary sites of the chain. Concerning temperatures, we recall that only the temperatures at the boundaries of the chain are given; they are labeled \(T_{H}\) and \(T_{C}\), where the indices stand for the hot and cold baths, respectively. As previously said, the remaining temperatures \(T_{j}\), for any \(j\) in the bulk of the chain, must be found using the self-consistency condition. And so, for each set of parameters, we find two temperature profiles: the first for \(T_{1}=T_{H}\) and \(T_{N}=T_{C}\), and the other when we exchange the temperatures at the boundaries. With both profiles at hand, we can evaluate the flux from left to right of the chain, \(\mathcal{F}_{L}\), when \(T_{1}>T_{N}\), and the reversed flux, \(\mathcal{F}_{R}\), when \(T_{1}<T_{N}\). Since the analytical expression (35) was obtained considering the flux to the right, we obviously expect \(\mathcal{F}_{R}<0<\mathcal{F}_{L}\). However, although the heat flows in opposite directions for \(\mathcal{F}_{R}\) and \(\mathcal{F}_{L}\), they could have the same magnitude, i.e., \(|\mathcal{F}_{R}|=\mathcal{F}_{L}\); if this is the case, our model presents no thermal rectification, at least in the low-temperature regime. On the other hand, if we find that \(|\mathcal{F}_{R}|\neq\mathcal{F}_{L}\), we can conclude that the model is a thermal rectifier.
We used _Mathics_ to perform the numerical calculations. Although we listed ten input parameters in the previous paragraph, we can change all our variables to dimensionless ones, as we show in Appendix A; by doing so we always have dimensionless unit values for the on-site potential \(M=1.0\), the largest mass \(m_{1}=1.0\), and the NN interaction coupling \(\lambda=1.0\). In such a scenario, a low-temperature regime means that the hot thermal reservoir temperature is \(T_{H}<1.0\). So, for a small chain with \(N=16\) sites, given the fixed parameters \(m_{N}=0.5\), \(\zeta=1.0\), \(\kappa=1.0\), and \(\nu=-0.11<0\), we set the hot and cold temperatures as \(T_{H}=0.2\) and \(T_{C}=0.1\); our program returns a left flux \(\mathcal{F}_{L}=0.00659\) and a right flux \(\mathcal{F}_{R}=-0.00215\), so we have thermal rectification. If we change the NNN coupling to a positive value \(\nu=0.11>0\), keeping all the other parameters at the same values, we get the same fluxes \(\mathcal{F}_{L}=0.00659\) and \(\mathcal{F}_{R}=-0.00215\); this was expected, since the heat flux (42) only depends on \(\nu^{2}\). If we double the chain size to \(N=32\), we roughly obtain half the fluxes, i.e., \(\mathcal{F}_{L}=0.00328\) and \(\mathcal{F}_{R}=-0.00104\), which suggests that the flux decays with the chain size \(N\), but our chains are too small to draw any conclusion on such a dependence. In the table below we list some values for the parameters and the fluxes.
\begin{tabular}{|c|c|c|c|c|c|} \hline
\(N\) & \(T_{H}\) & \(T_{C}\) & \(\mathcal{F}_{L}\) & \(\mathcal{F}_{R}\) & \(\mathcal{F}_{L}+\mathcal{F}_{R}\) \\ \hline
16 & 0.2 & 0.1 & 0.0065949 & -0.00214865 & 0.00444625 \\
 & 0.3 & 0.1 & 0.0117077 & -0.00577938 & 0.00592834 \\
 & 0.5 & 0.1 & 0.0219334 & -0.0130409 & 0.00889251 \\
 & 0.4 & 0.2 & 0.0131898 & -0.0042973 & 0.00889251 \\ \hline
32 & 0.2 & 0.1 & 0.00321816 & -0.00103947 & 0.00217869 \\
 & 0.3 & 0.1 & 0.00571009 & -0.00280516 & 0.00290493 \\
 & 0.5 & 0.1 & 0.0106939 & -0.00633656 & 0.00435739 \\
 & 0.4 & 0.2 & 0.00643632 & -0.00207893 & 0.00435739 \\ \hline
\end{tabular}
## IV Final remarks
In this article, we investigate the heat flow and rectification in a version of the rotor model (here, with inner stochastic noises) involving interactions with next-nearest neighbors. It is worth recalling that the original rotor chain, in the case of long-range interactions, presents an interesting behavior, namely the existence of an insulating regime. Hence, our main interest was to find some hint of such an insulating regime, that is, a possible decay in the heat flow upon adding the next-nearest-neighbor interaction to the nearest-neighbor one. We show, however, in a perturbative analytical computation, that adding the next-nearest-neighbor interaction increases the heat flow regardless of whether the sign of the coupling is negative or positive. Our result indicates that the insulating regime is characteristic of genuine long-range interaction; that is, at least up to the next-nearest-neighbor case, it seems to be absent for short-range interaction. We stress that the detailed analytical study of models recurrently investigated by numerical methods is important and may help us better understand what is happening.
Besides the investigation of homogeneous chains, we consider graded asymmetric systems by performing numerical computations. In this case, we show the occurrence of thermal rectification.
|
2301.00003 | Emotion in Cognitive Architecture: Emergent Properties from Interactions
with Human Emotion | This document presents endeavors to represent emotion in a computational
cognitive architecture. The first part introduces research organizing with two
axes of emotional affect: pleasantness and arousal. Following this basic of
emotional components, the document discusses an aspect of emergent properties
of emotion, showing interaction studies with human users. With these past
author's studies, the document concludes that the advantage of the cognitive
human-agent interaction approach is in representing human internal states and
processes. | Junya Morita | 2022-12-28T23:50:27Z | http://arxiv.org/abs/2301.00003v1 | # Emotion in Cognitive Architecture:
###### Abstract.
This document presents endeavors to represent emotion in a computational cognitive architecture. The first part introduces research organized along two axes of emotional affect: pleasantness and arousal. Following these basic emotional components, the document discusses an aspect of the emergent properties of emotion, showing interaction studies with human users. Based on these past studies by the author, the document concludes that the advantage of the cognitive human-agent interaction approach lies in representing human internal states and processes.
cognitive architecture, emotion, arousal, valence
## 1. Introduction
This document begins by asking the following question:
_How can emotion emerge in a computational system?_
This is the kind of ultimate question that has attracted an enormous number of scientists and engineers, including the author himself. Toward a complete answer to the question, the author has developed several models of mental functions (possible components of the emotion process) in ACT-R (Adaptive Control of Thought-Rational (Acharya et al., 2018)), one of the most widely used cognitive architectures in the world.
Cognitive architectures generally integrate knowledge concerning the human mind in the form of computer programs. The knowledge accumulated in ACT-R ranges from perceptual-motor components to abstract and goal-related concepts. A variety of mental functions are controlled by symbols stored in modules (corresponding to brain regions) and subsymbolic parameters (corresponding to neurotransmitters). By utilizing these, the architecture aims to realize human-level activities in every field of human life.
The author considers that the above characteristic of the architecture is crucial to answering the question. Thus, this document presents the author's works utilizing ACT-R as ingredients of discussions on the conditions for enabling emotion in a computational system.
## 2. Issues of Emotional Process
As background for the discussion, this section presents the author's view of the emotion process. The emotion process is assumed to rely on subcortical brain regions, such as the amygdala, insula, and basal ganglia. This process is considered the product of adaptation to our ancestors' surrounding environments, because these regions were formed early in the evolution of the brain (Becker et al., 2010; Becker et al., 2010). However, modern human emotion is more complex than this purely physical process, in the following senses:
1. Emotion is not a static entity but rather an emergent property accompanying dynamic interaction with the environment. This means a human's emotional response always fluctuates.
2. Our environment has drastically changed from the environment in which our ancestors lived. Therefore, many emotional mechanisms have become maladaptive in the modern age.
3. This biological process is somehow modulated by an intentional strategy, as many theories of emotion have suggested (Becker et al., 2010; Becker et al., 2010). Therefore, there is room for intervention in the emotion process by our will or technology.
## 3. Representing Emotion in ACT-R
Even with such complexities, it is possible to model the basis of emotion (affect) using the fundamental axes, arousal and pleasantness, as presented in Russel (Russel, 2016)'s circumplex model (Fig 1). The present author's approach begins with these simple components. The following studies represent these components with primitive cognitive functions, such as activation noise and pattern matching, implemented in ACT-R.
### Representation of Emotional Arousal
According to a dictionary in psychology (Ransel, 2016), arousal is defined as follows:
* a state of physiological activation or cortical responsiveness, associated with sensory stimulation and activation of fibers from the reticular activating system.
As indicated by this definition, past researchers have frequently connected arousal with the activation (attentional) process, especially the degree of concentration (e.g., (Becker et al., 2010)). In a concentrated situation, humans can continue monotonous tasks accurately. However, as such a process lasts long, people get bored and begin to think about things outside of the task (i.e., mind-wandering).
From this phenomenon and the consideration that emotional arousal relates to biological fluctuations affecting the cognitive process, the author and their colleagues have represented arousal as the noise factor for memory activation in ACT-R. Controlling this subsymbolic parameter, the authors have demonstrated changes in memory recollection (Han et al., 2017) and task goal switching (Kumar et al., 2018). This view is consistent with the discussion in which emotion is a modulator of cognitive architecture (Han et al., 2017; Kumar et al., 2018).
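To make this concrete, the following minimal Python sketch (purely illustrative and not the author's ACT-R implementation; the chunk names and activation values are hypothetical) shows how increasing the logistic activation-noise scale s, the parameter linked to arousal above, makes retrieval of the task-relevant chunk less stable, mimicking mind-wandering:

```
import numpy as np

rng = np.random.default_rng(0)
base = {"task_goal": 2.0, "memory_a": 1.2, "memory_b": 0.8}  # hypothetical activations

def retrieve(s):
    # ACT-R-style noisy retrieval: add logistic noise of scale s to each
    # chunk's activation and return the most active chunk.
    noisy = {chunk: a + rng.logistic(0.0, s) for chunk, a in base.items()}
    return max(noisy, key=noisy.get)

for s in (0.1, 1.0):  # low vs. high arousal-linked noise
    on_task = sum(retrieve(s) == "task_goal" for _ in range(10_000)) / 10_000
    print(f"s = {s}: fraction of on-task retrievals = {on_task:.2f}")
```

With low noise, the task goal wins almost every retrieval; with high noise, off-task chunks increasingly intrude.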
### Representation of Pleasantness
People feel joy when receiving a reward. Thus, the pleasantness axis can be considered as a reward in reinforcement learning. Various triggering events for rewards can be assumed in the real world. Among them, internally generated ones are important for developing autonomous agents.
Based on the above assumptions, Nagashima et al. (Nagashima et al., 2018; Nagashima et al., 2019) developed a model of intrinsic motivation, assuming pattern discovery to be a source of curiosity. In the ACT-R model, patterns in the data are discovered through pattern matching, and the experience of pattern matching is utilized to build procedural rules (the compilation of production rules). As a task proceeds, opportunities for pattern matching (internal rewards) gradually decrease. Thus, the model can explain how intrinsic motivation decreases with experience and increases upon discovering novel patterns. This pattern-discovery-focused view of intrinsic motivation is consistent with discussions in the entertainment industry (Bradbury et al., 2017). In addition, it is supported by the theory emphasizing the role of pattern-seeking in the history of human civilization (Bradbury et al., 2017).
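A minimal sketch of this mechanism (illustrative only; the pattern stream and unit reward are hypothetical choices, not the actual model of Nagashima et al.): an internal reward fires only on first-time pattern matches, so cumulative intrinsic reward flattens with experience and rises again when a novel pattern is discovered.

```
def intrinsic_reward(seen, pattern):
    # Internal reward fires only when a pattern is matched for the first time.
    if pattern in seen:
        return 0.0
    seen.add(pattern)
    return 1.0

seen, total = set(), 0.0
for p in ["AB", "AB", "CD", "AB", "CD", "EF"]:  # hypothetical pattern stream
    total += intrinsic_reward(seen, p)
    print(p, total)  # cumulative reward rises only at the novel AB, CD, EF
```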
## 4. Needs of Interaction
From the discussion so far, the importance of the environment for emotion stands out. Capturing the full complexity of the real-world dynamics of the two mechanisms (arousal and pleasantness) requires environmental change. Among various environmental factors, the existence of other organisms seems most crucial. In other words, the author considers that human emotion can be modeled only when the computational system actually interacts with humans. By interacting with humans, the system is able to learn how humans generate and express emotion.
Based on this consideration, the author and colleagues have developed several interactive systems (Han et al., 2017; Kumar et al., 2018) implementing an emotion model as a component. These systems receive the user's biological signals, such as heart rate, to automatically modulate the abovementioned ACT-R parameters (noises and rewards) and guide the user's emotion to an optimal state. In particular, Morita et al. (Kurita et al., 2018) demonstrated that a web advertisement system containing an ACT-R memory model could prevent human repetitive thinking (rumination) when the model behaves in a counterbalanced manner (Fig 2). The author believes that such a system will eventually lead to a new human homeostatic process with the help of artificial emotional systems.
## 5. Conclusion
This document presents the author's attempts to answer the ultimate question about the computational conditions that enable emotion. Future work must cast the ideas presented in this document as a general framework and evaluate it in human experiments. The author considers that such a cognitive-architecture-based framework is advantageous in constructing trustful human-agent relations. Academic knowledge implemented in the architecture is the result of continuous endeavors in human history. Including formal knowledge agreed upon in human society is an essential ingredient of establishing common ground between humans and artifacts.
## Acknowledgement
This document summarizes ideas obtained from past collaborative studies with colleagues at Nagoya University, Shizuoka University, collaborators from Panasonic Corp. and Mazda Corp., and members of the Applied Cognitive Modeling Lab (ACML) at Shizuoka university. The author thanks everyone for valuable discussions.
Figure 1. Axes of emotional affect and model parameters.
Figure 2. Framework of interacting human emotion with machine emotion (Kurita et al., 2018) |
2309.05981 | Learning Unbiased News Article Representations: A Knowledge-Infused
Approach | Quantification of the political leaning of online news articles can aid in
understanding the dynamics of political ideology in social groups and measures
to mitigating them. However, predicting the accurate political leaning of a
news article with machine learning models is a challenging task. This is due to
(i) the political ideology of a news article is defined by several factors, and
(ii) the innate nature of existing learning models to be biased with the
political bias of the news publisher during the model training. There is only a
limited number of methods to study the political leaning of news articles which
also do not consider the algorithmic political bias which lowers the
generalization of machine learning models to predict the political leaning of
news articles published by any new news publishers. In this work, we propose a
knowledge-infused deep learning model that utilizes relatively reliable
external data resources to learn unbiased representations of news articles
using their global and local contexts. We evaluate the proposed model by
setting the data in such a way that news domains or news publishers in the test
set are completely unseen during the training phase. With this setup we show
that the proposed model mitigates algorithmic political bias and outperforms
baseline methods to predict the political leaning of news articles with up to
73% accuracy. | Sadia Kamal, Jimmy Hartford, Jeremy Willis, Arunkumar Bagavathi | 2023-09-12T06:20:34Z | http://arxiv.org/abs/2309.05981v1 | # Learning Unbiased News Article Representations: A Knowledge-Infused Approach*
###### Abstract
Quantification of the political leaning of online news articles can aid in understanding the dynamics of political ideology in social groups and in taking measures to mitigate them. However, predicting the accurate political leaning of a news article with machine learning models is a challenging task. This is due to (i) the political ideology of a news article being defined by several factors, and (ii) the innate nature of existing learning models to absorb the political bias of the news publisher during model training. There are only a limited number of methods to study the political leaning of news articles, and they do not consider the algorithmic political bias that lowers the generalization of machine learning models in predicting the political leaning of news articles published by new news publishers. In this work, we propose a knowledge-infused deep learning model that utilizes relatively reliable external data resources to learn unbiased representations of news articles using their global and local contexts. We evaluate the proposed model by setting up the data such that news domains or news publishers in the test set are completely unseen during the training phase. With this setup, we show that the proposed model mitigates algorithmic political bias and outperforms baseline methods, predicting the political leaning of news articles with up to \(73\%\) accuracy.
Fair model, Political leaning prediction, Mitigating news domain bias
## I Introduction
News media houses have endured through time to disseminate political news to the people while also influencing their political perceptions. They play a vital role in shaping public opinion on several issues like diseases [1], elections [2], and natural calamities [3]. Recent events from COVID-19 prove that news media bias is capable of polarizing individual ideologies [1, 4]. Moreover, the rise of the web and social media has also increased the capacity of news media to propagate information at a rapid pace. Over the last decade, several small-scale and focused online news forums have harnessed online platforms to spread news content across the general public. News articles reflect a news forum's opinion on politicians, laws, and policies, which in turn defines the _political bias_ of a news media house. A news forum is politically biased on the scale from _far-left_ to _far-right_ based on the political ideology (Republican/Democrat in the USA) that it approves and criticizes in its news articles. Figure 1 gives an example of news domains with different political leanings covering the story of the 'attack on Mr. Pelosi' from multiple angles. _Right-leaning_ news domains attribute the attack to the problem of illegal immigrants, _left-leaning_ domains emphasize Mrs. Pelosi's criticisms of Republicans, and _center-leaning_ news domains cover the details of the attack at a high level.
Politically biased news content is one of the well-known sources of the widespread increase in filter bubbles [5], echo-chambers [6], misinformation [7], and propaganda in online communities [8]. Our primary motivation in this work is echo-chambers created by biased news recommendations in online forums [9, 10]. In other words, user profiles with conservative ideology would prefer news articles from right-leaning news media like Fox News and CBN, whereas profiles with liberal ideology may prefer news from left-leaning news domains like CNN and CBS News.
Fig. 1: Biased projections of news contents on the same story of ‘Nancy Pelosi Discusses Attack on Husband’ covered by news media houses with political leaning of _left_ (blue), _center_ (white), and _right_ (red)
Some of the current AI-based recommendation models intensify the echo chamber effect by giving users news article suggestions that align with their ideology and preferences. Thus, there are works that refine such recommender models to expose online users to news articles from news domains of diverse political ideologies [11, 12, 13]. Characterizing political bias in news articles is therefore crucial to quantify its effects and take measures to mitigate them in online forums like Twitter, Reddit, and Youtube. Accurately mapping the bias of news articles will also allow social media platforms and news recommendation platforms like Google News to show personalized news from multiple political ideologies [14].
Algorithmic bias is one of the primary concerns in existing real-world AI applications. Machine learning models by default undergo algorithmic bias, as the training data given to such models are collected under human assumptions [15, 16]. Although these biased models perform well on new data with characteristics similar to the training set, they discriminate on features that are completely unseen during the training phase. Such model biases are prevalent in many real-world scenarios involving gender, race, and religion [17]. Such unfairness in machine learning models reduces human reliance on model outcomes and on their decision-making capabilities in various domains like medicine, agriculture, and autonomous systems. In this paper, we analyze machine learning classifiers that undergo the _algorithmic political bias_ of the news domains available during model training and make discriminatory predictions. We demonstrate in the upcoming sections that existing language models are politically biased toward news domains and fail to map the political leaning of news articles published by news domains that are unseen during the training phase. It is important to mitigate such bias in the learning models to (i) learn unbiased feature representations of news articles, and (ii) predict the political bias of news articles from news media sources whose bias is not evidenced during model training.
The existing work [14] showcases the effect of the political bias present in news domains on predicting the political leaning of news articles. Mitigating such political bias in news articles is important to understand the political bias of a news article from any new news domain. We introduce such a bias mitigation method in a machine learning setup that learns news article representations by infusing external knowledge. Our method improves over all prior work on political leaning prediction of news articles, as summarized in Table I. The proposed method of encoding features of topics learned from external data resources enables machine learning models to mitigate political bias in classification tasks. Overall, we make the following _three-fold_ contributions in this paper:
1. **Proposed work:** We propose a novel framework that mitigates political bias in machine learning classifier models by infusing external knowledge sources for political leaning prediction on news articles. Most importantly, the proposed model uses representations of topics in news articles and knowledge of news domains from external data sources in the prediction task
2. **Experiments:** We give in-depth experiments on the performance of the proposed framework using multiple training and test sets, multiple language models and external knowledge sources. We show that the proposed model achieves up to _73%_ prediction accuracy
3. **Mitigating political bias:** We demonstrate the efficacy of the proposed framework to mitigate political bias of news domains even when they are not available during the model training. Specifically, we demonstrate robustness by evaluating the proposed model with multiple splits of training and test data based on news domains in our experiments
## II Related Work
### _Political Leaning Detection_
Identifying political leaning has several applications in online social media, like echo chamber detection [22], quantifying controversy [23], and partisan segregation identification [24]. Several research ideas are emerging that utilize linguistic and temporal features in online content to characterize political bias and leaning. A few recent works [25, 26] study the identification of the political ideology of media houses from social media networks like Twitter. Analyzing text data with deep learning methods has been widely studied to capture political viewpoints. Methods like the attention mechanism [27] have been utilized with deep language encoder models on contextual information to capture political perspective. An attention-based multi-view model is proposed in [28] to identify the ideology of news articles from article details such as the title, content, and link structure. Opinion-aware knowledge graphs [29] have proven efficient, especially for graph-based approaches, to infer and predict ideology. MEAN [30] detects bias in terms of entities, using entity information as external knowledge and leveraging that information in multiple ways to capture political perspective. KCD [19] also detects political perspective with knowledge reasoning for graph-level representation learning. Political bias detection has been studied at multiple levels, like the word level, sentence level [31], news article level [32, 33], and news domain level [34]. A few recent works [14, 28, 35] try to infer the
\begin{table}
\begin{tabular}{p{108.4pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}} \hline \hline
Method & Knowledge Encoding & Multiple Data Sources & Topic Encoding & Bias Mitigation \\ \hline
Baly et al. [14] & ✓ & - & - & ✓ \\
KGAP [18] & ✓ & - & ✓ & - \\
KCD [19] & ✓ & - & ✓ & - \\
Devatine et al. [20] & - & - & - & ✓ \\
KHAN [21] & ✓ & ✓ & ✓ & - \\ \hline
**Ours** & ✓ & ✓ & ✓ & ✓ \\ \hline
\end{tabular}
\end{table} TABLE I: Qualitative comparison of aspects in the proposed work with existing methods
political bias associated with the news article using methods such as deep neural network architectures. Although the work proposed in [14] is closely related to our work, there are major differences. In this paper, we develop a novel knowledge-infused deep learning model to extract news article representations and predict their political leaning. We use two external sources to add both global and fine-grained contextual information to the news articles.
### _Model Fairness_
Traditional machine learning and deep learning models are mostly designed to optimize a performance metric such as accuracy, which can introduce bias or unfairness [16] into the model and affect the prediction task. Previously, there has been work on measuring and eliminating gender bias [36] and neuro-imaging dataset bias [37] from machine learning models and deep neural network frameworks. The bias in texts associated with sources [38] and users can be directly attributed to those sources and users. A few works try to detect media bias; in particular, the work in [39] predicts media bias with a link-based approach. Having articles from the same news domain can make the model biased [40], because the classifiers may start making predictions based on the political leaning of the news domain itself rather than the content of the article. We attempt to mitigate such algorithmic political bias in our work with _media-based split_ datasets and the proposed knowledge infusion method.
## III Datasets
In this work, we utilize datasets from multiple resources for knowledge infusion on news article representations. In this section, we give details of all the datasets.
### _Labeled News Articles_
We utilize labeled news articles from the existing research [14] in our experiments. This dataset comprises _37,554_ news articles from _389_ news domains, and each news article is associated with one of the three political leaning labels: _left_, _center_, and _right_. Prediction models trained using the traditional approach of randomly splitting the data into training and test sets will be unfair, meaning that such models cannot generalize to predict the political leaning of news articles from an unknown news domain, as given in _row 2_ (Media split) of Table II.
To analyze and mitigate the model bias of learning models, we split the news articles by news domains into training and test sets. Such data splits are denoted _Media split_ throughout this paper; this split tests the performance of classifier models at predicting the political leaning of news articles from news domains that are completely unknown during the model testing phase. Unlike the existing work [14], which hand-picks the news domains for the test set, we randomly choose \(7\%\) of the news domains from our data and their corresponding news articles for our test set, as sketched below. Table II demonstrates the performance of a classifier model built on top of a BERT language model to predict the political bias of news articles. Our experiments align with the existing work [14], although ours performs better, in showing that learning models underperform when they get articles from unknown news domains, which calls for mitigation measures. The distribution of the 4 _Media split_ datasets and one _random split_ dataset used in our experiments is given in Figure 1(a). It is notable from Figure 1(b) that the number of news domains for our news articles is relatively small. We claim that this imbalanced distribution does not bias the proposed model, as the number of articles with the _right_ political ideology is approximately equal to the number of news articles with other political ideologies, as seen in Figure 1(a).
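As a concrete illustration of this splitting scheme (our own sketch; the toy DataFrame and column names are hypothetical, and the paper's setting corresponds to taking 7% of the domains), articles can be grouped by news domain so that no test-set domain is seen during training:

```
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical frame standing in for the 37,554 labeled articles.
articles = pd.DataFrame({
    "text": ["a", "b", "c", "d"],
    "label": ["left", "right", "center", "left"],
    "domain": ["cnn.com", "foxnews.com", "reuters.com", "cbsnews.com"],
})
# test_size=0.07 in the paper's setting; 0.25 here so the toy split is non-empty.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=42)
train_idx, test_idx = next(splitter.split(articles, groups=articles["domain"]))
train, test = articles.iloc[train_idx], articles.iloc[test_idx]
assert set(train["domain"]).isdisjoint(test["domain"])  # no domain leakage
```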
### _External Data Sources_
It is well established that adding contextual information from external data sources improves the performance of supervised machine learning models [14, 21, 26, 41]. Our proposed method uses two external datasets to mitigate algorithmic political bias in political leaning prediction models. We have released the datasets to the research community here1. Figure 1(b) gives the distribution of the number of documents per class label.
Footnote 1: https://github.com/sadiakamal/Learning-Unbiased-News-Article-Representation
#### III-B1 Wikipedia Articles (**WKB**)
One of the datasets we use to mitigate model bias is the set of Wikipedia pages of news domains. We use the Wikipedia API to collect the title and body for the _324_ news domains that exist in our dataset; there are no Wikipedia pages for the other 65 news domains. We use Wikipedia articles in the proposed model to capture the global context of news articles in the form of news domains. The observations in _row 3_ of Table II clearly demonstrate that Wikipedia article representations, learned from a pre-trained BERT model without incorporating any news article representations, can effectively mitigate algorithmic political bias in the prediction model. This approach results in a \(5\%\) improvement in performance when evaluated with the test data of the _media split_. We also notice a significant performance spike to \(68.5\%\) model accuracy when using Wikipedia representations along with news article representations on the media split.
#### III-B2 Presidential Debates (**PDB**)
In addition to Wikipedia articles, we use presidential debates [42] to capture contextual representations of the topics that are present in news articles. This dataset consists of debates of presidential candidates from both the Republican and Democratic parties.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline
Dataset & **Acc.** & **Macro F1** & **MAE** \\ \hline
NewsA (Random split) & 0.749 & 0.7456 & 0.3774 \\
NewsA (Media split) & 0.516 & 0.4804 & 0.5907 \\ \hline
WikiA & 0.562 & 0.4040 & 0.5625 \\
NewsA (Media split) + WikiA & 0.685 & 0.6290 & 0.6011 \\ \hline
\end{tabular}
\end{table} TABLE II: Demonstration of algorithmic political bias in classifier models and the importance of external data sources - Wikipedia
The main motivation to utilize this dataset is two-fold: (i) to capture fine-grained representations of the topics that appear in news articles, and (ii) to rely on relatively reliable external knowledge sources instead of topic representations derived from much more unreliable and noisy data sources such as social media posts [14, 21, 26]. As depicted in Figure 1(b), our presidential debates dataset (PDB) comprises an almost balanced distribution of 478 speeches by Democrats and 389 speeches by Republicans. We also highlight that the proposed approach extracts topic representations from the entire debate data rather than handling the debates separately by political ideology. Hence, the absence of speeches from the _center_ ideology and the presence of a minor imbalance in the debates data cannot bias the proposed model.
## IV Methodology
Our proposed framework is a knowledge infused deep network architecture, as illustrated in Figure 3, that mitigates algorithmic political bias that exists in classifier models to predict the political leaning of news articles. Also, we summarize the list of notations used in this paper in Table III.
Assume the training set consists of \(l\) news articles represented as \(\mathcal{N}=\{n_{1},n_{2},n_{3},...,n_{l}\}\), where each article \(n_{j}\in\mathcal{N}\) is published by a news domain \(d_{k}\in\mathcal{D}\) with \(\mathcal{D}=\{d_{1},d_{2},d_{3},...,d_{m}\}\), given that \(m\ll l\), and \(n_{j}\in\mathcal{N}\) is associated with one of the political leaning classes \(\mathcal{C}_{n_{j}}\in\{0,1,2\}\) (0: Left, 1: Center, 2: Right). Each news article \(n_{j}\in\mathcal{N}\) in turn consists of a series of topics represented as \(T_{n_{j}}=\{t_{1},t_{2},t_{3},\ldots\}\); our model does not set any limit on the number of topics in the news article \(n_{j}\). In addition to the news articles, we also have the Wikipedia knowledge base (**WKB**) with \(m\) Wikipedia pages and the debates database (**PDB**). The WKB provides contextual information about the news domains that publish the news articles \(\mathcal{N}\), while the PDB focuses on the topics discussed in the news articles \(\mathcal{N}\).
Given a set of news articles \(\mathcal{N}\) and their corresponding external data sources, the proposed model learns a \(p-\)dimensional representation \(\Theta_{n_{j}}\in\mathbb{R}^{l\times p}\) of a news article \(n_{j}\) where \(p=2q+r\). We learn the news article representation as \(\Theta_{n_{j}}=\delta_{n_{j}}\oplus\omega_{n_{j}}\oplus\tau_{n_{j}}\), where \(\delta_{n_{j}}\in\mathbb{R}^{l\times q}\) is the base representation of \(n_{j}\), \(\omega_{n_{j}}\in\mathbb{R}^{l\times q}\) is the representation of a Wikipedia article that corresponds to the news domain \(d_{k}\) which published \(n_{j}\), and \(\tau_{n_{j}}\in\mathbb{R}^{l\times r}\) is the aggregated representation of topics in \(n_{j}\). The proposed model learns all the above representations jointly in a supervised fashion as \(f(\Theta)\to Pr(\mathcal{C}_{n_{j}}|\Theta)\) where \(\mathcal{C}_{n_{j}}\in\{0,1,2\}\) is the political leaning of news article \(n_{j}\in\mathcal{N}\). We give details about the proposed model to learn \(\Theta_{n_{j}}\), \(\delta_{n_{j}}\), \(\omega_{n_{j}}\), and \(\tau_{n_{j}}\) below.
### _Base Representation \(\delta_{n_{j}}\)_
We use backbone transformer models to extract the base news article representations \(\delta_{n_{j}}\in\mathbb{R}^{l\times q}\). We choose transformer models over classic text representation models because of their ability to learn contextualized representations rather than static word embeddings.
Fig. 2: Data distribution of news articles, news domains, Wikipedia articles, and Presidential debates characterized by their political ideologies
\begin{table}
\begin{tabular}{c l} \hline \hline
**Notation** & **Description** \\ \hline
\(\mathcal{N}\) & Set of news articles \\
\(l\) & Number of news articles \\
\(\mathcal{D}\) & Set of news domains \\
\(m\) & Number of news domains \\
\(T_{n_{j}}\) & Set of topics in a news article \(n_{j}\) \\
\(C_{n_{j}}\) & Political leaning of news article \(n_{j}\) \\ \hline
\(\delta\) & Feature representation of news articles \\
\(\omega\) & Feature representation of Wikipedia articles \\
\(q\) & Number of dimensions in \(\delta\) and \(\omega\) \\
\(\tau\) & Feature representation of topics \\
\(r\) & Number of dimensions in \(\tau\) \\ \hline
\(\beta\) & Weight of external knowledge representations \\
\(\lambda\) & Weighted external knowledge representations \\
\(\Theta\) & Knowledge-infused news article representations \\
\(p\) & Number of dimensions in \(\Theta\) \\ \end{tabular}
\end{table} TABLE III: Description of notations used in this paper
Given that there are \(w\) words in a news article \(n_{j}\), we aggregate the word representations from the transformer model using a MEAN operator. We fine-tune the parameters of the backbone model in the supervised setup \(f(\Theta)\) mentioned before.
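A sketch of this mean-pooling step with the HuggingFace transformers library (our own illustrative code, not the authors' release; the 512-token truncation is an assumption):

```
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def base_representation(article_text):
    # delta_{n_j}: mean of the BERT token embeddings over the w article words.
    enc = tokenizer(article_text, truncation=True, max_length=512,
                    return_tensors="pt")
    hidden = bert(**enc).last_hidden_state        # shape (1, w, q) with q = 768
    mask = enc["attention_mask"].unsqueeze(-1)    # exclude padding from the mean
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)
```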
```
0: Topics of news article \(n_{j}\in N\): \(T_{n_{j}}\), Pre-trained word embedding model: \(\mathcal{W}\), knowledge weight variable: \(\beta\)
1: Initialize \(E\leftarrow\phi\)
2:for topic \(t\in T_{n_{j}}\)do
3:\(E\gets E+\mathcal{W}[t]\)
4:endfor
5:\(M\leftarrow\frac{E}{|T_{n_{j}}|}\)\(\triangleright\) Mean of all word vectors
6:\(\tau_{n_{j}}\leftarrow\) Encode(\(M\)) to \(r\)-dimensional vector
7:\(\tau_{n_{j}}\leftarrow(1-\beta)\times\tau_{n_{j}}\)\(\triangleright\) weighted topic representation of \(n_{j}\)
8:return\(\tau_{n_{j}}\)
```
**Algorithm 1** Topic Knowledge Extraction
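A Python rendering of Algorithm 1 (a sketch under our own assumptions: a toy stand-in corpus for PDB, vector_size=100, and an identity default for the encoding step):

```
import numpy as np
from gensim.models import Word2Vec

# Toy stand-in for the tokenized PDB corpus used to pre-train W (skip-gram).
debate_sentences = [["healthcare", "tax", "border"], ["economy", "healthcare", "tax"]]
w2v = Word2Vec(sentences=debate_sentences, vector_size=100, sg=1, min_count=1)

def topic_knowledge(topics, beta, encode=lambda v: v):
    # Lines 1-7 of Algorithm 1: mean PDB word vector of the article's topics,
    # encoded to r dimensions and weighted by (1 - beta).
    vecs = [w2v.wv[t] for t in topics if t in w2v.wv]
    mean = np.mean(vecs, axis=0)          # line 5: mean of all word vectors
    tau = encode(mean)                    # line 6: Encode(M)
    return (1.0 - beta) * tau             # line 7: weighted topic representation

tau = topic_knowledge(["healthcare", "tax"], beta=0.5)
```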
### _Knowledge Infusion_
We further enhance the base representations of news articles (\(\delta_{n_{j}}\)) with global contextual representations (\(\lambda_{n_{j}}\)) collected from external data sources to debias the learning model \(f(\Theta)\). In this paper, we utilize Wikipedia data (**WKB**) and presidential debates data (**PDB**) to learn the global representations \(\lambda_{n_{j}}\) of a news article \(n_{j}\in\mathcal{N}\). Each of our external datasets covers a different context in the global representations: the representations \(\omega_{n_{j}}\) from **WKB** carry the global context of the news domain \(d_{n_{j}}\), and the representations \(\tau_{n_{j}}\) from **PDB** capture the unmanipulated context of the topics \(T_{n_{j}}\) present in news articles. We primarily use political debates to learn topic representations, rather than general sources like social media posts or Wikipedia, as they are significant to the political polarization problem and relatively reliable for perceiving the political context of topics.
#### IV-B1 Wikipedia Representation \(\omega_{n_{j}}\)
We first map the news domain \(\mathcal{D}_{n_{j}}\) of a news article \(n_{j}\) to its corresponding Wikipedia article \(W_{n_{j}}\) from the Wikipedia data base **WKB** using the mapping function \(g_{wkb}:\mathcal{D}_{n_{j}}\to W_{n_{j}}\). We learn the Wikipedia representations \(\omega_{n_{j}}\) of news article \(n_{j}\) with the same backbone transformer model used to learn the base representations of \(n_{j}\). The proposed model optimizes the backbone transformer model parameters based on the classification task \(f(\Theta)\) to learn \(\omega_{n_{j}}\).
#### IV-B2 Topic Representation \(\tau_{n_{j}}\)
Algorithm 1 gives the complete pipeline to extract the topic representation \(\tau_{n_{j}}\) of a news article \(n_{j}\) with topics \(T_{n_{j}}=\{t_{1},t_{2},t_{3},\ldots\}\) from our presidential speeches knowledge base (**PDB**). We collect a topic representation \(\tau_{n_{j}}\) for each base representation \(\delta_{n_{j}}\); this is another knowledge source infused to enhance learning. We do not set any limit on the number of topics that may appear in a news article. Capturing this information allows us to predict the political leaning of an article with topic representations learned in the context of political speeches. First, we pre-train a skip-gram word embedding model \(\mathcal{W}\) (_Word2Vec_[43] in this paper) on the **PDB** data to get contextual text features of words in the context of **PDB**.
Fig. 3: Architecture of our proposed knowledge-infused deep learning model. The proposed model infuses the weighted Wikipedia representations (\(\omega\)) and topic representations (\(\tau\)) into the corresponding base representations (\(\delta\)) of a news article \(n_{j}\)
We then collect representations for each topic \(t\in T_{n_{j}}\) using the skip-gram model and take the mean of the representations over all topics in the news article \(n_{j}\). We further enhance the topic representation with two types of encoding in the proposed model: (i) an Encoder model and (ii) an Autoencoder model, as given in Figures 4(a) and 4(b) respectively. For the encoder model we use two linear layers and one ReLU layer; similarly, for the autoencoder model we use two linear layers and one ReLU layer in both the encoder and the decoder. As given in Algorithm 1, we extract the weighted topic representation \(\tau_{n_{j}}\) using our knowledge weight variable \((1-\beta)\).
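A PyTorch sketch of the two encoding variants (the hidden width 128 and the output dimension r = 64 are our assumptions; the paper does not state the layer sizes):

```
import torch.nn as nn

class TopicEncoder(nn.Module):
    # Encoder variant: two linear layers with one ReLU in between.
    def __init__(self, in_dim=100, r=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, r))

    def forward(self, x):
        return self.net(x)

class TopicAutoencoder(nn.Module):
    # Autoencoder variant: mirrored encoder/decoder; tau is the bottleneck.
    def __init__(self, in_dim=100, r=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, r))
        self.decoder = nn.Sequential(nn.Linear(r, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

    def forward(self, x):
        tau = self.encoder(x)
        return tau, self.decoder(tau)
```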
#### IV-B3 Knowledge Representations \(\lambda_{n_{j}}\)
We propose the knowledge representations of \(n_{j}\) as the weighted aggregation of Wikipedia representations (\(\omega_{n_{j}}\)) and presidential debates representations (\(\tau_{n_{j}}\)). We use the weight variable \(\beta\in[0,1]\) to optimize the importance of representations from two different knowledge bases as given in Equation 1.
\[\lambda_{n_{j}}=[\beta\times\omega_{n_{j}}]\bigoplus[(1-\beta)\times\tau_{n_{j}}] \tag{1}\]
### _News Article Representations \(\Theta\)_
With base representations \(\delta\) learned from a news article and its corresponding knowledge representations \(\lambda\), we extract the knowledge-infused news article representations as \(\Theta=\delta\bigoplus\lambda\). We further fine-tune all three representations of news articles (\(\delta\), \(\omega\), and \(\tau\)) by training a supervised model \(f(\Theta)\) to predict political leaning of a news article.
### _Training the Proposed Model_
The concatenated knowledge-infused representation \(\Theta\) captures the high-level information. \(\Theta\) is then fed to a fully connected linear layer followed by a ReLU layer, as shown in Equation 2, to predict the political leaning of news articles. Here \(W\) is the parameter matrix and \(b\) is the associated bias.
\[Pr(\mathcal{C}_{n_{j}}|\Theta)=ReLU(W\Theta+b) \tag{2}\]
We train the knowledge-infused news article representations for the supervised task using the multi-class cross-entropy loss function, given in Equation 3, to optimize the model parameters.
\[\mathcal{L}=-\sum_{\mathcal{C}_{n_{j}}=1}^{M}y_{\mathcal{C}_{n_{j}},\Theta} log(Pr(\mathcal{C}_{n_{j}}|\Theta)) \tag{3}\]
where \(\mathcal{C}_{n_{j}}\) is the training target label with respect to \(\Theta\), \(M\) is the number of classes, and \(y\) is a binary indicator of correct or incorrect classification of the class label. In this paper, we set \(M=3\) (_left_, _center_, and _right_).
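Putting Equations (1)-(3) together, a minimal PyTorch sketch of the fusion and prediction head (the dimensions q = 768 and r = 64 are assumptions carried over from the sketches above, and the random tensors stand in for real batches):

```
import torch
import torch.nn as nn

class PoliticalLeaningHead(nn.Module):
    # Weighted knowledge fusion (Eq. 1), concatenation, and linear + ReLU (Eq. 2).
    def __init__(self, q=768, r=64, num_classes=3, beta=0.5):
        super().__init__()
        self.beta = beta
        self.fc = nn.Linear(2 * q + r, num_classes)   # p = 2q + r

    def forward(self, delta, omega, tau):
        lam = torch.cat([self.beta * omega, (1 - self.beta) * tau], dim=-1)
        theta = torch.cat([delta, lam], dim=-1)       # Theta = delta (+) lambda
        return torch.relu(self.fc(theta))             # class scores, Eq. (2)

delta, omega, tau = torch.randn(2, 768), torch.randn(2, 768), torch.randn(2, 64)
labels = torch.tensor([0, 2])                          # left, right
head = PoliticalLeaningHead()
loss = nn.CrossEntropyLoss()(head(delta, omega, tau), labels)  # Eq. (3)
loss.backward()
```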
## V Experiments and Results
In this section, we evaluate our proposed knowledge-infused model with multiple experimental setups for the political leaning detection task. We give results based on multiple transformer-based text representation models, assess model fairness with multiple datasets, and analyze the impact of the weight parameter \(\beta\) on the knowledge representations infused with the base news article representations \(\delta_{n_{j}}\).
### _Experimental Setup_
Before presenting our experimental results, we detail the implementation and the notation of the baseline models in the sections below.
#### V-A1 Implementation details
In this work, we use PyTorch 1.12.1 [45] to implement our proposed model. The machine we use for all the experiments is equipped with an Intel X710 quad-port CPU with 64.14 GB memory and an NVIDIA Ampere A10 GPU with 24 GB memory; the GPU is installed with CUDA 11.3 and cuDNN 8.3.2. We set the batch size to 2 and use the Adam optimizer [46] with learning rate 1e-6. Lastly, we set the number of epochs to 3.
Fig. 4: Encoder and Autoencoder models used to learn topic representations \(\tau_{n_{j}}\). We use ReLU() as the activation function in the hidden layers.
#### V-A2 Baseline models
We compare the performance of our proposed model with the performance reported by the following state-of-the-art baselines on the same media-split dataset. Since there is little work in the literature on predicting the political leaning of news articles, we also use individual modules of our proposed model as baselines. Unless otherwise specified as _Random Split_, all the models used in our experiments are trained with the _Media Based_ data split. All the baseline models used in this paper are given below:
* **Devatine et al. (2022) (Bi-LSTM)**[20]: Their base model, which leverages sequences of static word embeddings to generate hidden representations. The model is modified to incorporate the enhancements proposed in [47] and achieves \(46.97\%\) accuracy.
* **Devatine et al.(2022) (Bi-LSTM+SA/Sent)**[20] This model is an extension of the Bi-LSTM model where it leverages structural attention and sentence segmentation to the existing Bi-LSTM model to give \(48.76\%\) accuracy.
* **Devatine et al.(2022) (Bi-LSTM+SA/EDU)**[20] In particular, the model performs better when they incorporate Elementary Discourse Units(EDU) segmentation(Bi-LSTM + SA/EDU) to the base Bi-LSTM model. Identifying text spans(EDU) is linked with discourse relations to improve the overall model performance.
* **Liu et al. (2018) + SA/EDU**: The original model proposed in [44], which leverages structural attention and sentence segmentation, performs well even without the improvements introduced in [20].
* **Baly et al. (2020) (NewsA) [14]**, **Baly et al. (2020) (NewsA+WikiA) [14]**, **Baly et al. (2020) (NewsA+Twitter) [14]**: This work bears the closest resemblance to the proposed model. However, it utilizes Twitter follower bios, which can be uncertain indicators of political leaning, together with Wikipedia articles to predict the political leaning of news articles. Also, this work does not give any experiments evaluating model fairness on multiple datasets.
* **WikiA**: We report the performance of the proposed model using only the Wikipedia article representations from **WKB**, without any base representations \(\delta_{n_{j}}\).
* **NewsA+WikiA**: This is the proposed model which learns the Wikipedia representation \(\omega_{n_{j}}\) together with news article representations \(\delta_{n_{j}}\) to predict the political leaning of the news article \(n_{j}\).
* **NewsA+WikiA+Topic(E)** and **NewsA +WikiA+Topic(AE)**: These are the proposed models which utilize the Wikipedia representations \(\omega_{n_{j}}\) and topic representations \(\tau_{n_{j}}\) together with the base news article representations, where _E_ and _AE_ denote the Encoder and Autoencoder versions of the model, respectively.
#### V-A3 Language Models
We utilize existing transformer-based text representation learning models in our proposed model. We primarily use BERT [48] in all our experiments. However, we also report results obtained by replacing BERT with other popular language models such as _RoBERTa_[49] and _DistillBERT_[50] to learn the base representations \(\delta_{n_{j}}\) and Wikipedia representations \(\omega_{n_{j}}\) of a news article \(n_{j}\).
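The sketch below shows how such representations could be obtained, assuming the widely used Hugging Face `transformers` interface; the paper does not specify its tokenization or pooling choices, so pooling the first token's hidden state is an assumption.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Swap the checkpoint name to "roberta-base" or "distilbert-base-uncased"
# to reproduce the language-model ablation.
name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
encoder = AutoModel.from_pretrained(name)

article = "The senate passed the bill after a lengthy debate."
inputs = tokenizer(article, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = encoder(**inputs)

# First-token pooling as the article representation delta_{n_j}.
delta = outputs.last_hidden_state[:, 0, :]
```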
### _Results_
Table IV gives an overview of the results of all our experiments using BERT to obtain the base representations \(\delta_{n_{j}}\) and Wikipedia representations \(\omega_{n_{j}}\) of a news article \(n_{j}\). We replicate some of the results from Table II for an extensive comparison of our model performance. In this experiment, we set \(\beta=0.5\), giving equal priority to both knowledge sources, and all the results in Table IV are based on the Media-based split (1) dataset.
We can note from Table IV that the performance of the base BERT model on only news articles differs significantly between the random and media-based data splits. This corroborates our claim that models are not able to generalize political leaning prediction to unseen news domains. It is worth mentioning here that minor parameter tuning on the BERT model increases the model accuracy by \(15\%\) compared to the existing work [14]. To reduce the model bias, we conducted several experiments infusing external knowledge sources.
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline Experiment Name & Acc. & Precision & Macro F1 & Recall & MAE \\ \hline \hline NewsA(RandomSplit) & 0.749 & 0.7350 & 0.7456 & 0.8145 & 0.3774 \\ \hline Baly et al.(2020)(NewsA) [14] & 0.3675 & - & 0.3553 & - & 0.90 \\ Devatine et al.(2022)(Bi-LSTM) [20] & 0.4697 & - & 0.4441 & - & 0.69 \\ NewsA(Our method) & **0.516** & **0.5325** & **0.4804** & **0.5319** & **0.5907** \\ \hline WikiA & 0.562 & 0.4722 & 0.4040 & 0.3650 & 0.5625 \\ Baly et al.(2020)(NewsA+WikiA) [14] & 0.4975 & - & 0.5116 & - & 0.32 \\ Devatine et al.(2022)(Bi-LSTM + SA/Sent) [20] & 0.4876 & - & 0.4584 & - & 0.67 \\ Liu et al.(2018)+SA/EDU [44] & 0.5101 & - & 0.4861 & - & 0.72 \\ Devatine et al.(2022)(Bi-LSTM + SA/EDU) [20] & 0.5439 & - & 0.5136 & - & 0.57 \\ NewsA+WikiA & **0.6855** & **0.5535** & **0.6290** & **0.6835** & **0.6011** \\ \hline Khan et al.(2020) [21] & 0.3240 & 0.33 & 0.161 & 0.10 & 1.03 \\ Baly et al.(2020)(NewsA+Twitter) [14] & 0.72 & - & 0.6429 & - & 0.2900 \\ NewsA+WikiA+Topic(AE) & 0.5867 & 0.6067 & 0.5344 & 0.5936 & 0.6184 \\ NewsA+WikiA+Topic(E) & **0.73** & **0.7232** & **0.7288** & **0.7584** & **0.3676** \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Impact of infusing external knowledge into news article representations in the prediction task. All model results are given in Accuracy(%), Precision, Recall, Macro F1, and Mean Average Error (MAE). The proposed knowledge-infused approach with the encoder module to learn topic representations (\(\tau\)) gives a significant performance boost over the baseline approaches in terms of F1 score.
We first compare the model performance by only utilizing the Wikipedia representations \(\omega_{n_{j}}\) to predict the political leaning of a news article \(n_{j}\). Similar to the previous experiment, we notice that parameter fine-tuning on the BERT model increases the model accuracy by \(16.9\%\) compared to the model that uses only base representations, while the existing method [14] underperforms in the same setup. Also, our proposed model outperforms all other baseline works on the performance measures. Our experiments improve accuracy by \(17\%\) over Liu et al.(2018)+SA/EDU [44]. Furthermore, accuracy increases by \(19.8\%\) and \(14\%\) over Devatine et al.(2022) (Bi-LSTM+SA/Sent) and Devatine et al.(2022) (Bi-LSTM+SA/EDU) [20], respectively, when we utilize the Wikipedia representations \(\omega_{n_{j}}\). Next, we compare the proposed model that infuses external knowledge representations \(\lambda_{n_{j}}\) of a news article \(n_{j}\) with the base representations \(\delta_{n_{j}}\), where \(\lambda_{n_{j}}\) is the weighted aggregation of the Wikipedia representations \(\omega_{n_{j}}\) from **WKB** and the topic representations \(\tau_{n_{j}}\) from **PDB**. It is evident from Table IV that the proposed model with the _Encoder_ setup outperforms both the existing work [14] and the _Autoencoder_. The Encoder version of the proposed model gives an accuracy of \(73\%\) in predicting the political leaning of news articles whose news domains are completely unseen during the training phase. Even though the performance of the Autoencoder version of the proposed model is almost equal to that of the existing work [14], we emphasize that the proposed model gives a significantly better performance in terms of _F1 score_.
In Table V we show the impact of varying the value of the weight parameter \(\beta\) on both the Encoder and Autoencoder versions of the proposed model. We notice that both models give their best performance with \(\beta=0.5\), which means we set equal priority to both knowledge sources. This verifies the efficacy of our external knowledge bases: the results illustrate that both the global knowledge base and the topic-based knowledge play an important role in reducing algorithmic political bias and improving the prediction of the political leaning of news articles.
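A plausible reading of this aggregation is a convex combination, consistent with \(\beta=0\) and \(\beta=1\) isolating a single knowledge source in Table V; the sketch below makes that assumption explicit, and the concatenation-based infusion into the base representation is likewise an assumption.

```python
import torch

def fuse(delta, omega, tau, beta=0.5):
    """Aggregate knowledge representations and infuse them into the base.

    beta weighs the Wikipedia representation omega against the topic
    representation tau; beta=0.5 gives both sources equal priority,
    the best-performing setting in Table V.
    """
    lam = beta * omega + (1.0 - beta) * tau   # external knowledge lambda_{n_j}
    return torch.cat([delta, lam], dim=-1)    # infusion by concatenation (assumed)

delta, omega, tau = torch.randn(3, 2, 64).unbind(0)
fused = fuse(delta, omega, tau, beta=0.5)     # shape: (2, 128)
```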
In order to demonstrate the generalizing capabilities of the proposed model, we evaluate it on multiple Media-based data splits. From Figure 5, we notice that algorithmic political bias is present in the base representations without any external knowledge on all the Media-based split datasets. Figure 5 also displays the effectiveness of our proposed framework across multiple train and test set combinations from our news article dataset. Analyzing the performance of the model while varying the evaluation dataset is important; we therefore show the performance of our model on different Media-based splits to illustrate its robustness and generalizability.
To demonstrate the efficacy of our proposed model, we use multiple language models such as RoBERTa and DistillBERT. Table VI confirms that, among all the models, the BERT representation model outperforms the other language models in both the Encoder and Autoencoder versions. BERT is a strong baseline model, as it was pre-trained on a large corpus of text and has been fine-tuned on various downstream tasks. RoBERTa and DistillBERT are both variations of BERT: RoBERTa uses additional pre-training data and modified pre-training techniques, while DistillBERT uses a knowledge distillation approach to reduce the computational resources required for training. Although efficient, the proposed model trained with DistillBERT gives poor performance and is unable to mitigate the algorithmic political bias in our prediction task. Table VI also corroborates that BERT is better suited than RoBERTa or DistillBERT for capturing the linguistic features relevant to predicting political leaning.
## VI Conclusion and Future Work
In this work, we attempt to mitigate the algorithmic political bias of machine learning algorithms by infusing external knowledge sources, namely Wikipedia and political debates, to predict the political leaning of news articles. We proposed a novel way to learn weighted feature representations of entities or topics present in presidential debate records by carefully mapping them to news articles. Through a series of experiments, we observe that external knowledge sources can debias the base feature representations of news articles, thereby improving the performance of the prediction model and outperforming the
\begin{table}
\begin{tabular}{l|l|c c c c c} \hline \hline \multicolumn{2}{c|}{Model} & \multicolumn{5}{c}{\(\beta\)} \\ \cline{3-7} \multicolumn{2}{c|}{} & 0.0 & 0.1 & 0.5 & 0.7 & 1.0 \\ \hline E & Acc. & 0.5044 & 0.5704 & **0.73** & 0.6850 & 0.6474 \\ & Mac. F1 & 0.4801 & 0.5855 & **0.7232** & 0.6321 & 0.6452 \\ \hline AE & Acc. & 0.5066 & 0.5824 & **0.5867** & 0.4191 & 0.4027 \\ & Mac. F1 & 0.4675 & 0.5196 & **0.5344** & 0.3906 & 0.3587 \\ \hline \hline \end{tabular}
\end{table} TABLE V: Impact of varying the importance of representations \(\omega_{n_{j}}\) and \(\tau_{n_{j}}\) with the weight parameter \(\beta\) on the proposed model performance
Fig. 5: Accuracy (%) of the proposed model with only base representations (_NewsA_) and with knowledge-infused representations (_NewsA+WikiA+Topic(E)_) on multiple media-based data splits. The knowledge-infused approach has a significant performance improvement over the base model in mitigating algorithmic political bias.
baseline approaches in terms of prediction accuracy (%) and F1 score. We demonstrate the effectiveness of our proposed knowledge-infused model by conducting several experiments on a variety of media-based data splits, and with multiple base language models.
The proposed model and the problem of predicting the political leaning of news articles have several future directions. We discuss some of them below:
### _Quantifying Political Bias on Topics_
The proposed work and existing work characterize political bias on fine-grained entities like users, social media posts, and news articles. An interesting research direction is identifying and forecasting the political bias of topics and events with machine learning, using news articles as surrogate information. Such models can quantify social responses to news content on a given topic or story, which can help social media engineers provide supporting information on topics for news articles from a given news domain.
### _Mitigating echo chambers_
It is well known that political bias in news domains creates an echo chamber effect in social media communities. Identifying the political leaning of news articles can assist analysts in engineering measures to mitigate the echo chamber effect in social media and online discussion forums. This will open opportunities to develop ML algorithms that recommend news content covering the same story a user prefers but from different perspectives on that story. Another potential research direction is mitigating polarized online discussions among users who argue about political scenarios that appear in news articles.
### _Dynamics of polarization in online forums_
Since the political leaning of news domains changes over time, mitigating political bias in a temporal setting is another interesting future direction for this work. Understanding the dynamics of the political bias of topics in online forums is also a potential research direction, given the importance of quantifying the political bias of topics.
|
2309.10355 | Requirements Quality Research: a harmonized Theory, Evaluation, and
Roadmap | High-quality requirements minimize the risk of propagating defects to later
stages of the software development life cycle. Achieving a sufficient level of
quality is a major goal of requirements engineering. This requires a clear
definition and understanding of requirements quality. Though recent
publications make an effort at disentangling the complex concept of quality,
the requirements quality research community lacks identity and clear structure
which guides advances and puts new findings into an holistic perspective. In
this research commentary we contribute (1) a harmonized requirements quality
theory organizing its core concepts, (2) an evaluation of the current state of
requirements quality research, and (3) a research roadmap to guide advancements
in the field. We show that requirements quality research focuses on normative
rules and mostly fails to connect requirements quality to its impact on
subsequent software development activities, impeding the relevance of the
research. Adherence to the proposed requirements quality theory and following
the outlined roadmap will be a step towards amending this gap. | Julian Frattini, Lloyd Montgomery, Jannik Fischbach, Daniel Mendez, Davide Fucci, Michael Unterkalmsteiner | 2023-09-19T06:27:23Z | http://arxiv.org/abs/2309.10355v1 | # Requirements Quality Research: a harmonized Theory, Evaluation, and Roadmap+
###### Abstract
High-quality requirements minimize the risk of propagating defects to later stages of the software development life cycle. Achieving a sufficient level of quality is a major goal of requirements engineering. This requires a clear definition and understanding of requirements quality. Though recent publications make an effort at disentangling the complex concept of quality, the requirements quality research community lacks identity and clear structure which guides advances and puts new findings into a holistic perspective. In this research commentary we contribute (1) a harmonized requirements quality theory organizing its core concepts, (2) an evaluation of the current state of requirements quality research, and (3) a research roadmap to guide advancements in the field.
We show that requirements quality research focuses on normative rules and mostly fails to connect requirements quality to its impact on subsequent software development activities, impeding the relevance of the research. Adherence to the proposed requirements quality theory and following the outlined roadmap will be a step towards amending this gap.
Requirements Quality, Theory, Survey
## 1 Introduction
The empirical evidence of the impact of requirements engineering (RE) on the software development life cycle has shown that the quality of requirements artifacts and processes influences project success and budget adherence [1; 2; 3]. Moreover, the cost of defects introduced during the RE phase of a project is reported to scale exponentially the longer they remain undetected [4]. This necessitates quality assurance techniques capable of detecting RE defects as soon and as reliably as possible.
Requirements quality research is dedicated to supporting the software engineering process with the means to evaluate and improve the quality of requirements, mainly focusing on requirements artifacts [5]. However, recent systematic investigations of requirements quality literature revealed a lack of rigor and relevance of these contributions [6; 7]. Moreover, the impact of the quality factors proposed in literature (i.e., requirements writing rules) remains largely unexplored in practice [7], hindering its adoption in industry [8; 9; 10; 11].
Existing quality theories and frameworks are too abstract to guide requirements quality research at an operational level [12; 13]. These theories often only divide quality into sub-categories without any means of applicability. In this paper, we argue for the need for a theoretical and operationalizable foundation of requirements quality research. We review the closely related software quality research and draw parallels to requirements quality research to consolidate a harmonized requirements quality theory. Additionally, we survey requirements quality literature with respect to the theory to reveal current shortcomings. Accordingly, we make the following contributions:
1. A harmonized requirements quality theory serving as a theoretical foundation for requirements quality research.
2. A survey of requirements quality research revealing if and how concepts of the theory are reported in the state of the art, but also emphasizing shortcomings.
3. A consequent research roadmap aimed at mitigating these shortcomings.
The rest of this manuscript is organized as follows: Section 2 illustrates the evolution of software quality research and draws the parallel to requirements quality research. In Section 3, we derive a harmonized requirements quality theory from this comparison. This theory is used to evaluate the state
of requirements quality research in Section 4 and reveal current shortcomings. The consequent research roadmap to mitigate these shortcomings is presented in Section 5 before concluding in Section 6.
## 2 Software Quality Research
Software quality research follows a similar premise as requirements quality research. It is necessary to control the quality of software artifacts (e.g., source code) as it impacts the overall quality of the development life cycle and the final product. This premise aligns with the aim of requirements quality research. To show commonalities and differences between these two research fields, we review the evolution of software quality research in Section 2.1 and draw a parallel to requirements quality research in Section 2.2. We reach conclusions about the necessary direction the latter needs to take.
### Evolution of Software Quality Research
Software quality research revolves around assessing the quality of software artifacts [14]. In the following, we describe the evolution of the field according to Broy et al. [14] and Deissenboeck et al. [15].
#### Guidelines and Metrics-based approaches
Guidelines are the simplest approach for controlling the quality of software artifacts. For example, the Java coding conventions [16] prescribe--among other suggestions--how to name and structure Java files. However, guidelines commonly fail to significantly impact software quality, likely because they lack the motivation for their relevance [17]. For example, the aforementioned suggestions are justified because "[c]ode conventions improve the readability of the software" [16] without any empirical evidence of that claim. Furthermore, guideline conformance is difficult to assess and hence seldom done in practice [15]. The latter shortcoming was addressed by introducing metrics-based approaches where metrics were devised to measure relevant attributes of software artifacts. Among others, _lines of code_[18] and _cyclomatic complexity_[19] were used to evaluate software quality automatically. Nevertheless, most metrics continue to lack justification of their relevance [20; 21; 14; 22].
#### Quality Models
To overcome the relevance shortcoming, quality models aggregated metrics into hierarchical trees of criteria [23; 24]. The leaf nodes are specific enough to be operationalized as an evaluation metric, while the aggregation into higher-level quality characteristics provided the justification for their relevance. For example, low-level concepts such as _structuredness_ and _conciseness_ of code were justified by their aggregation to _understandability_ and _maintainability_, which were widely accepted as relevant software quality characteristics [24]. However, hierarchical models suffered from unclear decomposition rules and constrained
levels of granularity, which were either too abstract to be operationalized or too detailed, disconnecting the applicable metrics from their rationale [14; 15].
#### Quality Meta-Models
The popularity of quality models necessitated a structure for the proposed models [25]. Meta-models like the Goal Question Metric approach by Basili et al. [26] and the factor-strategy quality meta model by Marinescu and Ratiu [27] provide this overarching structure. Deissenboeck et al. [28] contribute the DAP classification for quality models, which categorizes the aim of a quality model to be to _define_ (D), _assess_ (A), or _predict_ (P). The publication further relates quality meta-models to quality models as the "model of the constructs and rules needed to build specific quality models." [28].
#### Activity-based Quality Models
In addition to the shortcomings that existing quality models continued to suffer, the elements populating these models were found to be heterogeneous [15]--i.e., properties of a _system_ were mixed with properties of _activities in which the system is used_. For example, the maintainability branch in the software quality characteristics tree by Boehm et al. [29] contains both system properties like the _structuredness_ of a software artifact, but also attributes of activities in which these artifacts are used, like _modifiability_. The latter describes the _activity_ of _modifying_ an artifact rather than a system property, despite the adjective's nominalization suggesting otherwise.
So far, no clear rule for distinguishing a system from an activity property has been proposed. We derived two heuristics from the implicit argumentation of previous publications [15]. First, if a property involves an additional agent (e.g., _testability_ involves a _test engineer_, _modifiability_ involves a _modifier_, although not necessarily human), then it represents how the system is used--i.e., an activity property. The second heuristic comes in the form of a syntactical criterion, sketched in code after the list below:
* Nominalized adjectives (e.g., structured-ness, concise-ness) tend to be **system properties**
* Nominalized verbs (e.g., modify-ability, access-ability, augment-ability) tend to be **activity properties**
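Expressed as a toy classifier, the suffix rules below are exactly the simplification stated above and will, of course, misclassify irregular terms; they are meant as an illustration of the heuristic, not a validated tool.

```python
def property_kind(term: str) -> str:
    """Classify a quality term by the syntactic heuristic:
    nominalized verbs tend to name activity properties,
    nominalized adjectives tend to name system properties."""
    t = term.lower()
    if t.endswith(("ability", "ibility")):  # modify-ability, access-ibility
        return "activity property"
    if t.endswith("ness"):                  # structured-ness, concise-ness
        return "system property"
    return "unclassified"

for term in ["structuredness", "conciseness", "modifiability", "testability"]:
    print(term, "->", property_kind(term))
```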
Interpreting activity properties as system properties ignores an underlying impact relationship. For example, interpreting _modifiability_ as the _system_ property of how receptive it is to change omits that actual system properties (e.g., whether the system is digital or analog or who has writing access rights) _impact_ the ability of a stakeholder to modify the system, which is an activity property.
To address the issue of heterogeneous properties, Deissenboeck et al. introduced _activity-based quality models_[14; 15], which separate system properties from activity properties and form two distinct, orthogonal dimensions. The
model expresses quality as the impact of system properties on activity properties. Figure 1 visualizes a simplified version of the quality model [15], showing how code clones impact the modification sub-activity and expressive identifiers impact the concept-location sub-activity.
The activity-based quality model was successfully applied to usability [30], security [31], and service-oriented architecture [32] before Wagner et al. distilled a comprehensive activity-based meta-model in the scope of the Quamoco project [33; 34]. In parallel, the original use case of the activity-based quality model, which focused on maintainability, received extensive tool support [35; 36] contributing evidence to the operationalization of quality models in practice [37].
Activity-based quality models solve limitations of previous quality models at the cost of increased complexity, which manifests in additional challenges to operationalize and communicate the notion of quality [38]. However, the complexity of these models is necessary to tackle the faceted concept of quality [38; 39]. Research continuously tackles the inability of activity-based quality models to assess artifact quality and distinguish quality levels [40]. For example, weights empirically derived from historical data replaced expert-based propositions [41], and Bayesian networks were utilized to model the impact relationships [42].
### Mapping to Requirements Quality Research
In the following, we draw a parallel of the evolution of quality research between the areas of software engineering and requirements engineering.
#### Metrics and Quality Models
Similar to software quality, requirements quality research historically originated from proposing metrics like _passive voice_ of requirements sentences [43] or _sentence length_[44], which are associated with bad quality of requirements
Figure 1: Excerpt from the activity-based quality model for maintainability
specifications. Frattini et al. [7] collected these quality factors and indicated their limitations. Most existing publications either fail to gauge the impact of these metrics [45] or explicitly disregard their relationship [46]. Requirements quality models [47; 48] integrate these factors into larger frameworks but often remain vague on their notion of impact.
The investigation of impact is often limited to a comparison between the quality factor and practitioners' subjective, general perception of the quality of the requirements entities [49]. Wilson et al. contribute a first impact matrix between quality indicators and quality attributes [50], but the latter suffers from the same heterogeneity of system and activity properties. Similarly, Yang et al. state that "[a]mbiguity is therefore not a property just of a text, but a conjoint property of the text and of the interpretations held by a group of readers of that text" [51], exposing the necessary distinction between system and activity properties.
#### Activity-based Requirements Quality
A large portion of requirements quality research exhibits the same shortcomings identified and overcome by software quality research, namely that (1) requirements quality factors lack relevance due to their unknown impact, which in turn inhibits adoption in practice, and (2) the terminology of requirements quality aspects confuses system and activity properties.
Femmer et al. apply the activity-based quality perspective to requirements engineering by proposing the activity-based requirements engineering quality model (ABRE-QM) [52]. This model leverages the insights from activity-based software quality models [15; 17; 33] and shows that the quality of requirements depends on the impact they have on the activities in which they are used. However, despite the authors' call for action [53], ABRE-QM saw little adoption in research as demonstrated in recent systematic investigations of the requirements quality literature [6; 7].
The ABRE-QM example above raises the concern that requirements quality researchers do not properly utilize the activity-based approach successfully employed in software quality research. In this manuscript, we want to encourage further research on this approach by presenting a revised requirements quality theory, a thorough investigation of the requirements quality literature verifying the hypotheses from previous studies [6; 7], and a consequent research roadmap.
## 3 Requirements Quality Theory
We generated a harmonized requirements quality theory (RQT) by consolidating the evolution of software quality models described in Section 2.1, their application in requirements engineering as described in Section 2.2, and alignment to the established Quamoco quality model [34]. In terms of theory types [54], the RQT is both _explanatory_, as it explains the notion of requirements quality, and _prescriptive_, as it prescribes how to report contributions
to requirements quality. The building blocks of the theory are described in Section 3.1 and illustrated with an example in Section 3.2.
### Theory
The concepts that constitute this theory are visualized in Figure 2, and each concept is described in Table 1. The model represents an evolution of the original activity-based requirements engineering quality model (ABRE-QM) proposed by Femmer et al. [52]. Here, we present changes to the original model.
\begin{table}
\begin{tabular}{l|l|l}
**Concept** & **Explanation** & **Origin** \\ \hline Entity & A requirements artifact or part thereof & [52] \\ Factor & “[A] normative metric which maps a textual requirement of a specific granularity” [7] to a numerical output & [15] \\ Entity-Fact & A composition of one entity and one factor & [15] \\ \hline Agent & Any person, group of people, or automatism involved & [52] \\ Activity & An activity in which the entity is used & [15] \\ Attribute & A measurable property of an activity & [30] \\ Activity-Fact & A composition of one activity and one attribute & [15; 52] \\ Impact & The impact of a fact on an activity-fact & [15; 56] \\ \hline Context Factor & A factor describing the context of the impact relationship & [55; 56] \\ \hline Cost & The magnitude of cost associated with an activity-fact & [56] \\ Resource & The resource affected by the economical impact & [56; 57] \\ \hline \end{tabular}
\end{table}
Table 1: Explanation and origin of theory concepts.
Figure 2: Concepts of the Requirements Quality Theory.
The artifact-related section of the model (left part of Figure 2) is largely equivalent to the original publications [15; 52]. Entities represent requirements artifacts of different granularity [5], which can be decomposed into further entities. For example, a requirements specification can be decomposed into sections, which in turn consist of paragraphs and sentences or requirements. We consider an artifact to be a high-level requirements entity and hence do not explicitly add the _artifact_ to the model, deviating from the original [52]. Similarly, factors can be decomposed into sub-factors to accommodate composite factors. For example, Antinyan et al. [58] position their proposed quality factor of _conjunctive complexity_ as a sub-factor of _syntactical complexity_.
The activity-related section of the model (middle part of Figure 2) again adapts the original models [15; 52]. The concept _activity_ does not represent common requirements activities, like elicitation, analysis, and validation [59], but rather every process that takes a requirements entity as input and produces an output. This includes some requirements activities (like analysis and validation, which use requirements as input) but not others (like elicitation, which often does not presuppose existing requirements). Hence, we rather refer to them as _requirements-affected activities_. These further include implicit sub-activities (e.g., _understanding_ and _interpreting_ an entity), which can be aggregated with other, more explicit sub-activities (e.g., _test case design_) to form high-level activities (e.g., _validation_). The decomposition relationship of the activity concept accommodates this aggregation. To accommodate not only human actors involved in activities but also any automatism like requirements processing tools [60] we abstract the concept of _stakeholder_ to _agent_.
We generalized the impact concept in this theory. While previous models assumed that impact is categorical (i.e., the occurrence of a fact has either a positive, negative, or no impact at all, as in Figure 1 [15]) or linear (i.e., the larger the evaluation of a quality factor, the better/worse its quality), we consider the impact to model any kind of relationship between Entity-facts and Activity-facts. This opens up the theory to more complex relationships, which can model the actual impact more accurately and allows comparing the impact of quality factors with each other.
Two concepts were added to the model. First, the impact was related to an _Activity-fact_ composed of an activity and an attribute as proposed by Winter et al. [30]. This way, the structure of the variables on the two sides of the impact relationship is mirrored. Furthermore, the necessity to associate an impact with a measurable property of an activity is emphasized. Second, context factors also influence the impact of an Entity-fact on an Activity-fact. As recognized by previous publications [55; 56], the impact differs depending on external factors related to, among others, the organization and the people involved [61].
The economic section of the model (right part of Figure 2) is a novel addition to previous iterations of the activity-based models [15; 34; 52]. As long as the subsequent _economic_ impact of an Activity-fact is unknown, the Entity-fact that produces the Impact on this Activity-fact will remain neglected [56; 57].
Hence, the software process economics perspective introduces a _Cost_ for a specific _Resource_ such as time or money.
### Example
In this section, we illustrate the RQT with a fictitious example to demonstrate its application. The example is additionally visualized in Figure 3.
In this example, a customer's requirements were elicited and documented in a requirements specification containing the entity _user story 42_. One relevant quality factor used by the organization responsible for implementing the requirements is template _conformance_, which prescribes that all user stories must follow the Connextra template [62] "As a \(<\)role\(>\) I want to \(<\)goal\(>\) so that \(<\)benefit\(>\)." This quality factor maps the entity to a categorical value, containing--among others--the values _conform_, _missing role_, and _missing all elements_. In this example, the role is omitted from the user story. Hence, the quality factor template conformance is evaluated to _missing role_, which constitutes the entity-fact (yellow box in Figure 3).
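Such a categorical factor can be evaluated mechanically. The following is a minimal sketch; the regular expressions and the returned value set are illustrative assumptions rather than a prescribed implementation.

```python
import re

def conformance(user_story: str) -> str:
    """Map a user story to a categorical conformance value for the
    Connextra template "As a <role> I want to <goal> so that <benefit>"."""
    has_role = re.search(r"\bas an? .+?,?\s*i want", user_story, re.I)
    has_goal = re.search(r"\bi want( to)? .+", user_story, re.I)
    has_benefit = re.search(r"\bso that .+", user_story, re.I)
    if has_role and has_goal and has_benefit:
        return "conform"
    if not (has_role or has_goal or has_benefit):
        return "missing all elements"
    if not has_role:
        return "missing role"
    return "other deviation"

print(conformance("I want to book an appointment so that I save time."))
# -> "missing role", the entity-fact of this example
```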
Figure 3: Exemplary instantiation of the theory
The organization uses this user story in a subsequent, requirements-affected _development_ activity, where a different stakeholder--the developer--is responsible for translating the entity into code. This activity can be decomposed into two distinct sub-activities: _understanding_ the entity and _programming_ the respective implementation.
One desired attribute of the activity understanding is _determinism_--i.e., a requirements entity should have only one unique interpretation. Possible variations of the interpretation and, therefore, the subsequent translation of a requirement must be avoided. Because the _conformance_ quality factor is evaluated to _missing role_ on the _user story_ entity, the _understanding_ activity is less _deterministic_, as the developer can make a different assumption about the role implied by the requirement. The understanding activity has become ambiguous, which constitutes the _activity-fact_ (orange box in Figure 3).
The relationship between the entity-fact and the activity-fact is the _impact_ of the quality factor. Instead of limiting the impact concept to categorical values (e.g., either _has an impact_ or _has no impact_), the RQT enables more complex impact relationships. In this fictitious example, the quality factor value _missing role_ is associated with a 64% chance of making the understanding sub-activity ambiguous. This relationship can be determined empirically via experimental research investigating the likelihood of the different values of the conformance quality factor reducing the determinism of the understanding sub-activity.
The programming sub-activity may go unaffected by the entity-fact that the conformance has a value of _missing role_ (green box in Figure 3): regardless of the agent's interpretation of the requirements entity, the programming sub-activity will remain unaffected with respect to the relevant attribute _duration_ under the assumption of a similar user interface for both roles. Whether the feature is coded for the role receptionist (as the customer intended) or patient (as the developer assumed) does not significantly change the duration of the sub-activity if the user interfaces only barely differ.
The significant impact on understanding is influenced by the organizational model, which is one relevant _context factor_. Since the organization is globally distributed and the two involved agents are unlikely to have informal interactions, the impact is amplified. In contrast, in a small organization where all involved agents share an office, the impact can be alleviated as missing information is recovered through informal communication. Similarly, the software development process model may significantly influence the impact of the quality factor, and the use of an agile approach may reduce the impact by encouraging communication between the customer and developer. The context factors significantly influence the impact and, therefore, have to be included in the relationship between entity-facts and activity-facts.
The reduced determinism of the understanding activity has an economic effect--i.e., the less deterministic the activity is, the more the implementation needs to be revised, which costs money and time (red box in Figure 3). Context
factors influence the extent of this effect as, for example, a re-implementation can be more costly in larger organizations due to organizational overhead.
For the sake of brevity, the example omits the following aspects: (1) the example limits the number of elements populating the relationship. More quality factors of the entity, activities, attributes of activities, and context factors are possibly involved in the relationship. (2) Interaction effects between quality factors and context factors are plausible but not reported here.
However, the example demonstrates how adherence to this activity-based RQT elevates requirements quality factors from normative rules (i.e., user stories must conform the template for the sake of it) to empirically-backed impact predictions (i.e., user stories must conform the template to mitigate ambiguous interpretations and avoid implementation cost).
## 4 State of research
Despite the publication of the ABRE-QM [52] and its authors' proposition to adapt the quality meta-model for future requirements quality research [53], recent systematic reviews raised concerns regarding a perspective on requirements quality limited to the artifact-related section of the model (left part of Figure 2) [6; 7].
To validate these concerns, we formulate the following research question. **How are the concepts of the requirements quality theory reported in requirements quality literature?** Answering this research question requires extracting information from a population of publications; accordingly, we employ survey research as our approach to gain insight into the current state of research. We follow the survey guidelines by Molleri et al. [63] and report our survey in the following subsections. All supplementary material for replicating this study is available in our replication package1.
Footnote 1: Available at [https://doi.org/10.5281/zenodo.8167598](https://doi.org/10.5281/zenodo.8167598).
### Survey Objects
The target population of our survey is the requirements quality literature dealing with quality factors in requirements artifacts. Frattini et al. [7] conducted a systematic study on requirements quality factors, including a sample of 57 primary studies. To our knowledge, this is the only sample that fulfills our aforementioned requirements. This classifies the sampling as non-probabilistic, more specifically convenience sampling [63].
### Study Design
We follow the recommended practices for the survey research process and report our steps accordingly [63]. However, we disregarded steps that only apply to surveys with human subjects, such as _participant recruitment_ and _response management_.
We derived the _definition of the research objectives_ in the form of the research question directly from previous research [6; 7; 53]. We established a _study plan_, rigorously documenting all research progress and justifications for any deviations during the process. We _identified and characterized the population_ of our survey and executed our _sampling plan_ as described in Section 4.1.
For our _instrument design_, we maintained two artifacts. We created an extraction guideline based on the RQT concepts. Each concept of the RQT was associated with one or more categorical variables, each containing a set of codes that represented _if_ and _how_ the concept was reported. The codes were created ad hoc in the first iteration of extraction and refined based on discussions and theoretical background in the second iteration.
The extent of the codes varied. The codes that represent how the concept _entity_ is reported are, for example, _explicit_ and _implicit_. An entity is reported explicitly if its scope and form are clear. It is reported implicitly if the authors just report that the factor applies to a "requirement" without defining whether this is a single natural language sentence or multiple sentences, whether the language is constrained or not, or whether it assumes a full sentence at all.
The codes of other concepts were more complex and grouped into distinct categories. For example, the codes of the concept _Factor_ were split into two groups, representing both the _explicitness_ when reporting a factor (i.e., whether the factor is explicitly _reported_ or _referenced_ from another publication) and the _form_ in which the factor is reported (i.e. if the factor is represented with a _textual description_ or defined using a logical or mathematical _formula_). The extraction guideline containing all codes, explanations, and examples can be found in the replication package.
The first author extracted the appropriate code for each concept in the requirements quality theory from each publication. The extractions for each publication in the sample were recorded in a spreadsheet. For _instrument validation_, the second author of this manuscript independently performed the extraction task using the guideline on six (\(\approx 10\%\)) publications randomly sampled from the survey objects. The second author performed the extraction on two of these six publications as training, and the remaining four were used to calculate the inter-rater reliability between the first and second author.
The task overlap achieved a percentage agreement [64] of 83.3%, whereas Cohen's Kappa yields a _moderate_ agreement of 54.2%. As Cohen's Kappa is unreliable for uneven marginal distributions [65], we calculated the more robust S-Score [66]--yielding a _good_ agreement of 76.8%--which we deem sufficient for assessing the inter-rater reliability.
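For reference, the three agreement measures can be computed as below on a toy pair of label sequences; Bennett's S is computed directly from its definition, S = (P_o - 1/k) / (1 - 1/k), under the assumption of k equally likely categories.

```python
from sklearn.metrics import cohen_kappa_score

def percent_agreement(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def s_score(a, b, k):
    """Bennett's S: chance-corrected agreement assuming k equally
    likely categories."""
    p_o = percent_agreement(a, b)
    return (p_o - 1 / k) / (1 - 1 / k)

# Toy extractions by two raters for one categorical variable.
rater1 = ["explicit", "explicit", "implicit", "explicit"]
rater2 = ["explicit", "implicit", "implicit", "explicit"]

print(percent_agreement(rater1, rater2))  # raw agreement: 0.75
print(cohen_kappa_score(rater1, rater2))  # Cohen's kappa
print(s_score(rater1, rater2, k=2))       # Bennett's S: 0.5
```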
We used the codes in the _data analysis_ phase to generate descriptive statistics on which we based our interpretation of the state of requirements quality. These form a quantified foundation for interpreting the state of requirements quality literature with respect to the research question. For final _reporting_, we adapted established reporting guidelines [63] and disclosed all material in a reusable replication package.
### Study Results
Figure 4 visualizes the distribution of the relevant codes among all concepts included in the requirements quality theory. Each concept is overlaid with a bar representing how many of the 57 publications contained the concept. The row below each concept represents its dimensions derived from the appropriate codes.
Though both entities and factors are reported in all 57 publications of the sample, a large portion (\(24/57=42.1\%\)) of entities is reported only implicitly--i.e., the entity's scope is not clear. This occurs mostly because authors attach the reported quality factor to the entity _requirement_ without specifying the scope or form of the entity. Montgomery et al. [6] have already noted this shortcoming in the requirements quality literature, and it represents a terminological ambiguity in the research domain.
Seventeen out of 57 publications (29.8%) do not report any impact on activities (code _N/A_) and hence neglect the practical relevance of the proposed quality factors. Agents are only reported in 14 (24.6%) of all publications. Activities are--when reported--predominantly elicited _ad hoc_ (\(37/40=92.5\%\)) and rarely _systematically_--i.e., when activities impacted by a quality factor are discussed, the identification of activities follows no systematic approach. Attributes are also only rarely reported (\(8/57=14\%\)).
We grouped the codes classifying how _impact_ is reported into four distinct dimensions, two of which are reported here. The _evidence_ for the impact--when at all reported--is dominantly hypothesized (\(19/40=47.5\%\)) and rarely either inductive (\(11/40=27.5\%\)) or referenced (\(10/40=25\%\)), i.e., draws the evidence from another publication. Previous studies [6; 7] have also noted this dominance of anecdotal, non-empirical evidence. The _modality_ of impact relationships is balanced between _necessary_ and _possible_--i.e., the impact of
Figure 4: Survey results depicting the distribution of codes.
quality factors is reported almost equally often to be certain or potential. The remaining two dimensions of impact (_generality_ and _frame of reference_) yielded no additional insight into the surveyed objects and are hence not reported here but contained in the replication package.
Context factors are almost completely neglected and only reported to a degree varying between zero (no publication reports the influence of any _tools_) and 24.6% (14 out of 57 publications reporting _product_-related factors, e.g., the system's size or type).
Both _cost_ and _resources_ are reported only rarely (\(9/57=15.8\%\) and \(5/57=8.8\%\) respectively) and, if so, only hypothesized or referenced, never determined empirically. Money and time are mentioned as the resources affected by activity impact, and the cost is only estimated in terms of expected change (e.g., "_reduction_ of the time spent" [46]) or general magnitude (e.g., "_significant_ amounts of money" [67]).
### Interpretation
In this section, we interpret the results presented in Section 4.3 and answer the research question.
Publications in the requirements quality literature adhere to the RQT to a varying degree. While all publications in the sample mentioned both an entity and a quality factor, activity-related concepts, context factors, and the economic impact are often neglected. Failing to consider the context factors severely threatens the external validity of the proposed quality factors [55; 56], and neglecting the economic impact risks undermining their acceptance [56; 57].
Context factors and economic impact are arguably more challenging to investigate [68]; however, we emphasize that the lack of an activity perspective when proposing quality factors is critical for several reasons. Completely neglecting a quality factor's impact limits the factor to a normative, unmotivated prescription and challenges its practical relevance [52], which in turn promotes skepticism regarding requirements quality factors in industry [8; 9; 10; 11].
The survey emphasized two additional shortcomings in the field of requirements quality research. First, the tendency to elicit activities _ad hoc_ when discussing the impact of requirements quality factors bears the risk of missing other important impacts. Most publications discuss a hypothesized impact of a quality factor on a non-systematically selected activity or set of activities. This selection is usually justified by anecdotal or folkloric circumstances, like "[a]mbiguous requirements may bring about misinterpretations among stakeholders, and prompt a few issues" [69].
While these impact relationships are neither empirically proven nor falsified, the non-systematic selection of activities can disregard other impact relationships. Femmer et al. [52] demonstrated that a systematic elicitation of activities could reveal both positive and negative impacts by the same quality factor. For example, the factor _free of UI design details_, which states that an "artifact should describe the problem domain instead of the solution domain" [52], will positively affect maintainability, as UI details are volatile in
the beginning and require a lot of change management if specified in a requirement. Conversely, the same factor negatively impacts understandability, as the presence of UI design makes requirements more comprehensible.
Second, while activities are not reported consistently, attributes of activities are reported even less. Attributes represent measurable characteristics of activities; for example, the activity _understanding_ can be quantified by its attribute _level of agreement_[58; 70] or a _readability index_[71]. Neglecting the quantifiable attributes of activities impedes an empirical evaluation of a quality factor impact because it omits the measurement instrument for the dependent variable (i.e., the activity-fact) in the impact relationship [30].
We conclude that the requirements quality theory is implicitly embedded in the requirements quality literature. However, insufficient adherence to it results in several limitations when reporting new requirements quality factors. While the artifact-centric theory concepts are commonly covered, activity-centric concepts, context factors, and economic concepts receive less attention, which decreases these publications' practical relevance. With this study, we empirically confirm the concerns voiced in previous investigations of the requirements quality literature [6; 7].
### Threats to Validity of this Research
We discuss the threats to validity proposed by Wohlin et al. [72] and extended by Molleri et al. [63].
_Internal Validity_
We acknowledge a threat to internal validity due to sampling of publications. The method of object selection [6; 7] is deemed sufficiently rigorous to derive an initial theory.
_Construct Validity_
The constructs in this study--i.e., the elements of the theory--are established strictly following mature quality theories from the field of software quality. This ensures the alignment between the underlying theory and measurement constructs.
The lack of a theory to which the surveyed publications could have adhered when reporting quality factors resulted in the concepts of requirements quality often being embedded implicitly, complicating the extraction task. We minimized the resulting threat to internal validity through independent labeling and calculating appropriate inter-rater reliability metrics [65].
_External Validity_
The selected sample of publications [7] is constrained to empirical contributions to requirements quality research [6]. This limits the conclusion validity of the type of evidence for the _impact_ concept, as non-empirical work could contribute _theoretical_ evidence for impact relationships. For example, the impact of quality factors like _nominalization_[73] can be derived deductively by referring to
valency reduction caused by nominalization [74]. While publications utilizing linguistic theory are unknown to the authors, a valid conclusion regarding this type of evidence requires a more thorough extension of the sampling strategy.
## 5 Research Roadmap
Femmer et al. proposed an initial research roadmap detailing how to advance the field of requirements quality research [53]. Based on concerns of previous studies [6; 7] and the survey of the state of research reported in this study, we assess and update the three suggested steps by Femmer et al. [53]:
1. Creation of "a reference artifact and a usage model" eliciting typical entities, activities, and agents.
2. Creation of "a taxonomy of quality factors" as a central, accessible repository of quality factors.
3. Creation of "a taxonomy of impacts" as a catalog of impacts from quality factors onto activities.
We reflect on these proposed research streams in Sections 5.1 to 5.3 and add three further proposals in Sections 5.4 to 5.6. Because these research streams are grounded in the experience of software quality research, we expect contributions to them to promote requirements quality research that is relevant to practice.
### Artifact and Usage Model
Mendez et al. have contributed a reference artifact model for requirements engineering [5; 75] based on their fundamental positioning on artifact orientation [76; 77]. The AMDiRE approach constitutes a domain-agnostic reference for artifact types and serves the purpose requested by Femmer et al. [53] in that it can be tailored towards any industry context to model an artifact structure.
While the elicitation of human [78] and non-human, automatic agents [79] has been addressed, a reference model for activities requires explicit attention in literature. More importantly, with the update of the requirements quality theory over the initial ABRE-QM [52], we argue that a reference model for requirements-affected activities needs to provide _attributes_ to quantify each activity. Such attributes enable an empirical assessment of the impact of quality factors.
Additionally, a majority of publications reporting an impacted activity mention some variation of _understanding_ or _interpreting_ (\(32/40=80\%\)). We assume that every requirements-affected activity comprises an initial _interpretation_ sub-activity. However, such composition is obscured by the lack of a proper reference model for requirements-affected activities accounting for their aggregated nature.
It is conceivable that the _interpretation_ sub-activity is most prone to defects, which explains the research community's focus on _ambiguity_[6], as ambiguity represents the non-determinism of an interpretation. We argue
that a proper reference model for requirements-affected activities accounting for their aggregated nature can steer research towards identifying critical sub-activities--i.e., the ones most prone to impacting subsequent activities.
### Taxonomy of Quality Factors
Requirements quality factors [7; 53] are the cornerstone of artifact-centric quality assurance. The requirements quality factor ontology proposed by Frattini et al. [7] furthered this research stream. Although the ontology is in an early stage and requires additional iterations, quality factors and related objects--such as data sets and automation approaches--are now collected in a central repository.
### Taxonomy of Impacts
The taxonomy of impacts that Femmer et al. [53] deem the necessary final step of the roadmap has to be extended. Previous quality models--including the ABRE-QM [52]--consider only categorical or, at most, linear impact relationships. Therefore, a taxonomy seemed sufficient to record "a list of well-examined effects of quality factors on activities" [53]. We argue that the impact relationship can be more complex and requires a more general representation--i.e., rather than aiming for a taxonomy of impacts, we argue for developing an _impact framework_.
Given the evaluation of quality factors on requirements entities on one side and the evaluation of activity attributes on the other side, the impact relationship between these variables can be formulated as a regression problem. Instead of relying on experts to hypothesize the (categorical) type or (linear) extent of an impact, more complex relationships can be determined using, for example, Bayesian data analysis [80]. Consequently, this research stream aims to develop an impact framework capable of determining these impact relationships based on statistical instruments given sufficient data.
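As a dependency-light illustration, the sketch below grid-approximates the posterior of a single impact coefficient from synthetic data; a real impact framework would model many quality factors, context factors, and activity attributes jointly, e.g., with full Bayesian regression tooling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: x = evaluation of one quality factor on an entity,
# y = measured activity attribute (e.g., duration of understanding).
x = rng.normal(size=200)
y = 0.6 * x + rng.normal(scale=0.5, size=200)  # true impact slope: 0.6

# Grid-approximate the posterior over the slope with a N(0, 1) prior and a
# Gaussian likelihood with known noise variance (simplifying assumptions).
slopes = np.linspace(-2.0, 2.0, 401)
log_prior = -0.5 * slopes**2
log_lik = np.array([-0.5 * np.sum((y - b * x) ** 2) / 0.25 for b in slopes])
log_post = log_prior + log_lik
post = np.exp(log_post - log_post.max())
post /= post.sum()

print(f"posterior mean impact: {np.sum(slopes * post):.2f}")  # close to 0.6
```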
### Context Factors
Context factors must be considered in the impact relationship to operationalize the requirements quality theory [55]. Large-scale endeavors acknowledge the importance of context factors in regard to requirements quality [1], yet no unified collection of context factors relevant to requirements engineering exists. Established sets of software engineering context factors [61; 81] can be used as a starting point but require a dedicated investigation from the requirements engineering perspective.
A clear set of relevant context factors can support developing reporting guidelines for empirical studies on requirements quality and enable context-driven research [82]. While empirical software and requirements engineering publications typically strive for generalizability [81], scoping an empirical study according to the given context factors allows the data collected in that study to be integrated into the impact framework as outlined in Section 5.3. Conversely,
reporting the limited scope of a study enables a general requirements quality theory that can be assembled from multiple studies in well-defined contexts.
### Economic Impact
With the addition of economic concepts in the requirements quality theory, a research stream should be dedicated to the economic impact of activity facts. The impact relationship between quality factors and activities already benefits the acceptance of those factors for quality assurance in practice [53]. Adding an economic perspective--i.e., what amount of which resource a change of a certain activity-fact entails--can further bridge the gap between the normative, artifact-centric quality factors on one side and an economic decision-making process on the other side [57]. Since the purpose of quality factors is to support quality assurance in industry, understanding this economic perspective is of high priority despite the complexity of the topic.
### Tool support
We aim to make the RQT applicable to the industrial context through the development of tool support. The components necessary to realize this tool support are visualized in Figure 5. The goal of the tool is to estimate the impact of requirements entities and their context on the attributes of requirements-affected activities.
To this end, the tool needs an interface to the requirements entities, context information about the involved agents, and context information about the organization. The former two are often available in a requirements tracking system like Jira2 [83], while a company likely has to generate and provide the latter manually.
Figure 5: Architectural overview of the proposed tool-support
Once provided with the necessary information, the tool characterizes both entities and context, i.e., quantifies the natural language requirements entities and the elusive factors determining the context. The quantified entities and context serve as input to the impact prediction model as described in Section 5.3, estimating the impact on the attributes of the requirements-affected activities, which in turn enables quantifying the economic impact as described in Section 5.5.
The realization of this tool depends on the previously described streams of research to identify valid quality factors (Section 5.2), context factors (Section 5.4), and activity attributes (Section 5.1). For the tool to provide an automated impact prediction, the following automation modules must be realized (a minimal interface sketch follows the list):
1. Automatic entity characterization: a shared architecture to automatically evaluate the requirements quality factors collected in the quality factor ontology [7]
2. Automatic impact prediction: an accessible statistical model estimating the impact of quantified entities and context on affected activities, trained on historical data.
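The sketch below outlines how these components could fit together. Every name, signature, and formula in it is a placeholder invented for illustration; in particular, the impact formula is arbitrary and merely stands in for the trained statistical model described above.

```python
# Placeholder interface sketch of the tool chain, not an existing implementation.
from dataclasses import dataclass

@dataclass
class Context:
    agent_experience_years: float  # example of an agent-related context factor
    domain: str                    # example of an organizational context factor

def characterize_entity(requirement: str) -> dict:
    """Automatic entity characterization: quantify quality factors.
    A trivial surface metric is computed here as a stand-in."""
    return {"length_in_words": len(requirement.split())}

def predict_impact(entity: dict, context: Context) -> dict:
    """Automatic impact prediction: a stand-in for a statistical model
    trained on historical data."""
    minutes = 0.5 * entity["length_in_words"] + 10.0 / (1.0 + context.agent_experience_years)
    return {"estimated_review_minutes": minutes}

impact = predict_impact(
    characterize_entity("The system shall respond within two seconds."),
    Context(agent_experience_years=3.0, domain="automotive"),
)
print(impact)
```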
Developing this tool while adhering to open science principles will allow scholars to propose new quality and context factors, customize relevant activity attributes, and contribute historical data to improve the impact estimation of the prediction model. We invite contributions to the implementation and maintenance of the tool via its dedicated repository on GitHub3.
Footnote 3: Available at [https://github.com/JulianFrattini/rqt-tool](https://github.com/JulianFrattini/rqt-tool). An archived version is accessible at [https://doi.org/10.5281/zenodo.8167541](https://doi.org/10.5281/zenodo.8167541).
## 6 Conclusion
In this manuscript, we investigated the software quality literature and the application of the activity-based quality perspective to the requirements engineering domain. We extend the work of Femmer et al. [52] by proposing an evolved and harmonized requirements quality theory, and we assess the adherence of the requirements quality literature to this theory. Our survey confirms the bias towards artifact-centric concepts and the neglect of activity-centric ones, which was noted in previous secondary studies [6; 7]. Finally, we update the requirements quality research roadmap initiated by Femmer et al. [53] to guide future contributions in the requirements quality research domain.
We are confident that the harmonized requirements quality theory provides the necessary guidance to propel requirements quality research and establish a common understanding of quality that is operationalizable in practice. We invite fellow researchers to contribute to the theory and the requirements quality research field in adherence to it.
Acknowledgments. This work was supported by the KKS foundation through the S.E.R.T. Research Profile project at Blekinge Institute of Technology.
|
2309.09604 | Interaction of soliton gases in deep-water surface gravity waves | Soliton gases represent large random soliton ensembles in physical systems
that display integrable dynamics at the leading order. We report hydrodynamic
experiments in which we investigate the interaction between two "beams" or
"jets" of soliton gases having nearly identical amplitudes but opposite
velocities of the same magnitude. The space-time evolution of the two
interacting soliton gas jets is recorded in a 140-m long water tank where the
dynamics is described at leading order by the focusing one-dimensional
nonlinear Schrodinger equation. Varying the relative initial velocity of the
two species of soliton gas, we change their interaction strength and we measure
the macroscopic soliton gas density and velocity changes due to the
interaction. Our experimental results are found to be in good quantitative
agreement with predictions of the spectral kinetic theory of soliton gas
despite the presence of perturbative higher-order effects that break the
integrability of the wave dynamics. | Loic Fache, Félicien Bonnefoy, Guillaume Ducrozet, François Copie, Filip Novkoski, Guillaume Ricard, Giacomo Roberti, Eric Falcon, Pierre Suret, Gennady El, Stéphane Randoux | 2023-09-18T09:19:01Z | http://arxiv.org/abs/2309.09604v1 | # Interaction of soliton gases in deep-water surface gravity waves
###### Abstract
Soliton gases represent large random soliton ensembles in physical systems that display integrable dynamics at the leading order. We report hydrodynamic experiments in which we investigate the interaction between two "beams" or "jets" of soliton gases having nearly identical amplitudes but opposite velocities of the same magnitude. The space-time evolution of the two interacting soliton gas jets is recorded in a \(140-\)m long water tank where the dynamics is described at leading order by the focusing one-dimensional nonlinear Schrodinger equation. Varying the relative initial velocity of the two species of soliton gas, we change their interaction strength and we measure the macroscopic soliton gas density and velocity changes due to the interaction. Our experimental results are found to be in good quantitative agreement with predictions of the spectral kinetic theory of soliton gas despite the presence of perturbative higher-order effects that break the integrability of the wave dynamics.
## I Introduction
Soliton gas (SG) is a concept in statistical mechanics and nonlinear physics that has been originally introduced by V. Zakharov in 1971 Zakharov (1971) as a large random ensemble of interacting solitons of the Korteweg-de Vries (KdV) equation. In the original Zakharov's model, the KdV SG is _diluted_ with all solitons being individually discernible in the physical space where they occupy random positions and have random amplitudes. The emergent dynamics of SG on a macroscopic (hydrodynamic) scale, significantly larger than the characteristic soliton width, is determined by the fundamental properties of the "elementary" interaction between individual solitons. Owing to the integrable nature of the KdV equation soliton collisions are pairwise (multi-particle effects are absent) and elastic, so that the interaction does not change the soliton amplitudes and velocities but produces only the additional position (phase) shifts Zakharov (1971).
In ref. Zakharov (1971) Zakharov introduced the kinetic equation for a non-equilibrium _diluted_ gas of weakly interacting solitons of the KdV equation. The Zakharov kinetic equation was generalized to the case of a dense (strongly interacting) KdV SG in ref. Zakharov (1972). The kinetic theory of SG for the focusing one-dimensional nonlinear Schrodinger equation (1D-NLSE) has been developed in refs. Zakharov (1973); Zakharov (1974).
Due to the presence of an infinite number of conserved quantities, random ensembles of nonlinear waves in integrable systems do not reach the thermodynamic equilibrium state characterized by an equipartition of energy leading to the so-called Rayleigh-Jeans distribution of the modes. Consequently, the properties of SGs are very different compared to the properties of classical gases whose particle interactions are non-elastic. The question of the thermodynamic properties of SGs is addressed by invoking _generalized hydrodynamics_ (GHD), the hydrodynamic theory of many-body quantum and classical integrable systems Zakharov (1975); Zakharov (1976); Zakharov (1977); Zakharov (1978).
It is well known that a comprehensive description of solitons and their interactions in physical systems described by integrable equations like the KdV equation or the 1D-NLSE is achieved within the framework of the celebrated inverse scattering transform (IST) method Zakharov (1971); Zakharov (1971); Zakharov (1971); Zakharov (1972); Zakharov (1973). In the IST method, each soliton is parametrized by a discrete eigenvalue of a linear spectral problem associated with the nonlinear wave equation under consideration Zakharov (1973). The fundamental property of integrable dynamics is isospectrality, i.e. the preservation of the soliton spectrum (the eigenvalues) under evolution.
The central quantity of interest in SG theory is the density of states (DOS), which represents the statistical distribution over the spectral (IST) eigenvalues. The spectral kinetic description of non-uniform (non-equilibrium) SGs involves the continuity equation for the DOS (associated with the isospectrality condition) and the equation of state defining the effective velocity of a tracer soliton inside a SG, which differs from its velocity in the "vacuum" due to the pairwise interactions with other solitons in the gas, accompanied by the position/phase shifts.
Despite the significant developments of the SG theory Zakharov (1971); Zakharov (1971); Zakharov (1972); Zakharov (1973); Zakharov (1973); Zakharov (1973); Zakharov (1974); Zakharov (1975); Zakharov (1976); Zakharov (1977); Zakharov (1978), the experimental and observational results related to SGs are quite limited Zakharov (1973); Zakharov (1974); Zakharov (1975); Zakharov (1976); Zakharov (1977); Zakharov (1978); Zakharov (1979). In recent works, it has been shown that SGs with controlled and measurable DOS can be generated in laboratory experiments made with deep-water surface gravity waves Zakharov (1979). An important step towards the quantitative verification of the spectral kinetic theory of SG has recently been made in optical fiber experiments where the refraction of a soliton by a dense soliton gas has been demonstrated Zakharov (1981). In
this experiment, the velocity change experienced by the tracer soliton in its interaction with an optical SG has been found to be in good quantitative agreement with the results of the spectral kinetic theory of SG.
In this article, we report further experiments to investigate the physical validity of the kinetic theory of SG. Instead of considering the interaction between a single tracer soliton and an SG as in ref. [31], we examine the interaction between two SG "beams" or "jets" in hydrodynamic experiments performed with deep-water surface gravity waves. By an SG jet we mean an SG having a narrow distribution of the discrete IST eigenvalues around some given point in the complex spectral plane. Sometimes such special SGs are called monochromatic, with the DOS modeled by the Dirac delta-function. Mathematically, the introduction of a DOS in the form of a linear superposition of several delta-functions (the "polychromatic" ansatz) leads to a significant simplification of the kinetic equation and the availability of analytical solutions describing various SG interactions [4; 15; 20; 32].
In our experiments, we consider the interaction of two monochromatic SG jets that are configured to have equal amplitudes and opposite velocities. In physical space, each jet has the form of a large ensemble of individual solitons, with all the solitons having nearly the same amplitude and velocity. This configuration has been considered theoretically in ref. [4] by formulating an appropriate Riemann problem for the SG kinetic equation. In this specific setting the DOS in the interaction region represents a linear superposition of two delta-functions, which reduces the SG kinetic equation to two quasilinear partial differential equations of hydrodynamic type. As shown in [4; 15; 22] the Riemann problem for the resulting two-component hydrodynamic system admits a simple weak solution consisting of three constant states (for each component) separated by two propagating contact discontinuities. This solution, in particular, describes the component density and velocity changes resulting from the nonlinear interaction between two SG jets. In this paper, we present hydrodynamic experiments where the theoretical predictions from the spectral kinetic theory of SG are verified with good accuracy, further confirming its physical validity.
This article is organized as follows. In Sec. II, we present the theoretical background from kinetic theory of SGs, which is necessary to describe the interaction between SG jets in the framework of the focusing 1D-NLSE. We illustrate the theoretical results with numerical simulations of the reduced kinetic equation describing the evolution in space and time of the densities of the two SG components. In Sec. III, we show how the IST method can be used to realize the implementation of two interacting SG jets in direct numerical simulations of the 1D-NLSE. In Sec. IV, we report our experimental results and compare them with the predictions of the kinetic theory.
## II Theoretical background
In this section, we provide a brief summary of the theoretical results from the SG theory that are relevant to the description of the interaction between two spectrally "monochromatic" SG jets. More details about this special class of SGs can be found in refs. [4; 15; 20; 23; 33]. We also illustrate the main theoretical results from the kinetic theory of SGs with some numerical simulations of the simplified SG kinetic equation describing the "two-jet" interactions.
### Analytical results from the spectral kinetic theory of SG
We consider nonlinear wave systems described by the integrable focusing 1D-NLSE that reads
\[i\psi_{t}+\psi_{xx}+2|\psi|^{2}\psi=0. \tag{1}\]
The fundamental soliton solution of Eq. (1) parameterized by the complex IST eigenvalue \(\lambda=\alpha+i\gamma\) (\(\alpha\in\mathbb{R}\), \(\gamma\in\mathbb{R}^{+}\)) reads
\[\psi(x,t)=2\gamma\frac{\exp[-2i\alpha x-4i(\alpha^{2}-\gamma^{2})t-i\phi_{0}] }{\cosh[2\gamma(x+4\alpha t-x_{0})]}, \tag{2}\]
where \(x_{0}\) and \(\phi_{0}\) represent the initial position and phase parameters. The real part of the eigenvalue \(\lambda\) encodes the velocity \(-4\alpha\) of the soliton in the \((x,t)\) plane, while the imaginary part determines its amplitude \(2\gamma\) (as a matter of fact, the IST spectrum of (2) also includes the complex conjugate \(\lambda^{*}=\alpha-i\gamma\)).
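As a quick numerical check of this parametrization (a sketch assuming nothing beyond Eq. (2) itself), one can evaluate the one-soliton field and verify that its peak amplitude equals \(2\gamma\) and that the peak moves with velocity \(-4\alpha\):

```python
# Evaluate the fundamental soliton (2) and check amplitude and velocity.
import numpy as np

def soliton(x, t, alpha, gamma, x0=0.0, phi0=0.0):
    phase = np.exp(-2j * alpha * x - 4j * (alpha**2 - gamma**2) * t - 1j * phi0)
    return 2 * gamma * phase / np.cosh(2 * gamma * (x + 4 * alpha * t - x0))

x = np.linspace(-40, 40, 4001)
alpha, gamma = 0.5, 1.0
peak0 = x[np.argmax(np.abs(soliton(x, 0.0, alpha, gamma)))]
peak1 = x[np.argmax(np.abs(soliton(x, 1.0, alpha, gamma)))]
print(np.abs(soliton(x, 0.0, alpha, gamma)).max())  # ~2*gamma = 2.0
print(peak1 - peak0)                                # ~ -4*alpha = -2.0
```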
In the spectral kinetic theory of 1D-NLSE SG, the DOS represents the distribution \(f(\lambda;x,t)\) over the spectral eigenvalues, so that \(fd\alpha d\gamma dx\) is the number of soliton states found at time \(t\) in the element of the 3D phase space \([\alpha,\alpha+d\alpha]\times[\gamma,\gamma+d\gamma]\times[x,x+dx]\). Due to the isospectrality condition associated with the integrable nature of Eq. (1), the space-time evolution of the DOS \(f(\lambda;x,t)\) is governed by the continuity equation
\[\frac{\partial f}{\partial t}+\frac{\partial(sf)}{\partial x}=0, \tag{3}\]
where \(s=s(\lambda;x,t)\) represents the transport velocity of a tracer soliton inside a SG. For the focusing 1D-NLSE, the equation of state connecting the SG transport velocity with the DOS \(f(\lambda;x,t)\) reads
\[\begin{split} s(\lambda;x,t)&=-4\Re(\lambda)\,+\, \frac{1}{\Im(\lambda)}\iint\limits_{\Lambda^{+}}\ln\left|\frac{\mu-\lambda^{* }}{\mu-\lambda}\right|\\ &\qquad\left[s(\lambda;x,t)-s(\mu;x,t)\right]f\!\left(\mu;x,t \right)\!d\xi d\zeta\end{split} \tag{4}\]
where \(\mu=\xi+i\zeta\) and \(\Lambda^{+}\subset\mathbb{C}^{+}\setminus i\mathbb{R}^{+}\) represents the 2D compact domain or 1D curve in the upper complex half-plane where the discrete eigenvalues parametrizing the
SG of interest are located (it is sufficient to consider only the upper half-plane due to the c.c. (Schwarz) symmetry of the soliton spectrum).
Eqs. (3), (4) form the general kinetic equation for the focusing 1D-NLSE SG (see [4; 5]). It is a nonlinear integro-differential equation describing the evolution in space and time of the SG DOS \(f(\lambda,x,t)\). The system (3), (4) can be considerably simplified if it is assumed that the SG is composed of a finite number of "monochromatic" components, or SG jets, each characterized by a DOS in the form of a Dirac delta-function. Here we concentrate on the two-component case involving two species of solitons with identical amplitudes and opposite velocities. The corresponding DOS has the form
\[f(\lambda;x,t)=\rho_{1}(x,t)\ \delta(\lambda-\lambda_{1})+\rho_{2}(x,t)\ \delta(\lambda-\lambda_{2}) \tag{5}\]
with \(\lambda_{1}=-\alpha+i\gamma\) and \(\lambda_{2}=\alpha+i\gamma\). Here \(\rho_{1,2}(x,t)\) are the SG component densities.
Under the ansatz (5) Eqs. (3), (4) reduce to the following "two-jet" hydrodynamic system [4]
\[\begin{split}\frac{\partial\rho_{1}(x,t)}{\partial t}+\frac{ \partial(s_{1}(x,t)\,\rho_{1}(x,t))}{\partial x}&=0,\\ \frac{\partial\rho_{2}(x,t)}{\partial t}+\frac{\partial(s_{2}(x, t)\,\rho_{2}(x,t))}{\partial x}&=0,\end{split} \tag{6}\]
with the component transport velocities given by
\[\begin{split} s_{1}&=4\alpha\frac{1-\kappa(\rho_{1 }-\rho_{2})}{1-\kappa(\rho_{1}+\rho_{2})},\\ s_{2}&=-4\alpha\frac{1+\kappa(\rho_{1}-\rho_{2}) }{1-\kappa(\rho_{1}+\rho_{2})}.\end{split} \tag{7}\]
Here \(\kappa\) is the interaction parameter
\[\kappa=\frac{1}{2\gamma}\ln\left(1+\frac{\gamma^{2}}{\alpha^{2}}\right), \tag{8}\]
which represents the space shift due to the collision between two individual solitons with spectral parameters \(\lambda_{1}\) and \(\lambda_{2}\) [34].
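The following short illustration evaluates Eqs. (7)-(8) numerically for \(\alpha=0.5\), \(\gamma=1\); the density values are chosen to anticipate the interaction-region densities found below:

```python
# Interaction parameter (8) and component transport velocities (7).
import numpy as np

def kappa(alpha, gamma):
    return np.log(1 + gamma**2 / alpha**2) / (2 * gamma)

def transport_velocities(rho1, rho2, alpha, gamma):
    k = kappa(alpha, gamma)
    denom = 1 - k * (rho1 + rho2)
    s1 = 4 * alpha * (1 - k * (rho1 - rho2)) / denom
    s2 = -4 * alpha * (1 + k * (rho1 - rho2)) / denom
    return s1, s2

print(kappa(0.5, 1.0))                                 # ~0.8047
print(transport_velocities(0.3026, 0.3026, 0.5, 1.0))  # ~(3.899, -3.899)
```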
As observed in [4] (see also [33]) system (6), (7) is equivalent to the so-called Chaplygin gas equations, the system of isentropic gas dynamics with the equation of state \(p=-A/\rho\), where \(p\) is the pressure, \(\rho\) is the gas density and \(A>0\) is a constant. The Chaplygin gas equations occur in certain theories of cosmology (see e.g. [35]) and are also equivalent to the 1D Born-Infeld equation arising in nonlinear electromagnetic field theory [36; 37]. The fundamental property of system (6), (7) is its linear degeneracy. Indeed, upon introducing the dependent variables \(s_{1,2}(x,t)\) instead of \(\rho_{1,2}(x,t)\) in (6) one arrives at the diagonal system
\[\frac{\partial s_{1}}{\partial t}+s_{2}\frac{\partial s_{1}}{\partial x}=0, \quad\frac{\partial s_{2}}{\partial t}+s_{1}\frac{\partial s_{2}}{\partial x }=0, \tag{9}\]
with the characteristic velocities not depending on the corresponding Riemann invariants. Linear degeneracy of system (6), (7) implies the absence of wave-breaking effects and of the classical shock formation, the only admissible singularities being contact discontinuities [38].
Following ref. [4], we use system (6), (7) to describe the collision between two SG jets with spatially uniform DOS's: \(\rho_{10}\delta(\lambda-\lambda_{1})\) and \(\rho_{20}\delta(\lambda-\lambda_{2})\)--that are spatially separated at initial time. The corresponding initial condition for Eq. (6) has the form
\[\begin{split}\rho_{1}(x,0)&=\rho_{10},\qquad\rho_{ 2}(x,0)=0\qquad\text{for}\qquad x<0,\\ \rho_{1}(x,0)&=0,\qquad\rho_{2}(x,0)=\rho_{20} \qquad\text{for}\qquad x>0,\end{split} \tag{10}\]
and it is schematically shown in Fig. 1(a).
This is a Riemann or "shock-tube" problem for the system of hydrodynamic conservation laws (6). Its solution, schematically shown in Fig. 1(b), consists of three constant states for \((\rho_{1},\rho_{2})\) separated by two contact discontinuities [4]:
\[(\rho_{1}(x,t),\rho_{2}(x,t))=\begin{cases}(\rho_{10},0)&x<c^{-}t,\\ (\rho_{1c},\rho_{2c})&c^{-}t\leq x<c^{+}t\\ (0,\rho_{20})&c^{+}t\leq x,\end{cases} \tag{11}\]
where the values of the component densities \(\rho_{1c},\rho_{2c}\) in the interaction region and the velocities \(c^{-}\) and \(c^{+}\) of the contact discontinuities are found from the Rankine-Hugoniot conditions to be (see [4] for details)
\[\begin{split}\rho_{1c}&=\frac{\rho_{10}(1-\kappa\rho_{20 })}{1-\kappa^{2}\rho_{10}\rho_{20}},\\ \rho_{2c}&=\frac{\rho_{20}(1-\kappa\rho_{10})}{1-\kappa^{2} \rho_{10}\rho_{20}}.\end{split} \tag{12}\]
Figure 1: (a) Initial condition (Eq. 10) for the Riemann problem for the two-jet hydrodynamic system (Eqs. (6), (7)) and (b) schematic of the solution given by Eq. (11).
\[c^{-}=s_{2c}=-4\alpha\frac{1+\kappa(\rho_{1c}-\rho_{2c})}{1-\kappa( \rho_{1c}+\rho_{2c})}, \tag{13}\] \[c^{+}=s_{1c}=4\alpha\frac{1-\kappa(\rho_{1c}-\rho_{2c})}{1-\kappa( \rho_{1c}+\rho_{2c})}.\]
One should note that the denominators in (12), (13) never vanish due to a fundamental restriction related to the notion of critical, or condensate, DOS (see [5]). Moreover, it is not difficult to show that the interaction between the two SGs results in a "dilution" of each of the two species, i.e. \(\rho_{1c}<\rho_{10}\), \(\rho_{2c}<\rho_{20}\).
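The values quoted below for Fig. 3 follow directly from Eqs. (8), (12) and (13), as the following short check shows:

```python
# Riemann-problem densities (12) and contact-discontinuity velocities (13)
# for alpha = 0.5, gamma = 1 and rho10 = rho20 = 0.4.
import numpy as np

alpha, gamma = 0.5, 1.0
rho10 = rho20 = 0.4
kappa = np.log(1 + gamma**2 / alpha**2) / (2 * gamma)

rho1c = rho10 * (1 - kappa * rho20) / (1 - kappa**2 * rho10 * rho20)
rho2c = rho20 * (1 - kappa * rho10) / (1 - kappa**2 * rho10 * rho20)
c_plus = 4 * alpha * (1 - kappa * (rho1c - rho2c)) / (1 - kappa * (rho1c + rho2c))
c_minus = -4 * alpha * (1 + kappa * (rho1c - rho2c)) / (1 - kappa * (rho1c + rho2c))

print(rho1c, rho2c)     # ~0.3026 < 0.4: each component is diluted
print(c_plus, c_minus)  # ~3.899 and ~-3.899
```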
Fig. 2 shows the densities \(\rho_{1,2c}\) and the velocities \(s_{1,2c}\) in the interaction region as functions of \(\alpha\), which is the parameter that determines the relative velocities of the two SG species. The parameter \(\gamma\) determining the amplitude of the solitons has been fixed to unity. The initial densities are taken to be \(\rho_{10}=\rho_{20}=0.4\) in the left column and \(\rho_{10}=\rho_{20}=0.16\) in the right column. For the values of \(\alpha\) that are large enough (\(\alpha\gtrsim 0.7\)), the interaction parameter \(\kappa\) is relatively small (\(\kappa\lesssim 1\)) and the kinetic theory predicts that the density changes in the interaction region are relatively small (\(\rho_{1,2c}\sim\rho_{1,20}\)). On the other hand, the interaction between the two species increases when their initial relative velocity is small (\(\alpha\lesssim 0.5\)). This results in the density changes that are more significant for smaller values of \(\alpha\).
The dashed red lines with slopes \(\pm 4\alpha\) in Fig. 2 represent the velocities that each species of SG would have in the \((x,t)\) plane without any interaction with the other one (\(\kappa=0\) in Eq. (13)). The black lines in the bottom row of Fig. 2 indicate the velocities \(s_{1,2c}\) that are taken by each species as the result of the interaction with the other one. The comparison between the right and left columns shows that the velocity changes are more pronounced when the initial density of the SGs is large. One of the goals of this paper is to compare the theoretical curves presented in Fig. 2 with results of physical experiments, see Sec. IV.
We now present two series of numerical simulations where we verify the weak solution (11) by (i) numerically solving the two-jet kinetic equation (6) and (ii) performing direct simulations of the 1D-NLSE (1). Both simulations are performed for the initial data relevant to the physical experiments to be discussed in Section IV.
### Numerical simulations of the kinetic equations
Fig. 3 shows numerical simulations of the kinetic equation illustrating the theoretical results presented in Sec. II.1. We consider two SG jets with the DOS being given by Eq. (5) and \(\lambda_{1,2}=\mp 0.5+i\) (\(\alpha=0.5\), \(\gamma=1\)), as shown in Fig. 3(a). We have numerically integrated the "two-jet" kinetic equations (6) using a standard pseudo-spectral method where the space derivatives are computed in Fourier space. To avoid numerical problems associated with the finite size of the numerical box and the discontinuities of the initial condition used in the analytical calculations of ref. [4] (Eq. (10)), the initial condition taken in our numerical simulations is composed of two boxes of large extents and uniform initial densities \(\rho_{10}=\rho_{20}=0.4\), as shown in Fig. 3(b).
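For reference, a condensed version of such a solver is sketched below. It assumes a periodic domain, tanh-smoothed box edges and a classical RK4 time stepper, and it omits the spectral filtering that a careful long-time integration would require.

```python
# Minimal pseudo-spectral sketch of the two-jet system (6)-(7).
import numpy as np

alpha, gamma = 0.5, 1.0
kappa = np.log(1 + gamma**2 / alpha**2) / (2 * gamma)

N, L = 4096, 400.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
wavenumbers = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

def ddx(f):  # FFT-based spatial derivative on the periodic domain
    return np.real(np.fft.ifft(1j * wavenumbers * np.fft.fft(f)))

def box(center, width):  # smoothed box profile
    return 0.5 * (np.tanh(x - center + width / 2) - np.tanh(x - center - width / 2))

rho = np.array([0.4 * box(-60.0, 100.0), 0.4 * box(+60.0, 100.0)])

def rhs(rho):  # right-hand sides of the two continuity equations
    rho1, rho2 = rho
    denom = 1.0 - kappa * (rho1 + rho2)
    s1 = 4 * alpha * (1 - kappa * (rho1 - rho2)) / denom
    s2 = -4 * alpha * (1 + kappa * (rho1 - rho2)) / denom
    return np.array([-ddx(s1 * rho1), -ddx(s2 * rho2)])

t, dt = 0.0, 2e-3
while t < 12.0:  # evolve to t = 12, the time shown in Fig. 3(c)
    f1 = rhs(rho)
    f2 = rhs(rho + 0.5 * dt * f1)
    f3 = rhs(rho + 0.5 * dt * f2)
    f4 = rhs(rho + dt * f3)
    rho = rho + (dt / 6.0) * (f1 + 2 * f2 + 2 * f3 + f4)
    t += dt
```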
Fig. 3(d)(e) show the space-time evolutions of the densities \(\rho_{1,2}(x,t)\) of the two SG jets that are initially separated and start interacting from \(t\sim 5\). As a result of the interaction the density of each species falls from \(\rho_{10}=\rho_{20}=0.4\) to \(\sim 0.302\), see the color scale that changes from yellow to green in Fig. 3(d)(e). The numerical value of the densities computed in the interaction region is in perfect agreement with theoretical predictions, as shown in Fig. 3(c) where the green dashed line represents the densities \(\rho_{1c}=\rho_{2c}\) that are computed using the analytical expressions given by Eq. (12).
In addition to the density changes due to the interaction, Fig. 3(d)(e) show that the velocity changes found in numerical simulations are also in perfect agreement with theoretical predictions, see white dashed lines parallel to the boundaries of the SGs and associated with velocities \(s_{1c}\sim 3.898\) and \(s_{2c}\sim-3.898\) that are given by Eq. (13). Finally, Fig. 3(f) shows that, although the density of each species decreases due to the interaction, the total density \(\rho_{1c}+\rho_{2c}\) of the SG in the interaction region is larger than the individual densities \(\rho_{1,2}(x,t)=\rho_{10,20}\) of each gas outside the interaction region. At the same time, \([\rho_{1c}+\rho_{2c}]<[\rho_{10}+\rho_{20}]\), i.e. the SG component interaction leads to an overall dilution compared to the non-interacting two-component gas. This feature has already been pointed out in ref. [4].

Figure 2: Evolution of the densities \(\rho_{1,2c}\) and of the velocities \(s_{1,2c}\) of the interacting SG jets as a function of \(\alpha\), the parameter determining the relative velocity of the two jets. The plots in the left (resp. right) column are computed from Eq. (12) and Eq. (13) with parameters that describe the densities of the non-interacting SGs being \(\rho_{10}=\rho_{20}=0.4\) (resp. \(\rho_{10}=\rho_{20}=0.16\)) and \(\gamma=1\). The red dashed lines represent the free velocities \(\pm 4\alpha\) of the non-interacting SGs (\(\kappa=0\)).
Summarizing, the kinetic theory of SG predicts that the interaction between two monochromatic SG jets having opposite mean velocities but identical mean amplitudes results in density and velocity changes that are illustrated in Figs. 2, 3. Our goal in this paper is to perform a hydrodynamic experiment to quantitatively verify these theoretical predictions. Before moving to experimental results, we present in Sec. III direct numerical simulations of the 1D-NLSE corresponding to the numerical simulations of the two-jet kinetic equations shown in Fig. 3.
## III Interacting soliton gas jets in numerical simulations of the 1D-NLSE
In this Section, we show how the IST method can be used to implement two interacting jets of SGs directly in numerical simulations of the 1D-NLSE, rather than in simulations of the kinetic equations.
A nonlinear wave field \(\psi(x,t)\) satisfying Eq. (1) can be characterized by the so-called scattering data (the IST spectrum). For localized wave fields decaying to zero as \(x\to\infty\) the IST spectrum consists of a discrete part related to the soliton content and a continuous part related to the dispersive radiation. A special class of solutions, the N-soliton solutions (N-SSs), exhibits only a discrete spectrum consisting of N complex-valued eigenvalues \(\lambda_{n}\), \(n=1,...,N\), and \(N\) complex parameters \(C_{n}=|C_{n}|e^{i\phi_{n}}\), called norming constants, defined for each \(\lambda_{n}\). The complex discrete eigenvalues encode the amplitudes and velocities of the solitons while the norming constants encode their phases and "positions" in physical space [2].
Using a recursive algorithm based on the Darboux transform [39], we have built an N-SS of Eq. (1) with \(N=100\). The discrete eigenvalues associated with this N-SS are partitioned into two random sets, each being linked to a given SG. The first (resp. second) SG is parameterized by 50 eigenvalues that are randomly distributed in a uniform way within a small square region of the complex plane centered around \(\lambda_{1}=-0.5+i\) (resp. \(\lambda_{2}=0.5+i\)), see Fig. 4(e). Following the approach described in ref. [40; 41], we have synthesized the SG by implementing the Darboux recursive scheme in high precision arithmetics, a requirement due to the large number of solitons composing the wave field. The wave field has been synthesized at \(t=0\) and a standard NLSE solver based on a pseudo-spectral method has been used to compute the space-time evolution at longer time, as shown in Fig. 4(a). At initial time, the two SGs are separated without any spatial overlap between the two species, see Fig. 4(c). Each of the two SGs is composed of 50 solitons having approximately the same amplitude while being individually discernible. The random nature of each gas can be appreciated in physical space through the fact that the distance between neighboring solitons is not fixed but random.

Figure 3: Numerical simulations of the “two-beam” kinetic equations (Eq. 6) showing the interaction between two jets of SGs. (a) Spectral (IST) parameters of the two interacting SGs, with the DOS being defined by Eq. (5) with \(\lambda_{1}=-0.5+i\) and \(\lambda_{2}=0.5+i\). (b) Initial distribution of the densities \(\rho_{1,2}(x,t=0)\). (c) Numerically computed distribution of the densities at \(t=12\). The green dashed line represents the densities in the interaction region that are computed using Eq. (12) with \(\rho_{10}=\rho_{20}=0.4\) (\(\alpha=0.5\), \(\gamma=1\)). (d) Space-time evolution of the density \(\rho_{1}(x,t)\). The region in green is the interaction region where the density has decreased from \(\rho_{10}=0.4\) to \(\rho_{1c}\sim 0.302\). (e) Same as (d) but for the second species \(\rho_{2}(x,t)\). (f) Space-time evolution of the sum of the densities \(\rho_{1}(x,t)+\rho_{2}(x,t)\) showing that the total density has increased in the interaction region even though the individual densities have decreased.
Let us emphasize that the two SGs that we have realized are as dense as possible. The Darboux method is a recursive transformation scheme where a "seeding solution" of the focusing 1D-NLSE is used as a building block for the construction of a higher-order solution through the addition of one discrete eigenvalue. The Darboux transform machinery produces N-SSs in such a way that the smaller the distance between the eigenvalues, the greater the physical separation between the solitons in physical space. For our SG, the mean distance in physical space between neighboring solitons of each species is therefore determined by the size of the square regions where the discrete eigenvalues are located, see Fig. 4(e). However, the mean distance between solitons not only depends on the distance between the eigenvalues \(\lambda_{n}\) but also on the norming constants \(C_{n}\). In Fig. 4, the SG has been made as dense as possible by setting the moduli \(|C_{n}|\) of the norming constants to unity and by distributing uniformly their phases \(\phi_{n}\) between 0 and \(2\pi\), similarly to what has been done in ref. [30]. The SG of Fig. 4 cannot be denser than it is but it could be diluted by randomly distributing the moduli of the norming constants over some interval having a nonzero extent.
At time \(t=0\) each of the two species constitutes a uniform SG whose density \(\rho_{0}\) represents the number \(n\) of solitons relative to the interval of length \(l\) they occupy: \(\rho_{0}=n/l\). In Fig. 4(a)(c), the initial densities \(\rho_{10}\) and \(\rho_{20}\) of each of the two non-interacting species are \(n/l\sim 50/320\sim 0.156\), which is the highest possible for the spectral parameters that have been chosen (see Fig. 4(e)). This means that the numerical results presented in this Section and their associated experimental results presented in Sec. IV must be compared with theoretical predictions of the kinetic theory that are plotted in the right column of Fig. 2, where \(\rho_{10}=\rho_{20}=0.16\).
Figure 4: Numerical simulations of Eq. (1) with the initial condition in the form of two “monochromatic” beams of SGs with opposite velocities. At initial time, each beam of SG is composed of 50 solitons with nearly identical amplitudes and opposite velocities (\(\alpha=0.5\), \(\gamma=1\)). (a) Space-time plot showing velocity and density changes arising from the interaction between the two SGs. (b) Enlarged view of the interaction region showing microscopic dynamics and multiple elastic collisions between individual solitons. (c) Modulus \(|\psi(x,t=0)|\) of the initial condition. (d) Modulus of the field at time \(t=48\). (e) Discrete IST spectrum of the field composed of two separate clouds of 50 eigenvalues centered around \(\lambda_{1,2}=\mp 0.5+i\).

Fig. 4(a) shows that the interaction between the two species results in a "dilution" associated with a drop in the densities. In the center of the interaction region, at time \(t\sim 75\), each of the two species containing \(n=50\) solitons now occupies a spatial domain whose extent has increased from \(l\sim 320\) to \(l^{\prime}\sim 362\). This results in a decrease of the densities, which fall from \(\rho_{10}=\rho_{20}\sim 0.156\) to \(\rho_{1c}=\rho_{2c}=n/l^{\prime}=50/362\sim 0.138\), in good quantitative agreement with the expressions (12) obtained within the framework of the kinetic theory of SG. In addition to density changes, Fig. 4(a) also shows that the interaction between the two species of SG leads to changes in their relative velocities. Simulations of the 1D-NLSE plotted in Fig. 4(a) show that the mean velocity of the first species increases from \(4\alpha\sim 2\) to \(s_{1c}\sim 2.57\) due to the interaction, once again in good quantitative agreement with the results from the kinetic theory (Eq. 13).
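These density values follow from simple counting:

```python
# Density as soliton count per unit length, rho = n / l.
n = 50
print(n / 320)  # initial density of each jet, ~0.156
print(n / 362)  # density in the interaction region, ~0.138
```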
Recent optical fiber experiments reported in ref. [31] have investigated the interaction between an individual tracer soliton and a dense SG. It has been shown that the tracer soliton experiences an effective velocity change due to its interaction with the optical SG. The experimental features observed in this optical fiber experiment are qualitatively similar to the classical refraction phenomenon observed in ray optics at the interface between two dielectric media having different refractive indexes. Here, the space-time evolution shown in Fig. 4(a) for two SG jets is also reminiscent of ray optics, with one beam/jet of SG being shifted in space, not because of propagation in a medium with a different refractive index, but because of the nonlinear interaction with the other beam/jet of SG. Note that the velocity and density changes measurable for each species of SG at the macroscopic scale are emergent effects due to the numerous elementary elastic collisions between individual solitons occurring at the microscopic, soliton, scale, as shown in Fig. 4(b).
## IV Experiments
### Experimental setup and generation of the initial wave field
The experiments have been performed in a wave flume at the Hydrodynamics, Energetics and Atmospheric Environment Lab (LHEEA) in Ecole Centrale de Nantes (France). The flume, which is 140 m long, 5 m wide and 3 m deep, is equipped with an absorbing beach that is approximately 8 m long, see Fig. 5. With the addition of pool lanes arranged in a W pattern in front of the beach, the measured amplitude reflection coefficient is as low as 1%. Unidirectional waves are generated with a computer-assisted flap-type wavemaker. As in the experiments reported in refs. [30; 42], the setup comprises 20 equally spaced resistive wave gauges that are installed along the basin at distances \(Z_{j}=j\times 6\) m, \(j=1,2,...20\), from the wavemaker located at \(Z=0\) m. This provides an effective measuring range of 114 m.
In our experiments, the water elevation at the wavemaker reads \(\eta(Z=0,T)=Re\left[A_{0}(T)e^{i\omega_{0}T}\right]\), where \(\omega_{0}=2\pi f_{0}\) is the angular frequency of the carrier wave. In all the experiments presented in our paper, the frequency of the carrier wave is set to \(f_{0}=1.01\) Hz. \(A_{0}(T)\) represents the complex envelope of the initial condition. Our experiments are performed in the deep-water regime, and they are designed in such a way that the observed dynamics is described at leading order by the focusing 1D-NLSE
\[\frac{\partial A}{\partial Z}+\frac{1}{C_{g}}\frac{\partial A}{\partial T}=i \frac{k_{0}}{\omega_{0}^{2}}\frac{\partial^{2}A}{\partial T^{2}}+i\beta k_{0} ^{3}|A|^{2}A, \tag{14}\]
where \(A(Z,T)\) represents the complex envelope of the water wave that changes in space \(Z\) and in time \(T\)[43]. \(k_{0}\) represents the wavenumber of the propagating wave (\(\eta(Z,T)=Re\left[A(Z,T)e^{i(\omega_{0}T-k_{0}Z)}\right]\)), which is linked to \(\omega_{0}\) according to the deep water dispersion relation \(\omega_{0}^{2}=k_{0}g\), where \(g\) is the gravity acceleration. \(C_{g}=g/(2\omega_{0})\) represents the group velocity of the wavepackets and \(\beta\simeq 0.91\) is a dimensionless term describing the small finite-depth correction to the cubic nonlinearity [42].
The first important step of the experiment consists in generating an initial condition \(A_{0}(T)\) in the form of two "monochromatic" beams of SGs, as illustrated in Fig. 4(c). To achieve this, we have to convert the dimensionless fields synthesized as initial conditions (see Sec. III) into physical units. Connections between physical variables of Eq. (14) and dimensionless variables in Eq. (1) are given by \(t=Z/(2L_{NL})\), \(x=(T-Z/C_{g})\sqrt{g/(2L_{NL})}\) with the nonlinear length being defined as \(L_{NL}=1/(\beta k_{0}^{3}\,a^{2})\), where \(a\) represents the mean peak amplitude of solitons outside the interaction region (\(a\simeq 2.8\) cm in all our experiments).
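These conversions are straightforward to script. The following sketch reproduces the quoted steepness \(k_{0}a\simeq 0.115\) and nonlinear length \(L_{NL}\simeq 20.3\) m from the stated experimental parameters:

```python
# Physical-to-dimensionless conversion for the deep-water experiments.
import numpy as np

g = 9.81     # gravity acceleration (m/s^2)
f0 = 1.01    # carrier frequency (Hz)
a = 0.028    # mean soliton peak amplitude (m)
beta = 0.91  # finite-depth correction to the cubic nonlinearity

omega0 = 2 * np.pi * f0
k0 = omega0**2 / g    # deep-water dispersion relation
Cg = g / (2 * omega0) # group velocity
L_NL = 1 / (beta * k0**3 * a**2)

print(k0 * a)  # steepness, ~0.115
print(L_NL)    # nonlinear length, ~20.3 m

def to_dimensionless(Z, T):
    """Map physical (Z, T) to the (x, t) variables of Eq. (1)."""
    return (T - Z / Cg) * np.sqrt(g / (2 * L_NL)), Z / (2 * L_NL)
```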
Numerical simulations of Fig. 4(a) show that \(\sim 140\) units of normalized time are needed for two beams of SGs to overlap, interact and separate. This large normalized evolution time corresponds to an unrealistic physical propagation distance of over 280 nonlinear lengths, the nonlinear length \(L_{NL}\) being typically around \(\sim 20\) m in the experiments that we are dealing with [30; 42]. To account for the fact that our hydrodynamical experiments cannot go beyond propagation distances longer than \(\sim 6\)\(L_{NL}\), we have designed our initial wavefield in such a way that it is composed of a total number of 100 solitons with one central interaction region and two lateral regions where each species does not interact with the other, see Fig. 6(b). Note that the SGs outside the interaction region are uniform with constant densities being equal to \(\rho_{1,20}=0.156\).

Figure 5: Schematic representation of the 1D water tank used in the experiments. 20 wave elevation gauges are placed every 6 meters, covering a measurement range of 114 meters.
### Space-time evolution and measurement of the Fourier and discrete IST spectra
For two beams of solitons with spectral (IST) parameters identical to those used to compute Fig. 4, Fig. 6(a) shows the space-time diagram reconstructed from the signals recorded by the 20 gauges. Note that our experiments deal with envelope solitons. The signal recorded by the gauges is therefore composed of a carrier wave at a frequency \(f_{0}\sim 1.01\) Hz that is slowly modulated by a solitonic envelope. The first step in processing the experimental data consists in removing the carrier wave and in computing the complex envelope \(A(Z,T)\) of the measured wavefield, which is achieved by using standard Hilbert transform techniques [43]. The space-time diagram of Fig. 6 is plotted in a reference frame moving at the mean group velocity \(C_{g}\) of the two monochromatic SG jets. In this reference frame, the two SG jets have opposite velocities of the same magnitude.
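A minimal version of this processing step is sketched below on a toy gauge record; the sampling rate and the single-soliton test signal are arbitrary choices made for the example.

```python
# Envelope extraction via the analytic signal: demodulate the carrier from
# a toy elevation record eta(T) = Re[A(T) exp(i*2*pi*f0*T)].
import numpy as np
from scipy.signal import hilbert

fs = 32.0  # assumed sampling rate (Hz)
f0 = 1.01  # carrier frequency (Hz)
T = np.arange(0.0, 60.0, 1.0 / fs)
eta = 0.028 / np.cosh((T - 30.0) / 2.0) * np.cos(2 * np.pi * f0 * T)

analytic = hilbert(eta)                      # eta + i * H[eta]
A = analytic * np.exp(-2j * np.pi * f0 * T)  # remove the carrier
print(np.abs(A).max())                       # ~0.028, the envelope amplitude
```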
Figs. 6(a) and 6(b) show that the wavefield is composed of one central interacting region and two lateral regions where each species does not interact with the other. Fig. 6(c) is an enlarged view into the interaction region. It shows that, despite the relatively short propagation distance (\(\sim 6L_{NL}\)) reached in the experiment, individual interactions occur between pairs of solitons at _random_ propagation distances in the water tank. These paired interactions occurring at the _microscopic_ level are responsible for _macroscopic_ density and velocity changes that are measurable and that will be discussed in Sec. IV.3.

Figure 6: Experiments performed in the 140-m long water tank with two interacting SG jets, each being composed of 50 solitons with spectral (IST) parameters \(\alpha=\mp 0.5\) and \(\gamma=1\). (a) Space-time evolution of the two “monochromatic” SG jets with the central region being the interaction region. In the two lateral regions of the space-time diagram, the two species of SGs propagate with opposite velocities without interacting. (b) Modulus of the envelope of the wave field measured by the first gauge at \(Z=6\) m. (c) Enlarged view of the interaction region showing individual collisions between solitons occurring at random positions inside the water tank. (d) Fourier power spectra of the elevation of the wave field measured at \(Z=6\) m and at \(Z=120\) m. (e) Discrete IST spectra of the envelope of the wave field measured at \(Z=6\) m and at \(Z=120\) m. Experiments are made for a carrier frequency \(f_{0}=1.01\) Hz and a steepness \(k_{0}a\simeq 0.115\) (\(L_{NL}\simeq 20.3\) m).
Fig. 6(d) shows the Fourier power spectra of the elevation of the wave fields that are measured at \(Z=6\) m, close to the wavemaker and at \(Z=120\) m, far from the wavemaker. The propagation of the generated SGs is not accompanied by any significant broadening of the Fourier power spectrum.
Fig. 6(e) shows the discrete IST spectra measured at \(Z=6\) m and at \(Z=120\) m. The discrete IST spectrum measured at \(Z=6\) m consists of two narrow clouds of eigenvalues centered around \(\lambda_{1,2}=\mp 0.5+i\), in accordance with the initial condition we have engineered, see Sec. III. Each cloud represents an ensemble of 50 discrete eigenvalues, with each of these discrete eigenvalues being associated with one of the solitons that propagates in the water tank (see Fig. 6(a)).
The discrete IST spectrum measured at \(Z=120\) m (red points in Fig. 6(e)) is not identical to the discrete IST spectrum measured at \(Z=6\) m. This means that the experiment is not perfectly described by the _integrable_ 1D-NLSE (Eq. (14)) and that the space-time dynamics is weakly perturbed by higher-order effects, a feature that we have already observed and discussed in some of our previous experiments [42; 30; 44]. A discussion about the higher-order effects breaking the integrability of the wave dynamics is given in the Appendix. The important point here is that the IST analysis reveals that two separate clouds, each containing 50 eigenvalues, retain a finite and limited extension in the complex plane during the nonlinear evolution. As a result, we can now examine the extent to which the predictions of the kinetic theory of SG remain robust in an experiment that is not exactly described by an integrable equation.
### Measurement of the densities and velocities of the hydrodynamic SGs
In order to verify the predictions of the kinetic theory of SG, we have carried out experiments examining the validity of the velocity and density evolutions plotted in Fig. 2 using Eqs. (12) and (13). We have made an ensemble of 9 experiments similar to the one depicted in Fig. 6, with the parameter \(\alpha\) being changed between \(\sim 0.2\) and \(\sim 0.9\) and the parameter \(\gamma\) being kept equal to one. In each of the 9 experiments, we have used the IST-based methodology described in Sec. III to synthesize the two interacting SGs. We have recorded the associated space-time evolutions and we have checked that the discrete IST spectra measured close to and far from the wavemaker consist of two separate clouds composed of 50 eigenvalues, similar to those shown in Fig. 6(e).
The easiest macroscopic observables to measure in the experiment are the densities of each species \(\rho_{1,2c}\) in the interaction region. To measure \(\rho_{1,2c}\), we first convert the signals recorded in physical variables into dimensionless form by using relations given in Sec. IV.1. Taking the dimensionless wavefield measured at the last gauge, we just count the number of solitons \(n\) that we find for each species in the interaction region and we measure the space interval \(l\) that these solitons occupy. As discussed in Sec. III, the measured density of the SGs is given by \(\rho_{1,2c}=n/l\).
Fig. 7(a) shows that we obtain a very good quantitative agreement between experiments and the kinetic theory of SG. The density of each species in the interaction region decreases from \(\sim 0.15\) to \(\sim 0.125\) when the value of \(\alpha\) is changed from \(\sim 0.9\) to \(\sim 0.2\). In the experiment, there was no point in trying to further increase the interaction between the two SGs by reducing the value of \(\alpha\) below \(\sim 0.2\). For values of \(\alpha\) smaller than \(0.2\), the relative velocity of the two species is indeed so small that there is no significant interaction/collision between the two species over the relatively short propagation distance (\(\sim 6L_{NL}\)) that is accessible in the experiment.
Looking at the evolution pattern measured in the experiment (see Fig. 6(a) and Fig. 6(c)), it may seem at first sight that it is difficult, if not impossible, to determine the velocity of the SGs inside and even outside the interaction region. Following the approach proposed in ref. [45] to separate right- and left-propagating solitons in a shallow water bidirectional SG, we have found that the Radon transform can be used to measure the velocities of the solitons in the space-time diagrams recorded in our experiments.

Figure 7: Comparison between the experiments and kinetic theory of SG. (a) Evolution of the densities \(\rho_{1,2c}\) as a function of \(\alpha\) in the interaction region. Green points represent experimental measurement points while the solid black line is computed using Eq. (12) with \(\rho_{1,20}=0.156\) and \(\gamma=1\). (b) Same as (a) but for the velocities \(s_{1,2c}\) of the interacting SGs. The red dashed lines represent the free velocities \(\pm 4\alpha\) of the non-interacting SGs. All the experiments have been made with \(f_{0}=1.01\) Hz and for a steepness \(k_{0}a\simeq 0.115\). Error bars in (a) are associated with the uncertainty in the measurement of the space interval occupied by the SGs. Error bars in (b) represent the standard deviations associated with the velocity measurements, see Fig. 8(b).
The two-dimensional Radon transform is an integral transform that maps a function to its integral over radial lines parameterized by an angle \(\theta\) and by a distance \(r\) to an origin point. The Radon transform \(R(r,\theta)\) of the normalized space-time plots \(|\psi(x,t)|\) recorded in the experiment reads:
\[R(r,\theta)=\int\int|\psi(x,t)|\,\delta(x\cos\theta+t\sin\theta-r)dxdt \tag{15}\]
where \(\delta\) is the Dirac function and \(r=\sqrt{x^{2}+t^{2}}\) is the distance to an origin point located in the center of the \((x,t)\) domain.
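For the velocity measurement, the transform can equivalently be parameterized by a trial velocity instead of an angle, which makes the principle explicit. The sketch below applies this idea to a toy one-soliton space-time map; the grids and the scan range are arbitrary choices made for the example.

```python
# Radon-type projection parameterized by velocity: for each trial v,
# integrate the field along the trajectories x = x0 + v*t.
import numpy as np

x = np.linspace(-50.0, 50.0, 512)
t = np.linspace(0.0, 20.0, 201)
true_v = 2.0
field = np.array([1 / np.cosh(2 * (x - true_v * tt)) for tt in t])  # |psi|(t, x)

def projection(v):
    acc = np.zeros_like(x)
    for i, tt in enumerate(t):
        acc += np.interp(x + v * tt, x, field[i], left=0.0, right=0.0)
    return acc

trial_v = np.linspace(-4.0, 4.0, 161)
scores = [projection(v).max() for v in trial_v]
print(trial_v[int(np.argmax(scores))])  # ~2.0, the soliton velocity
```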
Fig. 8(a) represents the Radon transform of the experimental space-time diagram of Fig. 6(a), which has been normalized to dimensionless variables of Eq. (1) using variable transformations given in Sec. IV.1 (\(\psi(x,t)=A/(a/2)\), \(t=Z/(2L_{NL})\), \(x=(T-Z/C_{g})\sqrt{g/(2L_{NL})}\)). The Radon transform \(R(r,\theta)\) immediately reveals the existence of several distinct classes of solitons parameterized by their position \(r\) relative to the origin point and by an angle parameter \(\theta\) related to their velocity in the \((x,t)\) plane. After applying a calibration procedure converting the angle parameter into a velocity parameter and after isolating the local maxima associated with each soliton in the Radon transform, we end up with the simple plot presented in Fig. 8(b).
Fig. 8(b) represents the velocities of the solitons that have been unambiguously detected using the Radon transform of the space-time diagram of Fig. 6. Depending on the initial phase, position and precise velocity of each soliton, certain interaction patterns measured in physical space can produce signatures in the Radon transform, such as double peaks, which do not allow us to determine unambiguously the velocity taken by the solitons. These ambiguous measurement points are ignored and we finally obtain two sets, each containing not 50 but 35 solitons, for which we have a reliable velocity measurement performed using the Radon transform.
Fig. 8(b) shows that 8 isolated (non-interacting) solitons are detected with a velocity of \(\sim 1.69\) and 8 other non-interacting solitons are detected with a nearly opposite velocity of \(\sim-1.69\). In the interaction region, the solitons with positive velocities have their mean velocity that increases to \(\sim 2.34\) while the solitons with negative velocities have their mean velocity that decreases to \(\sim-2.34\). Note that the dispersion of the velocities around the mean value is significantly larger in the interaction region as compared with the region where solitons do not interact. This is due to the fact that each paired interaction occurs at different random positions in the water tank, which results in a collection of microscopic interaction patterns associated with a larger dispersion of the values of velocities measured using the Radon transform.
Fig. 7(b) synthesizes all the measurements of the mean velocities that have been made in the interaction region in our 9 experiments where the value of \(\alpha\) has been changed between \(\sim 0.2\) and \(\sim 0.9\). Despite the existence of higher-order effects and the fact that the experiments are not perfectly described by the integrable 1D-NLSE, Fig. 7(b) shows that the theoretical predictions of the kinetic theory in terms of velocity changes of the SGs are quantitatively well verified in the experiment.
Figure 8: (a) Radon transform \(R(r,\theta)\) of the experimental space-time diagram of Fig. 6(a) for \(\alpha=\mp 0.5\). The white points indicate the positions at which a maximum of the function \(R(r,\theta)\) is found. (b) Simplified diagrammatic view of the results obtained in (a) using the Radon transform. Two sets each containing 8 free (non-interacting) solitons are found with mean velocities of \(\sim 1.69\) and \(\sim-1.69\) (black points). Two other sets each containing 25 solitons are found in the interaction region with mean velocities of \(\sim 2.34\) (blue points) and \(\sim-2.34\) (red points).
## V Conclusion
In this paper, we have reported hydrodynamic experiments in which we have investigated the interaction between two SG jets having identical mean amplitude but opposite mean velocities. The two jets of interacting SGs are synthesized using the IST method. Their IST spectrum is composed of two clusters of discrete eigenvalues centered around two specific points of the complex spectral plane. We have recorded the space-time evolution of the interacting SGs in a \(140-\)m long water tank. We have varied the interaction strength between the two interacting species by changing their relative initial velocity. We have measured the macroscopic density and velocity changes due to the interaction between the two SG jets. Our experimental results are found to be in good quantitative agreement with predictions of the kinetic theory of SG despite the fact that the experiment is not perfectly described by the integrable 1D-NLSE.
We believe that our experimental results provide an important step towards the physical validation of the fundamental theoretical principles behind the spectral theory of SGs. We hope that they will stimulate new research in the field of statistical mechanics of nonlinear waves and integrable turbulence.
## Appendix A Influence of higher-order effects
In this Appendix, we use numerical simulations of the focusing 1D-NLSE and of a modified (non-integrable) 1D-NLSE to show the role of higher-order effects on the observed space-time dynamics and on the discrete IST spectra of the two jets of interacting SGs.
Following the work reported in ref. [46], higher-order effects in 1D water wave experiments can be described by a modified NLSE written in the form of a spatial evolution equation
\[\begin{split}\frac{\partial A}{\partial Z}+\frac{1}{C_{g}}\frac{\partial A}{\partial T}=i\frac{k_{0}}{\omega_{0}^{2}}\frac{\partial^{2}A}{\partial T^{2}}+i\beta k_{0}^{3}|A|^{2}A\\ -\frac{k_{0}^{3}}{\omega_{0}}\left(6|A|^{2}\frac{\partial A}{\partial T}+2A\frac{\partial|A|^{2}}{\partial T}-2iA\mathcal{H}\left[\frac{\partial|A|^{2}}{\partial T}\right]\right),\end{split} \tag{A1}\]
where \(A(Z,T)\) represents the complex envelope of the wave field and \(\mathcal{H}\) is the Hilbert transform defined by \(\mathcal{H}[f]=(1/\pi)\int_{-\infty}^{+\infty}f(\xi)/(\xi-T)d\xi\).
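Numerically, the Hilbert-transform term is conveniently evaluated through the analytic signal. The sketch below checks this on a periodic grid; note that scipy's sign convention for \(\mathcal{H}\) may differ from the definition above by an overall sign.

```python
# Hilbert transform via the analytic signal: scipy.signal.hilbert returns
# f + i*H[f], so H[f] is its imaginary part (in scipy's convention).
import numpy as np
from scipy.signal import hilbert

T = np.linspace(0.0, 16 * np.pi, 2048, endpoint=False)  # integer number of periods
f = np.cos(T)
Hf = np.imag(hilbert(f))
print(np.abs(Hf - np.sin(T)).max())  # ~1e-13: H[cos] = sin in this convention
```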
When the last three terms are neglected in Eq. (A1), the integrable 1D-NLSE (14) is recovered. Figures 9(a)(d) show space-time diagrams in which the dynamics of interaction between the two jets of SG is governed by the integrable focusing 1D-NLSE. Fig. 9(g) shows that the discrete IST spectra of the two interacting SGs consist of two narrow clouds centered around \(\lambda_{1,2}=\mp 0.5+i\). Because of the isospectrality condition underlying the integrable nature of the focusing NLSE, these IST spectra do not change with the propagation distance.

Figure 9: Comparison between experiments, numerical simulations of the focusing 1D-NLSE and of Eq. (A1) for the interaction between two jets of SG, each containing 50 solitons. (a) Space-time diagram showing the space-time evolution described by the integrable 1D-NLSE. (d) Zoomed view into the interaction region. (g) Discrete IST spectra computed at \(Z=0\) m and at \(Z=120\) m. (b), (e), (h) Same as (a), (d), (g) but in the experiment. (c), (f), (i) Same as (a), (d), (g) but in numerical simulations of Eq. (A1). Parameters used in numerical simulations are \(f_{0}=1.01\) Hz, \(k_{0}a=0.115\), \(g=9.81\) m s\({}^{-2}\), \(\beta=0.91\).
Figures 9(c)(f) show space-time diagrams computed from the numerical integration of Eq. (A1) that takes into account the influence of higher-order terms. The space-time evolution plotted in Fig. 9(c)(f) is very similar to that observed in the experiments, see Fig. 9(b)(e). In particular, it can be clearly seen in Fig. 9(c) and in Fig. 9(f) that solitary waves emit some radiation, which is not the case in Fig. 9(a). The discrete IST spectra computed at \(Z=6\) m and at \(Z=120\) m show that the isospectrality condition is not fulfilled in the experiment and in the numerical simulation of Eq. (A1), compare Fig. 9(h)(i) with Fig. 9(g). Higher-order effects produce some spreading (or diffusion) of the discrete eigenvalues, which nevertheless remain confined to two distinct clouds.
###### Acknowledgements.
This work has been partially supported by the Agence Nationale de la Recherche through the StormWave (ANR-21-CE30-0009) and SOGOOD (ANR-21-CE30-0061) projects, the LABEX CEMPI project (ANR-11-LABX-0007), the Simons Foundation MPS No. 651463 project, the Ministry of Higher Education and Research, Hauts de France council and European Regional Development Fund (ERDF) through the Contrat de Projets Etat-Region (CPER Photonics for Society P4S). The authors would like to thank the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme "Dispersive hydrodynamics: mathematics, simulation and experiments, with applications in nonlinear waves" when part of the work on this paper was undertaken. G. El's and G. Roberti's work was also supported by EPSRC Grant Number EP/W032759/1. G. Roberti thanks the Simons Foundation for partial support.
|
2309.13884 | Estimating Treatment Effects Under Heterogeneous Interference | Treatment effect estimation can assist in effective decision-making in
e-commerce, medicine, and education. One popular application of this estimation
lies in the prediction of the impact of a treatment (e.g., a promotion) on an
outcome (e.g., sales) of a particular unit (e.g., an item), known as the
individual treatment effect (ITE). In many online applications, the outcome of
a unit can be affected by the treatments of other units, as units are often
associated, which is referred to as interference. For example, on an online
shopping website, sales of an item will be influenced by an advertisement of
its co-purchased item. Prior studies have attempted to model interference to
estimate the ITE accurately, but they often assume a homogeneous interference,
i.e., relationships between units only have a single view. However, in
real-world applications, interference may be heterogeneous, with multi-view
relationships. For instance, the sale of an item is usually affected by the
treatment of its co-purchased and co-viewed items. We hypothesize that ITE
estimation will be inaccurate if this heterogeneous interference is not
properly modeled. Therefore, we propose a novel approach to model heterogeneous
interference by developing a new architecture to aggregate information from
diverse neighbors. Our proposed method contains graph neural networks that
aggregate same-view information, a mechanism that aggregates information from
different views, and attention mechanisms. In our experiments on multiple
datasets with heterogeneous interference, the proposed method significantly
outperforms existing methods for ITE estimation, confirming the importance of
modeling heterogeneous interference. | Xiaofeng Lin, Guoxi Zhang, Xiaotian Lu, Han Bao, Koh Takeuchi, Hisashi Kashima | 2023-09-25T05:44:17Z | http://arxiv.org/abs/2309.13884v1 | # Estimating Treatment Effects Under Heterogeneous Interference
###### Abstract
Treatment effect estimation can assist in effective decision-making in e-commerce, medicine, and education. One popular application of this estimation lies in the prediction of the impact of a treatment (e.g., a promotion) on an outcome (e.g., sales) of a particular unit (e.g., an item), known as the individual treatment effect (ITE). In many online applications, the outcome of a unit can be affected by the treatments of other units, as units are often associated, which is referred to as interference. For example, on an online shopping website, sales of an item will be influenced by an advertisement of its co-purchased item. Prior studies have attempted to model interference to estimate the ITE accurately, but they often assume a homogeneous interference, i.e., relationships between units only have a single view. However, in real-world applications, interference may be heterogeneous, with multi-view relationships. For instance, the sale of an item is usually affected by the treatment of its co-purchased and co-viewed items. We hypothesize that ITE estimation will be inaccurate if this heterogeneous interference is not properly modeled. Therefore, we propose a novel approach to model heterogeneous interference by developing a new architecture to aggregate information from diverse neighbors. Our proposed method contains graph neural networks that aggregate same-view information, a mechanism that aggregates information from different views, and attention mechanisms. In our experiments on multiple datasets with heterogeneous interference, the proposed method significantly outperforms existing methods for ITE estimation, confirming the importance of modeling heterogeneous interference.
Keywords: Causal Inference · Treatment Effect Estimation · Heterogeneous Graphs · Interference
## 1 Introduction
In recent years, treatment effect estimation has been performed to enable effective decision-making in many fields, such as medicine [25], education [22], and e-commerce [18, 29, 37]. For example, estimating treatment effects helps us understand whether an advertisement affects the sales of the advertised products. The
effect of a treatment (e.g., advertisement) for a particular unit (e.g., product) is known as the individual treatment effect (ITE) [42], while that for a given group is known as the average treatment effect (ATE) [42].
This study aims to estimate treatment effects from observational graph data, which contain records of covariates of units, relationships between units (i.e., graph structure), and treatment assignments with their outcomes. For example, data from an e-commerce platform typically include the logs of information regarding assignments of advertisements, sales of items, item profiles, and relationships between items, e.g., a co-purchased relationship.
As units are associated in these graphs, the outcome for a unit will be influenced by the treatments assigned to its neighboring units. This phenomenon is referred to as _interference_[17, 21], an example of which is shown in Figure 1(a). In a co-purchased graph, many customers buy the Mouse when they buy the Computer. In this case, advertising the Computer may also influence the sales of the Mouse, whose sales can no longer be independent of the advertisement, making it challenging to estimate the ITE accurately. Previous works have attempted to accurately estimate ITE given graph data by modeling interference, such as _group-level interference_[9, 15, 32], which is a _partial interference_ and models interference within subgroups of units but ignores inter-group interference; _pairwise interference_[1, 3, 21, 36], which considers interference from immediate neighbors only; and _networked interference_[17], which can model interference from distant neighbors. All these methods assume single-view interference, such that a graph is homogeneous and can only represent the same relationship among units, such as a co-purchased graph.
However, real-world graphs are rarely homogeneous, e.g., YouTube dataset [31], and Amazon dataset [8]. Therefore, we consider addressing interference on heterogeneous graphs that have multi-view edges, such as co-viewed and co-purchased item-to-item graphs of the Amazon dataset [8]. In this case, units are influenced by treatments of their heterogeneous neighbors via the multi-view
Figure 1: An example of the difference between interference on a homogeneous graph and heterogeneous graphs. An edge in a co-purchased graph represents the relationship that both items are bought together by many customers, while an edge in a co-viewed graph represents the relationship that both items are viewed on an e-commerce platform together by many customers. Edges in different views or graphs constitute multi-view or heterogeneous edges.
edges, which is referred to as _heterogeneous interference_ and often leads to _cross-view interference_, an example of which is shown in Figure 1(b). Although there is no direct edge between the Computer and the Mouse 2, the advertisement of the Computer still affects sales of the Mouse 2 via the edge between the Computer and the Mouse 1 in the co-purchased graph and the edge between the Mouse 1 and the Mouse 2 in the co-viewed graph. Without properly modeling the heterogeneous interference, the cross-view interference cannot be addressed, which will result in inaccurate ITE estimation.
To overcome the difficulty caused by heterogeneous interference, we propose a novel method called **I**ndividual **T**reatment **E**ffects Estimator Under **H**eterogeneous **I**nterference (HINITE; see Figure 2). The core idea of HINITE is to model the propagation of heterogeneous interference across units and views. To this end, inspired by Wang et al. [39], we design a heterogeneous information aggregation (HIA) layer, as shown in Figure 3. In the HIA layer, multiple single-layered graph neural networks (GNNs) [12] are used to capture information within the same views, and a view-level information aggregation mechanism is then used to combine information from different views. To properly model heterogeneous interference, the HIA layer also infers importances of different edges and views of heterogeneous graphs by applying attention mechanisms [34, 35, 39]. A single HIA layer can help units aggregate information from their 1-hop or direct neighbors across all views of heterogeneous graphs, enabling the HINITE to model the propagation of cross-view interference by stacking multiple HIA layers. Other components of the HINITE are explained in Section 3.
The contributions of this study can be summarized as follows:
* This study describes a new issue of interference on heterogeneous (multi-view) graphs. Moreover, we formalize the problem of estimating ITE under heterogeneous interference.
* This study proposes a method to address interference on heterogeneous graphs with multi-view edges.
* Results of extensive experiments reveal that the proposed method outperforms existing methods for estimating ITE under heterogeneous interference while confirming the importance of modeling heterogeneous interference.
## 2 Problem setting
In this study, we aim to estimate ITE from observational heterogeneous graphs. Herein, we use \(\mathbf{x}_{i}\in\mathbb{R}^{d}\) to denote the covariates of a unit \(i\) (e.g., brand), \(t_{i}\in\{0,1\}\) to denote the treatment assigned to a unit \(i\) (e.g., an advertisement), \(y_{i}\in\mathbb{R}\) to denote the observed outcome of a unit \(i\) (e.g., the observed sales of a unit \(i\)), and non-bold, italicized, and capitalized letters (e.g., \(X_{i}\)) to denote random variables. Moreover, a unit with \(t=1\) is treated, and \(t=0\) is controlled.
Homogeneous graphs.Homogeneous graphs have only a single view of edges. We use an adjacency matrix \(\mathbf{A}\in\{0,1\}^{n\times n}\) to represent the structure of a homogeneous graph, where \(n\) is the number of nodes (units). If there is an edge between units \(j\) and \(i\), \(A_{ij}=1\); otherwise, \(A_{ij}=0\). We let \(A_{ii}=0\).
_Heterogeneous graphs._ This study considers heterogeneous graphs1 that have multiple views of edges [30], which are called heterogeneous or multi-view edges. We use \(\mathbf{H}=\{\mathbf{A}^{v}\}_{v=1}^{m}\) to denote all the multi-view graph structures, where \(\mathbf{A}^{v}\in\{0,1\}^{n\times n}\) denotes the adjacency matrix of the \(v\)-th view, and \(m\) is the number of views. We use \(\mathbf{N}_{i}^{v}\) to denote the set of neighboring units of the unit \(i\) in the \(v\)-th view, and \(\mathbf{N}_{i}=\{\mathbf{N}_{i}^{v}\}_{v=1}^{m}\) to denote the set of neighbors of the unit \(i\) across all views. Here, the units in \(\mathbf{N}_{i}\) are heterogeneous neighbors of the unit \(i\).
Footnote 1: Heterogeneous graphs can be classified into two types: those with multiple types of nodes and multiple types (views) of edges [30], and those with a single type of node and multiple types of edges [30]. In this study, we focus on the latter type.
_ITE estimation without interference._ In traditional treatment effect estimation [24, 42], non-graph data are given and it is assumed that there is no interference between units [24, 42]. In this case, the potential outcomes \(y_{i}^{1}\) and \(y_{i}^{0}\) of a unit \(i\) are defined as the real value of outcome for a unit \(i\) with treatment value \(t=1\) and \(t=0\),2 respectively [42]. Additionally, the ITE is defined as \(\tau_{i}=\mathbb{E}[Y_{i}^{1}|X_{i}=\mathbf{x}_{i}]-\mathbb{E}[Y_{i}^{0}|X_{i}= \mathbf{x}_{i}]\)[42].
Footnote 2: Outcomes with \(1-t\) are called counterfactual outcomes [42].
_ITE estimation under heterogeneous interference._ This study aims to estimate the ITE from observational heterogeneous graph data. The data can be denoted by \((\mathbf{X},\mathbf{T},\mathbf{Y},\mathbf{H})\), where \(\mathbf{X}=\{\mathbf{x}_{i}\}_{i=1}^{n}\), \(\mathbf{T}=\{t_{i}\}_{i=1}^{n}\), and \(\mathbf{Y}=\{y_{i}\}_{i=1}^{n}\). We assume that there exists interference between units in heterogeneous graphs. In this case, the outcome of a unit is not only influenced by its own treatments and covariates but also influenced by those of its neighbors [17, 21]. In heterogeneous graphs, every unit can receive interference from its heterogeneous neighbors through multi-view edges, so the interference in heterogeneous graphs is referred to as heterogeneous interference. Such heterogeneous interference contains two types of interference: _same-view interference_ and cross-view interference. The former is that interference occurs within the same views, and the latter happens when interference propagates across different views through multi-view edges. To formalize the ITE under heterogeneous interference, we use \(\mathbf{s}_{i}\) to denote a summary vector of \(\mathbf{X}_{-i}\) and \(\mathbf{T}_{-i}\) on heterogeneous graphs \(\mathbf{H}\), where the subscript \(-i\) denotes all other units except \(i\). The potential outcomes of the unit \(i\) in heterogeneous graphs, denoted by \(y_{i}^{1}(\mathbf{s}_{i})\) and \(y_{i}^{0}(\mathbf{s}_{i})\), are real outcomes for the unit \(i\) under \(\mathbf{s}_{i}\) and treatment value \(t=1\) and \(t=0\), respectively. Then, we define the ITE under heterogeneous interference as follows:
\[\tau_{i}=\mathbb{E}[Y_{i}^{1}(S_{i}=\mathbf{s}_{i})|X_{i}=\mathbf{x}_{i}]-\mathbb{E}[Y _{i}^{0}(S_{i}=\mathbf{s}_{i})|X_{i}=\mathbf{x}_{i}]. \tag{1}\]
_Confounder._ The existence of confounders is a well-known issue when estimating the ITE from observational data [26]. Confounders are parts of covariates, which can simultaneously affect the treatment assignment and outcome [42], resulting in an imbalance in the distributions of different treatment assignments. For instance, we consider that the treatment is whether a product is advertised. Famous brands have more promotion funds to advertise their products. Meanwhile,
customers tend to buy a product (e.g., a computer) from a famous brand (e.g., Apple). In this case, the brand is a confounder. Without accurately addressing confounders, ITE estimation will be biased.
Assumption 1: Following the previous studies [16, 17], we assume that there exists an aggregation function that can aggregate information of other units on heterogeneous graphs while outputting a vector \(\mathbf{s}\), i.e., \(\mathbf{s}_{i}=\text{AGG}(\mathbf{T}_{-i},\mathbf{X}_{-i},\mathbf{H})\). Here, we extend the neighbor interference assumption [3] to heterogeneous interference, for \(\forall i\), \(\forall\mathbf{T}_{-i},\mathbf{T}_{-i}^{\prime},\forall\mathbf{X}_{-i}, \mathbf{X}_{-i}^{\prime}\), and \(\forall\mathbf{H},\mathbf{H}^{\prime}\): when \(\mathbf{s}_{i}=\text{AGG}(\mathbf{T}_{-i},\mathbf{X}_{-i},\mathbf{H})=\text{AGG}( \mathbf{T}_{-i}^{\prime},\mathbf{X}_{-i}^{\prime},\mathbf{H}^{\prime})=\mathbf{s}_ {i}^{\prime}\), \(Y_{i}^{t}(S_{i}=\mathbf{s}_{i})=Y_{i}^{t}(S_{i}=\mathbf{s}_{i}^{\prime})\) holds.
Assumption 2: We extend consistency assumption [3] to heterogeneous interference setting. We assume \(Y_{i}=Y_{i}^{t_{i}}(S_{i}=\mathbf{s}_{i})\) on the heterogeneous graphs \(\mathbf{H}\) for the unit \(i\) with \(t_{i}\) and \(\mathbf{s}_{i}\).
Assumption 3: To address confounders, we extend the unconfoundedness assumption [3, 16] to the heterogeneous interference setting. For any unit \(i\), given the covariates, the treatment assignment and output of the aggregation function are independent of potential outcomes, i.e., \(T_{i},S_{i}\perp\!\!\!\perp Y_{i}^{1}(\mathbf{s}_{i}),Y_{i}^{0}(\mathbf{s}_{i})|X_{i}\).
Theoretical analysis.To model potential outcomes using observed data under heterogeneous interference, we prove the identifiability of the expected potential outcome \(Y_{i}^{t}(\mathbf{s}_{i})\) (\(t=1\) or \(t=0\)) based on the above assumptions as follows:
\[\begin{aligned}&\mathbb{E}[Y_{i}\,|\,X_{i}=\mathbf{x}_{i},T_{i}=t,X_{-i}=\mathbf{X}_{-i},T_{-i}=\mathbf{T}_{-i},H=\mathbf{H}]\\ &=\mathbb{E}[Y_{i}\,|\,X_{i}=\mathbf{x}_{i},T_{i}=t,S_{i}=\mathbf{s}_{i}]&&\text{(Assumption 1)}\\ &=\mathbb{E}[Y_{i}^{t}(\mathbf{s}_{i})\,|\,X_{i}=\mathbf{x}_{i},T_{i}=t,S_{i}=\mathbf{s}_{i}]&&\text{(Assumptions 1 and 2)}\\ &=\mathbb{E}[Y_{i}^{t}(\mathbf{s}_{i})\,|\,X_{i}=\mathbf{x}_{i}]&&\text{(Assumption 3)}\end{aligned}\]
Based on the above proof, once we aggregate \(\mathbf{X}_{-i}\) and \(\mathbf{T}_{-i}\) on heterogeneous graphs \(\mathbf{H}\) into \(\mathbf{s}_{i}\), we can estimate the potential outcomes \(Y_{i}^{1}(\mathbf{s}_{i})\) and \(Y_{i}^{0}(\mathbf{s}_{i})\). This enables us to estimate the ITE using Eq. (1).
## 3 Proposed Method: Individual Treatment Estimator Under Heterogeneous Interference
This study proposes HINITE, a method that can estimate the ITE from observed data \((\mathbf{X},\mathbf{T},\mathbf{Y},\mathbf{H})\) under heterogeneous interference. Figure 2 shows the architecture of HINITE. As can be seen, HINITE consists of three components to address confounders, model heterogeneous interference, and predict outcomes, respectively. Specifically, the first component addresses confounders by learning balanced representations of covariates with the Hilbert-Schmidt Independence Criterion (HSIC) regularization [6]. The second component aggregates interference by modeling the propagation of interference across units and views, and generates representations of units, which are referred to as interference representations. The last component consists of two outcome predictors that infer potential outcomes using the covariate and interference representations.
### Learning Balanced Covariate Representations
To address the imbalance in distributions of different treatment groups caused by confounders, HINITE learns balanced covariate representations using an existing approach [17]. The key idea is to find a representation space in which the treatment assignments and covariate representations become approximately independent [17]. This goal can be achieved by applying the HSIC regularization [6], which is an independence test criterion of two random variables. The value of HSIC is 0 when two random variables are independent. Thus, minimizing the HSIC can achieve the abovementioned goal.
Specifically, we learn a balanced covariate representation \(\mathbf{u}_{i}\) for the \(\mathbf{x}_{i}\) using a map function \(\phi\) that consists of multiple feed-forward (FF) layers, i.e., \(\mathbf{u}_{i}=\phi(\mathbf{x}_{i})\), resulting in covariate representations for all units, denoted as \(\mathbf{U}\). We train \(\phi\) by minimizing the HSIC between \(\mathbf{u}\) and \(t\), which is denoted as \(\text{HSIC}_{\phi}\) and designed as follows:
\[\text{HSIC}_{\phi}(\mathbf{U},\mathbf{T})=\frac{1}{N^{2}}\text{tr}(\mathbf{KMLM }),\quad\mathbf{M}=\mathbf{I}_{N}-\frac{1}{N}\mathbf{1}_{N}\mathbf{1}_{N}^{ \top}, \tag{2}\]
where \(N\) is the number of training units, \(\cdot^{\top}\) represents the transposition operation, \(\mathbf{I}_{N}\) is the identity matrix, and \(\mathbf{1}_{N}\) is the vector of all ones. \(\mathbf{K}\) and \(\mathbf{L}\) represent the Gaussian kernel applied to \(\mathbf{U}\) and \(\mathbf{T}\), respectively, i.e.,
\[K_{ij}=\exp\left(-\frac{\|\mathbf{u}_{i}-\mathbf{u}_{j}\|_{2}^{2}}{2}\right),\quad L_ {ij}=\exp\left(-\frac{(t_{i}-t_{j})^{2}}{2}\right). \tag{3}\]
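For concreteness, this regularizer can be computed as in the following minimal PyTorch sketch; the function name and the dense-tensor assumption are ours, not the authors':

```python
import torch

def hsic(U, T):
    """Biased empirical HSIC between representations U (N, d) and
    treatments T (N,), using the Gaussian kernels of Eq. (3)."""
    N = U.shape[0]
    K = torch.exp(-0.5 * torch.cdist(U, U) ** 2)      # kernel on representations
    t = T.float().view(-1, 1)
    L = torch.exp(-0.5 * (t - t.t()) ** 2)            # kernel on treatments
    M = torch.eye(N) - torch.ones(N, N) / N           # centering matrix M
    return torch.trace(K @ M @ L @ M) / N ** 2        # Eq. (2)
```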
### Learning Heterogeneous Interference Representations
To properly model heterogeneous interference, it is necessary to capture both same-view and cross-view interference. To this end, we model the propagation of the same-view and cross-view interference. Inspired by Wang et al. [39], we design an HIA layer, as shown in Figure 3, which contains node-level and view-level aggregation mechanisms. The node-level aggregation mechanism aggregates same-view interference received by units. It utilizes \(m\) single-layered GNNs [12, 35] to perform aggregations within each view. The view-level aggregation mechanism
Figure 2: An example of the model architecture of HINITE. In this case, there are two views, i.e., \(v_{1}\) and \(v_{2}\).
combines (i.e., sums up) the results aggregated by the node-level aggregations to generate new representations of units. Therefore, by employing an HIA layer, units are able to aggregate interference received from their one-hop heterogeneous neighbors. This enables capturing cross-view interference by stacking HIA layers. Similarly, same-view interference from multi-hop neighbors can also be captured by stacking HIA layers.
Consider again the co-purchased and co-viewed graphs in Figure 1(b). Suppose that we feed units and their co-purchased and co-viewed graphs to a network stacked by two HIA layers. For the Mouse 1, the first HIA layer performs two node-level aggregations. One aggregation helps the Mouse 1 aggregate interference within the co-purchased graph, while the other helps the Mouse 1 aggregate interference within the co-viewed graph, resulting in two aggregated results. Then, the view-level aggregation mechanism combines these results obtained by node-level aggregations to generate the Mouse 1's new representation, while updating the new representation in all views. This enables the Mouse 1 to aggregate interference from the Computer. Similarly, the first HIA layer also generates new representations for other units. Then, by taking these new representations of all units as inputs of the second HIA layer, the second HIA layer enables the Mouse 2 to capture interference from the Mouse 1, which contains interference from the Computer. Therefore, the cross-view interference from the Computer to the Mouse 2 can be captured by stacking two HIA layers.
Apart from cross-view interference, another challenge is that the importance of edges and views may differ in heterogeneous graphs [39]. For example, in a co-viewed graph, the importance of products in the same category tends to be higher than that of products in different categories. Here, the weights of edges in the same view can be different. Furthermore, a co-purchased graph may have more significant importance than a co-viewed graph in terms of interference, leading to different importance for each view. To overcome these difficulties and properly model the propagation of interference, we infer different weights for every edge via a graph attention mechanism [35] (called node-level attention)
Figure 3: The architecture of the HIA layer. This layer consists of node-level and view-level aggregation mechanisms with their attention mechanisms.
before node-level aggregations, and learn different importance for every view via an attention mechanism [34, 39] (called view-level attention) before view-level aggregations.
More specifically, given covariate representations \(\mathbf{U}\), treatment assignments \(\mathbf{T}\), and structures of heterogeneous graphs \(\mathbf{H}\), we aim to obtain interference representations \(\mathbf{G}\) using a function \(\psi\) that consists of multiple HIA layers, i.e., \(\mathbf{G}=\psi(\mathbf{U},\mathbf{T},\mathbf{H})\). For a unit \(i\), its interference representation \(\mathbf{g}_{i}\) is supposed to capture the interference from its heterogeneous neighbors. Let \(\mathbf{p}\) be a representation of a unit, which is the input of the current HIA layer and the output of the previous HIA layer. For the first HIA layer, \(\mathbf{p}\) is the concatenation of \(\mathbf{u}\) and \(\mathbf{t}\). Let \(\mathbf{z}\) denote a new representation for the unit \(i\) computed by the current HIA layer, \(\alpha^{v}_{ij}\) denote the inferred weight of the edge between units \(j\) and \(i\) at the \(v\)-th view, \(w^{v}_{i}\) denote the learned importance of the \(v\)-th view for the unit \(i\), and \(\beta^{v}_{i}\) denote the normalized value for \(w^{v}_{i}\).
Now, we describe the architecture of the HIA layer in detail. First, the HIA layer infers the edge weight \(\alpha^{v}_{ij}\) by the node-level attention mechanism as follows:
\[\alpha^{v}_{ij}=\frac{\exp(\text{LeakyReLU}(\mathbf{a}^{\top}[\mathbf{W}\mathbf{p}_{i }\|\mathbf{W}\mathbf{p}_{j}]))}{\sum_{k\in\mathbf{N}^{v}_{i}\bigcup\{i\}}\exp( \text{LeakyReLU}(\mathbf{a}^{\top}[\mathbf{W}\mathbf{p}_{i}\|\mathbf{W}\mathbf{p}_{k}]))}, \tag{4}\]
where \(\mathbf{a}\) and \(\mathbf{W}\) represent a learnable parameter vector and matrix, respectively, and \(\|\) represents the concatenation operation. Next, it performs node-level aggregations. The node-level aggregation at the \(v\)-th view is computed as follows:
\[\mathbf{p}^{v^{\prime}}_{i}=\sigma\left(\sum_{j\in\mathbf{N}^{v}_{i}\bigcup\{i\}} \alpha^{v}_{ij}\mathbf{W}\mathbf{p}_{j}\right), \tag{5}\]
where \(\sigma\) is an activation function, such as ReLU. Next, the view-attention mechanism is applied to learn the importance of different views as follows:
\[w^{v}_{i}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{q}^{\top}\text{LeakyReLU}(\mathbf{W} \mathbf{p}^{v^{\prime}}_{i}+\mathbf{b}),\quad\beta^{v}_{i}=\frac{\exp{(w^{v}_{i})}}{ \sum_{v=1}^{m}\exp{(w^{v}_{i})}}, \tag{6}\]
where \(\mathbf{b}\) is a bias vector, and \(\mathbf{q}\) is a learnable parameter vector. Finally, the view-level aggregation is applied to aggregate the information from different views as follows:
\[\mathbf{z}_{i}=\sum_{v=1}^{m}\beta^{v}_{i}\mathbf{p}^{v^{\prime}}_{i}. \tag{7}\]
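The following is a minimal PyTorch sketch of one HIA layer implementing Eqs. (4)-(7). Dense adjacency matrices, a projection \(\mathbf{W}\) shared across views, and single-head attention are our simplifying assumptions (as are all names in the code); the authors' released implementation should be consulted for the exact parameterization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HIALayer(nn.Module):
    def __init__(self, in_dim, out_dim, num_views):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)              # shared projection W
        self.a = nn.Parameter(torch.randn(num_views, 2 * out_dim))   # node-level attention a
        self.Wv = nn.Linear(out_dim, out_dim)                        # view-level W and b
        self.q = nn.Parameter(torch.randn(out_dim))                  # view-level attention q

    def forward(self, P, A_list):
        # P: (n, in_dim) unit representations; A_list: m dense (n, n) adjacencies.
        Wp = self.W(P)
        n, d = Wp.shape
        views = []
        for v, A in enumerate(A_list):
            mask = A + torch.eye(n)                                  # neighbors N_i^v plus self
            src = (Wp @ self.a[v, :d]).unsqueeze(1)                  # a^T [Wp_i || .]
            dst = (Wp @ self.a[v, d:]).unsqueeze(0)                  # a^T [. || Wp_j]
            e = F.leaky_relu(src + dst).masked_fill(mask == 0, float('-inf'))
            alpha = torch.softmax(e, dim=1)                          # Eq. (4)
            views.append(F.relu(alpha @ Wp))                         # Eq. (5), node-level aggregation
        Pv = torch.stack(views)                                      # (m, n, out_dim)
        w = (F.leaky_relu(self.Wv(Pv)) @ self.q).mean(dim=1)         # Eq. (6), per-view score
        beta = torch.softmax(w, dim=0)
        return (beta.view(-1, 1, 1) * Pv).sum(dim=0)                 # Eq. (7), view-level aggregation
```

Stacking such layers, with the first layer fed the concatenation of \(\mathbf{u}_{i}\) and \(t_{i}\), propagates cross-view interference as described above.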
### Outcome Predictions and ITE estimation
Given the covariate representations \(\mathbf{U}\), interference representations \(\mathbf{G}\), and treatment assignments \(\mathbf{T}\), we train two predictors that consist of multiple FF layers to infer the outcomes with different \(t\). Specifically, let \(f_{y_{0}}\) and \(f_{y_{1}}\) denote the predictor for \(t=0\) and \(t=1\), respectively. We optimize the two predictors by
minimizing the following mean square error (MSE) between prediction outcomes and observed outcomes with the HSIC regularization:
\[\mathcal{L}=\frac{1}{N}\sum_{i=1}^{N}\left(f_{y_{t_{i}}}(\mathbf{u}_{i},\mathbf{g}_{i})-y _{i}\right)^{2}+\gamma\text{HSIC}_{\phi}, \tag{8}\]
where the \(\gamma\) is a regularization hyperparameter.
Finally, we can estimate the ITE using \(\hat{\tau}_{i}=f_{y_{1}}(\mathbf{u}_{i},\mathbf{g}_{i})-f_{y_{0}}(\mathbf{u}_{i},\mathbf{g}_{i})\).
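Assuming `phi`, `psi` (a stack of HIA layers), and the two outcome heads `f_y0`/`f_y1` are defined as above, and reusing the `hsic` function sketched earlier, one evaluation of the training objective and the ITE estimate can be sketched as follows; feeding the concatenation of the two representations to each head is our simplification of \(f_{y_{t}}(\mathbf{u}_{i},\mathbf{g}_{i})\):

```python
def hinite_step(phi, psi, f_y0, f_y1, x, t, y, A_list, gamma):
    u = phi(x)                                          # balanced covariate representations
    g = psi(u, t, A_list)                               # interference representations
    ug = torch.cat([u, g], dim=1)
    y0, y1 = f_y0(ug).squeeze(-1), f_y1(ug).squeeze(-1)
    y_hat = torch.where(t.bool(), y1, y0)               # factual prediction
    loss = F.mse_loss(y_hat, y) + gamma * hsic(u, t)    # Eq. (8)
    tau_hat = y1 - y0                                   # ITE estimate
    return loss, tau_hat
```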
## 4 Experiments
### Datasets
We used three heterogeneous graph datasets: Amazon Software (AMZ S) [8], YouTube [31], and Flickr [40]. Following prior studies on ITE/ATE [17, 16, 26], we simulated outcomes3, as the ground-truth values for counterfactual outcomes are not available.
Footnote 3: The simulated outcomes and the codes of the HINITE are available at [https://github.com/LINXF208/HINITE](https://github.com/LINXF208/HINITE).
**Outcome Simulation:** Similar to the outcome simulation in Ma et al. [16], we used available data and heterogeneous graph structures to simulate outcomes under heterogeneous interference of the unit \(i\):
\[y_{i}=f_{0}(\mathbf{x}_{i})+f_{t}(t_{i},\mathbf{x}_{i})+f_{s}(\mathbf{T},\mathbf{X}, \mathbf{N}_{i})+\epsilon_{i}, \tag{9}\]
where \(f_{0}(\mathbf{x}_{i})=\mathbf{w}_{0}^{\top}\mathbf{x}_{i}\) simulates the outcome of a unit \(i\) under treatment \(t_{i}=0\) without interference, and every element of \(\mathbf{w}_{0}\) follows a Gaussian distribution or uniform distribution (i.e., \(\mathcal{N}(0,1)\) or \(\mathcal{U}(0,1)\)). \(f_{t}(t_{i},\mathbf{x}_{i})=t_{i}\times\mathbf{w}_{1}^{\top}\mathbf{x}_{i}\) simulates the ITE of the unit \(i\), where \(\mathbf{w}_{1}\sim\mathcal{N}(0,\mathbf{I})\) or \(\mathcal{U}(0,\mathbf{I})\). In the literature, the effect caused by interference is known as _spillover effect_[21]. We simulate it through \(f_{s}(\mathbf{T},\mathbf{X},\mathbf{N}_{i})=o_{i}^{(1)}+o_{i}^{(2)}\), where \(o_{i}^{(1)}=\text{Agg}(\text{Concat}(\mathbf{X},\mathbf{T}),\mathbf{N}_{i})\) represents a spillover effect from 1-hop heterogeneous neighbors for the unit \(i\), \(o_{i}^{(2)}=\text{Agg}(\mathbf{O}^{(\mathbf{1})},\mathbf{N}_{i})\) represents the spillover effect of 2-hop heterogeneous neighbors, and \(\mathbf{O}^{(\mathbf{1})}\) represents the spillover effects from 1-hop heterogeneous neighbors for all units. Here, the aggregation function is defined as \(\text{Agg}(\mathbf{C},\mathbf{N}_{i})=\sum_{v=1}^{m}e^{v}\left(\frac{1}{| \mathbf{N}_{i}^{v}|}\sum_{j\in\mathbf{N}_{i}^{v}}\mathbf{w}_{ij}^{\top}\mathbf{c}_{j}\right)\), where \(e^{v}\) and every element of \(\mathbf{w}_{ij}\) follow \(\mathcal{N}(0,1)\) or \(\mathcal{U}(0,1)\). Lastly, \(\epsilon_{i}\sim\mathcal{N}(0,1)\) is a random noise.
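The following NumPy sketch illustrates Eq. (9). For brevity we collapse the per-edge weights \(\mathbf{w}_{ij}\) into one shared weight vector per view and draw all weights from \(\mathcal{N}(0,1)\), so this is an illustration rather than the authors' exact script:

```python
import numpy as np

def simulate_outcomes(X, T, A_list, rng):
    n, d = X.shape
    w0, w1 = rng.normal(size=d), rng.normal(size=d)
    C = np.hstack([X, T[:, None]])                       # Concat(X, T)

    def agg(C):
        out = np.zeros(n)
        for A in A_list:                                 # sum over views v
            e_v, w = rng.normal(), rng.normal(size=C.shape[1])
            deg = np.maximum(A.sum(axis=1), 1)           # |N_i^v|, guarded against 0
            out += e_v * (A @ (C @ w)) / deg             # mean over v-view neighbors
        return out

    o1 = agg(C)                                          # 1-hop spillover o^(1)
    o2 = agg(o1[:, None])                                # 2-hop spillover o^(2)
    return X @ w0 + T * (X @ w1) + o1 + o2 + rng.normal(size=n)   # Eq. (9)
```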
**Amazon Software dataset [8]:** The Amazon dataset [8] is collected from Amazon4. In the graphs of the Amazon dataset, each node is a product. To study causal effects, we chose the co-purchased and co-viewed graphs from the software category of the Amazon dataset. After removing nodes with missing values, there are 11,089 items with 11,813 heterogeneous edges. The covariates consist of reviews and the number of customer reviews of items. We put reviews into the SimCSE [5] model to generate 768-dimensional sentence embeddings.
The review rating of items is considered as a treatment: an item is treated (\(t=1\)) when the average review rating is at least 3, and an item is controlled (\(t=0\)) when the average review rating is less than 3. The causal problem in this dataset is whether review rating has a role in influencing the sales of items. Due to the heterogeneous edges among items, the sales of an item might be influenced by its heterogeneous neighbors' treatments.
**YouTube dataset [31]:** Tang et al. [31] used YouTube Data API5 to crawl the information of contacts, subscriptions, and favorites of users from YouTube6, while extending them to a contact graph, co-subscription graph, co-subscribed graph, and favorite graph. Every node in the graphs is a user of YouTube. In this case, we consider a causal problem: "how much recommendation of a video (treatment) to a user will affect the user's experience of this video (outcome)?" Moreover, users might share the recommended video with heterogeneous neighbors, which constitutes heterogeneous interference. We took 5,000 users with their heterogeneous graphs containing 3,190,622 heterogeneous edges to simulate outcomes and study heterogeneous interference. As detailed information about each user is missing, we simulated the covariates via \(\mathbf{x}_{i}\sim\mathcal{N}(0,\mathbf{I})\) (100-dimensional vector), and simulated treatment \(t_{i}\) as follows, following most existing works, such as Ma et al. [16]:
Footnote 5: [https://developers.google.com/youtube/?csw=1](https://developers.google.com/youtube/?csw=1)
Footnote 6: [https://www.youtube.com/](https://www.youtube.com/)
\[t_{i}\sim\text{Ber}(\text{sigmoid}(\mathbf{x}_{i}^{\top}\mathbf{w}_{t})+\epsilon_{t_ {i}}), \tag{10}\]
where \(\text{Ber}(\cdot)\) represents a Bernoulli distribution, \(\mathbf{w}_{t}\) is a 100-dimensional vector in which every element follows \(\mathcal{U}(-1,1)\), and \(\epsilon_{t_{i}}\) is random Gaussian noise.
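Eq. (10) can be sketched analogously; the noise scale below is our assumption, since the paper only states that \(\epsilon_{t_{i}}\) is random Gaussian noise:

```python
def simulate_treatments(X, rng, noise_scale=0.1):
    w_t = rng.uniform(-1.0, 1.0, size=X.shape[1])
    p = 1.0 / (1.0 + np.exp(-(X @ w_t))) + rng.normal(scale=noise_scale, size=X.shape[0])
    return rng.binomial(1, np.clip(p, 0.0, 1.0))        # Ber(sigmoid(x^T w_t) + eps)
```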
**Flickr dataset [40]:** Flickr7 is an online social website where users can share their images. Qu et al. [19] constructed a dataset with multi-view graphs, i.e., friendship view and similarity view, from the Flickr dataset [40]. Every node in the graphs is a user of Flickr. Following Qu et al. [19], we also consider friendship-view and similarity-view graphs that have 7,575 users with approximately 1,236,976 heterogeneous edges. Here, the causal question is: "how much will recommending a hot photo (treatment) to a user affect the user's experience (outcome) of this photo?" In this case, users might share recommended photos with their heterogeneous neighbors, which constitutes heterogeneous interference. We used the 1206-dimensional embeddings provided by Guo et al. [7], generated using a list of users' interest tags, and simulated the treatments using Eq. (10).
Footnote 7: [https://www.flickr.com/](https://www.flickr.com/)
### Baselines
**BNN [11]:** Balancing Neural Network [11] (BNN) addresses confounders by minimizing the discrepancy of distributions of units belonging to different groups, without considering interference. Following Johansson et al. [11], we considered two structures: BNN-4-0 and BNN-2-2. The former has four representation layers
but no prediction layers, and the latter has two representation layers and two prediction layers. Both have one linear output layer.
**CFR [26]:** Counterfactual Regression (CFR) [26] minimizes the maximum mean discrepancy (MMD) and Wasserstein distance between different distributions of two groups. Similar to BNN, it also ignores interference. Following Shalit et al. [26], we considered two different schemes: \(\text{CFR}_{\text{MMD}}\) and \(\text{CFR}_{\text{Wass}}\). The former minimizes the MMD of two different distributions, while the latter minimizes the Wasserstein distance.
**TARNet [26]:** TARNet consists of the same model architecture as the CFR model but removes the balance term (MMD or Wasserstein distance).
**GCN-based methods [17]:** Ma et al. [17] proposed methods to address interference on a homogeneous graph using graph convolutional networks (GCNs) [41]. The GCN-based method can use only a single view rather than all views of heterogeneous graphs. To overcome this limitation, we consider two schemes. The first scheme replaces the heterogeneous graphs with a projection graph \(\mathbf{A}_{\text{Proj}}\) and applies the GCN-based method to \(\mathbf{A}_{\text{Proj}}\), denoted as \(\text{GCN}_{\text{Proj}}\). If two units have an edge in either of the original heterogeneous graphs, there will be an edge in this projection graph. The second scheme augments the GCN-based method with mixing operations, which includes two variants: \(\text{MGCN}_{\text{C}}\) and \(\text{MGCN}_{\text{M}}\). \(\text{MGCN}_{\text{C}}\) concatenates interference representations from different views into a single vector, while \(\text{MGCN}_{\text{M}}\) computes the mean vector of these interference representations.
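Assuming dense 0/1 adjacency matrices, the projection graph used by \(\text{GCN}_{\text{Proj}}\) can be built as follows (a sketch):

```python
A_proj = (np.sum(A_list, axis=0) > 0).astype(int)   # edge if present in any view
np.fill_diagonal(A_proj, 0)                         # keep A_ii = 0
```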
### Experiment Settings
For all datasets, we calculated \(\epsilon_{\text{PEHE}}/\epsilon_{\text{ATE}}\) to evaluate the error on ITE/ATE estimations as follows:
\[\epsilon_{\text{PEHE}}=\frac{1}{n}\sum_{i=1}^{n}(\tau_{i}-\hat{\tau}_{i})^{2},\quad\epsilon_{\text{ATE}}=\left|\frac{1}{n}\sum_{i=1}^{n}\tau_{i}-\frac{1}{ n}\sum_{i=1}^{n}\hat{\tau}_{i}\right|. \tag{11}\]
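Both metrics translate directly into code (note that Table 1 reports \(\sqrt{\epsilon_{\mathrm{PEHE}}}\) for the AMZ S dataset):

```python
def evaluate(tau, tau_hat):
    eps_pehe = np.mean((tau - tau_hat) ** 2)          # Eq. (11), left
    eps_ate = abs(np.mean(tau) - np.mean(tau_hat))    # Eq. (11), right
    return eps_pehe, eps_ate
```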
Following Ma et al. [17], the entire \(\mathbf{X}\), \(\mathbf{T}\), and heterogeneous graph structures were given during the training, validation, and testing phases. However, only the observed outcomes of the units in the training set were provided during the training phase. We randomly split all datasets into training/validation/test splits with a ratio of \(70\%/15\%/15\%\). Results on the YouTube and Flickr datasets were averaged over ten realizations, while the results on the AMZ S dataset were averaged over three repeated executions. We trained all models with the NVIDIA RTX A5000 GPU. All methods utilized the Adam optimizer with 2,000 training iterations for all datasets. In addition, dropout and early stopping were applied for all methods to avoid overfitting.
For all datasets, we set the learning rate to 0.001 with a weight decay of 0.001, set the training batch size to 512, and searched \(\gamma\) in the range of \(\{0.01,0.1,0.5,1.0,1.5\}\) using the validation sets. We used ReLU as the activation function for \(\phi\), \(f_{y_{t_{i}}}\), and node-level aggregations. The hidden layers of \(\phi\) were set to \((128,64,64)\) dimensions, those of \(\psi\) to \((64,64,32)\), those of \(f_{y_{t_{i}}}\) to \((128,64,32)\), and the view-level attention to \((128,128,64)\) dimensions. Moreover, we searched for hyperparameters for all baseline methods from the search range suggested in the corresponding literature.
### Results
Treatment effect estimation performance. Table 1 lists the results of ITE and ATE estimations on test sets of all datasets. It can be seen that HINITE outperforms all baseline methods in ITE estimation, with significant gaps (p-values of the t-test are far less than 0.05) between the proposed and baseline methods. HINITE also outperforms most baseline methods in ATE estimation, or at least achieves ATE-estimation performance comparable to that of the baseline methods. These results reveal that HINITE has a powerful ability to address heterogeneous interference. Moreover, GCN\({}_{\mathrm{Proj}}\) and MGCN with simple mixers cannot always achieve better performance than the other baseline methods. This implies that modeling cross-view interference using the HIA layers is important.
Ablation study. To further investigate the importance of each component of HINITE, we conducted ablation experiments. Let us start by introducing some variants of HINITE: (i) HINITE-PG applies HINITE to the projection graph \(\mathbf{A}_{\mathrm{Proj}}\), which was described when introducing the GCN-based methods. (ii) HINITE-NHG replaces the HIA layers with GCN layers [12] while using \(\mathbf{A}_{\mathrm{Proj}}\). (iii) HINITE-NB removes the HSIC regularization by setting \(\gamma\) to 0.
Figure 4 presents the results of the ablation experiments. A clear performance gap can be seen in ITE and ATE estimation between the HINITE-PG/HINITE-NHG and HINITE. This implies that it is important to model the heterogeneous interference using the information of heterogeneous graphs and the proposed HIA layer. Comparing the results of HINITE and HINITE-NB, we can also observe
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{YouTube} & \multicolumn{2}{c}{Flickr} & \multicolumn{2}{c}{AMZ S} \\ Method & \(\epsilon_{\mathrm{PEHE}}\) & \(\epsilon_{\mathrm{ATE}}\) & \(\epsilon_{\mathrm{PEHE}}\) & \(\epsilon_{\mathrm{ATE}}\) & \(\sqrt{\epsilon_{\mathrm{PEHE}}}\) & \(\epsilon_{\mathrm{ATE}}\) \\ \hline TARNet & 40.75\(\pm\)7.95 & 0.51\(\pm\)0.23 & 24.20\(\pm\)6.79 & 0.30\(\pm\)0.26 & 112.37\(\pm\)11.54 & 103.91\(\pm\)12.78 \\ BNN-2-2 & 93.03\(\pm\)16.02 & 0.26\(\pm\)0.23 & 27.91\(\pm\)7.53 & **0.13\(\pm\)0.07** & 199.37\(\pm\)0.20 & 196.36\(\pm\)0.20 \\ BNN-4-0 & 105.38\(\pm\)22.50 & 0.26\(\pm\)0.23 & 29.22\(\pm\)7.53 & **0.13\(\pm\)0.07** & 206.03\(\pm\)0.08 & 203.12\(\pm\)0.08 \\ CFR\({}_{\mathrm{MMD}}\) & 42.02\(\pm\)9.96 & 0.43\(\pm\)0.36 & 24.44\(\pm\)7.49 & 0.29\(\pm\)0.17 & 103.18\(\pm\)25.02 & 89.76\(\pm\)32.13 \\ CFR\({}_{\mathrm{WASS}}\) & 39.36\(\pm\)8.76 & 0.51\(\pm\)0.41 & 24.02\(\pm\)6.71 & 0.35\(\pm\)0.17 & 109.91\(\pm\)24.49 & 99.40\(\pm\)30.40 \\ GCN\({}_{\mathrm{Proj}}\) & 42.37\(\pm\)7.45 & 0.61\(\pm\)0.39 & 24.59\(\pm\)5.11 & 0.21\(\pm\)0.13 & 139.14\(\pm\)20.63 & 135.57\(\pm\)22.86 \\ MGCN\({}_{\mathrm{C}}\) & 53.10\(\pm\)11.83 & 0.29\(\pm\)0.27 & 26.87\(\pm\)6.43 & 0.25\(\pm\)0.20 & 95.14\(\pm\)8.25 & 72.08\(\pm\)13.47 \\ MGCN\({}_{\mathrm{M}}\) & 53.99\(\pm\)13.46 & 0.37\(\pm\)0.33 & 29.48\(\pm\)7.17 & 0.29\(\pm\)0.25 & 87.33\(\pm\)3.40 & 60.81\(\pm\)3.27 \\ \hline HINITE & **14.43\(\pm\)3.27** & **0.21\(\pm\)0.20** & **18.45\(\pm\)4.42** & 0.15\(\pm\)0.11 & **76.16\(\pm\)3.82** & **15.21\(\pm\)3.89** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results (mean \(\pm\) standard errors) of performance of ITE and ATE estimation. Results in bold indicate the lowest mean error. HINITE is our method.
Figure 4: Results (mean and standard error) of ablation experiments. We set \(\gamma\) to \(1.5\) for HINITE-PG, HINITE-NHG, and HINITE in the ablation experiments.
Figure 5: Performance changes on the Flickr and YouTube datasets with different \(\gamma\) (in the range of \(\{0.01,0.1,0.5,1.0\}\)). Results are averaged over ten realizations with a fixed value of \(\gamma\).
that removing the HSIC regularization results in performance degradation. This reveals that it is also important to balance the different distributions.
Sensitivity analysis. To investigate whether HINITE is sensitive to \(\gamma\), we conducted experiments with different values of \(\gamma\) and present the results in Figure 5. No significant change in performance was observed with different values of \(\gamma\). This reveals that HINITE is not particularly sensitive to the value of \(\gamma\).
## 5 Related work
In the literature, efforts have been made to estimate treatment effect without interference [2, 7, 11, 13, 23, 24, 26, 42, 43] and with interference on homogeneous graphs [1, 3, 9, 15, 17, 32, 33, 36] or hyper-graphs [16]. A few studies have considered heterogeneous graphs. For example, Qu et al. [20] assumed a partial interference and could only estimate ATE. Zhao et al. [46] proposed a method to construct a heterogeneous graph from a homogeneous graph by learning a set of weights for each edge using an attention mechanism, but their method cannot capture interference between multi-view graph structures. We offer the first approach for handling interference on multi-view graphs.
Meanwhile, heterogeneous graphs have been the subject of recent graph analysis studies, focusing on tasks such as node classification, link prediction, and graph classification [4, 10, 14, 27, 28, 38, 39, 44, 45]. The proposed HINITE shares some similarities with the heterogeneous graph attention network (HAN) [39]. However, HAN aggregates information from each view at the end of forward propagation only once, while the proposed HINITE does aggregation layer-by-layer, which is essential for capturing cross-view interference. In addition, we use LeakyReLU (for view-level attention) instead of the tanh function as an activation function to address the vanishing gradient issue, and we use single-head instead of multi-head attention for better efficiency.
## 6 Conclusion
In this paper, we described the problem of heterogeneous interference and the difficulty of treatment effect estimations under heterogeneous interference. This paper proposed HINITE to model the propagation of heterogeneous interference using HIA layers that contain node-level aggregation, view-level aggregation, and attention mechanisms. We conducted extensive experiments to verify the performance of the proposed HINITE, where the results validate the effectiveness of the HINITE in ITE and ATE estimation under heterogeneous interference.
## Acknowledgements
This work was supported by JST, the establishment of university fellowships towards the creation of science technology innovation, Grant Number JP-MJFS2123, and supported by JSPS KAKENHI Grant Number 20H04244.
## Ethics
This study only involved public datasets that are freely available for academic purposes.
|
2301.00002 | Evaluating Alternative Glyph Design for Showing Large-Magnitude-Range
Quantum Spins | We present experimental results to explore a form of bivariate glyphs for
representing large-magnitude-range vectors. The glyphs meet two conditions: (1)
two visual dimensions are separable; and (2) one of the two visual dimensions
uses a categorical representation (e.g., a categorical colormap). We evaluate
how much these two conditions determine the bivariate glyphs' effectiveness.
The first experiment asks participants to perform three local tasks requiring
reading no more than two glyphs. The second experiment scales up the search
space in global tasks when participants must look at the entire scene of
hundreds of vector glyphs to get an answer. Our results support that the first
condition is necessary for local tasks when a few items are compared. But it is
not enough to understand a large amount of data. The second condition is
necessary for perceiving global structures of examining very complex datasets.
Participants' comments reveal that the categorical features in the bivariate
glyphs trigger emergent optimal viewers' behaviors. This work contributes to
perceptually accurate glyph representations for revealing patterns from large
scientific results. We release source code, quantum physics data, training
documents, participants' answers, and statistical analyses for reproducible
science https://osf.io/4xcf5/?view_only=94123139df9c4ac984a1e0df811cd580. | Henan Zhao, Garnett W. Bryant, Wesley Griffin, Judith E. Terrill, Jian Chen | 2022-12-25T20:57:47Z | http://arxiv.org/abs/2301.00002v1 | # Evaluating Glyph Design for Showing Large-Magnitude-Range Quantum Spins
###### Abstract
We present experimental results to explore a form of bivariate glyphs for representing large-magnitude-range vectors. The glyphs meet two conditions: (1) two visual dimensions are separable; and (2) one of the two visual dimensions uses a categorical representation (e.g., a categorical colormap). We evaluate how much these two conditions determine the bivariate glyphs' effectiveness. The first experiment asks participants to perform three local tasks requiring reading no more than two glyphs. The second experiment scales up the search space in global tasks when participants must look at the entire scene of hundreds of vector glyphs to get an answer. Our results support that the first condition is necessary for local tasks when a few items are compared. But it is not enough for understanding a large amount of data. The second condition is necessary for perceiving global structures of examining very complex datasets. Participants' comments reveal that the categorical features in the bivariate glyph trigger emergent optimal viewers' behaviors. This work contributes to perceptually accurate glyph representations for revealing patterns from large scientific results. We release source code, quantum physics data, training documents, participants' answers, and statistical analyses for reproducible science at https://osf.io/4xcf5/?view_only=94123139df9c4ac984a1e0df811cd580.
Separable and integral dimension pairs, bivariate glyph, 3D glyph, quantitative visualization, large-magnitude-range.
## 1 Introduction
Bivariate glyph visualization is a common form of visual design in which a dataset is depicted by two visual variables, often chosen from a set of perceptually independent graphical dimensions of shape, color, texture, size, orientation, curvature, and so on [1, 2]. A bivariate glyph design [3] has been broadly applied to reveal atom spin behaviors for quantum physicists at the National Institute of Standards and Technology (NIST) to examine experimental results, thanks to their team's Nobel-prize-winning simulations [4]. Quantum physicists worldwide can now manipulate many individual quantum systems to study complex atom and sub-atom interactions. Because atoms can be in multiple states simultaneously and because these spin magnitudes are large in range and often vary greatly in local regions, computational solutions still do not exist to characterize the spin behaviors. Today's quantum physicists rely on visualization to interpret simulation results.
On the visualization side, the initial design and evaluation of large-magnitude-range spin vector visualizations use scientific notation to depict digit and exponent as two concentric cylinders [3]: inside and outside tube-lengths (\(length_{y}length_{y}\) or \(L_{y}L_{y}\) or _splitVectors_) are mapped to the digit and the power of the spin magnitude accordingly (Figure 1(e)). A three-dimensional (3D) bivariate glyph scene of this _splitVectors_ design (Figure 1(e)) achieves up to ten times greater accuracy than the traditional direct approach (_Linear_, Figure 1(f)) for reading a vector magnitude of a single spin or deriving ratios between two spin magnitudes. However, this design also increases task completion time for an apparently simple comparison task between two magnitudes in three dimensions (3D): the traditional direct approach of _Linear_ (Figure 1(f)) is significantly faster than _splitVectors_ (Figure 1(e)).
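In scientific-notation terms, this encoding amounts to the decomposition below (an illustrative Python sketch; the function name is ours, not from the study):

```python
import math

def split_magnitude(m):
    """Decompose a positive magnitude so that m = digit * 10**exponent
    with digit in [1, 10); digit and exponent drive the two glyph sizes."""
    exponent = math.floor(math.log10(m))
    return m / 10 ** exponent, exponent

split_magnitude(440.0)   # -> (4.4, 2), as in the 4.4 x 10^2 example of Figure 1
```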
One may frame this large-magnitude-range issue as a visual design problem: _how can we depict a scalar value using bivariate visual features to help quantum physicists examine complex spatial data?_ Intuitively, since all tasks in the previous study involve a single or at most two spin locations, the human visual system would integrate the two component parts (digit and exponent terms) of a quantum spin into one gestalt before comparing the results [5]. Since relating the digit and exponent to the two _size_ features demands a focused attention mode of visual processing, a viewer would take longer to process the two component parts in _splitVectors_ than a single linear mapping. We term this explanation the _object-level hypothesis_, in which a viewer must combine the two component parts of a value represented in a glyph into its original scalar value (here the magnitude).
However, the _object-level_ processing may be neither efficient nor necessary. For example, Borgo et al. [6] state that "_... effective glyph design should encompass a non-conflicting set of separable retinal variables_". Now, for our examples, if we increase the bivariate feature separability by replacing the exponent-to-length mapping in Figures 1(e) and 2(e) with the exponent-to-color mapping in Figures 1(c) and 2(c) for comparison tasks, it would be counterproductive for our attention first to visit each glyph to compute the magnitude. Instead, the global categorical color (hue) can guide our attention to first compare the exponent, before comparing vector lengths (digit). In these cases, no object-level attentive processing of bivariate features is needed as long as the two color hues are easily recognizable.
Further considering the quantum physicists' tasks relevant to multiple objects (e.g., finding the maximum among hundreds of vectors) (Figure 2), viewers are likely to check the color legend and then use color to first divide the scene into subregions, before using length for detailed comparisons within the yellow region (Figures 2(b) and 2(c)). The colorful scene context helps reduce the search to a much smaller scale via global statistics of the scene. Coincidentally, this first impression of the data, which conveys structural and statistical information, is also called _scene-level processing_[7]; Wolfe called features that guide such top-down, task-driven attention behaviors _scene_ features. Scene features are also _preattentive_ and can guide attention in visual search toward a target [8], perhaps due to fast ensemble processing [9].
Taken together, an effective design of bivariate glyphs is likely to be influenced by two conditions: separable dimensions, with one of them being a preattentive scene feature. These two factors are not necessarily independent. For the first factor, we can follow Borgo et al. [6] and Ware [10] for _"a non-conflicting set of separable retinal variables"_. To meet both conditions when choosing the scene feature, we can give preference to separable pairs in which one of the variables is categorical. This is because categorical features are likely to be better at facilitating the perception of a group of objects in the scene [7, 11, 12]. In this work we compared several separable-integral pairs: _length-color_ (Figures 1(b), 1(c), 2(c)), _length-texture_ (Figures 1(d), 2(d)), and _length-length_ (Figures 1(a), 2(a)). Among the three features of color, texture, and size, color is categorical and thus "most recognizable". Color ensembles are preattentive and permit visual selection at a glance [13]. We purposefully select texture patterns by varying the amount of dark on white, thus introducing luminance variations when many vectors are examined together (Figure 2(d)). Compared to the continuous random noise in Urness et al. [14], ours is for discrete quantities and thus uses regular scale variations. When coupled with separable features, we hypothesize that _highly distinguishable separable dimension pairs, with one being categorical, might encourage preattentive global processing to reduce task completion time and be more accurate._
We tested this hypothesis in two experiments with six tasks, using four pairs compared against the \(length_{y}length_{y}\) (separable) design in Zhao et al. [3]: \(length_{y}length_{x}\) (integral), \(length_{y}color\) (separable), \(length_{y}texture\) (separable), and \(length_{y}color/length_{x}\) (redundant and separable). Since we predicted that separable dimensions with more preattentive features would reduce task completion time, \(length_{y}color\) and \(length_{y}color/length_{x}\) might be more efficient than the other bivariate pairs without hampering accuracy.
This work makes the following contributions:
* Empirically validates that bivariate glyphs encoded by
Fig. 1: Illustration of five bivariate configurations of vector magnitudes \(\in(0,9,999]\). Three examples show vector magnitudes \(440\) (\(4.4\times 10^{2}\)), \(9,999\) (\(9.9\times 10^{3}\)), and \(1\) (\(1\times 10^{0}\)). Take 440 as an example, \(length_{y}length_{x}\) (a) maps \(4.4\) (digit) and \(2\) (exponent) to _lengths_ along the y and x axes accordingly ( \(length_{y}\)\(length_{x}\)); (b)-(e) have the same digit-to-\(length_{y}\) representation as (a). The exponent representations are manipulated to be (1) more integral or separable from \(length_{y}\) and (2) more or less categorical. (b) \(length_{y}color/length_{x}\) uses color to double-code exponent compared to (a). The exponents in (c), (d), and (e) use color, texture, or outer cylinder length accordingly. Our experimental results support that more separable dimensions lead to more perceptually accurate glyphs. The higher the separability, the higher the accuracy. Also, using a more categorical feature (e.g., color in (c)) of one of the variables benefited efficiency and accuracy.
highly separable dimensions would improve comparison task completion time (Exp I).
* Is the first to evaluate categorical features in bivariate glyphs to leverage the benefits of global scene features (Exp II).
* Offers a rank order of separable variables for 3D glyph design and shows that the separable pairs \(length_{y}color\) and \(length_{y}texture\) are among the most effective and efficient feature pairs.
* Reveals a novel visual design method for scalable search in big-data.
## 2 Theoretical Foundations in Perception and Vision Sciences
At least four perceptual and vision science theories have inspired our work: integral and separable dimensions [15], preattentive scene features [7, 8, 16, 17], feature ranking, and monotonicity [2].
**Integral and Separable Dimensions.** Garner and Felfoldy's seminal work on integral and separable dimensions [15] has inspired many visualization design guidelines. Ware [10] suggests a continuum from more integral to more separable pairs: _(red-green)-(yellow-blue)_, \(size_{x}\)-\(size_{y}\), color-shape/size/orientation, motion-shape/size/orientation, motion-color, and _group position-color_. His subsequent award-winning bivariate study [2] using _hue-size_, _hue-luminance_, and _hue-texton_ (texture) supports the idea that more separable dimensions of _hue-texton_ lead to higher accuracy. Our work follows the same ideas of quantifying integral and separable dimensions but differs from Ware's texton selection in two important aspects. First, the Ware study focuses on finding relationships between two independent data variables. In contrast, ours demands that participants examine a complex scene for item discrimination when the two variables are component parts of a vector magnitude. Second, our texture uses the amount of black and white to show luminance variations, in contrast to the discrete shape variation in textons. We anticipate that ours will be more suitable for continuous quantitative values, making it easier to compare large and small values to divide the regions [18]. No existing work we know of has studied whether or not one of the separable features being categorical can facilitate global comparisons and can be scaled to large and more complex 3D vector magnitude analysis.
**Scene-Guidance and Feature Distance.** In order to recognize items, viewers do not "see" features and instead "bind" these features to objects. This binding concerns how our visual systems combine object features such as shape, color, motion trajectories, sizes, and distances into a whole object [5]. What we "see" also depends on our goals and expectations. Wolfe et al. propose the theory of _"guided search"_[8], a first attempt to incorporate users' goals into viewing. For example, if the viewer's goal is to search for the largest values, s/he can just check the yellow ones in Figure 2. Wolfe et al. [8] further suggest that color, texture, size, and spatial frequency are among the most effective features in attracting the user's attention.
Fig. 2: Real-world large-magnitude-range quantum physics simulation results shown using (a)-(e) five bivariate feature-pairs and (f) a traditional linear representation. \(LC\), \(LCL\), and \(LT\) can reveal scene spatial structures. We anticipate that two conditions determine the glyph efficiency: (1) the bivariate glyph uses two separable dimensions; and (2) one of the two dimensions uses a categorical representation thus can reveal global structures in data. The first condition is necessary for local tasks when a few items are compared. The second condition is needed for inspecting the entire scene.
object [5]. What we "see" also depends on our goals and expectations. Wolfe et al. propose the theory of _"guided search"_[8], a first attempt to incorporate users' goals into viewing. For example, if the viewer's goal is to search largest values, s/he can just check the yellow ones in Figure 2. Wolfe et al. [8] further suggest that color, texture, size, and spatial frequency are among the most effective features in attracting the user's attention.
When we combine features together, Duncan and Humphreys articulate some of the most basic principles. In general, guidance to a target will be stronger when the feature differences between the target (T) and distractor (D) are larger (TD differences), and when the feature differences amongst distractors are smaller (DD similarity) [19]. For example, Ts are 2.3 (digit) and 2 (exponent) for 230 (\(2.3\times 10^{2}\)). Ds include all numbers but 2.3 and 2. Using the TD differences between features may explain why _splitVectors_ was time consuming. For example, to compare 230 (\(2.3\times 10^{2}\)) to 2,300 (\(2.3\times 10^{3}\)), viewers have to differentiate the two lengths of 2 (T) and 3 (T) from other distractors (Ds other than 2 or 3). The heterogeneity of Ds or small DD distances from 3D lengths may make the use of splitVectors challenging, thus introducing temporal cost.
**Preattentive and Attentive Feature Ranking.** Human visual processing can be faster when it is preattentive. Wolfe called a feature preattentive when it guides attention in search and cannot be decomposed into simpler features [7]. Preattentive pop-out has historically been considered compelling for _a single object_ that captures the user's attention against a background of other objects (e.g., in showing spatial highlights [20]). Visual features such as orientation and color (hue, saturation, lightness) can generate pop-out effects [21]. This type of pop-out was also used in visualizations. For example, Ropinski, Oeltze, and Preim [22] summarized two groups of glyph design: _"parameter mapping"_ from shape and appearance (color, transparency, and texture) and _"placement"_ driven by features or data. Our study concerns appearance.
Recent vision science development also suggests that preattentive features are not limited to single items but extend to _high-level structures_. Global statistical and structural features can also be preattentive [7]. Unlike the now-outdated view of Treisman's 1988 preattentive processing [23], in which preattentive features were considered to be perceived _before_ focused attention is given, these preattentive features are _persistent during_ viewers' data exploration and thus can continue to provide guidance [7, 8]. Viewers can use peripheral vision to compare in parallel and confidently tell apart regions relevant or irrelevant to tasks [24].
Visual features can also be responsible for different attention speeds, and color (hue) and size (length and spatial frequency) are among those that guide attention [9, 18]. Healey and Enns [25] in their comprehensive review further remark that these visual features do not pop out at the same speed: _hue_ has higher priority than _shape_ and _texture_ [26]. Also, when data size increases, some preattentive features diminish [27], [28].
For visualizing quantitative data, MacKinlay [29] and Cleveland and McGill [30] leverage the ranking of visual features and suggest that position and size are quantitative and can be compared in 2D. For example, MacKinlay's A Presentation Tool (APT) [29] automatically recommends visualizations using _effectiveness_ and _expressiveness_ criteria and outputs a ranked set of encodings to enumerate candidate visualizations based on data types. Casner [31] expands MacKinlay's APT by incorporating user tasks to guide visualization generation. McCleman et al. [32] revise the ranking of visual features based on the number of items. All these studies almost exclusively consider single-item mappings. Demiralp et al. [33] evaluate a crowdsourcing method to study subjective perceptual distances of 2D bivariate pairs of shape-color, shape-size, and size-color. When adopting these ideas in 3D glyph design, the authors further suggest that the most important data attributes should be displayed with the most salient visual features, to avoid situations in which secondary data values mask the information the viewer wants to see. Ours also emphasizes the use of global scene features to optimize viewing experiences.
**Monotonicity.** Quantitative data encoding must normally be monotonic, and various researchers have recommended coloring sequences that increase monotonically in luminance [34]. In addition, the visual system mostly uses luminance variation to determine shape information [35]. There has been much debate about the proper design of a color sequence for displaying quantitative data, mostly in 2D [36] and in 3D shape volume variations [37]. Our primary requirement is to use categorical colormaps such that users are able to read large or small exponents at a glance. We used four color steps in the first study and up to seven steps in the second study from ColorBrewer [36] for showing areas of large and small exponents, mapped to a hue-varying sequence. We do not claim that these color sequences are optimal, only that they are reasonable solutions to the design problem.
## 3 Experiment I: Effect of Separable Pairs on Local Discrimination and Comparison
The goal in this first experiment is to quantify the benefits of separable pairs with preattentive features for visual processing of a few items. This section discusses the experiment, the design knowledge we can gain from it, and the factors that influence our design.
### _Methods_
#### 3.1.1 Bivariate Feature-Pairs
We chose five bivariate feature-pairs to examine the comparison task efficiency of separable-integral pairs.
\(Length_{y}length_{x}\) **(integral)** (Figure 1a). Lengths encoded digits and exponents, shown as the height and radius of cylinder glyphs, respectively.
\(Length_{y}color/length_{x}\) **(redundant and separable)** (Figure 1b). Compared to \(length_{y}length_{x}\), this pair added a redundant color (luminance and hue variations) dimension to the exponent; the four sequential colors were chosen from ColorBrewer [36] (Appendix A shows the sequences).
\(Length_{y}color\) **(separable)** (Figure 1c). This pair mapped exponents to color. Pilot testing showed that this pair yielded the fewest incorrect exponent readings among the five feature-pairs.
\(Length_{y}texture\) (_separable_) (Figure 1d). Texture represented exponents. The percentage of black color (Bertin [38]) was used to represent the exponential terms \(0\) (\(0\%\)), \(1\) (\(30\%\)), \(2\) (\(60\%\)) and \(3\) (\(90\%\)), wrapped around the cylinders in five segments to make them visible from any viewpoint.
\(Length_{y}length_{y}\) (_splitVectors_[3], _separable_) (Figure 1e). This glyph used _splitVectors_[3] as the baseline and mapped both digit and exponent to lengths. The glyphs were semi-transparent so that the inner cylinders showing the digit terms were legible.
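To make these encodings concrete, the following minimal sketch (ours, not the study's implementation; the hex colors and scale factors are illustrative placeholders) decomposes a magnitude \(v=d\times 10^{e}\) with digit term \(d\in[1,10)\) and maps the exponent \(e\in\{0,\dots,3\}\) to the secondary channel of three of the pairs, including the black-fraction texture levels described above.

```python
import math

def split_magnitude(v):
    """Decompose v > 0 as v = d * 10**e with digit term d in [1, 10)."""
    e = math.floor(math.log10(v))
    return v / 10**e, e

# Illustrative secondary-channel mappings for exponent levels 0..3.
COLORS = ["#fee5d9", "#fcae91", "#fb6a4a", "#cb181d"]  # placeholder 4-step sequential palette
BLACK_FRACTION = [0.0, 0.3, 0.6, 0.9]                  # texture: proportion of black per level

def encode(v):
    d, e = split_magnitude(v)
    return {
        "height": d,                   # digit -> cylinder height (all pairs)
        "radius": e + 1,               # exponent -> radius (length_y/length_x)
        "color": COLORS[e],            # exponent -> color (length_y/color)
        "texture": BLACK_FRACTION[e],  # exponent -> black fraction (length_y/texture)
    }

print(encode(230.0))  # digit 2.3, exponent 2
```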
_Feather-like fishbone legends_ were added at each location where the visual variable _length_ was used. The _tick-mark band_ was depicted as subtle light-gray lines around each cylinder. Distances between neighboring lines show a unit length legible at a certain distance (Figure 1, rows \(1\) and \(2\)).
#### 3.1.2 Hypotheses
Given the analysis below and recommendations in the literature, we arrived at the following working hypotheses:
* _Exp I. H1. (Overall). The \(length_{y}color\) feature-pair can lead to the most accurate answers._
* _Exp I. H2. (Integral-separable). Among the three separable dimensions, \(length_{y}color\) may lead to the greatest speed and accuracy and \(length_{y}texture\) will be more effective than \(length_{y}length_{y}\) (splitVectors)._
* _Exp I. H3. (Redundancy on time). The redundant pair \(length_{y}color/length_{x}\) will reduce task completion time compared to splitVectors._
Several reasons led to H1 and H2, related to the two conditions of glyph design we evaluate. Color and length are separable dimensions, so comparing length to color is simple (condition 1); and color is preattentive and can be detected quickly (condition 2). Compared to the redundant \(length_{y}color/length_{x}\), \(length_{y}color\) reduced crowding since its glyphs were generally smaller than those of \(length_{y}color/length_{x}\). Also, distinguishing two lengths in _splitVectors_ might be less efficient than \(length_{y}texture\). H3 could be supported because redundancy increases information processing capacity [10]: redundancy contributes to efficiency by increasing the feature distances between exponents. We did not expect accuracy gains from redundancy because _splitVectors_ achieved the same level of accuracy as reading text in Zhao et al. [3]; redundancy may not help decode quantitative data in this experiment, at least when showing a few items.
#### 3.1.3 Tasks
Participants performed the following three task types as in Zhao et al. [3] so that results were comparable. They had unlimited time to perform these three tasks.
**Exp I. Task 1 (MAG): magnitude reading (Figure 3(a)).** _What is the magnitude at point A?_ One vector was marked by a red triangle labeled "A", and participants should report the magnitude of that vector. This task required precise numerical input.
**Exp I. Task 2 (RATIO): ratio estimation (Figure 3(b)).** _What is the ratio of magnitudes of points A and B?_ Two vectors were marked with red triangles labeled "A" and "B", and participants should estimate the ratio of the magnitudes of these two vectors. The ratio judgment is the most challenging quantitative task [29]. Participants could either compare the glyph shapes or decipher each vector magnitude and compute the ratio mentally.
**Exp I. Task 3 (COMP): comparison (Figure 3(c)).** _Which magnitude is larger, point A or B?_ Two vectors were marked with red triangles labeled "A" and "B". Participants selected their answer by directly clicking the "A" or "B" answer buttons. This task was a simple comparison between two values and offered a binary choice of larger or smaller.
#### 3.1.4 Data Selection
Because we were also interested in comparing our results to those in Zhao et al. [3], we replicated their data selection method by randomly sampling quantum physics simulation results, producing samples within 3D boxes of size \(5\times 3\times 3\). There were \(445\) to \(455\) sampling locations in each selected data region.

Fig. 3: Experiment I: Local discrimination and comparison tasks. The two red equilateral triangles are rendered in screen coordinates and are thus always visible.
We selected the data satisfying the following conditions: (1) the answers must be at locations where some context information was available, i.e., not too close to the boundary of the testing data; (2) no data sample was repeated for the same participant; (3) since the data must cover a broad range of magnitudes, we selected the task-relevant data from each exponential term of \(0\) to \(3\).
#### 3.1.5 Empirical Study Design
**Design and Order of Trials.** We used a within-subject design with one independent variable of bivariate quantitative feature-pair (five types). Dependent variables were error and task completion time. We also collected participants' confidence levels. Table I shows that participants were assigned to five blocks in a Latin-square order; within one block, the order of the five feature-pair types was the same. Participants performed tasks with randomly selected datasets. Each participant performed \(60\) trials (\(3\) tasks \(\times\)\(4\) random data \(\times\)\(5\) feature-pairs). The four random datasets were drawn from four exponent ranges.
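One way to reproduce the block ordering in Table I is a cyclic rotation of the feature-pair list; the sketch below (our reconstruction, using the abbreviated pair names from Table I) prints the per-block presentation order and participant assignment.

```python
PAIRS = ["splitVectors", "LyLx", "LC", "LT", "LCL"]

# Block b starts the presentation order at offset b (a cyclic Latin square, as in Table I).
for block in range(5):
    order = PAIRS[block:] + PAIRS[:block]
    participants = [f"P{block + 1 + 5 * i}" for i in range(4)]
    print(f"Block {block + 1}: {participants} -> {order}")
```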
**Participants.** We diversified the participant pool as much as possible, since all tasks could be carried out by those with only some science background. Twenty participants (\(15\) male and \(5\) female, mean age \(=23.3\), standard deviation \(=4.02\)) participated in the study: ten in computer science, three in engineering, two in chemistry, one in physics, one in linguistics, one in business administration, one double-major in computer science and math, and one double-major in biology and psychology. The five females were placed one in each of the five blocks (Table I). On average, participants spent about \(40\) minutes on the tasks.
**Procedure.** Participants were greeted and completed an Institutional Review Board (IRB) consent form (which described the procedure, risks, and benefits of the study) and a demographic survey. All participants had normal or corrected-to-normal vision and passed the Ishihara color-blindness test. We showed feature-pair examples and trained the participants with one trial for every feature-pair per task. They were told to be as accurate and as quick as possible, and that accuracy was more important than time. They could ask questions during the training but were told they could not do so during the formal study. Participants practiced until they fully understood the feature-pairs and tasks. After the formal study, participants filled in a post-questionnaire asking how these feature-pairs supported their tasks and were interviewed for their comments. Pilot studies were conducted to examine the procedures.
**Environment.** Participants sat at a \(27\,^{\prime\prime}\) BenQ GTG XL \(2720\)Z, gamma-corrected display with resolution \(1920\)\(\times\)\(1080\) to ensure the colors were displayed properly. The distance between the participants and the display was about \(50\)cm. The minimum visual angle of task-associated glyphs was \(0.2^{\circ}\) in the default view where all data points were visible and the scene filled the screen.
**Interaction.** Participants could rotate the data and zoom in and out. Lighting placement and intensity were chosen to produce visualizations with contrast and lighting properties appropriate for human perception and the spatial data. The screen background was a neutral, stimulus-free gray to minimize effects on the discriminability and appearance of colors [10]; a black or white background would make the black or white texture stimuli disappear and thus bias the results (see Appendix B for examples).
### _Experiment I: Results and Discussion_
#### 3.2.1 Analysis Approaches
We collected \(400\) data points for each task. In preparing the accuracy and task completion time for analysis, we differentiated two error metrics related to the perceptual accuracy of the bivariate pairs:
* Correspondence error (C-Error): A trial is considered to have a C-Error if the response's _exponent_ value does not match the correct one. Having a C-Error means that participants had trouble differentiating the exponent levels within a glyph.
* Relative error (R-Error): \(\mid\)_correct answer_ \(-\) _participant answer_\(\mid\) / (_correct answer_). This measure was used for MAG and RATIO tasks. The benefit of this metric was that it took into account the value of the quantity being compared and thus provided an accurate view of the overall errors. (A computational sketch of both metrics follows this list.)
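A hedged computational reading of the two metrics (the function names are ours; we assume the exponent of a reported magnitude is its base-10 order of magnitude):

```python
import math

def correspondence_error(answer, truth):
    """C-Error: the reported exponent does not match the true exponent."""
    return math.floor(math.log10(answer)) != math.floor(math.log10(truth))

def relative_error(answer, truth):
    """R-Error: |correct - reported| / correct (used for MAG and RATIO)."""
    return abs(truth - answer) / truth

print(correspondence_error(23.0, 230.0))  # True: exponents 1 vs. 2
print(relative_error(220.0, 230.0))       # ~0.0435
```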
In subsequent analyses, we separated these two error measurements, since combining them would be problematic: answers with C-Errors are at least one order of magnitude larger or smaller than the ground truth. We did not remove participants' data with C-Errors, since this source of error was caused by the glyph design methods, independent of trials.
A post-hoc analysis using Tukey's Studentized Range test (HSD) was performed when we observed a significant main effect on R-Errors. When the dependent variable was binary (i.e., answer correct or wrong), we used logistic regression and reported the \(p\) value from the Wald \(\chi^{2}\) test. When the \(p\) value was less than 0.05, variable levels whose \(95\%\) confidence intervals of odds ratios did not overlap were considered significantly different. All error bars represent \(95\%\) confidence intervals. We also evaluated effect sizes using _eta-squared_, labeled "small" (\(0.01-0.06\)), "medium" (\(0.06-0.14\)), and "large" (\(\geq 0.14\)) following Cohen [39].
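For reference, the effect-size labeling above can be written as a small helper (ours; the label for values below 0.01 is our assumption, since the text leaves that band unnamed):

```python
def effect_size_label(eta_squared):
    """Cohen-style eta-squared bands used in the analysis [39]."""
    if eta_squared >= 0.14:
        return "large"
    if eta_squared >= 0.06:
        return "medium"
    if eta_squared >= 0.01:
        return "small"
    return "below the smallest band"  # assumption: unlabeled in the text

print(effect_size_label(0.07))  # "medium"
```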
TABLE I: Experiment I design: \(20\) participants are assigned to one of the five blocks and use all five bivariate pairs. Here, \(L_{y}L_{y}\) (_splitVectors_): \(length_{y}length_{y}\); \(L_{y}L_{x}\): \(length_{y}length_{x}\); \(LC\): \(length_{y}color\); \(LT\): \(length_{y}texture\); and \(LCL\): \(length_{y}color/length_{x}\).

| Block | Participants | Feature-pair order |
| --- | --- | --- |
| 1 | P1, P6, P11, P16 | _splitVectors_, \(L_{y}L_{x}\), \(LC\), \(LT\), \(LCL\) |
| 2 | P2, P7, P12, P17 | \(L_{y}L_{x}\), \(LC\), \(LT\), \(LCL\), _splitVectors_ |
| 3 | P3, P8, P13, P18 | \(LC\), \(LT\), \(LCL\), _splitVectors_, \(L_{y}L_{x}\) |
| 4 | P4, P9, P14, P19 | \(LT\), \(LCL\), _splitVectors_, \(L_{y}L_{x}\), \(LC\) |
| 5 | P5, P10, P15, P20 | \(LCL\), _splitVectors_, \(L_{y}L_{x}\), \(LC\), \(LT\) |
#### 3.2.2 Overview of Study Results
Figure 5 shows all C-Error occurrences. Table II and Figure 4 show the \(F\) and \(p\) values computed with SAS one-way measures of variance for task completion time and relative error. Our results clearly demonstrated the task-completion-time benefits of separable dimensions for comparison. We observed a significant main effect of feature-pair type on task completion time for all three tasks MAG, RATIO, and COMP, and the effect sizes were in the medium range. \(Length_{y}color\) was the most efficient approach. For COMP, \(length_{y}color\), \(length_{y}texture\), and \(length_{y}color/length_{x}\) were the most efficient for simple two-point comparisons (Figure 4(c)).
#### 3.2.3 Separable Dimension Coupled with Categorical Features had the Least Correspondence Errors.
We only observed C-Errors in MAG, but not in the RATIO and COMP tasks. The total count was relatively small (\(11\) instances of \(400\) data points). They came from \(9\) participants (error mean = \(1.22\) and \(95\%\) confidence intervals (CI)=\([0.96,1.48]\)). Figure 5 shows all instances of these errors by participant and by encoding methods. It appeared that the degree of separability of integral-separable dimensions influenced the errors: the most integral dimension \(length_{y}length_{x}\) had the highest number (\(5\) instances) of C-Errors and the most separable \(length_{y}color\) had none.
#### 3.2.4 Separable Dimensions Are Better Than Integral Dimensions for Local Comparisons, But Categorical Feature Was Not a Statistically Significant Effect
_Our first two hypotheses H1 and H2 are supported._ In the MAG task, the integral \(length_{y}length_{x}\) was the least efficient, and all other separable pairs formed a separate, most efficient group (Figure 4(a)). In RATIO, \(length_{y}color\), \(length_{y}texture\), and _splitVectors_ were the most efficient group (Figure 4(b)); in COMP, the redundant \(length_{y}color/length_{x}\), \(length_{y}color\), and \(length_{y}texture\) were in the most efficient group (Figure 4(c)). _SplitVectors_ was not as bad as we originally thought for perceiving correct exponents: it belonged to the same efficient post-hoc group as \(length_{y}color\) and \(length_{y}texture\) for RATIO, and these three were also the most efficient for MAG.

TABLE II: Experiment I: significance and effect sizes (ES) of feature-pair type on task completion time and relative error.

| Task | Variable | Significance | ES |
| --- | --- | --- | --- |
| MAG | time | \(F_{(4,35)}=6.5\), \(p<0.0001\); (\(LC\), \(LT\), \(LCL\), _splitVectors_) \(>\) \(L_{y}L_{x}\) | 0.07 |
| MAG | relative error | \(F_{(4,384)}=0.9\), \(p=0.46\) | 0.01 |
| RATIO | time | \(F_{(4,35)}=6.2\), \(p<0.0001\); three post-hoc groups | 0.06 |
| RATIO | relative error | \(F_{(4,395)}=0.8\), \(p=0.50\) | 0.01 |
| COMP | time | \(F_{(4,395)}=10.4\), \(p<0.0001\); three post-hoc groups | 0.09 |
#### 3.2.5 Separable Pairs of \(Length_{y}color\) and \(Length_{y}color/length_{x}\) Achieved Comparable Efficiency to the Direct Linear Glyph
One motivation for this experiment was to quantify the benefits of separable pairs [6, 10]: whether the separable pairs supported COMP and how they compared in efficiency to the direct mapping (Figure 2(f)). Since our study had the same number of data samples as Zhao et al. [3], we performed a one-way \(t\)-test against the direct linear encoding in Zhao et al. [3]. Our results indicated that COMP (judging large or small) with separable variables was no more time-consuming than with direct linear glyphs, and our post-hoc analysis showed that \(length_{y}color\), \(length_{y}color/length_{x}\), and _linear_ were in the same post-hoc group. We also observed that _splitVectors_ dropped to the least efficient post-hoc group (Figure 4(c)). This replicated the results in Zhao et al. [3] showing that _splitVectors_ impaired comparison efficiency.
#### 3.2.6 Redundant Feature-Pairs Were Efficient
We also confirmed hypothesis H3. We were surprised by the large performance gain of the redundant encoding \(length_{y}color/length_{x}\), which maps both \(color\) and \(length\) to the exponent. With the redundant encoding, task completion time was significantly shorter than with \(length_{y}length_{x}\) for the MAG and COMP tasks. While Ware [10] noted that efficiency might not be improved by using separable dimensions, in our case, where color and size (separable) represent the same quantitative value, we suggest that the redundancy worked because participants could use either length or color under different task conditions. We could also consider \(length_{y}color/length_{x}\) a redundant encoding of \(length_{y}color\); those two feature-pairs had similar efficiency and accuracy for all local tasks.
### _Summary_
The separable-pair condition is necessary for effective glyph design because all separable pairs were more efficient than the integral ones. The preattentive condition enabled by categorical encoding among the separable pairs may not be, since not all conditions differed statistically in performance. All tasks (MAG, RATIO, and COMP) lacked a significant main effect on relative errors (in MAG and RATIO) or accuracy (in COMP). Note that none of these three tasks required an initial visual search, and target answers were labeled; Wolfe called this type of task-driven search with a known target a guided task [8]. \(Length_{y}color\) was the most accurate in all tasks.
We also did not see the need for the second condition for perceptually accurate glyphs in this experiment: we did not observe differences among the categorical dimensions color, texture, and length. We suspect that the reason for this lack of significance could well be their similar mental computation loads, which were relatively small when comparing two values. We suspected that when the search-space set-size increases, and when tasks are more complex and involve all items, participants would need preattentive global scene features to guide their search. We therefore ran a second experiment that increased the task set-size to the entire scene, to study whether categorical features showing quantitative exponent values benefit global search.
## 4 Experiment II: Scalability of Global Scene Features
The goal of this second experiment is to quantify the benefits of separable feature-pairs when they introduce categorical features for scene guidance in _global_ tasks, with search spaces as large as the entire dataset of several hundred items. In other words, we measure the scalability of scene features in global tasks.
### _Overview_
Three design considerations guided our choice of categorical features in setting up this experiment, concerning the use of glyphs for showing complex simulation results. Intriguingly, all of these considerations support our second glyph-design condition of using a categorical variable in one of the separable pairs.
The first reason is that the initial _at-a-glance_ global statistical summary of the scene depends on categorical information [7]. One of the most important advances in vision science is the finding that viewers can summarize a scene without attending to specific items [40]. Visual dimensions facilitating this summary process become global scene features, and these features are preattentive [8]. While visualization is mainly about mapping data values to visual variables, the new theory concerns how features form the structure and content of the scene, which can affect efficiency. If the quantum-spin scene contained one object at a time, then the first glyph-design condition, considering integral and separable dimensions, would suffice to explain the experience, as we have shown in Experiment I. For complex tasks in general, our visual system has a limited capacity; to cope with this limit, humans first visually summarize the scene to find specific regions of interest [6, 8]. If categorical features stimulate population responses from multiple items, we should observe fewer errors and better efficiency. For example, we exemplified in the Introduction the search for the "largest" values by looking up "yellow" regions, without attending to every single yellow item.
The second concerns _scalability to feature distances_. Here feature distance represents target-distractor similarity. It is not the absolute features (e.g., yellow) that direct our attention towards the answer; rather, what determines performance is the result of a comparison between the target (yellow) and other data features (such as pink and orange) in the scene (e.g., yellow is different from other colors, so the yellow regions stand out) [8]. In other words, one must also look at feature distractors [14, 41, 42], whether or not they are heterogeneous; the efficiency of scene guidance will decline as a function of the degree of distractor variation [19, 24, 43].

While subjective reports from Experiment I generally indicate that \(length_{y}color\) and \(length_{y}texture\) show similar perceptual speed, the performance of texture may decline faster than that of color as the exponent range increases, because our vision is not as sensitive to luminance variation as to hues. For example, at the exponent range of 7 in Figure 6, the difference between yellow and pink could be more distinguishable than that between the two top-level textures with different amounts of black. In this study, we expanded the data range from the single level in Experiment I to five ranges \(\in[3,7]\) to understand feature-pair scalability to feature distances. The efficiency of color in Experiment I could well arise because the range (of \(4\)) was not large enough.
The third concerns the effect of data density on color choices. Figure 7 shows two densities and two colormaps (a categorical colormap from ColorBrewer [36] and a continuous colormap segmented by the number of exponents, generated from the extended blackbody colormap). As Figure 7 shows, for a feature to actually _guide_ attention, boundary detection with these colormaps depends on data density. Unless the data density was reasonably high, detecting the boundaries using the segmented continuous colormaps (Figures 7(a), 7(b)) was harder than with the ColorBrewer colormaps (Figures 7(c), 7(d)).
### _Method_
#### 4.2.1 Feature-Pairs
We used \(length_{y}color\), \(length_{y}texture\), and the baseline _splitVectors_ in Experiment II. These three visualizations were chosen because \(length_{y}color\) and \(length_{y}texture\) were among the best feature-pairs in Experiment I, and because color and texture are among the most separable features according to Ware [10]. To introduce a "distractor" experience for measuring _scalability to feature distances_, we varied the data range from the \(4\) levels in Experiment I to \(3-7\) levels in Experiment II (see the mapping in Figure 13, Appendix C).
#### 4.2.2 Hypotheses
We had the following hypotheses:
* _Exp II. H1 (Accuracy). More categorical features in the separable pairs will be more effective. We thus anticipate a rank order of effectiveness from high to low: \(length_{y}color\), \(length_{y}texture\), and splitVectors._
* _Exp II. H2 (Correspondence errors). The more categorical feature of color in the separable pairs will reduce C-Errors, i.e., participants will more often choose the correct exponent level._
* _Exp II. H3 (User behavior). More categorical dimensions in the separable feature-pairs will lead to optimal user behaviors: participants can quickly locate task-related regions, thanks to global scene features, in tasks that demand looking among many vectors._
#### 4.2.3 Tasks
Participants performed three tasks in which they had to compare all vectors to obtain an answer.
**Exp II. Task 1 (SEARCH): visual search.** A vector search within \(20\) seconds (Figure 8(a)). _Find the vector with magnitude \(X\) within \(20\) seconds._ The target vector was shown at the bottom-right corner of the screen, and participants were asked to find this vector.
**Exp II. Task 2 (MAX): find maximum.** An extreme-value search within \(20\) seconds (Figure 8(b)). _Within \(20\) seconds, locate the point of maximum magnitude when the exponent is \(X\)._ \(X\) in the study was a number from \(0\) to the _maximum exponent_ (\(\in[2,6]\)). This was a global task requiring participants to find the extremum among many vectors.
**Exp II. Task 3 (NUMEROSITY): estimate the total number of unique vector exponents (Figure 8(c)).** _Estimate the total number of unique vector exponents in the entire vector field within \(2\) seconds._ Data were randomly chosen and modified to produce the \(3\) to \(7\) range.
#### 4.2.4 Task Choices
Tasks were _use-inspired_ by real-world quantum physics data analyses. Experiment I drilled down to a single spin or at most two, but global tasks are also of interest to quantum physicists, such as those involving the distributions of quantum spin magnitudes. Practically, a spin represents charge density, or the probability of an electron being present at an infinitesimal element of space surrounding any given point. This probability varies as electrons travel from one grid point to another and is often interpreted together with its neighbors. Quantum physicists are thus interested in searches for regions, where local regions are defined by spin magnitude and different regions correspond to changes in exponent. Often the most interesting regions are also those with specific charge densities (Task 1) or the largest magnitudes (Task 2). The regional task concerns learning the number of interesting regions or magnitude-exponent clusters (Task 3).
Task time was limited to \(20\) seconds because a pilot study showed that it took participants about \([15,25]\) seconds, or on average about \(20\) seconds, to finish search tasks 1 and 2. Also, preattentive processing used for scene guidance over a group of similar objects is often fast, so increasing the number of items should not significantly impair search time. On the practical side, in the last task, participants who wanted a perfect score could otherwise just spend time counting; constraining the time allowed us to measure accuracy when they had to rely on the scene features.
Fig. 6: Visual mapping using color and texture in Experiment II. From top to bottom, colors and texture segments are mapped to exponent values from the largest to the smallest. The three numbers next to the 7-level colormap are the RGB values. The numbers next to the texture columns are the proportions of black-on-white for the last 7-level texture configuration.
#### 4.2.5 Data Choices
Data were first sampled using the same approach as in Experiment I, and no data were used repeatedly in this experiment. We then modified the exponent range from \(3\) to \(7\) for the three tasks by normalizing the data to the desired new data range.
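The paper does not give the normalization formula; one plausible reading (ours) rescales the sampled magnitudes in log space so that their base-10 exponents span the desired number of levels:

```python
import math

def normalize_exponent_range(values, levels):
    """Rescale positive magnitudes so their base-10 exponents cover 0 .. levels-1."""
    logs = [math.log10(v) for v in values]
    lo, hi = min(logs), max(logs)
    out = []
    for g in logs:
        e = (g - lo) / (hi - lo) * levels  # linear map onto [0, levels]
        e = min(e, levels - 1e-9)          # keep the maximum inside the top level
        out.append(10.0 ** e)
    return out

data = [1.2, 35.0, 4.0e2, 9.0e3]
scaled = normalize_exponent_range(data, 5)
print([math.floor(math.log10(v)) for v in scaled])  # exponents spread over 0..4
```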
Prior literature used both synthetic and real-world data to construct test scenarios, enabling tight control over the stimulus parameters (e.g., [44]). Most synthetic data in the literature were built to replicate real-world data characteristics; others were explained in fictitious use scenarios. The goal was primarily to prevent preconceived user knowledge about domain-specific attributes. As a result, such synthetic data strike a balance between real-world uses and controlled data characteristics.
In our case, replicating the characteristics of quantum physics data was challenging and indeed impossible, since atom behaviors in high-dimensional space are largely unknown and thus not easily simulated. Our approach was therefore to randomly sample quantum physics simulation results to capture domain-specific attributes and then modify the data to suit the evaluation purposes. We showed our data to our physicist collaborators to ensure their validity and confirmed that these modifications preserved the domain-specific schema of a scene in terms of the structures and complexity of real simulations. These modifications affected less than \(4\%\) of the data points in each scene. Finally, this approach improves the reusability of our study results.
#### 4.2.6 Empirical Study Design
_Dependent and Independent Variables._ We used a within-subject design with two independent variables: _feature-pair_ (three levels: baseline _splitVectors_, \(length_{y}color\), and \(length_{y}texture\)) and _exponent range_ (five levels: \(3-7\)). The dependent variable was relative error. We did not measure time, since all tasks were time-constrained.
Participants performed \(3\) (feature-pairs) \(\times 5\) (exponent-ranges) \(\times 3\) (repetitions) \(=45\) trials for each of the first two tasks. Three repetitions were used to give participants enough time to develop strategies. The NUMEROSITY task used \(4\) repetitions, resulting in \(3\) (feature-pairs) \(\times 5\) (exponent-ranges) \(\times 4\) (repetitions) \(=60\) trials. Each participant thus executed \(45+45+60=150\) trials. Completing all tasks took about \(32\) minutes.
_Self-Reporting Strategies._ Several human-computer interaction (HCI) approaches can help observe users' behaviors. Asking questions can help determine not just which technique is better but also the strategies humans adopt. For example, the cognitive walkthrough (CTW) measures whether or not the users' actions match the designers' pre-designed steps. Here we predicted that participants would use the global scene features as guidance to accomplish tasks. We interviewed participants and asked them to verbalize their visual observations in accomplishing tasks.

Fig. 7: Density effects on color choices, justifying the use of dense sampling and the categorical colormap (c) in Experiment II. This example dataset shows _two colormaps_ (segmented-continuous (a and b) and categorical (c and d)) at _two data densities_: (a) and (c) show data at the raw density from the simulation results; (b) and (d) were produced by removing around \(70\%\) of the vector glyphs. The boundaries between the data categories are more recognizable when the data are dense, as in (a) and (c) (comparing the 1st and 2nd columns). At the same density (comparing the 1st and 2nd rows), the boundaries between levels are easier to recognize when spin vectors are rendered using a categorical colormap, as in (c) and (d). We thus use the raw dense data and the categorical colormap (c) in Experiment II.
#### 4.2.7 Participants
Eighteen new participants (\(12\) male and \(6\) female, mean age \(=23.8\), and standard deviation \(=4.94\)) of diverse backgrounds participated in the study (seven in computer science, four in computer engineering, two in information systems, three in engineering, one in business school, and one in physics).
Procedure, interaction, and environment were the same as in Experiment I.
### _Experiment II: Results and Discussion_
We collected \(810\) data points per task for the first two tasks, SEARCH and MAX, and \(1080\) points for the third task, NUMEROSITY.
#### 4.3.1 Analysis Approaches
For the SEARCH and MAX tasks, we measured relative error (the percentage by which the reported value deviated from the ground truth, the same metric as in Experiment I) with SAS repeated-measures analysis. The NUMEROSITY task used error rate, the percentage of incorrect answers over all trials for each participant. We also used the same outlier-removal method as before to remove instances of correspondence errors for SEARCH and MAX.
#### 4.3.2 Overview of Study Results
Table III and Figure 10 show the summary statistics; all error bars again represent \(95\%\) confidence intervals. We observed a significant main effect of feature-pair type on all three tasks. For the first two tasks, the post-hoc analysis revealed that \(length_{y}color\) and \(length_{y}texture\) were in the same, most efficient group, and their relative errors were statistically significantly lower than those of _splitVectors_. \(Length_{y}color\) remained the most accurate pair for the NUMEROSITY task. Exponent range was a significant main effect only for NUMEROSITY, with ranges \(3\) and \(4\) significantly better than \(5\), which was better than \(6\) and \(7\).
#### 4.3.3 More Categorical Features of Separable Dimensions Improved Accuracy
We were interested to see whether we could observe significant main effects of categorical features in the separable pairs in this experiment. Here we did observe a significant main effect and confirmed our first hypothesis (H1) for both SEARCH and MAX: in the general trend, the more separable \(length_{y}color\) was more effective than \(length_{y}texture\), which was better than _splitVectors_; \(length_{y}color\) and \(length_{y}texture\) were in the same Tukey group when viewers were in the correct data sub-categories.
\(Length_{y}color\) led to the most accurate answers, and _splitVectors_ was better than \(length_{y}texture\) for the NUMEROSITY task. This result can be explained by participants' behaviors: more than half the participants said they simply looked for the longest cylinder in _splitVectors_, since they knew the numerical values in the test were continuous. This behavior deviated from our original purpose of testing the global estimate but showed two points in favor of this work: (1) participants developed task-specific strategies during the experiment for efficiency; and (2) 3D length still supported judging large and small, though it was not as effective as color, perhaps due to ensemble perception from categorical features.
#### 4.3.4 Color Categories of Separable Pairs Reduced Correspondence Errors by a Large Margin
Our second hypothesis H2 was also supported. We first counted the correspondence errors in SEARCH and MAX in the same way as in Experiment I. These results, combined with those of Experiment I, confirmed again that \(length_{y}color\) reduced correspondence errors. For SEARCH, \(length_{y}color\) produced only a single instance of correspondence error; \(36\) instances in total came from \(14\) participants (mean \(=2.57\), \(95\%\) CIs \(=[2.1,3.04]\)) (Figure 9, _top_). Another \(59\) instances for MAX came from \(16\) of \(18\) participants (mean \(=3.68\), \(95\%\) CIs \(=[2.85,4.51]\)) (Figure 9, _bottom_).

Fig. 8: Experiment II three task types. The callouts show the task-relevant feature-pair(s).
#### 4.3.5 Compensating The Cost of Search in Complex Data through Preattentive Scene Feature
The visualizations in our study contained hundreds of items, as in realistic uses. Self-reports suggested that participants adopted a sequential, task-driven viewing strategy: they first obtained a gross regional distribution of task-relevant exponents, and then made visual comparisons within the same exponent region. With these two steps, judging large or small, or perceiving quantities accurately from separable variables, would not require object-level information processing.

TABLE III: Experiment II: significance and effect sizes (ES) of feature-pair and exponent range by task. For SEARCH, feature-pair showed a significant main effect (\(F_{(2,261)}=18.4\), \(p<0.0001\), ES \(0.46\)), with (\(LC\), \(LT\)) outperforming _splitVectors_.
Many participants commented on how the number of exponent levels in the data affected their effectiveness. For \(length_{y}texture\), 10 participants remarked that it was difficult to differentiate adjacent levels when the total number of levels was around \(4\)-\(5\), although the white and black textures were very easy to perceive. All but two participants agreed that \(length_{y}color\) could perhaps support up to \(6\) levels. Chung et al. [42] studied ordering effects, but comparing our results to theirs is challenging because their visual features were not shown as a scene but as isolated features. More than half of the participants felt that the effectiveness of \(length_{y}length_{y}\) was not affected by changing the number of levels, since they looked for the longest outer cylinder to find the answer. These results may suggest that subregion selection with \(length_{y}texture\) can be better supported by interfaces in which users can interactively select a texture level.
## 5 General Discussion
We discuss the results from both experiments and suggest future directions.
### _Separable Dimensions with Preattentive Guidance for Large-Magnitude-Range Quantum Physics Spins_
Our first principle in glyph design is to follow the convention of using separable variable pairs [6, 10]. The results of Experiment I showed that separable dimensions could achieve the same efficiency as direct linear visualizations for COMP and were always more efficient than integral pairs. For these local tasks, we did not observe significant error reduction.
Our second principle in glyph design is to include categorical features in separable pairs. Experiment II studied the rank order of the separable pairs and found that categorical features indeed improved accuracy for global tasks. \(Length_{y}texture\) and _splitVectors_ in both experiments led to more correspondence errors than \(length_{y}color\). Achieving integrated numerical readings by combining two separable visual features at the object level does not seem necessary.
The separable-dimension pairs \(length_{y}color\) and \(length_{y}texture\) worked because they were preattentive scene features. Our experiments show that viewers adopted a sequential, task-driven viewing strategy based on a view hierarchy: viewers first obtained the _global_ distributions of the scene; then visual scrutiny was possible within a subregion. Although _splitVectors_ is separable, visual search for length among lengths would be unguided because both targets and distractors contain the same visual variable. The more separable the pair, the easier it is to guide attention. Using color to provide some initial regional division may always be better than none. Texture (luminance) can achieve similar accuracy and efficiency as long as the task-relevant regions can be detected.
### _Feature Guidance vs. Scene Guidance_
Taking both study results into account, we think an important part of the answer to visualization design is _guidance_ of attention. Attention is guided to some objects or locations over others by two broad methods: _feature guidance_ (seeing objects) and _scene guidance_ (seeing global structures).
Feature guidance refers to guidance by properties of the task target as well as the distractors (which lead to correspondence errors). These features are limited to a relatively small subset of visual dimensions: color, size, texture, orientation, shape, blur or shininess, and so on. They have been broadly studied in 3D glyph design (see reviews by Healey and Enns [25], Borgo et al. [6], Lie et al. [46], Ropinski et al. [22], and McNabb and Laramee [28]). Take one more example from the quantum physics simulation results, with a different task of searching for the structural distributions at the power of \(3\) in Figure 11: the task will guide attention to the fat cylinders (Figure 11a), the bright yellow color (Figures 11d and 11b), or the very dark texture (Figure 11c), depending on the feature-pair type.
Working with quantum physicists, we have noticed that the _structure and content of the scene_ strongly constrain the possible locations of meaningful structures, termed "scene guidance" constraints [8, 47]. Scientific data are not random and are typically structured. Contextual and global structural influences can arise from different sources of visual information. If we return to the MAX search task in Figure 11, we note that the chunks of darker or lighter texture patterns and colors on these regular contour structures strongly support quick detection. This is a structural and physical constraint that viewers can utilize effectively. This observation, coupled with the empirical study results, suggests an interesting hypothesis for future work: **adding scene-structure guidance would speed up quantitative discrimination, improve the accuracy of comparison tasks, and reduce the perceived data complexity.**
Another structure acting as guidance is size itself: participants used it in the NUMEROSITY task by looking for the longest outer cylinders. When we showed several examples like Figure 11, our collaborator suggested that the cylinder bases of the same size in the redundant encoding (Figure 11b) also helped locate and group glyphs belonging to the same magnitude. This observation agrees with the most recent literature that guidance-by-size in 3D must take advantage of knowledge of the layout of the scene [45].
Though feature guidance can be preattentive, with features detected within a fraction of a second, scene guidance is probably just about as fast (though precise experiments have not been done, and our Experiment II merely begins to show this effect). Scene 'gist' can be extracted from complex images after very brief exposures [47], [48]. This does not mean that a viewer instantly knows, say, where the answer is located. However, within a fraction of a second's exposure, a viewer will know enough about the spatial layout of the scene to guide his or her attention towards vector groups in the regions of interest. For example, categorical color becomes a scene feature because the colorful glyphs are perceived as a whole.
A future direction, and also an approach to understanding the efficiency and effectiveness of scene guidance, is to conduct an eye-tracking study that gives viewers a flash-view of our spatial structures and then lets them see the display only in a narrow range around the point of fixation: _does this brief preview guide attention and the gaze effectively?_ Work in the vision and visualization [49, 50, 51, 52] domains has measured and correlated performance on the glance or global structure formation. Vision science discovered long ago that seeing global scene structures in medical-imaging decision making guides experts' attention (experts always know where to look) [53], [54].
### _Redundancy and Ensemble Graphical Perception_
Our results showed that adding categorical colors, whose corresponding parts could be quickly discriminated, scales to a large number of items. Our result agrees with that of Nothelfer and Gleicher [55], who observed that redundant encoding using color and shape could strengthen grouping when searching for targets among multiple objects. Their explanation was a race model [55]: for separable dimensions, the performance of a glyph with redundant encoding might be dominated by the feature with greater efficiency. We did not find an efficiency improvement, which suggests that the grouping is generally fast; so it might _not_ be the redundancy itself that contributed to scene understanding.

Fig. 11: Contours of simulation data. Size from this viewpoint can guide visual grouping, and size in 3D must take advantage of knowledge of the layout of the scene [45].
Another possible theory is _ensemble perception_, i.e., "the visual system's ability to extract summary statistical information from groups of similar objects - often in a brief glance" [40]. Ensemble features are best represented using categorical features. To model parallel processing, the target contrast signal theory of Buetti et al. [24] may suit our scenario better: it provides a more specific estimate of the time it takes to evaluate items in _parallel_. In visualization, we have just begun to understand ensemble averages (e.g., Chen [11] and Alberts et al. [56]) but have limited understanding of ensemble visual encoding choices that guide attention to optimize behaviors. We leave this to future work.
### _Use Our Results in Visualization Tools and Limitations of Our Work_
Visualization is used when the goal is to augment human capabilities in situations where problems might not be sufficiently well defined for algorithms to communicate certain information. One of our showcase areas is quantum physics. We believe that the design principle of adding categorical features to bivariate glyphs is broadly applicable to glyph design, and application domains with similar data attributes could reuse our work. Our current study concerns bivariate data visualization in which the bivariate variables are component parts of scalar variables.
Our design could be improved by following advanced tensor-glyph design methods. Both generic [57] and domain-specific requirements for glyph design [37], [58], [59] have led to summaries of glyph properties (e.g., invariance, uniqueness, continuity) that guide design and render 2D and 3D tensors. A logical next step is to understand the quantum physics principles deeply enough to combine data attributes and human perception to improve our domain-specific solutions.
One limitation of this work is that we measured only a subset of tasks crucial to showing structures and omitted all tasks relevant to orientation. However, one may argue that the vectors naturally encode orientation. When orientation is considered, we could address the multiple-channel mappings in two ways. The first solution is to use \(length_{y}texture\) to encode the quantitative glyphs and color to encode the orientation clusters. The second solution is to treat magnitude and orientation as two data facets and use multiple views to display them separately, with one view showing magnitude and the other orientation (following Munzner's multiform design recommendations [60]). The second limitation is that our experiments were limited to a relatively small subset of visual dimensions: color, texture, and size. A future direction would be to try shapes and glyphs to produce novel and useful designs.
## 6 Conclusion
Our findings generally suggest that, as we hypothesized, distinguishable separable dimensions with preattentive categorical features perform better. The separable pair \(length_{y}color\) was the most efficient and effective for both local and global tasks. Categorical features enable effective complex-scene inspection. Our empirical study results provide the following recommendations for designing 3D bivariate glyphs.
* Highly separable pairs can be used for quantitative comparisons as long as these glyphs can guide attention (i.e., category forming). We recommend using \(length_{y}color\).
* Texture-based glyphs (\(length_{y}texture\)) that introduce luminance variation are only recommended when task-relevant structures can be isolated.
* Integral and separable bivariate feature-pairs have similar accuracy when the tasks are local. When the search tasks are more complex, introducing categorical features into the separable feature-pairs will lead to perceptually accurate glyphs.
* A 3D glyph scene can shorten task completion time by combining two glyph design factors: separability and visual guidance from categorical features.
* The redundant encoding (\(length_{y}color/length_{x}\)) greatly improved on the task completion time of the integral dimensions (\(length_{y}length_{x}\)) by adding separable and preattentive color features.
## Acknowledgments
The work is supported in part by NSF IIS-1302755, NSF CNS-1531491, and NIST-70NANB13H181. The user study was funded by NSF grants under OSU IRB approval number 2018B0080. Non-user-study design work was supported by a grant from NIST-70NANB13H181. The authors would like to thank Katrina Avery for her excellent editorial support and all participants for their time and contributions.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. Certain commercial products are identified in this paper in order to specify the experimental procedure adequately. Such identification is not intended to imply recommendation or endorsement by the National Institute of Standards and Technology, nor is it intended to imply that the products identified are necessarily the best available for the purpose.
|
2303.18034 | Delay-agnostic Asynchronous Distributed Optimization | Existing asynchronous distributed optimization algorithms often use
diminishing step-sizes that cause slow practical convergence, or fixed
step-sizes that depend on an assumed upper bound of delays. Not only is such a
delay bound hard to obtain in advance, but it is also large and therefore
results in unnecessarily slow convergence. This paper develops asynchronous
versions of two distributed algorithms, DGD and DGD-ATC, for solving consensus
optimization problems over undirected networks. In contrast to alternatives,
our algorithms can converge to the fixed-point set of their synchronous
counterparts using step-sizes that are independent of the delays. We establish
convergence guarantees under both partial and total asynchrony. The practical
performance of our algorithms is demonstrated by numerical experiments. | Xuyang Wu, Changxin Liu, Sindri Magnusson, Mikael Johansson | 2023-03-31T13:10:07Z | http://arxiv.org/abs/2303.18034v2 | # Delay-agnostic Asynchronous Distributed Optimization
###### Abstract
Existing asynchronous distributed optimization algorithms often use diminishing step-sizes that cause slow practical convergence, or fixed step-sizes that depend on an assumed upper bound of delays. Not only is such a delay bound hard to obtain in advance, but it is also large and therefore results in unnecessarily slow convergence. This paper develops asynchronous versions of two distributed algorithms, DGD and DGD-ATC, for solving consensus optimization problems over undirected networks. In contrast to alternatives, our algorithms can converge to the fixed-point set of their synchronous counterparts using step-sizes that are independent of the delays. We establish convergence guarantees under both partial and total asynchrony. The practical performance of our algorithms is demonstrated by numerical experiments.
## I Introduction
Distributed optimization has attracted much attention in the last decade and has found applications in diverse areas such as cooperative control, machine learning, and power systems. The literature on distributed optimization has primarily focused on synchronous methods that iterate in a serialized manner, proceeding to the next iteration only after the current one is completed. Synchronous methods also require all nodes to maintain a consistent view of the optimization variables without any information delay, which makes the algorithms easier to analyze. Nevertheless, synchronization across a network can be challenging. Additionally, synchronized updates are inefficient and unreliable, since the time taken per iteration is determined by the slowest node and the optimization process is vulnerable to single-node failures.
Asynchronous distributed methods that do not require synchronization between nodes are often better suited for practical implementation [1]. However, asynchronous methods are subject to information delays and nodes do not have a consistent view of the optimization variables, which makes them difficult to analyze. Despite the inherent challenges, there have been notable successes in studying the mathematical properties of asynchronous optimization algorithms. One area of focus has been on asynchronous consensus optimization algorithms [2, 3, 4, 5, 6, 7, 8, 9, 10], including asynchronous variants of well-established consensus optimization algorithms such as DGD, PG-EXTRA, and gradient-tracking-based methods. Asynchronous distributed algorithms on other optimization problems include ADGD [11], Asy-FLEXA [12], the asynchronous primal-dual algorithm [13], and the asynchronous coordinate descent method [14, 15].
The above work mainly focused on two types of step-size strategies: diminishing step-sizes [3, 4, 5, 6, 7] and fixed delay-dependent step-sizes [8, 9, 10, 12, 13, 14]. While diminishing step-sizes are effective in stochastic optimization or non-smooth optimization, they can result in slow convergence rates in deterministic smooth problems. For these types of problems, faster algorithms can often be obtained with non-diminishing step-sizes. Fixed step-sizes that depend on delay, in contrast, usually require an upper bound on the worst-case delay that is challenging to compute prior to executing the algorithm. Moreover, the use of worst-case delay can result in a conservative step-size condition and consequently, slow down the practical convergence speed. This is because the actual delays experienced in practice may be significantly smaller than the worst-case delay. For example, [16] implements an asynchronous SGD on a 40-core CPU, and reports a maximum and average delay of around \(1200\) and \(40\), respectively. Convergence of asynchronous distributed algorithms with fixed step-sizes that do not include any delay information have been considered in [2, 11, 15]. However, [2, 15] only consider quadratic programming and [11] studies only star networks.
In this paper, we study the asynchronous variants of two distributed algorithms, the decentralized gradient descent (DGD) [17] and the DGD using the adapt-then-combine technique (DGD-ATC) [18], for solving consensus optimization over undirected networks. Our contributions include:
1. We establish the optimality gap between the fixed point of DGD-ATC with fixed step-sizes and the optimum of the consensus optimization problem. This result is absent in the literature.
2. We show theoretically that, under the total asynchrony assumption, the two asynchronous methods can converge to the same fixed-point sets of their synchronous counterparts with _fixed step-sizes that do not include delay information_.
3. We improve the above asymptotic convergence to linear convergence by assuming bounded information delays.
Compared to the delay-dependent fixed step-sizes, our proposed delay-free step-sizes are easy to tune and, in general, less restrictive. Although algorithms that use delay-dependent fixed step-sizes [8, 9, 10, 12, 13, 14] or diminishing step-sizes [3, 4, 5, 6, 7] can theoretically converge to the optimum while our algorithms suffer from unfavourable inexact convergence inherited from their synchronous counterparts, our
algorithms may achieve faster practical convergence due to their less restrictive fixed step-sizes, which is demonstrated by numerical experiments.
The outline of this paper is as follows: Section II formulates the problem, revisits the synchronous algorithms DGD and DGD-ATC, and reviews/establishes their optimality error bounds. Section III introduces the asynchronous DGD and the asynchronous DGD-ATC, and Section IV provides convergence results. Finally, Section V tests the practical performance of the two asynchronous algorithms by numerical experiments and Section VI concludes the paper.
### Notation and Preliminaries
We use \(\mathbf{1}_{d}\), \(\mathbf{0}_{d\times d}\), and \(I_{d}\) to denote the \(d\)-dimensional all-one vector, the \(d\times d\) all-zero matrix, and the \(d\times d\) identity matrix, respectively, where the subscript is omitted when it is clear from context. The notation \(\otimes\) represents the Kronecker product and \(\mathbb{N}_{0}\) is the set of natural numbers including \(0\). For any symmetric matrix \(W\in\mathbb{R}^{n\times n}\), \(\lambda_{i}(W)\), \(1\leq i\leq n\) denotes the \(i\)th largest eigenvalue of \(W\), \(\mathrm{Null}(W)\) is its null space, and \(W\succ\mathbf{0}\) means that \(W\) is positive definite. For any vector \(x\in\mathbb{R}^{n}\), we use \(\|x\|\) to represent the \(\ell_{2}\) norm and define \(\|x\|_{W}=\sqrt{x^{T}Wx}\) for any positive definite matrix \(W\in\mathbb{R}^{n\times n}\). For any differentiable function \(f:\mathbb{R}^{d}\to\mathbb{R}\), we say it is \(L\)-smooth for some \(L>0\) if
\[\|\nabla f(y)-\nabla f(x)\|\leq L\|y-x\|,\ \forall x,y\in\mathbb{R}^{d}\]
and it is \(\mu\)-strongly convex for some \(\mu>0\) if
\[\langle\nabla f(y)-\nabla f(x),y-x\rangle\geq\mu\|y-x\|^{2},\ \forall x,y\in \mathbb{R}^{d}.\]
## II Problem Formulation and Synchronous distributed Algorithms
This section describes consensus optimization and revisits the synchronous distributed algorithms, DGD [17] and DGD-ATC [18], for solving it. The asynchronous version of the two methods will be introduced in Section III.
### _Consensus Optimization_
Consider a network of \(n\) agents described by an undirected, connected graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}=\{1,\ldots,n\}\) is the vertex set and \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) is the edge set. In the network, each agent \(i\) observes a local cost function \(f_{i}:\mathbb{R}^{d}\to\mathbb{R}\) and can only interact with its neighbors in \(\mathcal{N}_{i}=\{j:\{i,j\}\in\mathcal{E}\}\). Consensus optimization aims to find a common decision vector that minimizes the total cost of all agents:
\[\operatorname*{minimize}_{x\in\mathbb{R}^{d}}\ \ f(x)=\sum_{i\in\mathcal{V}}f_{i}(x). \tag{1}\]
Distributed algorithms for solving Problem (1) include the distributed subgradient method [19], DGD [17], distributed gradient-tracking-based algorithm [20], distributed dual averaging [21], and PG-EXTRA [22]. While these algorithms were originally designed to be executed synchronously, they have since been extended to allow for asynchronous implementations. However, existing asynchronous methods for solving (1) often suffer from slow convergence due to the use of either diminishing step-sizes or fixed step-sizes that depend on a (usually unknown and large) upper bound on all delays.
In this paper, we analyse the asynchronous versions of two algorithms with delay-free fixed step-sizes: Decentralized Gradient Descent (DGD) and DGD using the Adapt-Then-Combine Technique (DGD-ATC).
### _Decentralized Gradient Descent (DGD)_
The first algorithm is DGD [17]. To present the algorithm compactly, define \(\mathbf{x}=(x_{1}^{T},\ldots,x_{n}^{T})^{T}\in\mathbb{R}^{nd}\), \(F(\mathbf{x})=\sum_{i\in\mathcal{V}}f_{i}(x_{i})\), and let \(\mathbf{W}=W\otimes I_{d}\) where \(W\) is an averaging matrix\({}^{1}\) associated with \(\mathcal{G}\). We use \(k\in\mathbb{N}_{0}\) as the iteration index and \(\mathbf{x}^{k}\) as the value of \(\mathbf{x}\) at iteration \(k\). Then the DGD algorithm progresses according to the following iterations:
Footnote 1: We say a matrix \(W=(w_{ij})\in\mathbb{R}^{n\times n}\) is an averaging matrix associated with \(\mathcal{G}\) if it is non-negative, symmetric (\(W=W^{T}\)), stochastic (\(W\mathbf{1}=\mathbf{1}\)), and satisfies \(w_{ij}=0\) if and only if \(\{i,j\}\notin\mathcal{E}\).
\[\mathbf{x}^{k+1}=\mathbf{W}\mathbf{x}^{k}-\alpha\nabla F(\mathbf{x}^{k}), \tag{2}\]
where \(\alpha>0\) is the step-size for a given initialization \(\mathbf{x}^{0}\).
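For concreteness, the synchronous update (2) can be simulated in a few lines of Python. This is a toy sketch, not the implementation evaluated in Section V; the ring graph, Metropolis weights, and quadratic local costs are illustrative assumptions.

```python
# Toy sketch of the synchronous DGD update (2); ring graph, Metropolis
# weights, and quadratic local costs are assumptions of this illustration.
import numpy as np

n, d = 4, 2
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]           # undirected ring graph
deg = np.zeros(n)
for i, j in edges:
    deg[i] += 1; deg[j] += 1

W = np.zeros((n, n))                               # averaging matrix (footnote 1)
for i, j in edges:
    W[i, j] = W[j, i] = 1.0 / (1.0 + max(deg[i], deg[j]))
W += np.diag(1.0 - W.sum(axis=1))                  # symmetric, stochastic

rng = np.random.default_rng(0)
A = rng.standard_normal((n, 5, d))                 # node i holds data (A[i], b[i])
b = rng.standard_normal((n, 5))
L = np.array([np.linalg.norm(A[i].T @ A[i], 2) for i in range(n)])

def grad(i, xi):                                   # gradient of f_i = 0.5||A_i x - b_i||^2
    return A[i].T @ (A[i] @ xi - b[i])

# step-size within the bound of Lemma 1: alpha <= min((1+lambda_n(W))/L, 1/L_bar)
alpha = min((1.0 + np.linalg.eigvalsh(W)[0]) / L.max(), 1.0 / L.mean())
x = np.zeros((n, d))
for _ in range(500):                               # x^{k+1} = W x^k - alpha grad F(x^k)
    x = W @ x - alpha * np.stack([grad(i, x[i]) for i in range(n)])
```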
As shown in [17], the DGD algorithm converges to a fixed point under reasonable assumptions. While the set of fixed points of DGD is not identical to the set of optimal solutions of Problem (1), it is possible to bound the difference between the two sets under the following assumptions:
**Assumption 1**: _Each \(f_{i}\) is proper closed convex, lower bounded, and \(L_{i}\)-smooth for some \(L_{i}>0\). Further, Problem (1) has a non-empty and bounded optimal solution set._
**Assumption 2**: _Each \(f_{i}\) is \(\mu_{i}\)-strongly convex._
It is now possible to show that DGD converges to a fixed point and to derive an optimality gap between the fixed point of DGD and the optimal solution. To this end, define
\[L =\max_{i\in\mathcal{V}}L_{i},\ \bar{L}=\frac{1}{n}\sum_{i\in \mathcal{V}}L_{i}, \tag{3}\] \[\beta =\max\{|\lambda_{2}(W)|,|\lambda_{n}(W)|\}, \tag{4}\]
where \(\beta\in(0,1)\) since \(\mathcal{G}\) is connected [20]. We first state the following lemma that follows similarly to Lemma 2 and Theorem 4 in [17].
**Lemma 1**: _Suppose that Assumption 1 holds. If_
\[\alpha\leq\min\left(\frac{1+\lambda_{n}(W)}{L},\frac{1}{\bar{L}}\right),\]
_then the fixed point set of DGD (2) is non-empty, and DGD converges to a point \(\mathbf{x}^{\star}\in\mathbb{R}^{nd}\) satisfying_
\[\|x_{i}^{\star}-\bar{x}^{\star}\|\leq\frac{\alpha\sqrt{C}}{1- \beta},\quad\forall i\in\mathcal{V},\] \[f(\bar{x}^{\star})-f^{\star}\leq\frac{\alpha CC_{1}}{1-\beta},\]
_where \(\bar{x}^{\star}=\frac{1}{n}\sum_{i\in\mathcal{V}}x_{i}^{\star}\), \(f^{\star}\) is the optimal value of (1), \(L\), \(\bar{L}\), and \(\beta\) are given in (3)-(4),_
\[C=2L(f^{\star}-\sum_{i\in\mathcal{V}}\inf_{x_{i}\in\mathbb{R}^{d}}f_{i}(x_{i})), \tag{5}\]
and \(C_{1}=2\sqrt{2}L\|\mathbf{x}^{\star}-\mathbf{1}_{n}\otimes z^{\star}\|\) with \(z^{\star}\) being an optimal solution of (1). If, in addition, Assumption 2 holds, then the fixed point is unique._
Proof: See Appendix A.
### _DGD using Adapt-Then-Combine Technique (DGD-ATC)_
DGD-ATC [18] is a variant of DGD that uses the adapt-then-combine technique and follows the update
\[\mathbf{x}^{k+1}=\mathbf{W}(\mathbf{x}^{k}-\alpha\nabla F(\mathbf{x}^{k})), \tag{6}\]
where \(\mathbf{W}\) is the same as in (2) and \(\alpha>0\) is the step-size.
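Continuing the toy sketch above, DGD-ATC merely reorders the two operations: each node first takes its local gradient step and then averages. Lemma 2 below additionally assumes \(W\succ\mathbf{0}\); the lazy weights \((I+W)/2\) used here are one standard way to enforce this (an assumption of the sketch).

```python
# DGD-ATC (6): "adapt, then combine".  Lemma 2 assumes W is positive definite;
# the lazy weights (I + W)/2 enforce this while preserving the zero pattern.
W_atc = (np.eye(n) + W) / 2.0                      # eigenvalues shifted into (0, 1]
x = np.zeros((n, d))
for _ in range(500):                               # x^{k+1} = W (x^k - alpha grad F(x^k))
    x = W_atc @ (x - alpha * np.stack([grad(i, x[i]) for i in range(n)]))
```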
We are not aware of any previous work that analyses the convergence of DGD-ATC with fixed step-sizes. In the lemma below, we show that DGD-ATC has an optimality gap similar to that of DGD. The convergence of DGD-ATC (6) follows as a special case of Theorem 2 in Section IV.
**Lemma 2**: _Suppose that Assumption 1 holds. If \(W\succ\mathbf{0}\), then the fixed-point set of DGD-ATC (6) is non-empty and for any fixed point \(\mathbf{x}^{\star}\in\mathbb{R}^{nd}\),_
\[\|x_{i}^{\star}-\bar{x}^{\star}\|\leq\frac{\alpha\sqrt{C}}{1- \beta},\quad\forall i\in\mathcal{V}, \tag{7}\] \[f(\bar{x}^{\star})-f^{\star}\leq\frac{\alpha C}{1-\beta}+\frac{ L\alpha^{2}C}{2(1-\beta)^{2}}, \tag{8}\]
_where \(\beta\) and \(C\) are defined in (4) and (5), respectively. If, in addition, Assumption 2 holds, then the fixed point is unique._
Proof: See Appendix B.
Note that Lemma 2 does not require any condition on the step-size beyond \(\alpha>0\).
## III Asynchronous distributed Algorithms
In this section, we introduce the asynchronous DGD and DGD-ATC algorithms. A key advantage of these algorithms is that they do not require global synchronization between nodes or a global clock. Both algorithms are analyzed in a setting where each node \(i\in\mathcal{V}\) is activated at discrete time points, and can update and share its local variables once it is activated. In addition, every node \(i\in\mathcal{V}\) has a buffer \(\mathcal{B}_{i}\) in which it can receive and store messages from neighbors all the time (even when it is inactive).
### _Asynchronous DGD_
In the asynchronous DGD, we let each node \(i\in\mathcal{V}\) hold \(x_{i}\in\mathbb{R}^{d}\) and \(x_{ij}\in\mathbb{R}^{d}\)\(\forall j\in\mathcal{N}_{i}\), where \(x_{i}\) is the current local iterate of node \(i\) and \(x_{ij}\) records the most recent \(x_{j}\) it received from node \(j\in\mathcal{N}_{i}\). Once activated, node \(i\) reads all \(x_{j}\) in the buffer \(\mathcal{B}_{i}\) and then sets \(x_{ij}=x_{j}\). If \(\mathcal{B}_{i}\) contains multiple \(x_{j}\)'s for a particular \(j\in\mathcal{N}_{i}\), then node \(i\) sets \(x_{ij}\) be the most recent received \(x_{j}\). Next, it updates \(x_{i}\) by
\[x_{i}\gets w_{ii}x_{i}+\sum_{j\in\mathcal{N}_{i}}w_{ij}x_{ij}- \alpha\nabla f_{i}(x_{i}) \tag{9}\]
and broadcasts the new \(x_{i}\) to all its neighbors. Once a node \(j\in\mathcal{N}_{i}\) receives \(x_{i}\), it stores \(x_{i}\) in its buffer \(\mathcal{B}_{j}\). A detailed implementation is given in Algorithm 1.
```
1:Initialization: All the nodes agree on \(\alpha>0\), and cooperatively set \(w_{ij}\)\(\forall\{i,j\}\in\mathcal{E}\).
2:Each node \(i\in\mathcal{V}\) chooses \(x_{i}\in\mathbb{R}^{d}\), creates a local buffer \(\mathcal{B}_{i}\), and shares \(x_{i}\) with all neighbors in \(\mathcal{N}_{i}\).
3:for each node \(i\in\mathcal{V}\) do
4: keep receiving \(x_{j}\) from neighbors and store \((x_{j},j)\) in \(\mathcal{B}_{i}\) until node \(i\) is activated.
5: set \(x_{ij}=x_{j}\)\(\forall(x_{j},j)\in\mathcal{B}_{i}\).
6: empty \(\mathcal{B}_{i}\).
7: update \(x_{i}\) according to (9).
8: send \(x_{i}\) to all neighbors \(j\in\mathcal{N}_{i}\).
9:Until a termination criterion is met.
```
**Algorithm 1** Asynchronous DGD
To describe the asynchronous DGD mathematically, we index the iterates by \(k\in\mathbb{N}_{0}\). The index \(k\) is increased by \(1\) whenever an update is performed on a local variable \(x_{i}\) of some node \(i\in\mathcal{V}\). The index \(k\) does not need to be known by the nodes; it is only introduced to order events in our theoretical analysis. We can now see that each \(x_{ij}\) in (9) is a delayed \(x_{j}\). Let \(\mathcal{K}_{i}\subseteq\mathbb{N}_{0}\) denote the set of iterations where node \(i\) updates its iterate. For notational convenience, we define \(\bar{\mathcal{N}}_{i}=\mathcal{N}_{i}\cup\{i\}\) for all \(i\in\mathcal{V}\). Then, the asynchronous DGD can be described as follows. For each \(i\in\mathcal{V}\) and \(k\in\mathbb{N}_{0}\),
\[x_{i}^{k+1}=\begin{cases}\sum_{j\in\bar{\mathcal{N}}_{i}}w_{ij}x_{j}^{s_{ij}^{k}}-\alpha\nabla f_{i}(x_{i}^{k}),&k\in\mathcal{K}_{i},\\ x_{i}^{k},&\text{otherwise},\end{cases} \tag{10}\]
where \(s_{ij}^{k}\in[0,k]\) for \(j\in\mathcal{N}_{i}\) is the iteration index of the most recent version of \(x_{j}\) available to node \(i\) at iteration \(k\) and \(s_{ii}^{k}=k\). If \(\mathcal{K}_{i}=\mathbb{N}_{0}\)\(\forall i\in\mathcal{V}\) and \(s_{ij}^{k}=k\)\(\forall\{i,j\}\in\mathcal{E},\forall k\in\mathbb{N}_{0}\), then (10) reduces to the synchronous DGD (2).
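The update (10) is easy to simulate serially; the sketch below reuses \(W\), \(L\), and \(\mathtt{grad}\) from the earlier toy example. The random activation sets and bounded random delays are modeling assumptions of this illustration rather than the paper's MPI setup.

```python
# Serial simulation of the asynchronous DGD update (10).  Random activation
# and a delay bound D_max are assumptions of this sketch only.
D_max = 5
alpha = min(W[i, i] / L[i] for i in range(n))      # safely inside (13)
hist = [np.zeros((n, d))]                          # hist[k] stores the iterate x^k
for k in range(2000):
    x_new = hist[-1].copy()
    active = rng.random(n) < 0.5                   # nodes with k in K_i
    for i in np.nonzero(active)[0]:
        acc = W[i, i] * hist[-1][i]                # own term: s_ii^k = k
        for j in range(n):
            if j != i and W[i, j] > 0:             # neighbors of node i
                s = max(0, k - rng.integers(0, D_max + 1))
                acc += W[i, j] * hist[s][j]        # delayed copy x_j^{s_ij^k}
        x_new[i] = acc - alpha * grad(i, hist[-1][i])
    hist.append(x_new)
```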
### _Asynchronous DGD-ATC_
To implement the asynchronous DGD-ATC, each node \(i\in\mathcal{V}\) holds \(x_{i}\in\mathbb{R}^{d}\), \(y_{i}\in\mathbb{R}^{d}\), and \(y_{ij}\in\mathbb{R}^{d}\) for \(j\in\mathcal{N}_{i}\), where \(x_{i}\) is the current local iterate of node \(i\), \(y_{i}=x_{i}-\alpha\nabla f_{i}(x_{i})\), and \(y_{ij}\), \(j\in\mathcal{N}_{i}\) records the most recent value of \(y_{j}\) it received from node \(j\). Once activated, node \(i\in\mathcal{V}\) first reads all \(y_{j}\) in its buffer \(\mathcal{B}_{i}\) and then sets \(y_{ij}=y_{j}\). If \(\mathcal{B}_{i}\) contains multiple values of \(y_{j}\) for a particular \(j\in\mathcal{N}_{i}\), then node \(i\) sets \(y_{ij}\) as the most recent \(y_{j}\) it has received. Next, it updates \(x_{i}\) by
\[x_{i}\gets w_{ii}y_{i}+\sum_{j\in\mathcal{N}_{i}}w_{ij}y_{ij}, \tag{11}\]
computes \(y_{i}=x_{i}-\alpha\nabla f_{i}(x_{i})\), and broadcasts \(y_{i}\) to all \(j\in\mathcal{N}_{i}\). Once a node \(j\in\mathcal{N}_{i}\) receives \(y_{i}\), it stores \(y_{i}\) in its buffer \(\mathcal{B}_{j}\). A detailed implementation of the asynchronous DGD-ATC is described in Algorithm 2.
Note that each \(y_{ij}\) in (11) is a delayed \(x_{j}-\alpha\nabla f_{j}(x_{j})\). Then, similar to (10), the asynchronous DGD-ATC can be described as follows. For each \(i\in\mathcal{V}\) and \(k\in\mathbb{N}_{0}\),
\[x_{i}^{k+1}{=}\begin{cases}\sum_{j\in\bar{\mathcal{N}_{i}}}w_{ij}(x_{j}^{s_{ij} ^{k}}-\alpha\nabla f_{j}(x_{j}^{s_{ij}^{k}})),&k\in\mathcal{K}_{i},\\ x_{i}^{k},&\text{otherwise},\end{cases} \tag{12}\]
where \(k\in\mathbb{N}_{0}\) is the iteration index, \(\mathcal{K}_{i}\subseteq\mathbb{N}_{0}\) denotes the set of iterations where node \(i\) updates \(x_{i}\), \(s_{ij}^{k}\in[0,k]\), \(j\in\mathcal{N}_{i}\) is the iteration index of the most recent \(y_{j}\) that node \(i\) has received from \(j\), and \(s_{ii}^{k}=k\). When \(\mathcal{K}_{i}=\mathbb{N}_{0}\ \forall i\in\mathcal{V}\) and \(s_{ij}^{k}=k\ \forall\{i,j\}\in\mathcal{E},\forall k\in\mathbb{N}_{0}\), the asynchronous DGD-ATC reduces to the synchronous DGD-ATC.
## IV Convergence Analysis
In this section, we analyse the convergence of the asynchronous DGD and the asynchronous DGD-ATC under two different models of asynchrony. Our first results allow for total asynchrony in the sense of Bertsekas and Tsitsiklis [23], _i.e._ the information delays \(k-s_{ij}^{k}\) may grow arbitrarily large but no node can cease to update and old information must eventually be purged from the system. More formally, we make the following assumption.
**Assumption 3** (total asynchrony): _The following holds:_
1. \(\mathcal{K}_{i}\) _is an infinite subset of_ \(\mathbb{N}_{0}\) _for each_ \(i\in\mathcal{V}\)_._
2. \(\lim_{k\to+\infty}s_{ij}^{k}=+\infty\) _for any_ \(i\in\mathcal{V}\) _and_ \(j\in\mathcal{N}_{i}\)_._
The following theorem provides delay-free step-size conditions that guarantee that the asynchronous DGD and DGD-ATC algorithms converge under total asynchrony.
**Theorem 1** (total asynchrony): _Suppose that Assumptions 1-3 hold. Also suppose that in the asynchronous DGD,_
\[\alpha\in\left(0,2\min_{i\in\mathcal{V}}\frac{w_{ii}}{L_{i}}\right), \tag{13}\]
_and in the asynchronous DGD-ATC,_
\[\alpha\in\left(0,\frac{2}{\max_{i\in\mathcal{V}}L_{i}}\right). \tag{14}\]
_Then, \(\{\mathbf{x}^{k}\}\) generated by either method converges to some element in the fixed point set of the synchronous counterpart._
Proof: See Appendix C.
Under total asynchrony, there is no lower bound on the update frequency of nodes and no upper bound on the information delays, and we are only able to give asymptotic convergence guarantees. To derive non-asymptotic convergence rate guarantees, we consider the more restrictive notion of partial asynchrony [23].
**Assumption 4** (partial asynchrony): _There exist positive integers \(B\) and \(D\) such that_
1. _For every_ \(i\in\mathcal{V}\) _and for every_ \(k\geq 0\)_, at least one element in the set_ \(\{k,\ldots,k+B\}\) _belongs to_ \(\mathcal{K}_{i}\)_._
2. _There holds_ \[k-D\leq s_{ij}^{k}\leq k\] _for all_ \(i\in\mathcal{V}\)_,_ \(j\in\mathcal{N}_{i}\)_, and_ \(k\in\mathcal{K}_{i}\)_._
In Assumption 4, \(B\) and \(D\) characterize the minimum update frequency and the maximal information delay, respectively. If \(B=D=0\), then Assumption 4 reduces to the synchronous scheme where all local variables \(x_{i}^{k}\ \forall i\in\mathcal{V}\) are instantaneously updated at every iteration \(k\in\mathbb{N}_{0}\).
To state our convergence result, we define the block-wise maximum norm for any \(\mathbf{x}=(x_{1}^{T},\ldots,x_{n}^{T})^{T}\in\mathbb{R}^{nd}\) as
\[\|\mathbf{x}\|_{\infty}^{b}=\max_{i\in\mathcal{V}}\|x_{i}\|.\]
The following theorem establishes linear convergence for the two algorithms under partial asynchrony.
**Theorem 2** (partial asynchrony): _Suppose that Assumptions 1, 2, 4 hold. Also suppose that (13) holds in the asynchronous DGD and (14) holds in the asynchronous DGD-ATC. Then, \(\{\mathbf{x}^{k}\}\) generated by either method satisfies_
\[\|\mathbf{x}^{k}-\mathbf{x}^{\star}\|_{\infty}^{b}\leq\rho^{\lfloor k/(B+D+1) \rfloor}\|\mathbf{x}^{0}-\mathbf{x}^{\star}\|_{\infty}^{b},\]
_where \(\mathbf{x}^{\star}\) is the fixed point of their synchronous counterpart and \(\rho\in(0,1)\). Specifically, for_
\[\text{async DGD}:\ \rho=\sqrt{1-\alpha\min_{i\in\mathcal{V}}\left( \mu_{i}\left(2-\frac{\alpha L_{i}}{w_{ii}}\right)\right)}, \tag{15}\] \[\text{async DGD-ATC}:\ \rho=\sqrt{1-\alpha\min_{i\in\mathcal{V}} \left(\mu_{i}(2-\alpha L_{i})\right)}. \tag{16}\]
Proof: See Appendix D.
By Lemmas 1-2 and Theorems 1-2, the two asynchronous methods can converge to an approximate optimum of Problem (1), where the optimality gap is given in Lemmas 1-2. Note that the range of step-sizes that guarantees convergence is independent of the degree of asynchrony in the system. The two algorithms converge even under total asynchrony, but the guarantees that we can give improve as the amount of asynchrony decreases.
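For the toy example above, the rates (15)-(16) and the resulting iteration counts can be evaluated directly. Taking \(\mu_{i}\) as the smallest eigenvalue of \(A_{i}^{T}A_{i}\) is an assumption that holds for generic data, so Assumption 2 applies.

```python
# Evaluating the linear rates (15)-(16) of Theorem 2 on the toy problem.
mu = np.array([np.linalg.eigvalsh(A[i].T @ A[i])[0] for i in range(n)])
w_diag = np.diag(W)
a_dgd = min(w_diag / L)                            # inside (13): (0, 2 min w_ii/L_i)
a_atc = 1.0 / L.max()                              # inside (14): (0, 2/max L_i)
rho_dgd = np.sqrt(1 - a_dgd * np.min(mu * (2 - a_dgd * L / w_diag)))   # (15)
rho_atc = np.sqrt(1 - a_atc * np.min(mu * (2 - a_atc * L)))            # (16)
B, D_delay = 3, 5                                  # partial-asynchrony constants
k_eps = (B + D_delay + 1) * np.log(1e-6) / np.log(rho_atc)  # iterations for 1e-6 contraction
```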
### _Comparison with Related Methods_
To the best of our knowledge, Theorem 1 provides the first convergence result for solving (1) with non-quadratic \(f_{i}\) on general networks under total asynchrony. Other works that consider total asynchrony include [15, 24]. In particular, the asynchronous coordinate descent method in [15] can solve problem (1) with quadratic objective functions over undirected, connected networks, and the asynchronous proximal gradient method in [24] can address (1) with non-quadratic \(f_{i}\)'s, but only considers star networks.
In order to distinguish our results from the state-of-the-art on asynchronous consensus optimization algorithms [2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 24], we categorize these works based on their step-sizes and compare them to our results.
_delay-dependent step-size_: [8, 9, 10] assume the existence of an upper bound on the information delay and use fixed parameters that rely on, and decrease with, the delay bound. Although the works [8, 9, 10] can achieve convergence to the exact optimum under partial asynchrony, which is more desirable than the inexact convergence of our algorithms, they suffer from difficult parameter determination and unnecessarily slow convergence for two reasons. Firstly, the delay bound is often unknown and hard to obtain in advance. Secondly, the delay bound is typically large, which leads to small step-sizes and further slows down the convergence process. Our numerical experiments in Section V suggest that the asynchronous DGD and DGD-ATC can significantly outperform PG-EXTRA [9] on the simulated problem. In addition, our algorithms can converge under total asynchrony, which is not allowed in [8, 9, 10].
_delay-free and non-diminishing step-size_: This category includes [2, 15, 24]. However, [2, 15] can only solve simple problems. The work [2] focuses on the consensus problem which is equivalent to problem (1) with \(f_{i}(x)\equiv 0\), and [15] can only deal with problem (1) with quadratic objective functions. The work [24] can solve problem (1) with non-quadratic objective functions, but requires star networks. In contrast, our results in Theorem 1-2 allow for non-quadratic objective functions and non-star communication networks, which is a substantial improvement.
_diminishing step-size_: [3, 4, 5, 6] consider diminishing step-sizes that are also delay-free. However, the diminishing step-sizes decrease rapidly and can lead to slow practical convergence. Moreover, [3, 4, 5, 6, 7] all focus on partial asynchrony, while our algorithms can converge under total asynchrony.
## V Numerical Experiments
We evaluate the practical performance of the asynchronous DGD and the asynchronous DGD-ATC on decentralized learning using the \(\ell_{2}\)-regularized logistic loss:
\[\operatorname*{minimize}_{x\in\mathbb{R}^{d}}\;\frac{1}{N}\sum_{i=1}^{N} \left(\log(1+e^{-b_{i}(a_{i}^{T}x)})+\frac{\lambda}{2}\|x\|^{2}\right), \tag{17}\]
where \(N\) is the number of samples, \(a_{i}\) is the feature vector of the \(i\)th sample, \(b_{i}\) is the corresponding label, and \(\lambda=10^{-3}\) is the regularization parameter. The experiments use the training sets of Covertype [25] and MNIST [26], summarized in Table I:
We compare our algorithms with the asynchronous PG-EXTRA [9]. We do not compare with the diminishing step-size algorithms in [3, 4, 5, 6] because they require Lipschitz continuous objective functions, which problem (17) does not satisfy, and the maximum allowable step-size in [7] is negligibly small (\(\leq 10^{-10}\) in our experiment setting). We set \(n=8\) and implement all the methods on a multi-core computer using the message-passing framework MPI4py [27], where each core serves as a node and the communication graph \(\mathcal{G}\) is displayed in Figure 1. Each node has roughly an equal number of samples. In the experiments, each node \(i\in\mathcal{V}\) is activated once its buffer \(\mathcal{B}_{i}\) is non-empty, and the delays are generated by real interactions between the nodes and not by any theoretical delay model. We set \(\alpha=\min_{i\in\mathcal{V}}w_{ii}/\max_{i\in\mathcal{V}}L_{i}\) in the asynchronous DGD and \(\alpha=1/\max_{i\in\mathcal{V}}L_{i}\) in the asynchronous DGD-ATC, which satisfy the conditions in Theorems 1-2. We fine-tune the parameters of the asynchronous PG-EXTRA within the theoretical ranges that guarantee convergence. The theoretical ranges involve the maximum delay, which is determined by recording the maximum observed delay during a 20-second run of the method.
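For reference, the smoothness constants \(L_{i}\) entering these step-size choices can be bounded as in the following sketch; the even per-node sample split and the per-node normalization by \(N\) are assumptions of this illustration.

```python
# Sketch of the constants behind the step-sizes for problem (17), assuming
# f_i(x) = (1/N) sum_{s in node i} [log(1+exp(-b_s a_s^T x)) + (lam/2)||x||^2].
import numpy as np

def node_smoothness(A_i, N_total, lam=1e-3):
    # The logistic term has curvature at most 1/4, so
    # L_i <= (||A_i||_2^2 / 4 + m_i * lam) / N_total, with m_i local samples.
    m_i = A_i.shape[0]
    return (np.linalg.norm(A_i, 2) ** 2 / 4.0 + m_i * lam) / N_total

# With L_i in hand and w_ii the diagonal averaging weights:
#   alpha_DGD = min_i w_ii / max_i L_i,   alpha_DGD-ATC = 1 / max_i L_i.
```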
We run all methods for \(20\) seconds and plot the training error \(f(\bar{x}^{k})-f^{\star}\) at the average iterate \(\bar{x}^{k}=\frac{1}{n}\sum_{i=1}^{n}x_{i}^{k}\) in Figure 2, where \(f^{\star}\) is the optimal value of (17). We can see that for both datasets, the asynchronous DGD-ATC outperforms the asynchronous DGD, and they both converge faster than the asynchronous PG-EXTRA. The slow convergence of the asynchronous PG-EXTRA may be because of its conservative parameters caused by the large delay, while our algorithms can converge under much more relaxed delay-free parameter conditions.
## VI Conclusion
We have investigated the asynchronous versions of two distributed algorithms, DGD and DGD-ATC, for solving consensus optimization problems. We first reviewed existing results on the optimality gap of DGD and developed a corresponding result for the optimality gap of DGD-ATC. Then, we derived _delay-free_ parameter conditions under which both asynchronous methods converge to the fixed-point set of their synchronous counterparts under total and partial asynchrony. Finally, we demonstrated the superior practical convergence of the two asynchronous algorithms via numerical experiments. Future work includes developing asynchronous algorithms with delay-free parameter conditions for other distributed optimization problems.
### _Proof of Lemma 1_
The results in [17] implicitly assume that DGD has a fixed point. However, this is not straightforward to establish in general. We therefore include a proof of the existence of a fixed point.
\begin{table}
\begin{tabular}{c|c|c} \hline Data set & sample number \(N\) & feature dimension \(d\) \\ \hline Covertype & 581012 & 54 \\ \hline MNIST & 60000 & 784 \\ \hline \end{tabular}
\end{table} TABLE I: Information about training data sets.
Fig. 1: Communication graph in simulation.
#### A.1 Non-empty fixed-point set of DGD (2) under Assumption 1
Let \(z^{\star}\) be an optimum to (1) and \(\mathbf{z}^{\star}=\mathbf{1}_{n}\otimes z^{\star}\). Define \(L_{\alpha}(\mathbf{x})=F(\mathbf{x})+\frac{\|\mathbf{x}\|_{I-\mathbf{W}}^{2}}{2\alpha}\). It can be verified that every minimum of \(L_{\alpha}\) is a fixed point of (2). Therefore, to show that the fixed-point set of (2) is non-empty, it suffices to show that the minimum of \(L_{\alpha}\) exists. Define
\[\mathcal{S}=\{\mathbf{x}:L_{\alpha}(\mathbf{x})\leq L_{\alpha}(\mathbf{z}^{ \star})\}.\]
Since \(\min_{\mathbf{x}\in\mathbb{R}^{nd}}L_{\alpha}(\mathbf{x})\) is equivalent to \(\min_{\mathbf{x}\in\mathcal{S}}L_{\alpha}(\mathbf{x})\), the minimum of \(L_{\alpha}\) exists if the optimum of the latter problem exists, which is guaranteed by the nonemptiness and compactness of \(\mathcal{S}\). Clearly, \(\mathcal{S}\) is non-empty since \(\mathbf{z}^{\star}\in\mathcal{S}\).
Below, we prove that \(\mathcal{S}\) is compact. To this end, fix \(\mathbf{x}\in\mathcal{S}\) and define \(h_{i}=\inf_{y\in\mathbb{R}^{d}}f_{i}(y)\ \forall i\in\mathcal{V}\). By the Lipschitz continuity of \(\nabla f_{i}\),
\[h_{i}\leq f_{i}(x_{i}-\frac{1}{L_{i}}\nabla f_{i}(x_{i}))\leq f_{i}(x_{i})- \frac{1}{2L_{i}}\|\nabla f_{i}(x_{i})\|^{2}.\]
Then, by letting \(h=\sum_{i\in\mathcal{V}}h_{i}\),
\[\|\nabla F(\mathbf{x})\|^{2}\leq 2L(F(\mathbf{x})-h)\leq 2L(f^{\star}-h), \tag{18}\]
where the last step is due to \(F(\mathbf{x})\leq L_{\alpha}(\mathbf{x})\leq L_{\alpha}(\mathbf{z}^{\star})= f^{\star}\). In addition, because \(F(\mathbf{x})\geq h\), we have
\[\|\mathbf{x}\|_{I-\mathbf{W}}^{2}=2\alpha(L_{\alpha}(\mathbf{x})-F(\mathbf{x} ))\leq 2\alpha(f^{\star}-h). \tag{19}\]
Let \(\bar{\mathbf{x}}=\mathbf{1}_{n}\otimes\frac{1}{n}\sum_{i\in\mathcal{V}}x_{i}\). Since \(\mathcal{G}\) is connected, we have \(\mathrm{Null}(I-\mathbf{W})=\{\mathbf{y}:y_{1}=\ldots=y_{n}\}\). Then by (19),
\[\|\mathbf{x}-\bar{\mathbf{x}}\|^{2}\leq\frac{\|\mathbf{x}\|_{I-\mathbf{W}}^{2 }}{\lambda_{\min}(I-\mathbf{W})}\leq\frac{2\alpha(f^{\star}-h)}{\lambda_{\min }(I-\mathbf{W})}, \tag{20}\]
where \(\lambda_{\min}(\cdot)\) represents the minimal positive eigenvalue. By the \(L\)-smoothness of \(f\),
\[\begin{split} F(\bar{\mathbf{x}})-F(\mathbf{x})&\leq \langle\nabla F(\mathbf{x}),\bar{\mathbf{x}}-\mathbf{x}\rangle+\frac{L}{2}\| \bar{\mathbf{x}}-\mathbf{x}\|^{2}\\ &\leq\frac{\|\nabla F(\mathbf{x})\|^{2}}{2}+\frac{L+1}{2}\|\bar{ \mathbf{x}}-\mathbf{x}\|^{2}.\end{split} \tag{21}\]
Substituting \(F(\mathbf{x})\leq f^{\star}\), (18), and (20) into (21), we have
\[F(\bar{\mathbf{x}})\leq C_{0}=f^{\star}+\left(L+\alpha(L+1)/\lambda_{\min}(I- \mathbf{W})\right)(f^{\star}-h).\]
In addition, by [28, Proposition B.9] and the bounded optimum set of \(f\), we have that every level set of \(f\) is bounded, which yields the compactness of
\[\{y\in\mathbb{R}^{d}:f(y)\leq C_{0}\}. \tag{22}\]
Due to the arbitrariness of \(\mathbf{x}\in\mathcal{S}\), we have that for any \(\mathbf{x}\in\mathcal{S}\), \(\frac{1}{n}\sum_{i=1}^{n}x_{i}\) belongs to the compact set (22) and (20) holds. Therefore, \(\mathcal{S}\) is compact. Combining all of the above, the fixed-point set of DGD (2) is non-empty.
#### A.2 Optimality gap and uniqueness of the fixed point
The optimality gap results can be directly obtained by letting \(x_{i}^{0}=z^{\star}\) in [17, Lemma 2, Theorem 4].
Suppose that each \(f_{i}\) is strongly convex. Then, the function \(L_{\alpha}\) is strongly convex and the minimum of \(L_{\alpha}\) is unique. Note that every fixed point of DGD (2) is a minimum of \(L_{\alpha}\). Therefore, the fixed point of DGD (2) is also unique.
### _Proof of Lemma 2_
Define \(\tilde{L}_{\alpha}(\mathbf{x})=F(\mathbf{x})+\frac{\|\mathbf{x}\|_{\mathbf{W}^{-1}-I}^{2}}{2\alpha}\). Note that since \(W\succ\mathbf{0}\), \(W\) is invertible, and so is \(\mathbf{W}\). Moreover, every minimum of \(\tilde{L}_{\alpha}\) is a fixed point of (6) and vice versa. By almost the same proof as that of Lemma 1, the minimum of \(\tilde{L}_{\alpha}\) exists, and it is unique if, in addition, each \(f_{i}\) is strongly convex. Therefore, the fixed-point set of (6) is non-empty, and if each \(f_{i}\) is strongly convex, then it is a singleton.
Next, we prove (7)-(8). Suppose \(\mathbf{x}^{\star}\) is a fixed point of (6) and \(z^{\star}\) is an optimum of (1). Let \(h\) be a lower bound of \(F(\mathbf{x})\), which exists due to Assumption 1. Similar to (18),
\[\|\nabla F(\mathbf{x}^{\star})\|^{2}\leq 2L(f^{\star}-h)=C. \tag{23}\]
Then, by \(\mathbf{x}^{\star}=\mathbf{W}(\mathbf{x}^{\star}-\alpha\nabla F(\mathbf{x}^{ \star}))\), we have
\[\|(\mathbf{W}^{-1}-I)\mathbf{x}^{\star}\|=\alpha\|\nabla F(\mathbf{x}^{\star}) \|\leq\alpha\sqrt{C}. \tag{24}\]
Moreover, \(\mathrm{Null}(\mathbf{W}^{-1}-I)=\{\mathbf{y}:y_{1}=\ldots=y_{n}\}\) and \(\lambda_{\min}(\mathbf{W}^{-1}-I)=\frac{1}{\lambda_{2}(W)}-1=\frac{1}{\beta}-1\). Then, by letting \(\bar{\mathbf{x}}^{\star}=\mathbf{1}_{n}\otimes\bar{x}^{\star}\) and by (24), we have
\[\begin{split}&\|x_{i}^{\star}-\bar{x}^{\star}\|\leq\|\mathbf{x}^{ \star}-\bar{\mathbf{x}}^{\star}\|\\ \leq&\frac{\beta\|(\mathbf{W}^{-1}\!-\!I)\mathbf{x}^{ \star}\|}{1-\beta}\leq\frac{\alpha\sqrt{C}}{1-\beta},\end{split} \tag{25}\]
i.e., (7) holds.

Fig. 2: Convergence on logistic regression
Let \(z^{\star}\) be an optimum to (1) and \(\mathbf{z}^{\star}=\mathbf{1}_{n}\otimes z^{\star}\). Similar to (21), we have that for any \(\eta>0\),
\[F(\bar{\mathbf{x}})-F(\mathbf{x})\leq\frac{\|\nabla F(\mathbf{x})\|^{2}}{2\eta} +\frac{L+\eta}{2}\|\bar{\mathbf{x}}-\mathbf{x}\|^{2}. \tag{26}\]
By (26) with \(\mathbf{x}=\mathbf{x}^{\star}\) and \(\eta=(1-\beta)/\alpha\), (23), and (25),
\[f(\bar{x}^{\star})=F(\mathbf{1}_{n}\otimes\bar{x}^{\star}) \tag{27}\] \[\leq F(\mathbf{x}^{\star})+\frac{\alpha C}{1-\beta}+\frac{L\alpha^{2 }C}{2(1-\beta)^{2}}.\]
In addition,
\[F(\mathbf{x}^{\star})\leq\tilde{L}_{\alpha}(\mathbf{x}^{\star})\leq\tilde{L}_ {\alpha}(\mathbf{z}^{\star})=f^{\star}. \tag{28}\]
Substituting (28) into (27) yields (8), which completes the proof.
### _Proof of Theorem 1_
The proof consists of two steps. Step 1 rewrites the two methods in a unified form and introduces a convergence theorem for this unified form. Step 2 proves that the two asynchronous methods satisfy the conditions of the convergence theorem.
**Step 1: a unified description for the asynchronous DGD and the asynchronous DGD-ATC.** Both DGD (2) and DGD-ATC (6) can be described by the general fixed-point update:
\[\mathbf{x}^{k+1}=\mathrm{T}(\mathbf{x}^{k}), \tag{29}\]
where \(\mathrm{T}:\mathbb{R}^{nd}\rightarrow\mathbb{R}^{nd}\) is a function and
**DGD:** \[\mathrm{T}(\mathbf{x})=\mathbf{W}\mathbf{x}-\alpha\nabla F( \mathbf{x}),\] (30)
**DGD-ATC:** \[\mathrm{T}(\mathbf{x})=\mathbf{W}(\mathbf{x}-\alpha\nabla F( \mathbf{x})).\] (31)
In addition, let \(\mathrm{T}_{i}:\mathbb{R}^{nd}\rightarrow\mathbb{R}^{d}\) be the \(i\)th block of \(\mathrm{T}\) for any \(i\in\mathcal{V}\) and consider the asynchronous version of (29):
\[x_{i}^{k+1}=\begin{cases}\mathrm{T}_{i}(\mathbf{z}_{i}^{k}),&k\in\mathcal{K}_{ i},\\ x_{i}^{k},&\mathrm{otherwise},\end{cases} \tag{32}\]
where \(\mathbf{z}_{i}^{k}=(x_{1}^{t_{i1}^{k}},\ldots,x_{n}^{t_{in}^{k}})\) for some non-negative integers \(t_{ij}^{k}\). By letting
\[t_{ij}^{k}=\begin{cases}s_{ij}^{k},&j\in\bar{\mathcal{N}}_{i},\\ k,&\mathrm{otherwise},\end{cases}\forall i\in\mathcal{V},\ k\in\mathcal{K}_{i}, \tag{33}\]
(32) with \(\mathrm{T}\) in (30) and (31) describes the asynchronous DGD and the asynchronous DGD-ATC, respectively.
For the asynchronous update (32), [29] presents the following convergence result for pseudo-contractive operators \(\mathrm{T}\), i.e., operators satisfying, for some \(\rho\in(0,1)\),
\[\|\,\mathrm{T}(\mathbf{x})-\mathbf{x}^{\star}\|_{\infty}^{b}\leq\rho\|\mathbf{ x}-\mathbf{x}^{\star}\|_{\infty}^{b},\forall\mathbf{x}\in\mathbb{R}^{nd}, \mathbf{x}^{\star}\in\mathrm{Fix}\,\mathrm{T}, \tag{34}\]
where \(\mathrm{Fix}\,\mathrm{T}\) is the fixed-point set of \(\mathrm{T}\).
**Lemma 3** (Theorem 3.20, [29]): _Suppose that Assumption 3 holds and \(0\in\mathcal{K}_{i}\)\(\forall i\in\mathcal{V}\). If (34) holds for some \(\rho\in(0,1)\), then \(\{\mathbf{x}^{k}\}\) generated by the iteration (32) converges asymptotically to the unique fixed point of \(\mathrm{T}\)._
Although Theorem 3.20 in [29] assumes
\[0\in\mathcal{K}_{i},\forall i\in\mathcal{V} \tag{35}\]
for simplicity of presentation, the convergence still holds without (35). With Lemma 3, to show Theorem 1, it suffices to show the pseudo-contractivity (34) for \(\mathrm{T}\) in (30) and (31).
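Before the derivations, the property (34) can also be spot-checked numerically. The sketch below continues the earlier toy Python example (whose generic quadratic costs are strongly convex, so Assumption 2 holds) and uses a step-size inside (13).

```python
# Numerical spot check of the pseudo-contraction property (34) for the
# DGD operator (30) on the toy problem.
alpha_c = min(np.diag(W) / L)                      # inside (0, 2 min_i w_ii/L_i)

def T_dgd(z):                                      # the operator in (30)
    return W @ z - alpha_c * np.stack([grad(i, z[i]) for i in range(n)])

z = np.zeros((n, d))
for _ in range(20000):                             # approximate the fixed point
    z = T_dgd(z)

bnorm = lambda v: np.linalg.norm(v, axis=1).max()  # block-wise max norm
trial = rng.standard_normal((n, d))
assert bnorm(T_dgd(trial) - z) <= bnorm(trial - z) + 1e-9
```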
**Step 2: Proof of pseudo-contractivity** (34). Let \(\rho_{\mathrm{c}}\) and \(\rho_{\mathrm{a}}\) be the value in (15) and (16), respectively. Below, we show (34) for the two operators in (30) and (31).
**1) \(\mathrm{T}\) in (30)**: For any \(i\in\mathcal{V}\), since \(x_{i}^{\star}=\mathrm{T}_{i}(\mathbf{x}^{\star})\),
\[\|\,\mathrm{T}_{i}(\mathbf{x})-x_{i}^{\star}\|^{2}=\|\,\mathrm{T}_ {i}(\mathbf{x})-\mathrm{T}_{i}(\mathbf{x}^{\star})\|^{2} \tag{36}\] \[= \|\sum_{j\in\mathcal{N}_{i}}w_{ij}(x_{j}-x_{j}^{\star})+\] \[\quad w_{ii}(x_{i}-x_{i}^{\star}-\frac{\alpha}{w_{ii}}(\nabla f_{ i}(x_{i})-\nabla f_{i}(x_{i}^{\star})))\|^{2}\] \[\leq \sum_{j\in\mathcal{N}_{i}}w_{ij}\|x_{j}-x_{j}^{\star}\|^{2}+\] \[\quad w_{ii}\|x_{i}-x_{i}^{\star}-\frac{\alpha}{w_{ii}}(\nabla f_ {i}(x_{i})-\nabla f_{i}(x_{i}^{\star}))\|^{2},\]
where the last step uses Jensen's inequality on the norm square. Since each \(f_{i}\) is \(\mu_{i}\)-strongly convex and \(\nabla f_{i}\) is Lipschitz continuous, by [30, Equation (2.1.8)],
\[\langle\nabla f_{i}(x_{i})-\nabla f_{i}(x_{i}^{\star}),x_{i}-x_{i }^{\star}\rangle\geq\mu_{i}\|x_{i}-x_{i}^{\star}\|^{2}, \tag{37}\] \[\langle\nabla f_{i}(x_{i})-\nabla f_{i}(x_{i}^{\star}),x_{i}-x_{i }^{\star}\rangle\geq\frac{\|\nabla f_{i}(x_{i})-\nabla f_{i}(x_{i}^{\star})\|^{2 }}{L_{i}}. \tag{38}\]
Then,
\[\|x_{i}-x_{i}^{\star}-\frac{\alpha}{w_{ii}}(\nabla f_{i}(x_{i})- \nabla f_{i}(x_{i}^{\star}))\|^{2} \tag{39}\] \[= \|x_{i}-x_{i}^{\star}\|^{2}-2\frac{\alpha}{w_{ii}}\langle\nabla f_ {i}(x_{i})-\nabla f_{i}(x_{i}^{\star}),x_{i}-x_{i}^{\star}\rangle\] \[+(\frac{\alpha}{w_{ii}})^{2}\|\nabla f_{i}(x_{i})-\nabla f_{i}(x_{i }^{\star})\|^{2}\] \[\stackrel{{(38)}}{{\leq}} \|x_{i}-x_{i}^{\star}\|^{2}-\frac{\alpha}{w_{ii}}(2-\frac{L_{i} \alpha}{w_{ii}})\langle\nabla f_{i}(x_{i})\!-\!\nabla f_{i}(x_{i}^{\star}),x_{i}- x_{i}^{\star}\rangle\] \[\stackrel{{(37)}}{{\leq}} (1-\frac{\alpha}{w_{ii}}(2-\frac{L_{i}\alpha}{w_{ii}})\mu_{i})\|x_{i }-x_{i}^{\star}\|^{2}.\]
Substituting (39) into (36) yields
\[\|\,\mathrm{T}_{i}(\mathbf{x})-x_{i}^{\star}\|^{2}\leq\rho_{\mathrm{c}}^{2}(\| \mathbf{x}-\mathbf{x}^{\star}\|_{\infty}^{b})^{2},\]
which leads to (34) with \(\rho=\rho_{\mathrm{c}}\).
**2) \(\mathrm{T}\) in (31)**: For any \(i\in\mathcal{V}\), since \(x_{i}^{\star}=\mathrm{T}_{i}(\mathbf{x}^{\star})\),
\[\|\,\mathrm{T}_{i}(\mathbf{x})-x_{i}^{\star}\|^{2}=\|\,\mathrm{T}_ {i}(\mathbf{x})-\mathrm{T}_{i}(\mathbf{x}^{\star})\|^{2} \tag{40}\] \[= \|\sum_{j\in\bar{\mathcal{N}}_{i}}w_{ij}(x_{j}-x_{j}^{\star}- \alpha(\nabla f_{j}(x_{j})-\nabla f_{j}(x_{j}^{\star})))\|^{2}\] \[\leq \sum_{j\in\bar{\mathcal{N}}_{i}}w_{ij}\|x_{j}-x_{j}^{\star}- \alpha(\nabla f_{j}(x_{j})-\nabla f_{j}(x_{j}^{\star}))\|^{2},\]
where the last step uses Jensen's inequality on the norm square. Similar to (39) with \(w_{ii}=1\),

\[\|x_{j}-x_{j}^{\star}-\alpha(\nabla f_{j}(x_{j})-\nabla f_{j}(x_{j}^{\star}))\|^{2}\leq\left(1-\alpha\mu_{j}(2-\alpha L_{j})\right)\|x_{j}-x_{j}^{\star}\|^{2}.\]
Substituting the above equation into (40) yields
\[\|\operatorname{T}_{i}(\mathbf{x})-x_{i}^{\star}\|^{2}\leq\rho_{\text{a}}^{2}(\| \mathbf{x}-\mathbf{x}^{\star}\|_{\infty}^{b})^{2},\]
which results in (34) with \(\rho=\rho_{\text{a}}\) and completes the proof.
### _Proof of Theorem 2_
The proof uses Theorem 3.21 in [29].
**Lemma 4** (Theorem 3.21, [29]): _Suppose that Assumption 4 and (35) hold. If (34) holds for some \(\rho\in(0,1)\), then \(\{\mathbf{x}^{k}\}\) generated by the asynchronous iteration (32) satisfies_
\[\|\mathbf{x}^{k}-\mathbf{x}^{\star}\|_{\infty}^{b}\leq\rho^{\frac{k}{2B+D+1}}\|\mathbf{x}^{0}-\mathbf{x}^{\star}\|_{\infty}^{b}. \tag{41}\]
_Note that in **Step 2** of Appendix C, we have shown the pseudo-contractivity for both \(\mathrm{T}\) in (30) and \(\mathrm{T}\) in (31). In addition, although we do not assume (35), the proof of [29, Theorem 3.21] still holds, and the convergence rate (41) becomes_
\[\|\mathbf{x}^{k}-\mathbf{x}^{\star}\|_{\infty}^{b}\leq\rho^{\lfloor\frac{k}{B +D+1}\rfloor}\|\mathbf{x}^{0}-\mathbf{x}^{\star}\|_{\infty}^{b}.\]
_This completes the proof._
|
2309.05758 | Zero Metallicity with Zero CPU Hours: Masses of the First Stars on the
Laptop | We develop an analytic model for the mass of the first stars forming in the
center of primordial gas clouds as a function of host halo mass, redshift, and
degree of rotation. The model is based on the estimation of key timescales
determining the following three processes: the collapse of the gas cloud, the
accretion onto the protostellar core, and the radiative feedback of the
protostellar core. The final stellar mass is determined by the total mass
accreted until the radiative feedback halts the accretion. The analytic
estimation, motivated by the result of the full numerical simulations, leads to
algebraic expressions allowing an extremely fast execution. Despite its
simplicity, the model reproduces the stellar mass scale and its parameter
dependences observed in state-of-the-art cosmological zoom-in simulations. This
work clarifies the basic physical principles undergirding such numerical
treatments and provides a path to efficiently calibrating numerical predictions
against eventual observations of the first stars. | James Gurian, Donghui Jeong, Boyuan Liu | 2023-09-11T18:37:07Z | http://arxiv.org/abs/2309.05758v2 | # Zero Metallicity with Zero CPU Hours: Masses of the First Stars on the Laptop
###### Abstract
We develop an analytic model for the mass of the first stars forming in the center of primordial gas clouds as a function of host halo mass, redshift, and degree of rotation. The model is based on the estimation of key timescales determining the following three processes: the collapse of the gas cloud, the accretion onto the protostellar core, and the radiative feedback of the protostellar core. The final stellar mass is determined by the total mass accreted until the radiative feedback halts the accretion. The analytic estimation, motivated by the result of the full numerical simulations, leads to algebraic expressions allowing an extremely fast execution. Despite its simplicity, the model reproduces the stellar mass scale and its parameter dependences observed in state-of-the-art cosmological zoom-in simulations. This work clarifies the basic physical principles undergirding such numerical treatments and provides a path to efficiently calibrating numerical predictions against eventual observations of the first stars.
Cosmology (343), Population III stars (1285), Star formation (1569)
## 1 Introduction
The primordial universe contained only trace metals from big bang nucleosynthesis (BBN; Steigman, 2007). The first (Population III, hereafter Pop. III) stars must have formed from this pristine gas, which was cooled principally by atomic and molecular hydrogen. Still, no one has observed a Pop. III star, so our knowledge of this process comes entirely from simulations and analytic estimates (see Bromm & Larson, 2004; Bromm, 2013; Haemmerle et al., 2020; Klessen & Glover, 2023, for reviews).
Due to the lack of observational comparison and complex physics involved, the Pop. III star formation process is still an uncertain and active area of research. In the hierarchical structure formation scheme, these stars first formed in the universe when halos massive enough to cool and collapse their gas began to virialize. This process began at redshift \(z\sim 30\) and in halos of masses \(\sim 10^{5}\)-\(10^{6}\,\mathrm{M}_{\odot}\)(Tegmark et al., 1997; Bromm & Larson, 2004). The Pop. III stars ended the dark ages and began the reionization and metal enrichment of the universe, and the turn-on of Pop. III stars marks the epoch when, for the first time since BBN, nuclear physics becomes relevant in the universe.
In the standard \(\Lambda\)CDM cosmology, the characteristic abundance of the mini-halos which could host Pop. III stars is \(\sim 100\,\mathrm{cMpc}^{-3}\) at redshift \(20-30\)(Yoshida et al., 2006). Here, cMpc is the comoving megaparsec. Hence, to develop a statistical sample of tens to hundreds of primordial star-forming clouds, simulations must consider a volume of \(\sim\mathrm{cMpc}^{3}\), while the characteristic scale of the proto-stellar accretion disk is \(\sim 100\mathrm{AU}\). This corresponds to a dynamic range of some ten orders of magnitude, which requires zoom-in simulations. The zoom-in simulations beginning from cosmological initial conditions (Susa et al., 2014; Hirano et al., 2015; Stacy et al., 2016; Susa, 2019) reveal that Pop. III stars form in small star clusters, with relatively more massive stars (between a few tens and a few thousand solar mass) forming in the center of the collapsing cloud and several lower mass companions which originate from fragmentation in the star-forming disk, as illustrated in Figure 1 of Liu et al. (2020).
Studying the primordial star-cluster systems' formation and evolution demands three-dimensional radiation hydrodynamics simulations in a dense, optically thick environment. Due to computational limits, simulations often do not evolve these primordial clusters to their full
maturity, when accretion is shut off by (proto-)stellar feedback, and can only report on the trends observed during the simulation time span. Moreover, since these simulations typically select the first star-forming cloud to form in the cosmological volume as the zoom-in region, they are subject to sampling bias and variance (Stacy & Bromm, 2013).
On the other hand, tracking the evolution of a single proto-star in a collapsing cloud including accretion, feedback, and radiative transfer is numerically tractable (Hosokawa et al., 2011, 2012; Hirano et al., 2014; Hosokawa et al., 2016; Sharda et al., 2020, 2021; Latif et al., 2022). Hirano et al. (2014) attempted to characterize the population of primordial stars by beginning with a cosmological volume of \((2\,{\rm Mpc}/h)^{3}\), resolving \(\sim 100\) individual star-forming clouds. After its formation, the density of each cloud was azimuthally averaged, then used to initialize 2D radiation hydrodynamics simulations, sidestepping the most computationally demanding part of the problem. Note that the azimuthal averaging _smears out_ any possible small-scale fragmentation and does not resolve the formation and evolution of stellar multiples. However, it allows the central, most massive star in each halo to be evolved until accretion is shut off by feedback. Hirano et al. (2014) found that in their simulations the mass of the central star in each gravitationally unstable cloud is tightly correlated with the collapse timescale of the star-forming cloud. They also found that a long collapse timescale (associated with a low halo mass) allows for the production of HD molecules, permitting cooling to lower temperatures and promoting the formation of low-mass stars. The similar study of Susa et al. (2014) did not include HD chemistry, and found no such correlations.
Here, we expand upon Hirano et al. (2014) by deriving the relationships revealed by the simulations. We show that a simple analytic model based on algebraic timescale arguments can capture the formation of the most massive Pop. III star in the center of a collapsing gas cloud. In particular, this model reproduces the stellar-mass distribution of the sophisticated numerical treatment of Hirano et al. (2014). In the process, we clarify the most important physics underlying Pop. III star formation.
Such an analytic method provides a theoretical "handle" on the Pop. III stellar mass as a function of the mass and formation redshift of the hosting halo and the rotation parameter. As observations of stellar populations at high redshift become available using, for example, the JWST satellite, such a handle will be necessary to efficiently connect the simulated universe to the physical Universe and to extract cosmological information from the Pop. III observables. We draw upon previous analytic studies (McKee & Tan, 2008; Stahler et al., 1986), generalizing those results to include the effect of the host halo mass and formation redshift on the final stellar mass while simplifying the arguments, wherever possible, to simple algebraic relations.
Finally, we contrast the Pop. III star modeling presented here to the modeling of the later generations (Population I and II). The project of deriving the mass of Pop. III stars _ab initio_ is saved from hopelessness only by the simplicity of their environment. These stars form from the first clouds to become gravitationally unstable as halos which exceed the cosmological Jeans mass begin to form. Because the first stars arise from the first baryonic structures to depart from the underlying dark matter distribution, there is reason to believe that their properties can be inferred from the well-understood evolution of the dark matter distribution and basic timescale arguments. For this reason, environmental effects can be described in terms of a small number of parameters at the scale of the halo or star-forming cloud. That is, deriving the population statistics of the first stars does not require resolving the detailed gas physics, feedback, and radiative transfer in the star-forming clouds over a cosmological volume because the initial conditions can be accurately described using only a few parameters. We defer a detailed study of the environmental effects such as the Lyman-Werner radiation (e.g. Nebrin et al., 2023) and the relative velocity between the baryons and dark matter (e.g. Nakazato et al., 2022) to a future study.
The paper is organized as follows. In Sec. 2 we present a basic description of the dynamics of the collapsing gas, which informs our determination of the mass of the gravitationally unstable cloud. In Sec. 3 we describe the chemical-thermal network for the primordial gas and compare our results with standard one-zone calculations. In Sec. 4 we develop relations governing the growth of the central star and evaluate the final stellar mass over a range of environmental conditions. We conclude with a discussion of the context of this work and possible future avenues for research.
## 2 Collapse dynamics and cloud mass
The evolution of a primordial gas cloud can be broadly understood through the relations between the following timescales:
* The free-fall timescale \(t_{ff}=\sqrt{\frac{3\pi}{32G\rho}}\), where \(\rho\) is the total, dark matter and baryon, density, which is the time for a test particle accelerated by gravity to fall to the center of the cloud.
* The sound crossing timescale \(t_{s}=r/c_{s}\) where \(r\) is the radius of the cloud and \(c_{s}=\sqrt{\frac{\gamma k_{B}T}{\mu m_{P}}}\) is the adiabatic sound speed with the adiabatic index \(\gamma\), temperature \(T\) and mean molecular weight \(\mu\) of the cloud, which is the time for a pressure wave to propagate across the cloud.
* The cooling timescale \(t_{\mathcal{C}}=\mathcal{E}_{th}/\mathcal{C}\), where \(\mathcal{E}_{th}\) is the thermal energy density of the cloud ([energy][volume]\({}^{-1}\)) and \(\mathcal{C}\) the volumetric cooling rate ([energy][time]\({}^{-1}\)[volume]\({}^{-1}\)), which is the time for a gas cloud to lose its thermal energy by cooling (see the sketch following this list). Note that the cooling rate depends on the density, composition, and temperature of the cloud.
* The H\({}_{2}\) formation timescale \(t_{\mathrm{H}_{2}}\), which is the time for enough molecular hydrogen to be generated. Although we use \(t_{\mathrm{H}_{2}}\) only in qualitative discussions in this section, we will provide a more quantitative definition of \(t_{\mathrm{H}_{2}}\) in Sec. 4.1.
* The collapse timescale \(t_{\mathrm{col}}\) is an e-folding time for the cloud's density, which will be determined from the preceding timescales.
Here's the qualitative sketch of how these timescales interplay to determine the collapsing gas cloud's mass.
The gas in a primordial halo is initially near virial equilibrium, which is equivalent (up to factors of order unity) to the boundary (\(t_{ff}=t_{s}\)) of the Jeans stability condition \(t_{ff}>t_{s}\). This stability condition can also be translated to the Jean's mass,
\[M_{J} =\frac{4\pi\rho\,(c_{s}t_{ff})^{3}}{3} \tag{1}\] \[\approx 1.44\left(\frac{k_{B}T}{\mu m_{P}G}\right)^{3/2}\rho^{-1/2}, \tag{2}\]
with \(k_{B}\) Boltzmann's constant, \(T\) the gas temperature, \(\mu\) the mean molecular weight, \(m_{P}\) the proton mass, \(G\) Newton's constant, and \(\rho\) the gas density. In the second line, we have taken \(\gamma=5/3\) for a monatomic ideal gas. Note that different derivations of the Jeans criterion yield different values for the order-unity prefactor, here 1.44. Setting \(M_{J}\) approximately equal to \(M_{H}\), the halo mass, determines the virial temperature \(T\) of the halo at its formation, when \(\rho\simeq 178\bar{\rho}_{\mathrm{m}}(z)\), where \(\bar{\rho}_{\mathrm{m}}(z)\) is the background matter (dark matter and baryon) density at redshift \(z\).
Collapse from this marginally stable configuration at the virial equilibrium requires cooling, which increases the sound crossing time by reducing the temperature. The dominant coolant of primordial clouds, which consist of pristine gas and whose virial temperature is less than \(\sim 10^{4}\) K, is molecular hydrogen. The cosmological molecular hydrogen fraction \(x_{\mathrm{H}_{2},0}\equiv n_{\mathrm{H}_{2}}/n_{\mathrm{H}}\) that sets the cloud's initial molecular hydrogen abundance is insufficient to cool the cloud. Therefore, cooling occurs only after enough molecular hydrogen piles up, and the collapse timescale is ultimately determined by the H\({}_{2}\) production timescale \(t_{\mathrm{H}_{2}}\).
The ability of pressure to inhibit collapse depends not only on the temperature but also on the mass of the gas cloud: pressure support can be overcome either by cooling (reducing \(M_{J}\)) or by growing more massive (\(M_{\mathrm{cloud}}\) exceeding a fixed \(M_{J}\)). This fact is responsible for the "loitering phase" (Bromm et al., 2002), which is a pause in the condensation of the gas at the density where cooling becomes inefficient. At a critical density \(n_{\mathrm{crit}}\sim 10^{3}\,\mathrm{cm}^{-3}\), collisional de-excitation reduces the efficiency of H\({}_{2}\) cooling. Beyond this density, collapse can continue only once enough gas particles have condensed into a cloud at this "loitering" density to exceed the corresponding Jeans mass. Molecular hydrogen alone can cool the gas to \(\sim 200\,\mathrm{K}\) at \(n_{\mathrm{crit}}\) (see, for example, Fig. 1). The Jeans mass at this temperature and \(n_{\mathrm{crit}}\) is \(\sim 1000\,M_{\odot}\), which simulations confirm is the approximate mass scale of the gravitationally unstable clouds when the cloud is cooled by H\({}_{2}\) alone (Bromm et al., 2002).
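This mass scale is easy to verify numerically from Eq. (2). The sketch below (continuing the cgs example, with the gas-only density approximation \(\rho\approx\mu m_{P}n\)) returns \(\sim 1.5\times 10^{3}\,M_{\odot}\) at \(T=200\,\mathrm{K}\) and \(n=10^{3}\,\mathrm{cm}^{-3}\).

```python
# Checking the loitering-phase mass scale: the Jeans mass (2) at
# T ~ 200 K and n_crit ~ 1e3 cm^-3 (gas-only density, an approximation).
def M_jeans(T, n, mu=1.22):
    rho = mu * m_p * n                             # approximate mass density
    return 1.44 * (k_B * T / (mu * m_p * G)) ** 1.5 / np.sqrt(rho)

M_sun = 1.989e33
print(M_jeans(200.0, 1e3) / M_sun)                 # ~1.5e3, i.e. ~1000 M_sun
```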
However, if appreciable HD is formed, the gas can reach temperatures as low as \(\sim 50\,\mathrm{K}\), which suppresses the Jeans mass at the critical density and hence the mass of the star-forming cloud. At temperatures above \(\sim 500\,\mathrm{K}\), HD is efficiently converted into H\({}_{2}\), so the HD formation begins at \(T=500\,\mathrm{K}\). When collapse occurs rapidly (on the free-fall timescale \(t_{ff}\), for example) the time between \(T=500\,\mathrm{K}\) and the end of the loitering phase at \(T=200\,\mathrm{K}\) is too brief to form significant HD. However, if the collapse occurs more slowly, with a long enough H\({}_{2}\) production timescale \(t_{\mathrm{H}_{2}}\), then appreciable HD can form, allowing for further cooling and hence a lower cloud mass. The mass of the collapsing cloud in turn determines the mass scale of the central protostar. Environmental effects including mergers and shocks can influence the efficiency of HD cooling (Magnus et al., 2023; Johnson & Bromm, 2006). In this work we consider only the case where HD is formed due to a delay in the collapse according to the intrinsic properties of the halo: its mass and its redshift, which are assumed to capture all environmental effects.
Our analytical estimation is based on a so-called "one-zone" calculation, which follows the chemical-thermal evolution of a uniform density parcel of gas evolving on a free-fall timescale. For the typical density profile of primordial gas clouds, where the density profile \(\rho(r)\) decreases as a function of radius, the free-fall timescale of the inner part of the cloud is much shorter than that of the outskirts. The inner part therefore passes through more e-folding timescales in a fixed elapsed time, and one can think of this inner part as the later evolutionary stage of the one-zone calculation. We therefore expect that the time evolution of the one-zone calculation describes the radial profile of the cloud at a fixed time, and indeed the radial profile of full three-dimensional simulations shows a clear correspondence with the results of one-zone calculations (Yoshida et al., 2006).
The thermal evolution of such a gas parcel subject to radiative cooling and adiabatic heating is described by:
\[\frac{dT}{dt}=(\gamma-1)\left(\frac{\dot{n}}{n}T-\frac{\mathcal{C}(T,\vec{n})}{k_{B}n}\right), \tag{3.1}\]
where \(T\) is the temperature, \(\gamma\) the adiabatic index, \(k_{B}\) Boltzmann's constant, \(\vec{n}\) the number densities of the various species, and \(n\) the total nucleon density (i.e., including helium). The first term in the parentheses describes compressional heating due to adiabatic collapse (\(TV^{\gamma-1}={\rm const}\)).
One-zone calculations do not self-consistently solve for gravity: the density evolution \(\dot{n}\) must be independently specified, and we use a generic parameterization in terms of the collapse timescale \(t_{\rm col}\) as
\[\frac{dn}{dt}=-3n\frac{\dot{r}}{r}\approx\frac{n}{t_{\rm col}(n)}, \tag{3.2}\]
which comports with the definition of \(t_{\rm col}\). Then, the temperature evolution can be rewritten in terms of the relevant timescales, the collapse timescale \(t_{\rm col}\) and the cooling timescale \(t_{\mathcal{C}}\):
\[\frac{d\log T}{d\log n}=(\gamma-1)\left[1-\frac{t_{\rm col}(n)}{(\gamma-1)t_{\mathcal{C}}(\vec{n},T)}\right]. \tag{3.3}\]
As expected, for \(t_{\rm col}\ll t_{\mathcal{C}}\) we regain the adiabatic heating \(T\propto n^{(\gamma-1)}\), while for \(t_{\rm col}\gg t_{\mathcal{C}}\) the gas cools.
The temperature and density evolutions described in Eqs. (3.1)-(3.2) are self-regulating: if \(t_{\rm col}\) and \(t_{\mathcal{C}}\) differ greatly at any point, the system will evolve towards \(t_{\rm col}\sim t_{\mathcal{C}}\). Specifically, the solution
\[t_{\rm col}(n)=(\gamma-1)t_{\mathcal{C}}(\vec{n},T) \tag{3.4}\]
is an attractor. In more physical language, the collapsing gas will evolve towards the equilibrium between heating and cooling described by Eq. (3.4).
Using the attractor solution in Eq. (3.4), therefore, we can find the phase-space diagram \(T(n)\) from the algebraic equation without solving the full differential equation of Eq. (3.3). That is the approach that we take in this paper. To do so, we need to estimate the collapse timescale \(t_{\rm col}(n)\) and the chemical abundances of the relevant species: free electrons and the primordial coolants H\({}_{2}\) and HD, as functions of temperature and density.
We estimate the collapse timescale by parameterizing \(t_{\rm col}(n)=ft_{ff}(n)\). Here, \(f\) is a factor that is approximated to be independent of temperature and density (Hirano et al., 2014). The approximation is reasonable because \(f\) accounts for the factor by which the condensation to the loitering phase is extended, which is a well-defined epoch with a single characteristic timescale. In our analysis, we determine \(f\) from the H\({}_{2}\) formation timescale \(t_{\rm H_{2}}\) (see Sec. 4 for a quantitative discussion).
Following the argument in Tegmark et al. (1997), we estimate the chemical abundances by adopting a minimal reaction network (Tab. 1) and analytically solving for the abundances at a fixed temperature, instead of solving the full coupled differential equations for the reaction rates. For example, we take the equation for the free electron abundance, \(x_{e}=n_{e}/n_{\rm H}\), where \(n_{\rm H}\) is the total number density of hydrogen (ionized and neutral),
\[\frac{dx_{e}}{dn_{\rm H}}=-k_{\rm H,1}x_{e}^{2}n_{\rm H}\frac{dt}{dn_{\rm H}}, \tag{3.5}\]
\begin{table}
\begin{tabular}{c|l} \hline \hline \(k_{\rm H,1}\) & \(\rm p+e\rightarrow\rm H\) \\ \(k_{\rm H,3}\) & \(\rm H+e\rightarrow\rm H^{-}\) \\ \(k_{\rm D,3}\) & \(\rm D+H^{+}\rightarrow\rm D^{+}+H\) \\ \(k_{\rm D,4}\) & \(\rm D^{+}+H\rightarrow\rm D+H^{+}\) \\ \(k_{\rm D,8}\) & \(\rm D^{+}+H_{2}\rightarrow\rm HD+H^{+}\) \\ \(k_{\rm D,10}\) & \(\rm HD+H^{+}\rightarrow\rm H_{2}+D^{+}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: The minimal reaction network, which includes only the dominant formation pathways for H\({}_{2}\) and HD, as well as H\({}_{2}\)-HD interconversion.
\begin{table}
\begin{tabular}{c l l} \hline \hline Species & Initial Abundance & Source \\ \hline \(x_{e}\) & \(2.5\times 10^{-4}\) & Recfast (Seager et al., 1999) \\ \(x_{\rm H_{2}}\) & \(7\times 10^{-7}\) & Hirata \& Padmanabhan (2006) \\ \(x_{\rm D}\) & \(2.5\times 10^{-5}\) & Cooke et al. (2018) \\ \(x_{\rm D^{+}}\) & \(6.3\times 10^{-9}\) & \(x_{\rm D^{+}}/x_{\rm D}\equiv x_{\rm H^{+}}/x_{\rm H}\) \\ \(x_{\rm HD}\) & \(1.8\times 10^{-11}\) & \(x_{\rm HD}/x_{\rm D}\equiv x_{\rm H_{2}}/x_{\rm H}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: The initial fractional abundances and their sources.
and approximate the right-hand side as
\[\frac{dx_{e}}{dn_{\rm H}}\approx-k_{\rm H,1}x_{e}^{2}n_{\rm H}\left(\frac{t_{\rm col,0}}{\sqrt{n_{\rm H}/n_{\rm H,0}}}\frac{1}{n_{\rm H}}\right), \tag{3.6}\]
to find the analytic solution. As a result, we obtain the following solutions for abundances given their initial values (with the subscript 0):
\[x_{\rm e}(n_{\rm H})=\frac{x_{\rm e,0}}{1+2k_{\rm H,1}t_{\rm col,0}x_{\rm e,0}\left(\sqrt{n_{\rm H}n_{\rm H,0}}-n_{\rm H,0}\right)}, \tag{3.7}\]
\[x_{\rm H_{2}}(n_{\rm H})=x_{\rm H_{2},0}+\frac{k_{\rm H,3}}{k_{\rm H,1}}\log\left[1+2k_{\rm H,1}t_{\rm col,0}x_{\rm e,0}\left(\sqrt{n_{\rm H}n_{\rm H,0}}-n_{\rm H,0}\right)\right], \tag{3.8}\]
\[x_{\rm HD}(n_{\rm H})=x_{\rm HD,0}\left(\frac{x_{\rm e}(n_{\rm H})}{x_{\rm e,0}}\right)^{k_{\rm D,10}/k_{\rm H,1}}+\frac{k_{\rm HD,eff}}{k_{\rm D,10}}x_{\rm D,0}\left[1-\left(\frac{x_{\rm e}(n_{\rm H})}{x_{\rm e,0}}\right)^{k_{\rm D,10}/k_{\rm H,1}}\right], \tag{3.9}\]
where we have assumed that neutral D and H are not depleted and that \(x_{e}\) is reduced only by the recombination of neutral hydrogen. Here, we have defined the effective HD formation rate
\[k_{\rm HD,eff}=k_{\rm D,3}\frac{k_{\rm D,8}}{k_{\rm D,8}+k_{\rm D,4}/x_{\rm H_{2}}}. \tag{3.10}\]
In deriving Eq. (3.9), we treat \(x_{\rm H_{2}}\) as constant because the H\({}_{2}\) fraction is typically nearing its asymptotic value before HD production becomes significant. Tab. 2 summarizes the initial abundances that we use for the computation. The rate coefficients \(k_{X}\) are drawn from Table 1 of Grassi et al. (2014).
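For concreteness, Eqs. (3.7)-(3.9) translate directly into code. Below is a minimal sketch in Python; the rate coefficients `k_H1`, `k_H3`, `k_D10` and `k_HD_eff` are assumed to be evaluated externally at the appropriate temperature (e.g., from the fits of Grassi et al. 2014), and the initial abundances are taken from Tab. 2.

```python
import numpy as np

# Initial fractional abundances (Tab. 2).
X_E0, X_H2_0, X_D0, X_HD0 = 2.5e-4, 7e-7, 2.5e-5, 1.8e-11

def x_e(n_H, n_H0, t_col0, k_H1):
    """Free-electron fraction, Eq. (3.7)."""
    return X_E0 / (1.0 + 2.0 * k_H1 * t_col0 * X_E0
                   * (np.sqrt(n_H * n_H0) - n_H0))

def x_H2(n_H, n_H0, t_col0, k_H1, k_H3):
    """Molecular-hydrogen fraction, Eq. (3.8)."""
    return X_H2_0 + (k_H3 / k_H1) * np.log(
        1.0 + 2.0 * k_H1 * t_col0 * X_E0 * (np.sqrt(n_H * n_H0) - n_H0))

def x_HD(n_H, n_H0, t_col0, k_H1, k_D10, k_HD_eff):
    """HD fraction, Eq. (3.9)."""
    ratio = (x_e(n_H, n_H0, t_col0, k_H1) / X_E0) ** (k_D10 / k_H1)
    return X_HD0 * ratio + (k_HD_eff / k_D10) * X_D0 * (1.0 - ratio)
```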
Finally, to compute the cooling rate \(\mathcal{C}\) for given abundances, we use the H\({}_{2}\) cooling rates of Hollenbach and McKee (1979) and the HD cooling rate of Lipovka et al. (2005). More recent calculations of the H\({}_{2}\) cooling rates differ only marginally from the results of Hollenbach and McKee (1979), which have a simpler functional form (Ryan et al., 2022).
We now solve Eq. (3.4) for a halo with mass \(M_{\rm Halo}\) at redshift \(z\). First, we use the spherical-collapse value (section 5.1 of Desjacques et al., 2018) of the virial density of the halo, \(\rho_{V}=178\bar{\rho}_{\rm m}(z)\), and estimate the virial temperature by
\[T_{V}=\left(\frac{4\pi}{3}\rho_{V}\right)^{1/3}\frac{G\mu m_{H}}{k_{B}}M_{\rm Halo}^{2/3}. \tag{3.11}\]
We start the computation by setting the cosmological abundances (shown in Tab. 2) as initial values, at \(\rho_{i}=\rho_{V}/3\) (that is, somewhat before the virialization) to allow for any changes in the chemical compositions prior to virialization. Because the cooling is inefficient (\(t_{\mathcal{C}}>t_{\rm col}\)) under these initial conditions, we initially assume adiabatic heating \(T\propto n^{2/3}\), and correspondingly initialize the temperature at \(T_{i}=T_{V}(1/3)^{2/3}\). The density and temperature then evolve along the adiabatic track until the density \(n\) reaches the attractor track: \(t_{\rm col}(n)=(\gamma-1)t_{\mathcal{C}}(\vec{n},T)\). Computing abundances using Eqs. (11)-(13) requires a temperature, which during this heating phase is supplied as the geometric mean of the initial and current temperature, \(\bar{T}=\sqrt{TT_{i}}\).
From this intercept with the \(t_{\rm col}=(\gamma-1)t_{\mathcal{C}}\) curve onward, the trajectory is determined by solving \(t_{\rm col}=(\gamma-1)t_{\mathcal{C}}\) for the temperature. In the case where \(f=1\) (that is, \(t_{\rm col}=t_{ff}\)), collapse occurs too quickly for HD to contribute to the cooling, and only the first two reactions in Tab. 1 are necessary. Moreover, the H\({}_{2}\) production rate depends weakly enough on temperature that evaluating the temperature at each density as \(\bar{T}=\sqrt{200T_{\rm int}}\), where \(\sim 200\,\)K is the molecular cooling limit and \(T_{\rm int}\) is the temperature when \(t_{\rm col}=(\gamma-1)t_{\mathcal{C}}\) first holds, can reproduce the one-zone results calculated using KROME (Grassi et al., 2014), as shown in Fig. 1.
However, for \(f=t_{\rm col}/t_{ff}\gtrsim 3\), HD production becomes important. The relevant reaction rates, for example \(k_{\rm D,10}\), are strongly temperature dependent. To capture the temperature-dependent reaction rates, we discretize the attractor curve into 20 pieces up to a density of \(n_{\rm H}=10^{8}\,{\rm cm}^{-3}\). That is, we solve Eq. (3.4) on 20 logarithmically spaced steps of \(n_{\rm H}\) beginning at \(T_{\rm int}\) and \(\vec{n}_{\rm int}\). At each subsequent step \(n_{\rm H}\), to solve for the temperature \(T_{n}\), the abundances with the subscript 0 in Eqs. (3.7), (3.8) and (3.9) are supplied by the previous step, and the reaction rates are evaluated at a temperature \(\bar{T}=\sqrt{T_{n_{\rm prev}}T_{n}}\) given the temperature \(T_{n_{\rm prev}}\) from the previous step \(n_{\rm prev}\). Fig. 2 demonstrates an excellent match between the resulting phase-space diagrams and the full numerical solutions over a range of values of \(f\in[1,10]\). Here, the dashed lines are the result of the algebraic approximation in this paper, and solid lines are the solution of the full ODE using the KROME package (Grassi et al., 2014). Note that although the results disagree somewhat for \(f=2\) due to our omission of the subdominant HD dissociation reaction \({\rm HD}+{\rm H}\to{\rm H}_{2}+{\rm D}\), our model will require only the temperature at the critical density \(n_{\rm crit}\sim 10^{3}\,{\rm cm}^{-3}\) (Sec. 4.1), at which point the curves still agree very closely. With Eq. (3.4) we have transposed the problem of solving a system of coupled ordinary differential equations (already a great simplification of the full, three-dimensional problem) into a root-finding problem involving inexpensive function calls at a small number of grid points.
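The root-finding procedure described above amounts to a short loop. The following sketch assumes user-supplied callables `t_col(n)` and `t_cool(n, T)` wrapping the collapse timescale and the cooling physics (including the abundances of Eqs. (3.7)-(3.9)); the function names and the temperature bracket are illustrative, not prescriptive.

```python
import numpy as np
from scipy.optimize import brentq

GAMMA = 5.0 / 3.0

def solve_attractor(n_init, T_init, t_col, t_cool, n_max=1e8, n_steps=20):
    """Solve t_col(n) = (gamma - 1) * t_cool(n, T) on a log-spaced
    density grid, Eq. (3.4)."""
    densities = np.geomspace(n_init, n_max, n_steps)
    temps, T_prev = [], T_init
    for n in densities:
        # Evaluate the T-dependent rates at the geometric mean of the
        # previous temperature and the candidate solution T_n.
        def residual(T):
            T_bar = np.sqrt(T_prev * T)
            return t_col(n) - (GAMMA - 1.0) * t_cool(n, T_bar)
        T_prev = brentq(residual, 10.0, 1e5)  # bracket in Kelvin
        temps.append(T_prev)
    return densities, np.array(temps)
```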
## 4 Stellar Mass
We are now in a position to estimate the final stellar mass. We first fix the collapse timescale \(t_{\rm col}\) (that is, fix \(f\)) to estimate the mass of the collapsing gas cloud. We then determine the final stellar mass in two steps. First, we compute the mass of the initial stellar core that forms before Kelvin-Helmholtz contraction dominates over accretion. We then compute the mass accreted onto this stellar core: the accretion rate is given by dividing the cloud mass by the viscous timescale, and the accretion shut-off time is defined as the time when the star-forming cloud is completely ionized by the radiation from the protostar. Finally, we compare our result with the fitting formula for the Pop. III star mass given in Hirano et al. (2014).
### Collapse Timescale and Mass of the Cloud
As discussed in Sec. 2, the collapse timescale is determined by the H\({}_{2}\) production timescale.
Figure 1: Evolution of the one-zone parcel in phase space for the \(t_{\rm col}=t_{ff}\) case. The solid black line shows the result of solving the chemical network using KROME, and the other two lines show two analytic approximations. For the blue dot-dashed line, the reaction rates in Eqs. (3.7-3.9) are always evaluated at a single temperature \(\bar{T}=\sqrt{200T_{\rm int}}\) when Eq. (3.4) is solved at any density. For the green dashed line, we solve Eq. (3.4) at twenty logarithmically spaced steps of density \(n_{\rm H}\in[n_{\rm int},10^{8}\ {\rm cm}^{-3}]\), where to solve for the temperature \(T_{n}\) at a step \(n\), the reaction rates are evaluated at a temperature \(\bar{T}=\sqrt{T_{n_{\rm prev}}T_{n}}\) given the temperature \(T_{n_{\rm prev}}\) from the previous step \(n_{\rm prev}\). The initial density and temperature are those of a halo with \(M_{\rm Halo}=5\times 10^{5}M_{\odot}\) at \(z=15\).
Figure 2: Evolution of the one-zone parcel in phase space for five \(f\equiv t_{\rm col}/t_{ff}\) values. The solid lines show the results of integrating the chemical network using KROME, and the dashed lines show the solution of the algebraic relationship \(t_{\mathcal{C}}=(\gamma-1)t_{\rm col}\), with 20 discretized density values (see the main text). The algebraic solution matches well with the KROME result. Also shown are lines of constant Jeans mass \(M_{J}\): The HD formation lowers the minimum temperature, which reduces the Jeans mass at the critical density \(n_{\rm crit}\sim 10^{3}\,{\rm cm}^{-3}\). The initial density and temperature are those of a halo with \(M_{\rm Halo}=5\times 10^{5}M_{\odot}\) at \(z=15\).
If the collapse timescale is long, significant HD can form, leading to a lower minimum temperature (Ripamonti, 2007). It is this minimum temperature that ultimately sets the mass of the Jeans-unstable cloud. Relatively massive halos form H\({}_{2}\) efficiently within their initial (virial) free-fall timescale. On the other hand, low-mass halos (\(M_{\rm halo}\sim 10^{5}\,M_{\odot}\) at \(z\sim 20\), for example) are initially too cold to rapidly form H\({}_{2}\) and cool further.
To be specific, we define the collapse timescale as
\[t_{\rm col}\approx\max{(t_{ff},t_{\rm H_{2}})}, \tag{4.1}\]
where \(t_{\rm H_{2}}\) is the H\({}_{2}\) production timescale that we define in Sec. 2. Specifically, \(t_{\rm H_{2}}\) is the timescale required to produce enough H\({}_{2}\) to satisfy \(t_{\mathcal{C}}=t_{ff}\) at the virial density and temperature. That is, the critical H\({}_{2}\) abundance for cooling \(x_{\rm H_{2},\mathcal{C}}\) is defined by
\[t_{\mathcal{C}}(n_{V},x_{\rm H_{2},\mathcal{C}},T_{V})-t_{\mathcal{C}}(n_{V},x_{\rm H_{2},\mathcal{C}},T_{\rm H_{2}})=t_{ff}(n_{V}), \tag{4.2}\]
where the subscript \(V\) stands for quantities in the initial, virial equilibrium and \(T_{\rm H_{2}}\approx 200\) K is the minimum temperature achievable by H\({}_{2}\) cooling at the loitering phase. The second term on the left-hand side signifies that we are interested in the time until arrival at the loitering phase at \(T=T_{\rm H_{2}}\). Then, with the H\({}_{2}\) production rate
\[\dot{x}_{\rm H_{2}}=k_{\rm H,3}(T_{V})n_{\rm H,V}x_{e,V}, \tag{4.3}\]
we have
\[t_{\rm H_{2}}=\frac{x_{\rm H_{2},\mathcal{C}}}{\dot{x}_{\rm H_{2}}}. \tag{4.4}\]
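In code, the H\({}_{2}\) production timescale reduces to one root find plus one rate evaluation. A sketch, again assuming a user-supplied cooling-time function `t_cool(n, x_H2, T)` and an externally evaluated rate coefficient `k_H3_at_TV` (names and brackets are illustrative):

```python
from scipy.optimize import brentq

def t_H2(n_V, T_V, x_e_V, k_H3_at_TV, t_cool, t_ff_V, T_loiter=200.0):
    """H2 production timescale, Eqs. (4.2)-(4.4)."""
    # Critical H2 abundance, Eq. (4.2): the cooling-time difference
    # between the virial and loitering temperatures equals the
    # free-fall time at the virial density.
    def residual(x_H2):
        return (t_cool(n_V, x_H2, T_V)
                - t_cool(n_V, x_H2, T_loiter) - t_ff_V)
    x_H2_crit = brentq(residual, 1e-8, 1e-2)  # bracket is illustrative
    xdot_H2 = k_H3_at_TV * n_V * x_e_V        # Eq. (4.3)
    return x_H2_crit / xdot_H2                # Eq. (4.4)
```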
In Fig. 2, we find that the HD production indeed lowers the minimum achievable temperature. Fig. 2 also shows that the HD production saturates before \(t_{\rm col}\approx 10t_{ff}\), which motivates us to set \(10t_{ff}\) as the maximum of \(t_{\rm col}\).
We then extract the mass of the cloud from the \(n\)-\(T\) trajectory as \(M_{c}=M_{J}(n_{\rm crit},T(n_{\rm crit}))\), where \(n_{\rm crit}=1.75\times 10^{3}\,{\rm cm}^{-3}\) characterizes the loitering phase, the density at which the H\({}_{2}\) collisional de-excitation rates equal the spontaneous radiative decay rates. For both large and small values of \(f\) (corresponding to efficient and inefficient HD formation, respectively) the critical density also approximately corresponds to the minimum temperature. However, for the transitional values of \(f\sim 3\) the minimum temperature occurs at higher densities as HD continues to form later in the collapse. Even for these cases, we persist in extracting the Jeans mass from the temperature at \(n_{\rm crit}\) because, in order to affect the mass of the cloud, the HD must be formed _before_ H\({}_{2}\) cooling becomes inefficient at \(n_{\rm crit}\).
Note that while Hirano et al. (2014) have discussed rotation as an important determinant of \(t_{\rm col}\), their results reveal little correlation between the cloud mass and the rotational parameter. This is because, typically, rotation becomes an important stabilizing force only after gravitational instability sets in. An unusually high angular momentum is required for rotational support to develop before the loitering phase. Instead, we argue that rotation determines the infall rate onto the protostar once the cloud mass is fixed.
### Accretion Rate
The initial infall rate, as assumed in the KROME calculation [Eq. (3.2)], is
\[\dot{M}_{\star}=M_{c}/t_{\rm col}. \tag{4.5}\]
However, the accretion rate in later stages and on small scales is typically limited by angular momentum transport. To model this, we adopt the \(\alpha\)-disk parameterization (Shakura & Sunyaev, 1973) for the viscosity \(\nu=\alpha hc_{s}\) with \(h\) being the scale height. For the thin disk,
\[\frac{h}{R}\simeq\frac{c_{s}}{v_{c}}\,\rightarrow\,\nu=\alpha c_{s}^{2}/\Omega, \tag{4.6}\]
where \(R\) is the disk radius and \(v_{c}=\Omega R\) (with \(\Omega\) the angular velocity) is the circular velocity. The viscous timescale then becomes
\[t_{\nu}=\frac{R}{\nu/R}=\frac{R^{2}\Omega}{\alpha c_{s}^{2}}. \tag{4.7}\]
Meanwhile, the angular momentum can be characterized by the spin parameter
\[\beta\equiv\frac{v_{c}^{2}}{3|\Phi|}=\frac{\Omega^{2}R^{3}}{3GM_{c}}, \tag{4.8}\]
which is the ratio of rotational to gravitational energy in the cloud. In terms of \(\beta\), the viscous timescale is
\[t_{\nu}=\frac{\sqrt{3\beta_{\rm crit}GM_{c}c_{s,\rm crit}t_{ff,\rm crit}}}{\alpha c_{s}^{2}}, \tag{4.9}\]
where the subscript "crit" indicates the value of the quantity evaluated at the critical density. Initially, \(t_{\rm col}>t_{\nu}\). As the density increases, \(t_{\rm col}\) decreases as \(1/\sqrt{n}\) while \(t_{\nu}\) varies only due to the factor-of-few change in sound speed as the gas cools and then heats. Hence, eventually \(t_{\nu}>t_{\rm col}\). Therefore, the accretion rate onto the protostar is ultimately set by the viscous timescale. We estimate the accretion rate as
\[\dot{M}_{\star}=M_{c}/t_{\nu}. \tag{4.10}\]
In calculating \(t_{\nu}\) we assume \(T=1000\,{\rm K}\), typical of the molecular disk. As the fiducial case we adopt \(\beta=0.3\), a typical value in Hirano et al. (2014). Here, we take \(\alpha=1\). The typical values of \(\alpha\) in Hirano et al. (2014) are a factor of a few lower, but we find that applying \(\alpha=1\) in Eq. (4.10) better matches the accretion rates in that work (see App. A). We assume \(\dot{M}_{\star}\) is constant, which is equivalent to replacing \(\dot{M}_{\star}(t)\) with its average value. Note that Liu et al. (2020) give a semi-analytic universal solution for Pop. III growth, \(M\propto t^{4-3\gamma_{eff}}\approx t^{0.7}\) (with \(\gamma_{eff}=1.09\) the effective polytropic index of gas in primordial star-forming disks; Omukai & Nishi, 1998), which is not far from a constant growth rate.
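A sketch of Eqs. (4.9)-(4.10) in cgs units is given below; the disk temperature of \(1000\,\mathrm{K}\) follows the text, while the mean molecular weight \(\mu=1.22\) for neutral primordial gas is an assumption of this sketch.

```python
import numpy as np

G = 6.674e-8      # gravitational constant [cgs]
K_B = 1.381e-16   # Boltzmann constant [erg/K]
M_P = 1.673e-24   # proton mass [g]

def sound_speed(T, mu=1.22):
    """Isothermal sound speed; mu = 1.22 assumed for neutral primordial gas."""
    return np.sqrt(K_B * T / (mu * M_P))

def t_viscous(M_cloud, c_s_crit, t_ff_crit, beta=0.3, alpha=1.0, T_disk=1000.0):
    """Viscous timescale, Eq. (4.9); all quantities in cgs units."""
    c_s_disk = sound_speed(T_disk)  # disk sound speed at the assumed 1000 K
    return (np.sqrt(3.0 * beta * G * M_cloud * c_s_crit * t_ff_crit)
            / (alpha * c_s_disk ** 2))

def accretion_rate(M_cloud, **viscous_kwargs):
    """Mean accretion rate onto the protostar, Eq. (4.10)."""
    return M_cloud / t_viscous(M_cloud, **viscous_kwargs)
```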
### Mass, Radius, and Luminosity
Once the proto-stellar core is formed, the radiation field from the core competes against the accretion flow to determine the evolution during the protostar phase.
For Pop. III stars, Stahler et al. (1986) have studied the evolution of protostars while accretion dominates the dynamics. We adopt their analytical estimate for the protostar core radius:
\[R_{\star}=26R_{\odot}\left(\frac{M_{\star}}{M_{\odot}}\right)^{0.27}\left( \frac{\dot{M}_{\star}}{10^{-3}\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}}\right)^ {0.41}, \tag{4.11}\]
which is surrounded by an optically thick radiative precursor with a photospheric radius of \(R_{\mathrm{ph}}=1.4R_{\star}\). Here, the star symbol denotes protostellar quantities. Also, when the opacity is dominated by electron scattering, the luminosity of an object in hydrostatic equilibrium is proportional to \(M_{\star}^{3}\), and Hosokawa et al. (2012) find the following approximate relationship:
\[L_{\star}\simeq 10L_{\odot}\left(\frac{M_{\star}}{M_{\odot}}\right)^{3}\,. \tag{4.12}\]
Two timescales are relevant in determining the contracting mass scale of the protostellar core: the accretion timescale
\[t_{\mathrm{acc}}=\frac{M_{\star}}{\dot{M}_{\star}}, \tag{4.13}\]
and the Kelvin-Helmholtz timescale,
\[t_{\mathrm{KH}}=\frac{GM_{\star}^{2}}{R_{\star}L_{\star}}. \tag{4.14}\]
Initially, the accretion timescale is short compared to the Kelvin-Helmholtz timescale, and the protostar expands according to Eq. (4.11). Eventually, however, the mass and luminosity growth of the protostar cause Kelvin-Helmholtz contraction to dominate over accretion: \(t_{KH}<t_{\mathrm{acc}}\), which happens at \(M_{\mathrm{eq}}\) (Hosokawa et al., 2012):
\[M_{\mathrm{eq}}\simeq 15\,\mathrm{M}_{\odot}\left(\frac{\dot{M}_{\star}}{10^{- 2}\mathrm{M}_{\odot}\mathrm{yr}^{-1}}\right)^{0.26}\,, \tag{4.15}\]
at which point the protostellar cores with luminosity less than the Eddington luminosity, \(L(M_{\mathrm{eq}})<L_{\mathrm{Edd}}\), begin to contract. The following energy balance equation can model the contraction:
\[\frac{d}{dt}\left(\frac{W}{2}\right)=\frac{3}{2(5-n)}\frac{d}{dt}\left(\frac{ GM_{\star}^{2}}{R}\right)=L(M_{\star}), \tag{4.16}\]
where \(W\) is the gravitational binding energy of the protostar and we assume that each step of contraction plus accretion maintains a new virial equilibrium by radiating the energy difference. Here, \(n\) is the polytropic index \(P\propto\rho^{(n+1)/n}\) and we choose \(n=3\), consistent with the Eddington-beta model (Eddington, 1926), which is reasonably accurate for massive stars.
Again, assuming the hydrostatic equilibrium protostellar core with constant opacity, dominated by electron scattering, \(L_{\star}\propto M_{\star}^{3}\), we solve the energy balance equation (Eq. (4.16)) to find the radius-mass relationship:
\[R(M_{\star})=R_{\mathrm{eq}}\left[\frac{(M_{\star}/M_{\mathrm{eq}})^{2}}{((M_ {\star}/M_{\mathrm{eq}})^{4}-1)/3+1}\right]\,. \tag{4.17}\]
Eventually, the luminosity of the collapsing protostar reaches the Eddington limit (Hosokawa et al., 2012),
\[L_{\mathrm{Edd}}=3.8\times 10^{6}\left(\frac{M_{\star}}{100M_{\odot}}\right)L_ {\odot}\,, \tag{4.18}\]
from which point radiation pressure prevents the contraction of the envelope (Hosokawa et al., 2012; Hirano et al., 2014). Then, the protostellar radius must either oscillate or grow. Substituting the Eddington luminosity into Eq. (4.16) gives a nearly constant radius. However, Hosokawa et al. (2012) find that once the protostars reach the Eddington luminosity, the gravothermal evolution contracts the core while expanding the envelope. The result of these more complicated dynamics is to grow the characteristic radius as \(R_{\star}\propto M_{\star}^{0.5}\), which we adopt here. Note that Hosokawa et al. (2012) find that for very high accretion rates \(\gtrsim 0.1\,\mathrm{M}_{\odot}/\mathrm{yr}\) the protostar begins to expand even before reaching \(L_{\mathrm{Edd}}\), an effect which we neglect, both because the underlying physics are complex and because this mechanism principally operates at rotation rates lower than our fiducial \(\beta=0.3\). This omission will lead to an underestimate of the masses of the most massive, slowly rotating stars.
We show the evolution of radius and effective temperature of the protostars in Fig. 3 for three different accretion rates.
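The piecewise radius evolution plotted in Fig. 3 can be assembled from Eqs. (4.11), (4.15) and (4.17) plus the \(R_{\star}\propto M_{\star}^{0.5}\) scaling. A sketch in solar units; the Eddington crossing mass follows from equating Eqs. (4.12) and (4.18).

```python
def protostar_radius(M, Mdot):
    """Protostellar radius (solar radii) vs. mass M (solar masses) for a
    fixed accretion rate Mdot (M_sun/yr), pieced together from the three
    regimes discussed in the text."""
    M_eq = 15.0 * (Mdot / 1e-2) ** 0.26                          # Eq. (4.15)
    r_acc = lambda m: 26.0 * m ** 0.27 * (Mdot / 1e-3) ** 0.41   # Eq. (4.11)
    if M <= M_eq:
        return r_acc(M)                        # accretion-dominated expansion
    r_kh = lambda m: (r_acc(M_eq) * (m / M_eq) ** 2
                      / (((m / M_eq) ** 4 - 1.0) / 3.0 + 1.0))   # Eq. (4.17)
    # Eddington crossing: 10 M^3 = 3.8e4 M [L_sun]  =>  M ~ 62 M_sun
    M_edd = (3.8e4 / 10.0) ** 0.5
    if M <= M_edd:
        return r_kh(M)                         # Kelvin-Helmholtz contraction
    return r_kh(M_edd) * (M / M_edd) ** 0.5    # post-Eddington, R ∝ M^0.5
```

For very high accretion rates, where \(M_{\rm eq}\) would exceed the Eddington crossing mass, this sketch would need the modification discussed in the text (the protostar never contracts), which we neglect here as well.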
### Ionizing Feedback and Stellar Mass
Equipped with the effective temperature and radius estimated in the previous section (see Fig. 3 for the result), we can also estimate the ionizing photon flux, \(S_{\rm EUV}\), by integrating the black-body spectrum.
The interaction between the UV flux and the infalling gas launches a shock. We assume that the shock is launched at \(t_{\rm eq}\) (i.e., the time when the mass reaches \(M_{\rm eq}\)), which is the characteristic epoch at which the protostar begins to emit significant UV radiation. Initially, the UV radiation is trapped behind the shock front by recombinations. Eventually, the density behind the shock (that is, towards the center of the cloud) falls low enough that recombination is no longer efficient. Then, the ionization front escapes the shock front, rapidly ionizing the cloud and shutting down accretion. This breakout time \(t_{B}\) is estimated in Alvarez et al. (2006) using the similarity solution of Shu et al. (2002) for a singular isothermal sphere (SIS) with a density profile of \(\rho\propto r^{-2}\) as the initial condition. There, \(t_{B}\) is calculated by equating the ionizing photon flux to the recombination rate behind the shock front:
\[S_{\rm EUV}(t_{B})=4\pi\alpha_{B}\int_{0}^{r_{sh}(t_{B})}drr^{2}n(r,t_{B})^{2}, \tag{4.19}\]
where \(\alpha_{B}\) is the case-B recombination rate coefficient for hydrogen at the temperature characteristic of the photoheated gas (\(\sim 10^{4}\,\)K), \(r_{sh}=x_{s}c_{s}t\) (with \(c_{s}\) the sound speed in the shocked gas) is the radius of the shock front, and the normalization of the density profile \(n\) is proportional to the temperature \(T_{\rm SIS}\) of the initial SIS, which we take to be the temperature at the loitering phase, i.e., \(T_{\rm SIS}=T(n_{\rm crit})\). When the shocked gas is much hotter than the surrounding isothermal sphere (which holds in all cases we encounter), we have \(x_{s}\approx 2.56\).
Alvarez et al. (2006) provide the following approximate relationship for the breakout time, which we find matches the exact result to within \(\sim 10\%\):
\[\begin{split} t_{B}&=6.5\times 10^{4}\,{\rm yr} \left(\frac{c_{s}x_{s}}{40\,{\rm km\,s^{-1}}}\right)^{-1}\left(\frac{T_{\rm SIS }}{300\,{\rm K}}\right)^{2}\\ &\times\left[\frac{S_{\rm EUV}(t_{B})}{3\times 10^{50}\,{\rm s ^{-1}}}\right]^{-1}.\end{split} \tag{4.20}\]
Note that helium at the cosmological mass fraction \(Y=0.24\) enters the calculation of the sound speed via the mean molecular weight. We assume the first ionization of helium is coupled with that of hydrogen, and correspondingly have multiplied the prefactor in Eq. (4.20) by \(x_{\rm H}^{-2}=1.16\) compared to Alvarez et al. (2006) (who neglected the consumption of ionizing photons by helium), given the primordial number fraction of hydrogen nuclei \(x_{\rm H}=0.927\). We finally solve Eq. (4.20) for \(t_{B}\), and then compute the final stellar mass as
\[M_{\star}=M_{\rm eq}+\dot{M}_{\star}t_{B}. \tag{4.21}\]
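Because \(S_{\rm EUV}\) depends on \(t_{B}\) through the protostar's growing mass and radius, Eq. (4.20) is an implicit equation. A root-finding sketch, assuming a user-supplied callable `S_EUV(t)` (in photons per second; the name and bracket are illustrative):

```python
from scipy.optimize import brentq

X_S = 2.56  # shock similarity parameter for a hot shocked region

def breakout_time(S_EUV, c_s_kms, T_SIS):
    """Solve Eq. (4.20) for the breakout time t_B (in years)."""
    def residual(t_B):
        rhs = (6.5e4 / (c_s_kms * X_S / 40.0)
               * (T_SIS / 300.0) ** 2
               / (S_EUV(t_B) / 3e50))
        return t_B - rhs
    return brentq(residual, 1e2, 1e7)  # bracket may need widening

def final_mass(M_eq, Mdot, t_B):
    """Final stellar mass, Eq. (4.21); Mdot in M_sun/yr, t_B in yr."""
    return M_eq + Mdot * t_B
```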
Because it relies only on timescales and analytic estimates, our computation of the final stellar mass takes only a few seconds. On the other hand, with a series of 2D radiative hydrodynamic simulations of protostar accretion from initial conditions of primordial star-forming clouds produced by cosmological hydrodynamic (zoom-in) simulations,
Figure 3: The radius and effective temperature implied by Eq. (4.11) (radius at \(t_{eq}\)), Eq. (4.17) (evolution to the Eddington luminosity), and \(R_{\star}\propto M_{\star}^{0.5}\) (upon reaching the Eddington limit). Compare with Fig. 12 of Hirano et al. (2014) and Fig. 5 of Hosokawa et al. (2012).
Hirano et al. (2014) fit the redshift and halo mass dependence of the stellar mass as
\[M_{\star}=100\,{\rm M}_{\odot}\,\left(\frac{1+z}{20}\right)^{3}\left(\frac{M_{ \rm halo}}{3\times 10^{5}\,{\rm M}_{\odot}}\right)^{2}. \tag{4.22}\]
Fig. 4 shows that our estimate closely reproduces the results of the state-of-the-art simulations. The figure shows the predictions of our model for \(\beta=0.3\) as a function of halo mass and redshift. We have also plotted the stellar sample of Hirano et al. (2014) (dots), their fit to the redshift and halo mass dependence of the sample (dashed white lines), and the line corresponding to a halo abundance of one in the simulation volume of Hirano et al. (2014) in the Press-Schechter formalism (Press & Schechter, 1974; Sheth et al., 2001). Halos become common enough to typically appear in the simulation volume of Hirano et al. (2014) only to the right of this line. However, Hirano et al. (2014) employ additional selection criteria to ensure that each star-forming cloud is pristine.
Our model succeeds quite well in predicting the transition between H\({}_{2}\) cooled clouds (final stellar mass of hundreds of solar masses) and HD cooled clouds (final stellar masses of order tens of solar masses). Over the range where Hirano et al. (2014) have reasonable statistics (\(\sim 30-300\,{\rm M}_{\odot}\)), our model agrees with the simulation results to within a factor of a few. However, our model fails to produce the most massive stars observed in Hirano et al. (2014), a fact which cannot be entirely explained by our choice of fixed \(\beta=0.3\) in Fig. 4 (see Fig. 5). Several physically reasonable effects may contribute to this discrepancy, and we explore them in App. A. First, the rapidly accreting stars of Hirano et al. (2014) can grow by around \(100\,{\rm M}_{\odot}\) after HII breakout due to the geometry of the accretion disk, which is not captured by our spherically-symmetric breakout model. Relatedly, our assumption of a thin disk in calculating \(t_{\nu}\) breaks down for the slowly rotating clouds which produce the most massive stars, leading to an underestimate of the accretion rate. Finally, as mentioned above, our model fails to account for the finding of Hosokawa et al. (2012) that for very high accretion rates the protostar never undergoes a period of contraction. The low surface temperature associated with this large stellar radius is necessary to attain stellar masses \(\sim 1000\,M_{\odot}\). These facts can largely explain the systematically lower masses predicted by our model at high accretion rates.
We also show in Fig. 5 the effect of varying \(\beta\) on a low mass, HD cooled cloud and a high mass, H\({}_{2}\) cooled cloud at redshift 25. In our model, the largest stellar mass attainable with a minimal realistic rotation parameter \(\beta\approx 0.05\) is \(M\sim 200\,{\rm M}_{\odot}\), with an accretion rate of \(\sim 0.01\,{\rm M}_{\odot}/{\rm yr}\). Rotation has a larger effect at higher accretion rates (lower \(\beta\)), because the efficiency of the UV feedback has a strong dependence on the temperature at which the surface of the Eddington-radiating stars is "frozen" (Fig. 3).
## 5 Conclusion & Discussion
We have developed a simple model of the formation of Pop. III stars in the center of collapsing primordial gas clouds. The model consists of two parts. The first determines the chemical-thermal evolution of the collapsing gas cloud using the dynamical-thermal equilibrium relation \(t_{\rm col}=(\gamma-1)t_{\mathcal{C}}\). The second relates the mass and spin parameter of the collapsing cloud to the final mass of the star. For a typical value of the spin parameter, this model agrees with the masses predicted by sophisticated simulations to within a factor of a few for stellar masses between \(\sim 30-300\,{\rm M}_{\odot}\). However, the model struggles to produce the most massive stars seen in simulations, for which the aspherical accretion geometry strongly affects the final mass (see App. A).
Figure 4: The predicted stellar mass in our model compared with the primordial stars found in the simulations of Hirano et al. (2014) (colored circles) and the fit of Hirano et al. (2014) to these stellar masses (dashed lines). The gray shaded region indicates halos which will not typically appear in the simulation volume of Hirano et al. (2014) in the Press-Schechter formalism. Our model is accurate to within a factor of a few where Hirano et al. (2014) have robust data (between \(\sim 30\) and \(\sim 300\) solar masses).
The model makes liberal use of the \(\approx\) symbol. Although it contains no explicit free parameters, there are implicit choices involving various order unity factors which can bring the model into better or worse agreement with simulations. These include the numerical prefactors in the sound crossing and free-fall timescales, the value of the viscosity parameter \(\alpha\), the characteristic temperature of the disk used to evaluate the viscous timescale \(t_{\nu}\), and the characteristic temperature of the singular isothermal sphere used in Eq. (4.19). The overall trends are robust to these choices.
We comment briefly that based only on the chemical-thermal evolution of the gas (Sec. 3) two alternative estimates of the stellar mass are already possible. First, the Jeans mass at the loitering phase can be multiplied by some global efficiency factor \(M_{\star}=\epsilon M_{c}\), where to match the characteristic \(\sim 100\,\mathrm{M}_{\odot}\) Pop. III stellar mass \(\epsilon\sim 0.1\). However, in our model the efficiency factor depends on the cloud mass. We find that \(\epsilon\) is a factor of a few larger for smaller, HD cooled clouds than for larger, H\({}_{2}\) cooled clouds. This is because while smaller clouds lead to slower accretion, the slower accretion in turn leads to a longer period of growth before the breakout of the ionization front. Alternatively, one could estimate \(M_{\star}=NM_{l}\), where \(M_{l}\approx 1.4M_{\odot}\mu^{-9/4}\left(\frac{k_{B}T}{m_{p}c^{2}}\right)^{1/4}\) is the opacity-limited minimum fragment mass (Rees, 1976), evaluated at the minimum temperature of the gas (Shandera et al., 2018; Singh et al., 2021), and \(N\sim 10^{4}\) (as the effective number of fragments that merge to make the central star) to produce reasonable stellar masses. In this case, if \(N\) is constant, the weak dependence of \(M_{l}\) on \(T\) means that the temperature difference between H\({}_{2}\) and HD cooled clouds amounts to less than a factor of two difference in \(M_{l}\) and hence in \(M_{\star}\). Additionally, neither of these simpler estimates includes the role of angular momentum.
Our model correctly determines the order-of-magnitude mass scale of the first stars, and the dependence of that mass scale on redshift, halo mass, and the rotation parameter. These dependencies, which were initially emergent properties in radiation-hydrodynamic simulations, are here distilled to simple timescale arguments. If Pop. III stars are observed, this kind of understanding will provide a powerful lever to calibrate simulations against observations. In the meantime, this model can be inserted into simulations of cosmological volumes at a low computational cost, allowing a novel treatment of the metal enrichment of the universe and subsequent reionization history.
Another possible application is to the study of dissipative dark matter, which can itself cool to eventually form compact objects (Shandera et al., 2018; Hippert et al., 2022; Gurian et al., 2022; Ryan and Radice, 2022). These objects could have masses and compactnesses impossible under ordinary stellar astrophysics, leading to distinctive gravitational wave signatures. However, given the large model space there is a need for simple, inexpensive, and reasonably accurate models of the compact object formation process. The core ideas present in this model could be generalized to other dissipative physics, providing just such a tool.
Finally, although this work only considers the formation of the massive central star in each cloud, Liu et al. (2020) found that the formation of Pop. III star clusters by disk fragmentation can be described by simple scaling laws that capture the key trends in 3D hydrodynamic simulations of primordial star-forming clouds. Our model can be generalized and improved to consider Pop. III star clusters and self-shielding in aspherical accretion flows using the results of 3D radiative hydrodynamic simulations that follow the growth and feedback of multiple protostars (e.g. Sugimura et al., 2020, 2023; Park et al., 2023a,b), which is an intriguing direction for future research.
Figure 5: The role of the rotation parameter \(\beta\) in the final stellar mass for an H\({}_{2}\) cooled (\(10^{6}\,\mathrm{M}_{\odot}\), gold) cloud and HD cooled (\(10^{5}\,\mathrm{M}_{\odot}\), purple) cloud, both at \(z=25\), showing the stronger dependence on \(\beta\) at higher cloud masses.
## Acknowledgments
We thank Naoki Yoshida for helpful comments and for tracking down and sharing the data from Hirano et al. (2014). This work was supported at Pennsylvania State University by NASA ATP Program No. 80NSSC22K0819. DJ is also supported by KIAS Individual Grant PG088301 at Korea Institute for Advanced Study. BL is supported by the Royal Society University Research Fellowship. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities.
|
2309.16042 | Towards Best Practices of Activation Patching in Language Models:
Metrics and Methods | Mechanistic interpretability seeks to understand the internal mechanisms of
machine learning models, where localization -- identifying the important model
components -- is a key step. Activation patching, also known as causal tracing
or interchange intervention, is a standard technique for this task (Vig et al.,
2020), but the literature contains many variants with little consensus on the
choice of hyperparameters or methodology. In this work, we systematically
examine the impact of methodological details in activation patching, including
evaluation metrics and corruption methods. In several settings of localization
and circuit discovery in language models, we find that varying these
hyperparameters could lead to disparate interpretability results. Backed by
empirical observations, we give conceptual arguments for why certain metrics or
methods may be preferred. Finally, we provide recommendations for the best
practices of activation patching going forwards. | Fred Zhang, Neel Nanda | 2023-09-27T21:53:56Z | http://arxiv.org/abs/2309.16042v2 | # Towards Best Practices of Activation Patching in Language Models: Metrics and Methods
###### Abstract
Mechanistic interpretability seeks to understand the internal mechanisms of machine learning models, where localization--identifying the important model components--is a key step. Activation patching, also known as causal tracing or interchange intervention, is a standard technique for this task (Vig et al., 2020), but the literature contains many variants with little consensus on the choice of hyperparameters or methodology. In this work, we systematically examine the impact of methodological details in activation patching, including evaluation metrics and corruption methods. In several settings of localization and circuit discovery in language models, we find that varying these hyperparameters could lead to disparate interpretability results. Backed by empirical observations, we give conceptual arguments for why certain metrics or methods may be preferred. Finally, we provide recommendations for the best practices of activation patching going forwards.
## 1 Introduction
Mechanistic interpretability (MI) aims to unravel complex machine learning models by reverse engineering their internal mechanisms down to human-understandable algorithms (Geiger et al., 2021; Olah, 2022; Wang et al., 2023). With such understanding, we can better identify and fix model errors (Vig et al., 2020; Hernandez et al., 2021; Meng et al., 2022; Hase et al., 2023), steer model outputs (Li et al., 2023) and explain emergent behaviors (Nanda et al., 2023; Barak et al., 2022).
A basic goal in MI is localization: identify the specific model components responsible for particular functions. Activation patching, also known as causal tracing, interchange intervention, causal mediation analysis or representation denoising, is a standard tool for localization in language models (Vig et al., 2020; Meng et al., 2022). The method attempts to pinpoint activations that causally affect the output. Specifically, it involves 3 forward passes of the model: (1) on a clean prompt while caching the latent activations; (2) on a corrupted prompt; and (3) on the corrupted prompt but replacing the activation of a specific model component by its clean cache. For instance, the clean prompt can be "The Eiffel Tower is in" and the corrupted one with the subject replaced by "The Colosseum". If the model outputs "Paris" in step (3) but not in (2), then it suggests that the specific component being patched is important for producing the answer (Vig et al., 2020; Pearl, 2001).
This technique has been widely applied for language model interpretability. For example, Meng et al. (2022); Geva et al. (2023) seek to understand which model weights store and process factual information. Wang et al. (2023); Hanna et al. (2023); Lieberum et al. (2023) perform circuit analysis: identify the sub-network within a model's computation graph that implements a specified behavior. All these works leverage activation patching or its variants as a foundational technique.
Despite its broad applications across the literature, there is little consensus on the methodological details of activation patching. In particular, each paper tends to use its own method of generating corrupted prompts and its own metric for evaluating patching effects. Concerningly, this lack of standardization leaves open the possibility that prior interpretability results may be highly sensitive to the hyperparameters they adopt. In this work, we study the impact of varying the metrics and methods in activation patching, as a step towards understanding best practices. To our knowledge, this is the first such systematic study of the technique.
Specifically, we identify three degrees of freedom in activation patching. First, we focus on the approach of generating corrupted prompts and evaluate two prominent methods from the literature:
* Gaussian noising (GN) adds a large Gaussian noise to the token embeddings of the tokens that contain the key information to completing a prompt, such as its subject (Meng et al., 2022).
* Symmetric token replacement (STR) swaps these key tokens with semantically related ones; for example, "The Eiffel Tower"\(\rightarrow\)"The Colosseum" (Vig et al., 2020; Wang et al., 2023).
Second, we examine the choice of metrics for measuring the effect of patching and compare probability and logit difference; both have found applications in the literature (Meng et al., 2022; Wang et al., 2023; Conmy et al., 2023). Third, we study sliding window patching, which jointly restores the activations of multiple MLP layers, a technique used by Meng et al. (2022) and Geva et al. (2023).
We empirically examine the impact of these hyperparameters on several interpretability tasks, including factual recall (Meng et al., 2022) and circuit discovery for indirect object identification (IOI) (Wang et al., 2023), greater-than (Hanna et al., 2023), Python docstring completion (Heimersheim and Janiak, 2023) and basic arithmetic (Stolfo et al., 2023). In each setting, we apply methods distinct from the original studies and assess how different interpretability results arise from these variations.
**Findings.** Our contributions uncover nuanced discrepancies within activation patching techniques applied to language models. On corruption method, we show that GN and STR can lead to inconsistent localization and circuit discovery outcomes (Section 3.1). Towards explaining the gaps, we posit that GN breaks the model's internal mechanisms by putting it off distribution. We give tentative evidence for this claim in the setting of IOI circuit discovery (Section 3.2). We believe that this is a fundamental concern in using GN corruption for activation patching. On evaluation metrics, we provide an analogous set of differences between logit difference and probability (Section 4), including an observation that probability can overlook negative model components that hurt performance.
Finally, we compare sliding window patching with patching individual layers and summing up their effects. We find the sliding window method produces more pronounced localization than single-layer patching and discuss the conceptual differences between these two approaches (Section 5).
**Recommendations for practice.** At a high level, our findings highlight the sensitivity of activation patching to methodological details. Backed by our analysis, we make several recommendations on the application of activation patching in language model interpretability (Section 6). We advocate for STR, as it supplies in-distribution corrupted prompts that help to preserve consistent model behavior. On evaluation metric, we recommend logit difference, as we argue that it offers fine-grained control over the localization outcomes and is capable of detecting negative modules.
## 2 Background
### Activation patching
Activation patching identifies the important model components by intervening on their latent activations. The method involves a clean prompt (\(X_{\text{clean}}\), e.g., "The Eiffel Tower is in") with an associated answer \(r\) ("Paris"), a corrupted prompt (\(X_{\text{corrupt}}\), e.g., "The Colosseum is in"), and three model runs:
1. Clean run: run the model on \(X_{\text{clean}}\) and cache activations of a set of given model components, such as MLP or attention head outputs.
Figure 1: **The workflow of activation patching for localization: run the intervention procedure (a) on every relevant component, such as all the attention heads, and plot the effects (b).**
2. Corrupted run: run the model on \(X_{\text{corrupt}}\) and record the model outputs.
3. Patched run: run the model on \(X_{\text{corrupt}}\) with a specific model component's activation restored from the cached value of the clean run (Figure 1(a)).
Finally, we evaluate the patching effect, such as \(\mathbb{P}(\text{``Paris''})\) in the patched run (3) compared to the corrupted run (2). Intuitively, corruption hurts model performance while patching restores it. Patching effect measures how much the patching intervention restores performance, which indicates the importance of the activation. We can iterate this procedure over a collection of components (e.g., all attention heads), resulting in a plot that highlights the important ones (Figure 1(b)).
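As a concrete illustration, the three runs can be scripted with a hook-based interpretability library; the sketch below uses the open-source TransformerLens package on an IOI-style prompt. The prompt, the helper name `patch_head_output`, and the use of the per-head `z` hook point as the "head output" are our illustrative choices, not specifications from this paper.

```python
import torch
from transformer_lens import HookedTransformer, utils

model = HookedTransformer.from_pretrained("gpt2")

clean = "When John and Mary went to the store, John gave a drink to"
corrupt = "When John and Mary went to the store, Mary gave a drink to"
clean_tokens, corrupt_tokens = model.to_tokens(clean), model.to_tokens(corrupt)
io_id, s_id = model.to_single_token(" Mary"), model.to_single_token(" John")

# (1) Clean run: cache all activations.
_, clean_cache = model.run_with_cache(clean_tokens)

def patch_head_output(layer, head):
    """(3) Patched run: overwrite one head's output (at all positions)
    with its cached clean value and return the resulting logits."""
    def hook(z, hook):  # z: [batch, seq, n_heads, d_head]
        z[:, :, head, :] = clean_cache[hook.name][:, :, head, :]
        return z
    return model.run_with_hooks(
        corrupt_tokens,
        fwd_hooks=[(utils.get_act_name("z", layer), hook)],
    )

# Iterate over every head, recording (here) the raw logit difference.
effects = torch.zeros(model.cfg.n_layers, model.cfg.n_heads)
for layer in range(model.cfg.n_layers):
    for head in range(model.cfg.n_heads):
        logits = patch_head_output(layer, head)
        effects[layer, head] = logits[0, -1, io_id] - logits[0, -1, s_id]
```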
**Corruption methods.** To generate \(X_{\text{corrupt}}\), GN adds Gaussian noise \(\mathcal{N}(0,\nu)\) to the embeddings of certain key tokens, where \(\nu\) is \(3\) times the standard deviation of the token embeddings over the dataset. STR replaces the key tokens by semantically similar ones of equal sequence length. In STR, let \(r^{\prime}\) denote the answer of \(X_{\text{corrupt}}\) ("Rome"). All implementations of STR in this paper yield in-distribution prompts, such that \(X_{\text{corrupt}}\) is distributed identically to a fresh draw of a clean prompt.
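Continuing the sketch above, GN corruption can be implemented as a hook on the embedding output. Here the noise scale is approximated from the embedding matrix itself rather than from dataset statistics, and `subject_slice`, marking the key-token positions, is a hypothetical placeholder.

```python
import torch

# Noise scale: 3x the standard deviation of the token embeddings
# (approximated here by the spread of the embedding matrix entries).
nu = 3 * model.W_E.std().item()

def gaussian_noise_hook(embed, hook, subject_slice=slice(1, 4)):
    # embed: [batch, seq, d_model]; noise only the key-token positions.
    embed[:, subject_slice, :] += nu * torch.randn_like(
        embed[:, subject_slice, :])
    return embed

# (2) Corrupted run under GN.
corrupt_logits = model.run_with_hooks(
    clean_tokens, fwd_hooks=[("hook_embed", gaussian_noise_hook)])
```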
**Metrics.** The patching effect is defined as the gap in model performance between the corrupted and patched runs, under an evaluation metric. Let cl, \(*\), pt denote the clean, corrupted and patched runs.
* Probability: \(\mathbb{P}(r)\); e.g., \(\mathbb{P}(\text{``Paris''})\). The patching effect is \(\mathbb{P}_{\text{pt}}(r)-\mathbb{P}_{*}(r)\);
* Logit difference: \(\text{LD}(r,r^{\prime})=\text{Logit}(r)-\text{Logit}(r^{\prime})\); e.g., Logit("Paris") \(-\) Logit("Rome"). The patching effect is given by \(\text{LD}_{\text{pt}}(r,r^{\prime})-\text{LD}_{*}(r,r^{\prime})\). Following Wang et al. (2023), we always normalize this by \(\text{LD}_{\text{cl}}(r,r^{\prime})-\text{LD}_{*}(r,r^{\prime})\), so it typically lies in \([0,1]\), where \(1\) corresponds to fully restored performance and \(0\) to the corrupted run performance.
* KL divergence: \(D_{\text{KL}}(P_{\text{cl}}||P)\), the Kullback-Leibler (KL) divergence from the probability distribution of model outputs in the clean run. The patching effect is \(D_{\text{KL}}(P_{\text{cl}}||P_{*})-D_{\text{KL}}(P_{\text{cl}}||P_{\text{ pt}})\).
GN does not provide a corrupted prompt with a well-defined answer \(r^{\prime}\) ("Rome"). To make a fair comparison, the same \(r^{\prime}\) is used for evaluating the logit difference metric under GN.
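The metrics themselves are a few lines each. A minimal sketch, assuming `*_logits` tensors of shape `[batch, seq, d_vocab]` from the three runs and precomputed token ids for \(r\) and \(r^{\prime}\):

```python
def logit_diff(logits, answer_id, wrong_id):
    """LD(r, r') evaluated on the final-token logits."""
    return (logits[0, -1, answer_id] - logits[0, -1, wrong_id]).item()

def ld_patching_effect(patched, clean, corrupt, answer_id, wrong_id):
    """Normalized logit-difference effect: 1 = fully restored
    performance, 0 = corrupted-run performance."""
    ld_pt = logit_diff(patched, answer_id, wrong_id)
    ld_cl = logit_diff(clean, answer_id, wrong_id)
    ld_co = logit_diff(corrupt, answer_id, wrong_id)
    return (ld_pt - ld_co) / (ld_cl - ld_co)

def prob_patching_effect(patched, corrupt, answer_id):
    """Probability effect: P_pt(r) - P_*(r)."""
    p_pt = patched[0, -1].softmax(-1)[answer_id]
    p_co = corrupt[0, -1].softmax(-1)[answer_id]
    return (p_pt - p_co).item()
```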
### Problem settings
**Factual recall.** In the setting of factual association, the model is prompted to fill in factual information, e.g., "The Eiffel Tower is in". Meng et al. (2022) posit that Transformer-based language models complete factual recall (i) at middle MLP layers and (ii) specifically while processing the subject's last token. In this work, we do not treat the hypothesis as ground truth but rather reevaluate it using approaches other than those attempted by Meng et al. (2022).
**IOI.** An IOI sentence involves an initial dependent clause, e.g., "When John and Mary went to the office", followed by a main clause, e.g., "John gave a book to Mary." In this case, the indirect object (IO) is "Mary" and the subject (S) "John". The IOI task is to predict the final token in the sentence to be the IO. We use S1 and S2 to refer to the first and second occurrences of the subject (S).
We let \(p_{\text{IOI}}\) denote the distribution of IOI sentences of Wang et al. (2023) containing single-token names. GPT-2 small performs well on \(p_{\text{IOI}}\) and Wang et al. (2023) discovers a circuit within the model for this task. The circuit consists of attention heads. This is also the focus of our experiments, where we uncover nuanced differences when using different techniques to replicate their result.
## 3 Corruption methods
In this section, we evaluate GN and STR on localizing factual recall in GPT-2 XL and discovering the IOI circuit in GPT-2 small.
**Experiment setup.** For factual recall, we investigate Meng et al. (2022)'s hypothesis that model computation is concentrated at early-middle MLP layers at the last subject token. Specifically, we corrupt the subject token(s) to generate \(X_{\text{corrupt}}\). In the patched run, we override the MLP activations at the last subject token. Following Meng et al. (2022) and Hase et al. (2023), at each layer we restore a set of \(5\) adjacent MLP layers. (More results on other window sizes can be found in Section G.1. We examine sliding window patching more closely in Section 5.)
For IOI circuit discovery, we follow Wang et al. (2023) and focus on the role of attention heads. Corruption is applied to the S2 token. Then we patch a single attention head's output (at all positions) and iterate over all heads in this way. To avoid relying on visual inspection, we say that a head is _detected_ if its patching effect is \(2\) standard deviations (SD) away from the mean effect.
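The detection rule is a one-liner; a sketch assuming an `effects` tensor of shape `[n_layers, n_heads]` as computed in Sec. 2.1:

```python
def detected_heads(effects):
    """Flag heads whose patching effect lies more than 2 SD from the
    mean effect over all heads."""
    return (effects - effects.mean()).abs() > 2 * effects.std()
```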
**Dataset and corruption method.** STR requires pairs of \(X_{\text{clean}}\) and \(X_{\text{corrupt}}\) that are semantically similar. To perform STR, we construct PairedFacts, a dataset of \(145\) pairs of factual-recall prompts. All the prompts are in-distribution, as they are selected from the original dataset of Meng et al. (2022); see Appendix B for details. GPT-2 XL achieves an average accuracy of \(49.0\%\) on this dataset.
For the IOI circuit, we use the \(p_{\text{IOI}}\) distribution to sample the clean prompts. For STR, we replace S2 by IO to construct \(X_{\text{corrupt}}\) such that \(X_{\text{corrupt}}\) is still a valid in-distribution IOI sentence. For GN, we add noise to the S2's token embedding. The experiments are averaged over \(500\) prompts.
### Results on corruption methods
**Difference in MLP localization.** For patching MLPs in the factual association setting, Meng et al. (2022) show that the effects concentrate at early-middle layers, where they apply GN as the corruption method. Our main finding is that the picture can look markedly different when switching the corruption method, regardless of the choice of metric. In Figure 2, we plot the patching effects for both metrics. Notice that the clear peak around layer 16 under GN is not salient at all under STR.
This is a robust phenomenon: across window sizes, we find the peak value of GN to be 2x-5x higher than STR; see Appendix G.1 for further plots on GPT-2 XL in this setting.
These findings illustrate potential discrepancies between the two corruption techniques in drawing interpretability conclusions. We do not, though, claim that results from GN are illusory or overly inflated. In fact, GN does not always yield sharper peaks than STR. For certain basic arithmetic tasks in GPT-J, STR can show stronger concentration in patching MLP activations; see Appendix C.
**Difference in circuit discovery.** We focus on discovering the main classes of attention heads in the IOI circuit, including (Negative) Name Mover (NM), Duplicate Token (DT), S-Inhibition (SI), and Induction Heads. The results are summarized in Table 1 and more details in Appendix H.
Most importantly, we observe that STR and GN produce inconsistent discovery results. In particular, for any fixed metric, STR and GN detect different sets of heads as important, highlighted in Table 1.
We remark that all the detections are in the IOI circuit as found by Wang et al. (2023). However, the discovery achieved here appears far from complete, with some critical misses such as NM. This suggests that the extensive manual inspection and the use of path patching, a more surgical patching method, are both necessary to fully discover the IOI circuit.
Figure 2: **Disparate MLP patching effects for factual recall in GPT-2 XL. (a) We patch MLP activations at the last subject token. (b)(c) The patching effects using different corruption methods with a window size of \(5\). STR suggests a much weaker peak, regardless of the evaluation metric.**
We also validate our high-level conclusions on the Python docstring (Heimersheim and Janiak, 2023) and the greater-than (Hanna et al., 2023) task. In particular, we find GN can produce highly noisy localization outcomes in these settings; see Appendix D and Appendix E for details.
### Evidence for OOD behavior in Gaussian noise corruption
We suspect that the gaps between the corruption methods can be attributed partly to the model's OOD behavior under GN corruption. In particular, the Gaussian noise may break the model's internal mechanisms by introducing OOD inputs to the layers. We now give some tentative evidence for this hypothesis. Following the notation of Wang et al. (2023), a head is denoted by "layer.head".
**Negative detection of 0.10 under GN.** Although most localizations we obtain above seem aligned with the findings of Wang et al. (2023), a major anomaly in the GN experiment is the "negative" detection of 0.10. In particular, probability and KL divergence suggest that it contributes negatively to model performance. (Logit difference also assigns a negative effect, though to a lesser degree; see Figure 1(b).) This is not observed at all in the experiments with STR corruption.
The detection is in the wrong direction, given the evidence from Wang et al. (2023) that 0.10 _helps_ with IOI; on clean prompts, it is active at S2, attends to S1 and signals this duplication. However, by visualizing the attention patterns, we find that this effect largely disappears under GN corruption. We intuit that the Gaussian noise has its strongest influence on early layers, and 0.10's behavior may be broken here, since it directly receives the noised token embeddings from the residual stream.
**Attention of Name Movers.** To exhibit the OOD behavior of the model internals under GN corruption, we examine the Name Mover (NM) Heads, a class of attention heads that directly affects the model's logits in the IOI circuit (Wang et al., 2023). NMs are active at the last token and copy what they attend to. We plot the attention of NMs in clean and corrupted runs in Figure 3(a).
Indeed, on \(500\) clean IOI prompts, the NMs assign an average of \(0.58\) attention probability to IO. In the corrupted runs, since STR simply exchanges IO by S1, the attention patterns of NMs are preserved (with the role of IO and S1 switched). On the other hand, with GN corruption, we see that the attention is shared between IO and S1 (\(0.26\) and \(0.21\)). This suggests that GN not only removes the relevant information but also disrupts the internal mechanism of NMs on IOI sentences.
To take a deeper dive, Wang et al. (2023) show that the output of NMs is determined largely by the values of the S-Inhibition Heads.
\begin{table}
\begin{tabular}{l|l||c|c|c|c|c} \hline \hline
**Corruption** & **Metric** & NM & DT & SI & Negative NM & Induction \\ \hline STR & Probability & \(\mathbf{1/3}\) & \(\mathbf{0/2}\) & \(\mathbf{3/4}\) & \(\mathbf{1/2}\) & \(1/2\) \\ GN \({}^{\dagger}\) & Probability & \(\mathbf{0/3}\) & \(\mathbf{1/2}\) & \(\mathbf{2/4}\) & \(\mathbf{2/2}\) & \(1/2\) \\ \hline STR & Logit difference & \(1/3\) & \(\mathbf{0/2}\) & \(3/4\) & \(2/2\) & \(1/2\) \\ GN & Logit difference & \(1/3\) & \(\mathbf{1/2}\) & \(3/4\) & \(2/2\) & \(1/2\) \\ \hline STR & KL divergence & \(\mathbf{1/3}\) & \(0/2\) & \(\mathbf{3/4}\) & \(2/2\) & \(1/2\) \\ GN \({}^{\dagger}\) & KL divergence & \(\mathbf{0/3}\) & \(0/2\) & \(\mathbf{2/4}\) & \(2/2\) & \(1/2\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Inconsistency in circuit discovery from activation patching on the IOI task**. We patch the attention heads outputs and list the detections of each class. \({}^{\dagger}\)Also detect 0.10, a fuzzy Duplicate Token Head, as _negatively_ influencing model performance. We expect it to be positive (Wang et al., 2023).
Figure 3: **Attention of the Name Movers** from the last token, in corrupted and patched runs.
(logit difference: \(1.04\)) by restoring the values of the S-Inhibition Heads (Figure 2(b)). The same intervention, however, is fairly unsuccessful under GN (logit difference: \(0.49\)).
Towards explaining this gap, we again examine the attention of NMs. Figure 2(c) shows that patching nearly restores the NMs' in-distribution attention pattern under STR, but fails under GN corruption. We speculate that GN introduces further corrupted information flowing into the NMs such that restoring the clean activations of S-Inhibition Heads cannot correct their behaviors.
## 4 Evaluation Metrics
We now study the choice of evaluation metrics in activation patching. We perform two experiments that highlight potential gaps between logit difference and probability. Along the way, we provide a conceptual argument for why probability can overlook negative components in certain settings.
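For concreteness, here is a minimal sketch of how the three metrics considered in this paper (probability, logit difference, and KL divergence) might be computed from final-position logits; the tensor shapes, sign conventions, and normalizations are illustrative assumptions, not our exact implementation.

```python
import torch
import torch.nn.functional as F

def patching_metrics(logits_patched, logits_corrupt, logits_clean,
                     answer_id, wrong_id):
    """All logits have shape (batch, vocab) at the final position."""
    def logit_diff(l):
        return (l[:, answer_id] - l[:, wrong_id]).mean()

    def prob(l):
        return l.softmax(dim=-1)[:, answer_id].mean()

    return {
        # change in Logit(answer) - Logit(wrong), e.g. Logit(IO) - Logit(S)
        "logit_difference": (logit_diff(logits_patched)
                             - logit_diff(logits_corrupt)).item(),
        # change in the correct-answer probability, P_pt - P_*
        "probability": (prob(logits_patched) - prob(logits_corrupt)).item(),
        # KL(clean || patched): tracks the full output distribution
        "kl_divergence": F.kl_div(logits_patched.log_softmax(-1),
                                  logits_clean.log_softmax(-1),
                                  log_target=True,
                                  reduction="batchmean").item(),
    }
```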
### Localizing factual recall with logit difference
The prior work of Meng et al. (2022) hypothesizes that factual association is processed at the last subject token. Motivated by this claim, we extend our previous experiments to patching the MLP outputs at all token positions and consider the effect of changing evaluation metrics.
**Experimental setup.** We apply the same setting as in Section 3. We extend our MLP patching experiments to all token positions and again use logit difference and probability as the metric.
**Experimental results.** For STR and a window size of \(5\), we plot the patching effects across layers and positions in Figure 4. The visualization shows that probability assigns stronger effects at the last subject token than logit difference. Specifically, we calculate the ratio between the sum of effects (over all layers) on the last subject token and those on the middle subject tokens; a sketch of this aggregation follows the list below. In both corruptions, probability assigns more effects to the last subject token than logit difference:
* Using STR corruption, the ratio is \(4.33\)x in probability \(>1.22\)x in logit difference.
* Using GN corruption, the ratio is \(1.74\)x in probability \(>0.77\)x in logit difference.
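The aggregation behind these ratios is a simple sum over layers followed by a comparison across token positions; a minimal NumPy sketch, in which the `effects` array and the index arguments are hypothetical names:

```python
import numpy as np

def last_vs_middle_ratio(effects: np.ndarray, middle: slice, last: int) -> float:
    """effects[layer, pos]: effect of restoring the MLP at (layer, pos).
    Sum over layers, then compare the last subject token's total effect
    with that of the middle subject tokens."""
    per_position = effects.sum(axis=0)
    return float(per_position[last] / per_position[middle].sum())
```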
This observation holds for other window sizes, too, for which we provide details in Appendix G.2. We also validate our findings on GPT-J \(6\)B (Wang and Komatsuzaki, 2021) in Appendix G.5. The results show that the choice of evaluation metrics influences the patching effects across tokens.
### Circuit discovery with probability
Wang et al. (2023) discovers two Negative Name Mover (NNM) heads, 10.7 and 11.10, that noticeably hurt model performance on IOI. In our previous experiments on STR, both are detected, except when using probability as the metric where 11.10 is overlooked. In fact, the patching effect of 11.10
Figure 4: **Activation patching on MLP** across layers and token positions in GPT-2 XL, with sliding window patching of size \(5\). Note that probability (b) highlights the importance of the last subject token, whereas logit difference (a) displays weaker effects.
under STR in probability is well within 2 SD from the mean (mean \(0.003\), SD \(0.015\), and 11.10 receives \(-0.022\)). Looking closely, the reason is simple:
* In the corrupted run of STR, the average probability of outputting the original IO is \(0.03\). Hence, the patching effect in probability, \(\mathbb{P}_{\text{pt}}(\text{IO})-\mathbb{P}_{*}(\text{IO})\), is at least \(-0.03\), as \(\mathbb{P}_{\text{pt}}(\text{IO})\) is non-negative. This is already close to 2 SD below the mean (\(-0.027\)). Hence, for an NNM to be detected via patching, its \(\mathbb{P}_{\text{pt}}(\text{IO})\) needs to be near \(0\), which may be hard to reach.
* By contrast, under GN corruption, the average probability of IO is \(0.13\). Intuitively, this makes a lot more space for NNMs to demonstrate their effects.
In general, probability must fail to detect negative model components if corruption reduces the correct-token probability to near zero. We now give a cleaner experimental demonstration of this concern, using an approach originally proposed by Wang et al. (2023).
**Experimental setup.** We revisit an alternative corruption method proposed by Wang et al. (2023), where S1, S2 and IO are replaced by three unrelated random names2; for example, "John and Mary [...], John" \(\rightarrow\) "Alice and Bob [...], Carol." We use the probability of the original IO as the metric. Intuitively, this replacement method would achieve a much stronger corruption effect, since it removes all the relevant information (S and IO) of the original IOI sentence.
Footnote 2: This corrupted distribution is denoted by \(p_{\text{ABC}}\) in the original paper of Wang et al. (2023)
**Experimental results.** First, we observe that the probability of outputting the IO of the original IOI sentence is negligible (\(5\mathrm{e}{-4}\)) under this corruption. As a result, probability detects neither NNM. On the other hand, we find that logit difference still can. See Appendix H.3 for the plots. In Appendix F, we confirm the same finding when corruption is applied to S1 and IO only.
At a high-level, we believe that this is a pitfall of probability as an evaluation metric. Its non-negative nature makes it incapable of discovering negative model components in certain settings.
## 5 Sliding window patching
In this section, we examine the technique of sliding window patching in localizing factual information (Meng et al., 2022). For each layer, the method patches multiple adjacent layers simultaneously and computes the joint effects. Hence, one should interpret the result of Meng et al. (2022) as the effects being constrained within a window rather than at a single layer. We argue that such a hypothesis can be tested by an alternative approach, and we compare the results from the two.
**Experimental setup.** Instead of restoring multiple layers simultaneously, we patch each individual MLP layer one at a time. Then, as an aggregation step, for each layer, we sum up the single-layer patching effects of its adjacent layers. For example, we add up the effects at layers 2 through 6 to get an aggregated effect for layer \(4\). We patch the MLP output at the last subject token.
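A minimal sketch of this summation baseline (NumPy; clipping the window at the edge layers is our assumption, since the text does not pin down edge handling):

```python
import numpy as np

def summed_single_layer_effects(single: np.ndarray, window: int) -> np.ndarray:
    """single[l]: effect of patching MLP layer l alone.
    For each layer, sum the single-layer effects of the `window`
    adjacent layers centered on it (clipped at the edges)."""
    half = window // 2
    return np.array([single[max(0, l - half): l + half + 1].sum()
                     for l in range(len(single))])
```

Comparing this aggregated curve against the jointly patched, sliding-window curve yields the ratios reported below.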
**Experimental results.** For each window size, we compute the ratio of the maximum patching effect at the middle MLP layers between sliding window patching and summation of single-layer
Figure 5: **Sliding window patching vs summing up individual patching effects**; patching MLP activation at the last subject token in GPT-2 XL on factual recall prompts. Sliding window patching offers \(1.40\)x, \(1.75\)x and \(1.59\)x the peak value of the single-layer summation. Single-layer patching (a) suggests a weak peak.
patching. Over the combinations of window sizes, metrics and corruption methods, we find sliding window patching typically provides at least \(20\%\) more peak effect than the summation method.
In Figure 5, for window sizes of \(3,5,10\), we plot the results using GN corruption and probability as the metric, the original setting of Meng et al. (2022). We observe significant gaps between the sliding window and the summation method. Moreover, for single-layer patching, the peak at layer 15 is fairly weak (Figure 5a). Sliding window patching appears to generate a more pronounced concentration as we increase the window size.
The result suggests that sliding window patching tends to amplify weak localization from single-layer patching (see Figure 12 for plots on single-layer MLP patching in GPT-2 XL). We believe this may arise from certain non-linear effects in joint patching, and results from the technique should therefore be interpreted carefully; see Section 6 for more discussion.
## 6 Discussion and Recommendations
We have observed a variety of gaps between corruption methods and evaluation metrics used in activation patching on language models. In this section, we summarize our findings and provide recommendations.
**Corruption methods.** We are concerned that GN corruption puts the model off distribution by introducing noise never seen during training. Indeed, in Section 3.2, we provide evidence that in the corrupted run, the model's internal functioning is OOD relative to the clean distribution. This may induce unexpected anomalies in the model behavior, interfering with our ability to localize behavior to specific components. Conceivably, GN corruption could even lead to unreliable or illusory results.
More broadly, this presents a challenge to any intervention techniques that introduce OOD inputs to the model or its internal layers, including ablations. In fact, similar concerns have been raised earlier in the interpretability literature on feature attribution as well; see e.g. Hooker et al. (2019); Janzing et al. (2020); Hase et al. (2021).
In contrast, STR provides counterfactual prompts ("The Eiffel Tower is in" vs "The Colosseum is in") that are in-distribution and thus induces in-distribution activations, avoiding the OOD issue. Therefore, we recommend STR whenever possible. GN may be considered as an alternative when token alignment or lack of analogous tokens makes STR unsuitable.
**Evaluation metrics.** We generally recommend avoiding probability as the metric, given that it may fail to detect negative model components.
We find logit difference a convincing metric for localization in language models. Consider an IOI setting where a model contains an attention head that boosts the logits of all (single-token) names. This head, though important, should not be viewed as part of the IOI circuit, but our interventions may still affect it.3 By measuring \(\text{Logit}(\text{IO})-\text{Logit}(\text{S})\), logit difference controls for such components and ensures they are not detected. This may not be achieved by other metrics, such as probability or Logit(IO) alone.
Footnote 3: We note that if our interventions do not affect the head, then it will not show up on any metric.
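As a toy numeric check of this argument (the token ids below are hypothetical), a uniform boost to the logits of all names changes the probability assigned to IO but leaves the logit difference untouched:

```python
import torch

torch.manual_seed(0)
logits = torch.randn(50_257)                 # toy final-position logits
io_id, s_id, other_name = 111, 222, 333      # hypothetical token ids
name_ids = torch.tensor([io_id, s_id, other_name])

boosted = logits.clone()
boosted[name_ids] += 2.0                     # a head boosting all names equally

print(logits.softmax(-1)[io_id].item(),
      boosted.softmax(-1)[io_id].item())        # probability changes
print((logits[io_id] - logits[s_id]).item(),
      (boosted[io_id] - boosted[s_id]).item())  # logit difference is invariant
```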
KL divergence tracks the full model output distribution, rather than focusing only on the correct or incorrect answer, and can be a reasonable metric for circuit discovery as well (Conmy et al., 2023).
**Sliding window patching.** We speculate that simultaneously patching multiple layers could capture the following non-linear effects and result in inflated localization plots:
* Joint patching may suppress the flow of corrupted information within the window of patched layers, where single-layer patching offers no such control.
* A window of patched layers may jointly perform a crucial piece of computation, such as a major boost to the logit of the correct token, which no individual layer can single-handedly achieve.
Generally, when examining the outcome from sliding window patching, one should be aware of the possibility of multiple layers working together. Thus, the results from the technique are to
be interpreted as the joint effects of the full window, rather than of a single layer. In practice, we recommend experimenting with single-layer patching first and only consider sliding window patching when individual layers seem to induce small effects.
**Which tokens to corrupt?** In some problem settings, a prompt contains multiple key tokens, all relevant to completing the task. This would offer the flexibility to choose which tokens to corrupt. This is another important dimension of activation patching. For instance, our experiments on IOI in Section 3 corrupt the S2 token. An alternative is to corrupt S1 and IO. While this may seem an implementation detail, we find that it can greatly affect the localization outcomes.
Specifically, in Appendix F, we test corrupting S1 and IO in activation patching on IOI sentences, by changing their values to random names or adding noise to the token embeddings. We find that almost all techniques discover the \(3\) Name Mover (NM) Heads of the IOI circuit (Table 4 and Figure 11). These are attention heads that directly contribute to Logit(IO) as shown by Wang et al. (2023). In contrast, our prior experiments corrupting S2 miss most of them (Table 1).
We intuit that corrupting different tokens allows activation patching to trace different information within the model, thereby suggesting varying localization results. For instance, in our prior experiments replacing S2 by IO, patching traces the value of IO or its position. On the other hand, when changing the values of S1 and IO while fixing their positions, patching highlights exactly where the model processes these values.
In practice, we recommend trying out different tokens to corrupt when the problem setting offers such flexibility. This may lead to more exhaustive circuit discovery.
## 7 Related work
**Activation patching.** Activation patching is a variant of causal mediation analysis (Vig et al., 2020; Pearl, 2001), similar forms of which are used broadly in the interpretability literature (Soulos et al., 2020; Geiger et al., 2020; Finlayson et al., 2021; Geiger et al., 2022). The specific variant with GN corruption was first proposed by Meng et al. (2022) under the name of causal tracing. Wang et al. (2023); Goldowsky-Dill et al. (2023) generalize this to a more sophisticated version of path patching.
**Circuit analysis.** Circuit analysis provides post-hoc model interpretability (Casper et al., 2022). This line of work is inspired by Cammarata et al. (2020); Elhage et al. (2021). Other works include Geva et al. (2022); Li et al. (2023a); Nanda et al. (2023a); Chughtai et al. (2023); Zhong et al. (2023); Nanda et al. (2023b); Varma et al. (2023); Wen et al. (2023); Hanna et al. (2023); Lieberum et al. (2023). Circuit analysis often requires manual effort by researchers, motivating recent work to scale or automate parts of the workflow (Chan et al., 2022; Bills et al., 2023; Conmy et al., 2023; Geiger et al., 2023; Wu et al., 2023; Lepori et al., 2023).
**Mechanistic interpretability (MI).** MI aims to explain the internal computations and representations of a model. While circuit analysis is a major direction under this broad theme, other recent case studies of MI in language models include Mu and Andreas (2020); Geva et al. (2021); Yun et al. (2021); Olsson et al. (2022); Scherlis et al. (2022); Dai et al. (2022); Gurnee et al. (2023); Merullo et al. (2023); McGrath et al. (2023); Bansal et al. (2023); Dar et al. (2023); Li et al. (2023c); Brown et al. (2023); Katz and Belinkov (2023); Cunningham et al. (2023).
## 8 Conclusion
We examine the role of metrics and methods in activation patching in language models. We find that variations in these techniques could lead to different interpretability results. We provide several recommendations towards the best practice, including the use of STR as the corruption method.
In terms of limitations, our experiments are on decoder-only language models of size up to \(6\)B. We leave it as a future direction to study other architectures and even larger models. Our work tests overriding corrupted activations by clean activations. The other direction--patching corrupted to clean--has also been used for circuit discovery, and it is interesting to compare these two. In addition, we provide tentative evidence that certain corruption methods lead to OOD model behaviors
and suspect that this can make the resulting interpretability claims unreliable. Future work should examine this hypothesis closely and furnish further demonstrations. Finally, it is interesting to develop more principled techniques for activation patching or propose other methods for localization.
#### Acknowledgments
FZ would like to thank Matthew Farhbach, Dan Friedman, Johannes Gasteiger, Asma Ghandeharioun, Stefan Heimersheim, Janos Kramar, Kaifeng Lyu, Vahab Mirrokni, Jacob Steinhardt and Peilin Zhong for helpful discussions, and Jiahai Feng, Yossi Gandelsman, Oscar Li and Alex Wei for comments on early drafts of the paper.
|
2309.16056 | Optimization of Magnetized Electron Cooling with JSPEC | The Electron-Ion-Collider (EIC) will be a next-generation facility located at
Brookhaven National Laboratory (BNL), built with the goal of accelerating heavy
ions up to 275 GeV. To prevent ion beam size growth during the acceleration
phase, cooling techniques will be required to keep the beam size from growing
due to intra-beam scattering. The JSPEC (JLab Simulation Package for Electron
Cooling) $\texttt{C++}$ package is a tool designed to numerically model
magnetized and unmagnetized cooling through friction forces between
co-propagating electron and ion bunches.
Here we describe a feature that has been added to the JSPEC package, which
implements a Nelder-Mead Simplex optimization algorithm to allow a user to
optimize certain beam parameters in order to achieve a target cooling time. | Stephen J. Coleman, David L. Bruhwiler, Dan T. Abell, Boaz Nash, Ilya Pogorelov, He Zhang | 2023-09-27T22:35:18Z | http://arxiv.org/abs/2309.16056v1 | # Optimization of Magnetized Electron Cooling with JSPEC
###### Abstract
The Electron-Ion-Collider (EIC) will be a next-generation facility located at Brookhaven National Laboratory (BNL), built with the goal of accelerating heavy ions up to 275 GeV. To prevent ion beam size growth during the acceleration phase, cooling techniques will be required to keep the beam size from growing due to intra-beam scattering. The JSPEC (JLab Simulation Package for Electron Cooling) C++ package is a tool designed to numerically model magnetized and unmagnetized cooling through friction forces between co-propagating electron and ion bunches. Here we describe a feature that has been added to the JSPEC package, which implements a Nelder-Mead Simplex optimization algorithm to allow a user to optimize certain beam parameters in order to achieve a target cooling time.
## 1 Introduction
The Electron-Ion-Collider (EIC), the layout of which is shown in figure 1, will be a next-generation facility located at Brookhaven National Laboratory (BNL), built with the goal of accelerating heavy ions up to \(275\,\mathrm{GeV}\) at luminosities up to \(10^{34}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}\)[1]. To achieve these luminosities, ion beams will need to have high intensity and low momentum spread. Ion beam size growth arises due to stochastic interactions associated with Intra-Beam Scattering (IBS). The growth can be slowed or balanced by introducing some form of cooling tuned to the characteristics of the beam. Initial design studies plan for strong cooling of the ion beam using a coherent electron cooling technique, but the implementation of this technique depends on future R&D efforts. Magnetized electron cooling could be an alternative or a backup plan for controlling growth should the primary effort fall short [1, 2].
In magnetized electron cooling, stochastic Coulomb collisions of individual ions in an ion bunch occur with co-propagating electrons that are confined in a solenoidal section. Interactions with electrons following 'frozen' Larmor trajectories cause the ensemble of ions in the bunch to
feel a drag, reducing the momentum spread through dynamical friction. As this friction force reduces the average velocity of the ion beam in the beam rest frame, it can be considered as 'cooling' the ion beam. The possible benefits of magnetized electron cooling are reviewed in [3, 4, 5].
## 2 JSPEC
The JLab Simulation Package for Electron Cooling (JSPEC) package is an open-source C++ package originally developed at the Thomas Jefferson National Accelerator Facility [6, 7]. In this package, friction force models and IBS models are applied to a model beam orbiting an accelerator with MAD-X lattice and other properties supplied by the user. JSPEC was extensively benchmarked against the Betacool electron cooling simulation package [8], an older package based on MAD8 that also models IBS and cooling.
The JSPEC dynamic calculation models the evolution of an ion bunch over time, after many passes in the cooler. The user has the option to produce dynamic simulations in one of two ways, either propagating with moments of distributions or with individual macro-particles. In the former, the properties of representative ions are drawn randomly from the initial moments of each distribution. At the conclusion of each step in the dynamic calculation, new moments are calculated after perturbations from IBS and cooling kicks have been applied to the ensemble of representative ions. At the beginning of the subsequent step, new representative ions are initialized with properties drawn from those perturbed moments. In this way, the distributions
Figure 1: Current overview of the electron-ion collider concept[1]. The electron cooler is at 3 o’clock.
are guaranteed to stay Gaussian. In the macro-particle method, initial properties of macro-particles are drawn from Gaussian distributions as before, but in the series of dynamic simulation steps each macro-particle experiences its own history of cooling and IBS kicks. The dynamics and the summary statistics are drawn from the ensemble of independent macro-particles. This is a more accurate representation of the physics, and non-Gaussian distributions are possible. For the first time step, where initial cooling rates are calculated and reported, these two methods yield identical results.
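A schematic sketch of the two propagation modes follows; this is not JSPEC's actual implementation, and `apply_kicks` stands in for the combined IBS and cooling updates.

```python
import numpy as np

rng = np.random.default_rng(1)

def step_with_moments(mean, cov, n, apply_kicks):
    """Moments mode: redraw representative ions from the current Gaussian
    moments, kick them, and refit the moments, so the distribution stays
    Gaussian by construction."""
    ions = rng.multivariate_normal(mean, cov, size=n)
    ions = apply_kicks(ions)
    return ions.mean(axis=0), np.cov(ions, rowvar=False)

def run_macroparticles(ions, apply_kicks, n_steps):
    """Macro-particle mode: the same particles persist across steps, each
    with its own kick history, so non-Gaussian shapes can develop."""
    for _ in range(n_steps):
        ions = apply_kicks(ions)
    return ions
```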
#### 2.0.1 Rate Calculation
JSPEC reports the rate of change of the emittance in horizontal, vertical, and longitudinal directions in units of 1/sec. The initial rate of change is also broken down into components caused by IBS and by electron cooling. Initial IBS rates are calculated after initializing an ion bunch with known emittance values, propagating that bunch forward one step to induce IBS kicks (but not cooling kicks) to a set of macro-particles. The emittances are then re-calculated from the perturbed distributions. The IBS rate is simply the difference in the emittances divided by the simulation time step. Two IBS models are available in JSPEC, the Bjorken-Mtingwa model [9] and the Martini model [10]. The initial cooling rate calculation is performed in a similar way to the IBS rate calculation. The rate is calculated by determining the difference in the emittance before and after cooling kicks are applied in the absence of IBS kicks, and dividing that difference by the time step.
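Since the reported rates carry units of 1/s, we read "difference in the emittances divided by the time step" as a fractional change; the sketch below makes that normalization assumption explicit.

```python
def initial_rate(emit_before: float, emit_after: float, dt: float) -> float:
    """Fractional emittance growth (IBS) or shrinkage (cooling) per second,
    obtained from a single propagation step with only one effect enabled."""
    return (emit_after - emit_before) / (emit_before * dt)
```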
Several new friction force models were added to the JSPEC package, bringing it in line with the set of models available in Betacool. This set includes the Derbenev & Skrinsky model [11], the Meshkov asymptotic model [12], the Budker un-magnetized model [13], and two other numerical approximations of the first-principles unmagnetized model [8]. Implementation of these models required numerical evaluation of indefinite integrals. These were computed with numerical routines provided by the GSL library [14], and the calculations were parallelized using the OpenMP package [15]. The friction force formulas in JSPEC have been benchmarked against BETACOOL, as shown in figure 2.
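As an illustration of the kind of one-dimensional quadrature involved, the sketch below integrates a deliberately toy kernel (not one of the published force formulas) with SciPy's QUADPACK routines, which play the role that GSL plays inside JSPEC.

```python
import numpy as np
from scipy import integrate

def toy_friction_integral(v_ion: float, delta_e: float) -> float:
    """Toy Maxwellian-weighted kernel standing in for an unmagnetized
    friction-force integrand; regularized near v_e = v_ion."""
    def integrand(v_e):
        rel = v_ion - v_e
        return rel / (abs(rel) ** 3 + 1e-6) * np.exp(-v_e**2 / (2 * delta_e**2))
    lo, hi = -8 * delta_e, 8 * delta_e
    val, _err = integrate.quad(integrand, lo, hi, limit=200)
    return val / (np.sqrt(2 * np.pi) * delta_e)
```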
### Optimization
There are many parameters that may affect the cooling rate calculated for a proposed magnetized electron cooler. We have implemented a Nelder-Mead simplex optimization algorithm in JSPEC, which allows users to search a multi-dimensional parameter space for a set of values that meets their design needs. A full listing of parameters that may be varied within the optimization algorithm is shown in Table 1.
The Nelder-Mead simplex algorithm [16], available within the GSL library [14], was implemented because it is a gradient-free method, which makes it insensitive to the statistical noise encountered when cooling rates are repeatedly calculated with multiple independent simulations using a finite number of macro-particles. Each of the macro-particle parameters is Gaussian distributed and independently drawn at each optimization step. While the parameter space may be smooth in the limit of an infinite number of simulated macro-particles, the statistical noise associated with smaller numbers of macro-particles can lead to problems when evaluating a gradient. The impact of this statistical noise is further mitigated by sampling a large number of macro-particles, at the cost of longer simulation times (the default value is \(10^{7}\)).
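To see why a gradient-free method suits this setting, the sketch below optimizes a noisy toy cost with SciPy's Nelder-Mead implementation; the cooling-time model is invented purely for illustration and is not JSPEC's physics.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def noisy_cooling_time(params):
    """Toy stand-in for a JSPEC rate calculation: finite macro-particle
    sampling makes repeated evaluations at the same point noisy."""
    b_field, n_e = params
    return 40.0 / (b_field * n_e) * (1 + 0.02 * rng.standard_normal())

target = 20.0  # minutes
cost = lambda p: abs(target - noisy_cooling_time(p))

# Nelder-Mead uses only function values, so the evaluation noise that
# would corrupt finite-difference gradients is tolerated.
res = minimize(cost, x0=[1.0, 1.0], method="Nelder-Mead")
print(res.x, res.fun)
```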
### Cost function
The cost function for this optimization approach uses existing JSPEC cooling rate calculations. The cooling rate is then converted into an approximate cooling time \(T_{i}\) in a particular direction, \(i\in x,y,s\). In units of minutes, this time is given by
\[T_{i}=60\frac{1}{R_{i}} \tag{1}\]
where \(R_{i}\) is the initial cooling rate in the \(i\) direction. The value being minimized within the cost function \(C\) is then
\[C_{i}=|T_{0}-T_{i}| \tag{2}\]
where the target cooling time is \(T_{0}\). Note that the cooling time, and thus the optimization, can only be evaluated in one direction at a time, so users must decide whether to prioritize cooling in the longitudinal or a transverse direction based on their design constraints. Without a defined target time, the cost function would effectively behave with \(T_{0}=0\), leading the algorithm to select undesirable or extreme values for the free parameters, particularly for parameters that have approximately linear relationships with cooling time. In that case, the optimization algorithm finds reductions to the cost function by repeatedly varying a single parameter, ignoring all others.
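Equations (1) and (2) transcribe directly into code:

```python
def cooling_time_minutes(rate_per_sec: float) -> float:
    return 60.0 / rate_per_sec                  # Eq. (1): T_i = 60 / R_i

def cost(rate_per_sec: float, target_minutes: float) -> float:
    # Eq. (2): C_i = |T_0 - T_i|. With target_minutes = 0 the cost is
    # minimized by driving the rate up without bound, which is the
    # degenerate behavior described above.
    return abs(target_minutes - cooling_time_minutes(rate_per_sec))
```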
### Uniqueness of results
For an optimization problem with a large number of free parameters, there exist an infinite number of possible solutions along a locus in the multi-dimensional parameter space. As a trivial example, consider an optimization problem with three free parameters, with selected values that satisfy the optimization condition at \(\alpha=\alpha_{0}\), \(\beta=\beta_{0}\), and \(\gamma=\gamma_{0}\). Now assume the two parameters \(\alpha\) and \(\beta\) are anti-correlated, meaning that an infinitesimal increase in one parameter
Figure 2: Friction force curves as modeled in JSPEC and in Betacool.
coupled with an infinitesimal decrease in the other will yield the same result. While this may remain true for perceptible deviations, a significant change (say, \(\alpha\rightarrow\alpha_{1}\) and \(\beta\rightarrow\beta_{1}\)) may cause the third free parameter to compensate as \(\gamma\rightarrow\gamma_{1}\). Now, at this new point, infinitesimal changes about \(\alpha_{1}\) can be compensated with anti-correlated changes about \(\beta_{1}\). Thus, many equivalent solutions exist and repeated attempts at optimization with identical initial conditions may not yield the same sets of optimal parameters.
### Searching beyond local minima
Nelder-Mead Simplex optimization procedures may sometimes fall into local minima and fail to emerge. For this reason, the optimization procedure is attempted many times, each with slightly different initial conditions. The initial conditions for each attempt are drawn randomly from a Gaussian distribution, using the user-provided starting point as the mean of the distribution. For each attempt, initial conditions are then drawn at random and used to initialize the Nelder-Mead Simplex algorithm. After a number of attempts (15 by default) the optimal set of parameters producing the minimum value of the cost function \(C\) from any of these attempts is stored in a plaintext output file, best.txt.
If the Nelder-Mead Simplex suggests a parameter that is unphysical (e.g. a negative bunch length, or a negative electron density), the optimization routine will alert the user that an unphysical attempt has been made and will discard the suggestion.
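Putting the restart strategy and the unphysical-parameter guard together, a sketch follows; the 10% jitter scale and the simple positivity check are illustrative assumptions, and JSPEC's actual rejection logic may differ.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def multistart_nelder_mead(cost, x0, n_attempts=15, jitter=0.1):
    """Run Nelder-Mead from Gaussian-perturbed copies of the user-supplied
    start and keep the best result, as with JSPEC's best.txt output."""
    def guarded(p):
        if np.any(p <= 0):          # e.g. negative bunch length or density
            return np.inf           # discard the unphysical suggestion
        return cost(p)
    best = None
    for _ in range(n_attempts):
        start = np.asarray(x0) * (1 + jitter * rng.standard_normal(len(x0)))
        res = minimize(guarded, start, method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
    return best
```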
### Parameter Scans
Users may hold \(N-1\) parameter values constant and scan values of the holdout parameter in order to see its effect on cooling rates. Betacool does not have built-in support for parameter scans. A comparison of scanned values for matching configurations in JSPEC and in Betacool is shown in Figure 3.
A reasonable strategy for optimization would be to allow the optimization algorithm to find a suitable set of parameters, then examine the 1-d parameter scans of each free parameter in order to understand the nature of the dependence of cooling time on each parameter in the local region of parameter space. This can also be useful for practical purposes. For example,
\begin{table}
\begin{tabular}{l c c} \hline \hline Parameter & Variable & i values \\ \hline Ion Twiss \(\alpha_{i}\) & alpha\_i & v,h \\ Ion Twiss \(\beta_{i}\) & beta\_i & v,h \\ \(e^{-}\) bunch RMS width \(\sigma_{i}\) & sigma\_i & x,y,s \\ Dispersion & disp\_i & v,h \\ Dispersion Derivative & disp\_der\_i & v,h \\ Cooler Magnetic Field \(B\) & bfield & — \\ \(e^{-}\) Temperature & temp\_i & tr,long \\ \# of electrons & n\_electron & — \\ \hline \hline \end{tabular}
\end{table}
Table 1: A full listing of variables that users may make available to the optimizer. Variables that are not initialized within the optimize section of the input file are held fixed. Here replace i by v or h for vertical or horizontal variables respectively.
an optimum set of parameters may include a solenoid magnetic field of \(>10\,\mathrm{T}\), a difficult and expensive element to produce. A parameter scan might reveal that a lower solenoid field strength will suffice to satisfy the design objectives of the cooler. The syntax for configuring a JSPEC parameter scan is shown in Appendix A.
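Conceptually, a scan is a loop over the holdout parameter with all others frozen; in the sketch below, `cooling_time` and the parameter dictionary are hypothetical stand-ins for a JSPEC evaluation.

```python
import numpy as np

def parameter_scan(cooling_time, fixed, name, lo, hi, steps=100):
    """Tabulate the cooling time while varying only `name`, mirroring the
    scan.txt output described in Appendix A."""
    values = np.linspace(lo, hi, steps)
    times = np.array([cooling_time({**fixed, name: v}) for v in values])
    return values, times

# e.g. scan the solenoid field with all other parameters held fixed:
# b, t = parameter_scan(my_cooling_time, params, "bfield", 0.5, 5.0)
```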
## 3 Examples for EIC
The EIC will accelerate ions, from protons (\(Z=1\)) to \(\mathrm{Au}^{+}\) (\(Z=79\)) to collision energies ranging from 41-275 GeV. We will construct examples at both ends of the ion mass spectrum and use them to demonstrate the optimization and parameter scan features of JSPEC. We will also construct an example to demonstrate magnetized electron cooling at pre-injection. We will select the Parkhomchuk [17] friction force model for all cases.
### Cooling at pre-injection
One concept being considered for cooling at EIC is to cool protons at pre-injection, with ion beam energies of 23.8 GeV. This is favorable for magnetized electron cooling, because the friction force scales with \(1/\gamma^{2}\). Cooling at lower energies is generally easier. The parameters being considered for this concept are shown in Table 2.
#### 3.1.1 Preliminary test
We start the optimization procedure by allowing only a few parameters to float: the transverse beam sizes \(\sigma_{x}\) and \(\sigma_{y}\), the solenoid field strength \(B\), and the number of electrons in a bunch. We set the target cooling time to \(T_{0}=20\,\mathrm{minutes}\). The initialization code for this is shown in Appendix B and the results are listed in Table 3. We see that the optimization has
Figure 3: Parameter scan of the horizontal dispersion parameter performed in JSPEC compared to single-point sampled values in Betacool.
yielded a cooling time within a second of 20 minutes. While this satisfies the target time, we notice that the electron density is quite high from an operational standpoint.
#### 3.1.2 Alternate optimizer parameterization
Examining the 1-dimensional dependence of the cooling time on a particular parameter may inform excursions away from optimum values based on such practicalities. While the friction force models may inform the dependence on some parameters, for example a linear dependence on \(n_{e}\), non-linear relationships may be observed by sampling in a parameter scan. In Figure 4, increasing \(B\)-field values, holding all other parameters fixed, yield shorter cooling times, but the relationship is asymptotic. A user may judge whether a 2 Tesla solenoid is sufficient for the cooling goals.
We can now fix the B-field value and run the optimization algorithm again, with a different set of floating parameters. The free parameters were the electron bunch length \(\sigma_{s}\), the number of electrons in the bunch \(n_{e}\), the transverse electron temperature \(T_{\perp}\), the longitudinal electron temperature \(T_{\parallel}\), and the \(\beta_{v}\) and \(\beta_{h}\) of the cooler. The target was a 20 minute cooling time in the longitudinal direction, and then separately in the vertical direction, to match the EIC magnetized electron cooling design study [2]. Results are summarized in Table 4.
This satisfies the target cooling time, as given by the inverse of the initial cooling rate. The optimized parameters can be further validated by
\begin{table}
\begin{tabular}{l c} \hline Parameter Name & value \\ \hline Species & Proton \\ Ion Energy [GeV] & 23.8 \\ Bunch Intensity [\(10^{10}\)] & 2.6 \\ Beam Current [A] & 0.69 \\ \(\beta\) horizontal [cm] & 150-200 \\ \(\beta\) vertical [cm] & 250-200 \\ \(\epsilon_{\text{Horiz}}\) (Norm.) [\(\mu\)m] & 2.7 \\ \(\epsilon_{\text{Vert}}\) (Norm.) [\(\mu\)m] & 0.25 \\ \(\Delta p/p\) [\(10^{-4}\)] & 10.3 \\ Bunch Length [cm] & 60 \\ Length of cooling section [m] & 130 \\ \hline \end{tabular}
\end{table}
Table 2: Table of low-energy cooling parameters. Values from [2].
\begin{table}
\begin{tabular}{l c} \hline Parameter Name & optimization result \\ \hline \(\sigma_{x}\) [\(\mu\) m] & 48.67 \\ \(\sigma_{y}\) [\(\mu\) m] & 317.94 \\ B field [T] & 1.93 \\ \(N_{e}\) [\(10^{10}\)] & 0.94 \\ Cost function \(C\) [min] & \(4.29\times 10^{-4}\) \\ \hline \end{tabular}
\end{table}
Table 3: Table of optimized parameters after the preliminary optimization for 20 minute cooling times for beam conditions shown in Table 2.
looking at the long-term dynamic behavior of an ion bunch that passes through the cooler on multiple orbits. An approximate bunch of macro-particles representing the whole ion bunch may be simulated with JSPEC. Each of the macro-particles evolves independently, and summary statistics and emittance are then calculated from the ensemble.
### Cooling for EIC
We can use the optimization algorithm to explore the use of magnetized electron cooling within the EIC hadron storage ring. Here we simulate the lowest operating energy range within the full range of ion masses. The optimization algorithm was initialized with the fixed values
\begin{table}
\begin{tabular}{l c c} \hline \hline Parameter Name & \(s\) optimization & \(y\) optimization \\ \hline \(\beta_{x}\) [m] & 99.8 & 100.1 \\ \(\beta_{y}\) [m] & 98.8 & 102.0 \\ \(N_{e}\) [\(10^{10}\)] & 1.06 & 1.04 \\ \(T_{\perp}\) [eV] & 0.014 & 0.011 \\ \(T_{\parallel}\) [eV] & 0.01 & 0.013 \\ \(\sigma_{s}\) [cm] & 4.48 & 4.90 \\ Cost function \(C\) [min] & \(1.12\times 10^{-4}\) & \(1.96\times 10^{-4}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Table of optimized parameters for 20 minute cooling times in the longitudinal direction and the vertical direction for beam conditions shown in Table 2.
Figure 4: Parameter scan of the electron bunch length parameter around the central value determined from the optimization algorithm. The grey vertical line marks the value suggested by the optimizer, while the dashed horizontal line denotes the boundary between ion beam heating (positive values) or cooling (negative) values.
shown in Table 5, with either protons or gold ions. The cooling target time was set to be 20 minutes in the longitudinal direction in both cases. The optimized parameters are presented in Table 6.
## 4 Conclusion
We have shown techniques that can be used to optimize the design of an arbitrary magnetized electron cooler. A Nelder-Mead Simplex optimization algorithm was introduced to the JSPEC magnetized electron cooling simulation code in order to sample many possible cooler configurations. We then demonstrated these techniques with examples relevant to the proposed electron ion collider. The optimized sets of parameters were then validated through dynamic simulations, also generated with JSPEC. Readers may simulate the optimized sets of parameters for themselves using the cloud-based Sirepo interface for JSPEC available at [https://sirepo.com/jspec](https://sirepo.com/jspec).
\begin{table}
\begin{tabular}{l c c} \hline \hline Parameter Name & proton & Au\({}^{+}\) \\ \hline Ion Energy [GeV] & 41 & 41 \\ \(B\)-field [T] & 2.07 & 2.35 \\ \(N_{e}\) [\(10^{10}\)] & 1.22 & 0.156 \\ \(T_{\perp}\) [eV] & 0.011 & 0.013 \\ \(T_{\parallel}\) [eV] & 0.009 & 0.008 \\ \(\sigma_{s}\) [cm] & 5.8 & 5.8 \\ Cost function \(C\) [min] & \(4.8\times 10^{-5}\) & \(1.26\times 10^{-4}\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Table of optimized parameters for 20 minute cooling times in the longitudinal direction for beam conditions shown in Table 5.
\begin{table}
\begin{tabular}{l c c} \hline \hline Parameter Name & proton & Au\({}^{+}\) \\ \hline Ion Energy [GeV] & 41 & 41 \\ Bunch Intensity [\(10^{10}\)] & 2.6 & 0.036 \\ Beam Current [A] & 0.69 & 0.41 \\ \(\beta\) horizontal [cm] & 90 & 90 \\ \(\beta\) vertical [cm] & 7.1 & 4 \\ \(\epsilon_{\text{Horiz}}\) (Norm.) [\(\mu\)m] & 2.7 & 3.0 \\ \(\epsilon_{\text{Vert}}\) (Norm.) [\(\mu\)m] & 0.25 & 0.3 \\ \(\Delta p/p\) [\(10^{-4}\)] & 10.3 & 10.0 \\ Bunch Length [cm] & 7.5 & 11.6 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Table of EIC parameters. Proton and gold ion values are taken from Table 3.3 and Table 3.5 in [1] respectively.
## Appendix A Code for defining a parameter scan
Much of the input file shown in Appendix B carries over to a parameter scan. The changes that initialize a parameter scan, in this case for the magnetic field strength \(B\) (bfield), are shown here:
...
section_optimization
    bfield = 1.0e-4
    bfield = 10.0
    steps = 100

section_run
    total_expansion_rate
    optimize_cooling
Here the repeated definition of the variable initializes the parameter scan and activates the steps variable, which defines the granularity of the scan. The output is stored in a plaintext scan.txt file showing the cooling rate for \(x,y,\) and \(s\) directions at each step.
## Appendix B Input code for pre-injection cooling optimization
The full plaintext input file parsed by JSPEC is shown below, for an optimization problem. This input file requires the lattice file to be in Mad-X Twiss parameter format (.tfs file type), and that the file be present in the working directory. Values defined in section_optimization over-ride values for the same parameter defined in earlier sections. All parameters that are not defined in the section_optimization are fixed, and are not allowed to vary with the optimization algorithm.
section_scratch
    ion_mass = 938.272
    ion_ke = 23800.0
    ion_gamma = 1 + ion_ke/ion_mass

section_ion
    charge_number = 1
    mass = ion_mass
    kinetic_energy = ion_ke
    norm_emit_x = 2.5e-06
    norm_emit_y = 2.5e-06
    momentum_spread = 0.001
    particle_number = 2.6e10
    rms_bunch_length = 0.6

section_ring
    lattice = Lattice.tfs

section_ibs
    nu = 100
    nv = 100
    log_c = 20.6
    coupling = 0.0

section_cool
    length = 130.0
    section_number = 1
    magnetic_field = 5.06
    bet_x = 150
    bet_y = 150
    disp_x = 0.0
    disp_y = 0.0
    alpha_x = 0.0
    alpha_y = 0.0
    disp_dx = 0.0
    disp_dy = 0.0

section_e_beam
    gamma = ion_gamma
    tmp_tr = 0.0001
    tmp_l = 0.01
    shape = bunched_gaussian
    radius = 0.009
    current = 4.0
    sigma_x = 0.0002
    sigma_y = 0.0002
    sigma_z = 0.07
    length = 0.05
    e_number = 5e10

section_ecool
    sample_number = 100000.0
    ion_sample = MONTE_CARLO
    force_formula = PARKHOMCHUK

section_run
    create_ion_beam
    create_ring
    create_e_beam
    create_cool

section_simulation
    ibs = on
    e_cool = on
    time = 2000.0
    step_number = 100
    output_file = JSPECdump.SDDS
    model = RMS
    ref_bet_x = 10.0
    ref_bet_y = 10.0
    ref_alf_x = 0.0
    ref_alf_y = 0.0
    ref_disp_x = 0.0
    ref_disp_y = 0.0
    ref_disp_dx = 0.0
    ref_disp_dy = 0.0

section_optimization
    sigma_x = 2e-4
    sigma_y = 2e-4
    bfield = 2.0
    n_electron = 1.5
    axis = s
    time = 20

section_run
    total_expansion_rate
    optimize_cooling
The best-fit parameters will be stored in a plaintext file in the working directory called best.txt.
|